Morphological/Dynamic Instability of Directional Crystallization in a Finite Domain with Intense Convection : This study is devoted to the morphological/dynamic instability analysis of directional crystallization processes in finite domains with allowance for melt convection. At first, a linear instability theory for steady-state crystallization with a planar solid/liquid interface in the presence of convection was developed. We derived and analyzed a dispersion relation showing the existence of morphological instability over a wide range of wavenumbers. This instability results from perturbations arriving at the solid/liquid interface from the cooled wall through the solid phase. Also, we showed that a planar solid/liquid interface can be unstable when it comes to dynamic perturbations with a zero wavenumber (perturbations in its steady-state velocity). A branch of stable solutions for dynamic perturbations is available too. The crystallizing system can choose one of these branches (unstable or stable) depending of the action of convection. The result of morphological and dynamic instabilities is the appearance of a two-phase (mushy) layer ahead of the planar solid/liquid interface. Therefore, our next step was to analyze the dynamic instability of steady-state crystallization with a mushy layer, which was replaced by a discontinuity interface between the purely solid and liquid phases. This analysis showed the existence of dynamic instability over a wide range of crystallization velocities. This instability appears in the solid material at the cooled wall and propagates to the discontinuity interface, mimicking the properties of a mushy layer. As this takes place, at a certain crystallization velocity, a bifurcation of solutions occurs, leading to the existence of unstable and stable crystallization branches simultaneously. In this case, the system chooses one of them depending of the effect of the convection as before. In general, the crystallizing system may be morphologically/dynamically unstable when it comes to small perturbations arriving at the phase interface due to fluctuations in the heat and mass exchange equipment (e.g., fluctuations in the freezer temperature). Introduction The dynamics of an interfacial boundary (crystallization front) separating the solid and liquid phases is responsible for the formation and growth of solid phase protrusions, dendritic crystals, absorption and redistribution of impurities and finally for the development of different microstructures (for example, cellular, banded, mixed-type and irregular structures) in the crystallized material [1][2][3][4][5][6][7]. In order to control the process of microstructure development by means of the physical and operating parameters of the crystallization process, the laws controlling the formation of one or another type of structure as well as the transitions between various types of structural formations must be fully established. One of the processes influencing the structuring of materials is the fluctuations in various physical quantities (e.g., temperature fluctuations always existing in nature; hydrodynamic fluctuations in the melt flow rate caused by convection, natural causes or the type of laboratory setup; fluctuations in the impurity concentration in liquid caused by hydrodynamic fluctuations or impurity supply from the outside; mechanical fluctuations in the entire crystallizing system caused by seismic processes in nature or the natural background of human activity in a laboratory setup; etc.). 
These fluctuations result in perturbations in the temperature and concentration fields as well as hydrodynamic perturbations in the fluid velocity, leading to perturbations in the solid/liquid interphase boundary (the crystallization front). The development of such morphological or dynamic perturbations is facilitated by the inhomogeneities in the temperature and concentration fields, convection and thermal or constitutional supercooling. As a result, such perturbations completely modify the crystallization process and lead to the formation of different microstructures (for example, the morphological instability of the solid/liquid interface is responsible for the development of cellular structures; its dynamic instability leads to the formation of banded structures, and the evolution of the dendritic forest in general generates a twophase (mushy) layer) [8][9][10][11][12][13]. Note that the linear analysis of morphological stability with applications for crystallization problems was first used by Mullins and Sekerka [14][15][16]. Then, their technique was extended to describe the different features of crystallization phenomena in refs. [17][18][19][20][21][22][23][24]. Namely, the effects of (1) anisotropic liquid-solid interfaces [17], (2) fluctuating and periodic growth rates [18,19], (3) transient solidification [20], (4) selfsimilar growth [21], (5) rapid crystallization [22], (6) a mushy layer with a changeover from instability [23] and (7) convective instability with a forced flow [24] were investigated. In addition, on the one hand, convection can equalize temperature and concentration distributions in a liquid, and on the other hand, convective flows or cells can create cold or hot spots in certain areas of the solid/liquid interface [25,26]. As this takes place, in colder spots, the conditions of the preferential growth of solid-phase protrusions are created; i.e., a morphological instability can be developed. At present, the effect of convection on morphological/dynamic instability has been weakly investigated due to the strong nonlinearity of mathematical models, the variety of convective flows and the different boundary conditions at the phase interface in the presence of convection. For instance, a simplified mode of the heat and mass transfer laws was applied to investigate the effect of a plane-parallel fluid current on the interfacial boundary stability in ref. [27]. The authors of refs. [28][29][30] applied the boundary layer techniques to omit small nonlinear terms and simplify the model equations. The spatially periodic fluid currents were considered in ref. [31] to model the localized morphological patterns. A mathematical model in an unbounded area of space relying on conductive heat and mass transfer laws was applied to study the effects of fluid currents in refs. [32,33]. This theory showed that the dispersion relation and the neutral stability curve in the presence of convection depend significantly on the extension rate at the solid/liquid interface. In this study, we substantially develop the morphological/dynamic stability theory in the presence of convective flows. First of all, our theory is built in a limited region of space, which allows us to model perturbations coming from the solid and liquid phases (e.g., from the solid boundaries of laboratory facilities or natural processes). Secondly, two types of instabilities are investigated below: morphological and dynamic. 
Initially, a quasistationary crystallization process with a planar solid/liquid interface separating the purely solid and liquid phases is considered. As a result of the development of the morphological instability, the planar crystallization front breaks up and a two-phase (mushy) layer between the purely solid and liquid phases is formed. This conclusion is confirmed by experiments in which aqueous solutions of isopropanol were cooled and crystallized from above in the presence of convection [34]. The mushy layer may be unstable when it comes to dynamic perturbations in the constant crystallization rate. Therefore, we further analyzed the dynamic instability of the quasistationary crystallization process with a two-phase layer. This analysis showed that dynamic perturbations are unstable and lead to fluctuations in the rate of directional crystallization. Directional Crystallization with a Planar Solid/Liquid Interface Let us first consider the steady-state directional crystallization of a binary liquid (melt or solution) with a planar solid/liquid interface along the spatial coordinate ζ caused by temperature gradients in the solid and liquid phases. The process under consideration is triggered by the cooling of a liquid from above and controlled by an intense convection in the liquid (Figure 1a). Using the coordinate system moving with a constant velocity u s together with the solid/liquid interface, we have the following heat transfer equations in both the phases: Here, θ l and θ s are the temperatures in the liquid and solid phases, respectively; τ is the time; v is the vector of the fluid velocity; D l and D s are the thermal diffusivities in the liquid and solid phases, respectively; u s is the crystallization velocity; and Σ (τ) is the solid/liquid interface coordinate (Σ (τ) = 0 when considering the unperturbed steady-state crystallization scenario). Given sufficiently intense convection in the liquid to equalize the concentration distribution therein, we do not consider the process of impurity diffusion [34]. At the phase interface, the temperatures from both sides of the solid/liquid interface are equal to the sum of the phase transition temperature θ * of the pure substance and the interface curvature term ΓH. In addition, the difference in thermal fluxes defines the crystallization heat L V released at the interface [34,35], i.e., where Γ = θ * γ/L V is the Gibbs coefficient, H is the interface curvature, γ is the solid/liquid interfacial energy, L V is the latent heat parameter, n is the normal vector of the interface, c l is the specific heat, θ ∞ is the temperature of the liquid far from the crystallization front, k s is the thermal conductivity of the solid and j(θ l ) is the convective heat flux transferred from the liquid to the solid phases. Note that H = 0 in the case of a planar solid/liquid interface and H ≈ ∇ 2 Σ in the case of small morphological perturbations (linear theory). The convective heat flux j(θ l ) (J m −2 s −1 ) is given by [34,35] where λ is a dimensionless constant, is the coefficient of thermal expansion and g (m s −2 ) is the acceleration due to gravity. Let us especially note that α can be a function of θ l , i.e., α(θ l ) = 10 −4 (2.25 + 0.15θ l ) for the isopropanol solution [34]. Note that Equation (5) was justified in ref. [35] based on the general principles of convective heat transfer, and the parameter λ was found for the isopropanol solution by a particular experiment in ref. [34]. 
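In the steady state and with one-dimensional transport along ζ, the heat transfer equations (1) and (2) in the moving frame reduce to an ordinary differential equation of the form D θ″ + u_s θ′ = 0. The short Python sketch below only illustrates this reduced balance and its closed-form profile; the parameter values, the function names and the sign convention for u_s are placeholders assumed here, not values from the paper.

```python
import numpy as np

# Minimal sketch: steady-state temperature profile in the frame moving with the
# solid/liquid interface, D * theta'' + u_s * theta' = 0 (the steady, purely
# conductive 1-D reduction of Eqs. (1)-(2); sign convention assumed).
# All numbers below are placeholders, not parameters from the paper.
D_s = 1.0e-6      # thermal diffusivity of the solid, m^2 s^-1 (placeholder)
u_s = 1.0e-5      # steady-state crystallization velocity, m s^-1 (placeholder)
theta_int = 0.0   # interface temperature theta*, deg C (placeholder)
G_s = 50.0        # temperature gradient in the solid at zeta = 0, K m^-1 (placeholder)

def theta_solid(zeta):
    """Closed-form solution with theta(0) = theta_int and dtheta/dzeta(0) = G_s."""
    return theta_int + G_s * (D_s / u_s) * (1.0 - np.exp(-u_s * zeta / D_s))

# Numerical check that the profile satisfies D * theta'' + u_s * theta' = 0.
zeta = np.linspace(-0.05, 0.0, 401)            # solid phase occupies zeta < 0
theta = theta_solid(zeta)
d1 = np.gradient(theta, zeta)
d2 = np.gradient(d1, zeta)
residual = D_s * d2 + u_s * d1
print("max |D*theta'' + u_s*theta'| in the interior:", np.abs(residual[2:-2]).max())
```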
Morphological Instability of a Planar Solid/Liquid Interface with Intense Convection in Liquid Let us assume that heat transfer predominates in the direction of the spatial axis ζ. In this case, the steady-state temperature profiles in the liquid and solid phases are given by θ lo = θ lo (ζ) and θ so = θ so (ζ) (the subscript "o" denotes the steady-state solutions). The boundary condition (4) enables us to find the steady-state temperature gradient in a solid at ζ = 0: where θ int = θ * for the planar solid/liquid phase interface. The heat transfer Equations (1) and (2) allow us to obtain the second derivatives at ζ = 0 (here we use the no-slip condition v = 0 at ζ = 0) in the form of where G l = dθ lo /dζ at ζ = 0. Now we morphologically perturbed the planar solid/liquid interface as Here, ξ and η are the Cartesian coordinates directed perpendicular to the solidification axis ζ; k ξ and k η are the perturbation wavenumbers along these directions, respectively; i is the imaginary unit; and ω is the amplification rate (frequency) of the perturbations. The phase interface perturbation Σ is in accordance with the temperature perturbations in the solid and liquid, i.e., θ s = θ sA E exp(β s ζ) and θ l = θ l A E exp(β l ζ), where Σ A , θ sA and θ l A designate the perturbation amplitudes and β s and β l are the perturbation amplification/damping coefficients found below. The linear theory under consideration implies that |θ s | θ so and |θ l | θ lo . The next step is to expand the boundary conditions (3) and (4) in a Taylor series in the vicinity of the unperturbed solid/liquid interface ζ = 0. Keeping in mind only the linear terms of the perturbations, we arrive at the following expressions at ζ = 0 (for details, see Appendix A): where k 2 h = k 2 ξ + k 2 η and j = dj/dθ l at θ l = θ int , i.e., The heat and mass transfer Equations (1) and (2) lead to the following equations at ζ = 0: where V ζ = −∂v ζo /∂ζ at ζ = 0 represents the extension rate (v ζo denotes the steady-state ζ component of the fluid velocity). Combining expressions (9) and (12), we come to β l in the form of An important point is that the signs ± in the expressions for β s and β l define the direction of the perturbation amplification/damping. So, for example, if a perturbation appears in the solid phase (ζ < 0) at some distance from the phase interface ζ = 0, it grows/decays in cases of positive/negative β s . When dealing with the liquid phase (ζ > 0), we have a similar behavior. Namely, a perturbation originating at some distance ζ > 0 in the liquid propagates to the phase interface in the −ζ direction and decays/grows if there is a positive/negative β l . In real laboratory setups or natural processes, this corresponds to a limited region of the crystallization process: the solid walls are located at certain distances ζ < 0 in the solid material and ζ > 0 in the liquid phase. Now combining Equations (8) and (9), we find the dispersion relation (ω(k h )) as follows: where Figure 2a shows the dispersion curves calculated according to the dispersion relation (15) for the isopropanol solution experimentally studied in ref. [34]. Analyzing Equation (15), we found only real solutions. As is easily seen, the directional crystallization process in the presence of intense convection is morphologically unstable (ω increases with an increasing wavenumber k h and decreasing steady-state velocity u s ). Such a behavior follows from ω > 0 calculated in a wide range of wavenumbers k h . 
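As an illustration of how dispersion curves such as those in Figure 2a are obtained in practice, the sketch below scans a range of wavenumbers and solves F(ω, k_h) = 0 numerically for the real root ω. The dispersion function used here is only a structural placeholder, not Equation (15), whose explicit form is not reproduced in this excerpt; all parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

def dispersion(omega, k_h, u_s):
    """Toy stand-in for the dispersion relation F(omega, k_h; u_s) = 0.
    This is NOT Equation (15); it only mimics its structure (a single real
    root omega for each wavenumber k_h at fixed crystallization velocity u_s)."""
    return omega - (0.1 * k_h - 5.0 * u_s * k_h**2)

def growth_rate(k_h, u_s):
    """Bracket and solve F(omega, k_h) = 0 for the real root omega."""
    return brentq(dispersion, -1.0e3, 1.0e3, args=(k_h, u_s))

u_s = 1.0e-3                                   # placeholder steady-state velocity
wavenumbers = np.linspace(1.0, 50.0, 50)       # placeholder range of k_h
omega = np.array([growth_rate(k, u_s) for k in wavenumbers])

# omega > 0 marks morphologically unstable wavenumbers (cf. Figure 2a).
print(f"{(omega > 0).sum()} of {omega.size} sampled wavenumbers are unstable")
```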
As this takes place, all points shown in Figure 2a are described by β s < 0. Physically, it means that a perturbation in the solid phase temperature appearing at some distance from the crystallization front at ζ < 0 (e.g., a temperature fluctuation arising on the cooled wall) decreases with an increasing ζ and perturbs the solid/liquid interface at ζ = 0. Note that β s < 0 and β l < 0 describe the damped perturbations propagating from the cooled boundary at ζ < 0 into the liquid phase at ζ > 0. It is important to emphasize that the revealed morphological instability is a consequence of the perturbations appearing on the cooled wall at ζ < 0. In the case of an infinite crystallization domain, such a solution does not exist due to the requirement of damping perturbations at ζ → −∞. As a result of this mechanism, morphological perturbations develop in the liquid at the phase interface, and a mushy layer filled with dendrite-like structures appears, as was demonstrated in experiments [34]. In addition to the morphological perturbations with k h ≠ 0, dynamic perturbations with k h = 0 may exist in the crystallizing system, which represent the perturbations in the steady-state crystallization velocity u s . Figure 2b shows the amplification rate versus u s at different temperatures θ ∞ . We found two branches of the solution, one branch of dynamic instability (ω > 0, the solid line in Figure 2b) and the other one of stability (ω = 0, the dashed line in Figure 2b). Note that the root ω = 0 of Equation (15) can be easily found analytically for dynamic perturbations with k h = 0. In this case, Σ = Σ A and dΣ /dτ = 0, i.e., the system has zero velocity perturbation and crystallizes with an unperturbed steady-state rate u s . Which of these two branches of the solution will be realized depends on whether convection amplifies (by creating irregularities in the temperature and concentration fields) or attenuates (by equalizing the temperature and concentration) the dynamic perturbations near the solid/liquid interface. The answer to this question depends on the nature of the convective currents and the arrangement of the crystallizing facility. Note that a mushy layer often occurs in various geophysical phenomena of ice freezing and magma solidification, as well as in metallurgical and chemical processes of the equilibrium and nonequilibrium crystallization of melts and solutions [36][37][38][39][40][41]. This explains the need to develop a theory of dynamic stability of such processes under the influence of convection. When a mushy layer has appeared between the purely solid and liquid phases, small temperature perturbations can produce a new oscillatory crystallization scenario in which the mushy layer is dynamically unstable and oscillates near its steady-state crystallization velocity. Such a process substantially changes the impurity distribution in the solid material and leads to the phenomenon of layered impurity liquation appearing as a result of the dynamic oscillations of a mushy layer. To describe this effect within the framework of intense convection in liquid, we develop the aforementioned model and analyze it against small dynamic perturbations. To simplify the matter, we replace a real mushy layer with a discontinuity interface between the purely solid and liquid phases that reflects the properties of a real mushy layer by means of a new boundary condition.
This condition represents the equality of the phase-transition temperature gradient and the liquid-phase temperature gradient at the discontinuity interface. In addition, this condition expresses the fact that no supercooling exists ahead of the discontinuity surface (the mushy layer) in the liquid. Such an analysis of dynamic stability is carried out below in the spirit of previously developed theories without convection [42,43]. Dynamic Instability of a Discontinuity Solid/Liquid Interface Mimicking the Properties of a Mushy Layer First of all, we consider a narrow quasiequilibrium mushy layer appearing ahead of the planar solid/liquid interface. The latent heat of solidification completely compensates for the supercooling in a mush, which is replaced by a discontinuity interface between the purely solid and liquid phases [44,45] (Figure 1b). Since impurities accumulate in the interdendritic spacing of a mushy layer, where convection is retarded and cannot equalize the impurities, it is necessary to include the impurity concentration C l in the mathematical model of directional crystallization. Thus, the process model consists of the heat-conduction Equations (1) and (2) and an impurity-diffusion equation in the liquid phase, Equation (18) (diffusion in the solid material is traditionally neglected), where D C is the diffusion coefficient and Σ (τ) represents the dynamic perturbations in the discontinuity interface that replaces a mushy layer (Σ (τ) = 0 when dealing with the steady-state crystallization scenario). Since the phase transition temperature is dependent on the impurity concentration, the boundary condition (3) should be rewritten as condition (19), where the function f (C l ) is determined from the phase diagram ( f (C l ) = mC l for the linear phase diagram, with m being the equilibrium liquidus slope). The heat balance condition (4) is satisfied at the discontinuity interface Σ (τ). To close the mathematical model, we have the quasiequilibrium condition (20) of the mushy layer, which is replaced by a discontinuity interface [44,45], where f ′ (C l ) = d f /dC l and f ′ = m in the case of the linear liquidus equation. Thus, the mathematical model of directional crystallization with a mushy layer that is replaced by a discontinuity interface consists of the heat and mass transfer Equations (1), (2) and (18) as well as the boundary conditions (4), (19) and (20). In the case of steady-state solidification, Σ (τ) = 0 and the expressions (6) and (7) take place. What is more, the impurity diffusion Equation (18) yields the corresponding steady-state relation at ζ = 0. The subscript "o", as before, designates the steady-state solutions. Since the mushy layer can be dynamically unstable as a whole object when its steady-state velocity oscillates around u s , let us analyze the dynamic stability of crystallization with a discontinuity surface below (Figure 2b). In this case, the perturbations have the same form with E = exp(ωτ) and C l = C l A E exp(βζ), where β is the perturbation amplification/damping coefficient for the impurity concentration and C l A stands for the amplitude of its perturbations. Now substituting the perturbations into the boundary conditions (4), (19) and (20), we obtain four equations at ζ = 0 for the perturbation amplitudes. As before, Equations (8) and (10) take place. The other two equations are (22) and (23), where f ′′ = 0 in the case of the linear liquidus equation.
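The derivation that follows eliminates the perturbation amplitudes from the four homogeneous linear Equations (8), (10), (22) and (23); nontrivial amplitudes exist only if the determinant of the coefficient matrix vanishes, which is what produces the relation for ω. A generic symbolic sketch of that solvability step is given below; the 3 × 3 matrix is filler with the same structure, not the actual boundary-condition equations of the paper.

```python
import sympy as sp

# Generic solvability step: the perturbation amplitudes satisfy a homogeneous
# linear system M(omega) a = 0, so nontrivial amplitudes require det M = 0,
# which is the relation that determines omega. The 3 x 3 matrix below is
# symbolic filler, NOT Equations (8), (10), (22) and (23) of the paper.
omega, beta_s, beta_l = sp.symbols('omega beta_s beta_l')
a11, a12, a13, a21, a22, a23, a31, a32, a33 = sp.symbols(
    'a11 a12 a13 a21 a22 a23 a31 a32 a33')

M = sp.Matrix([
    [omega + a11, a12,          a13],
    [a21,         beta_s + a22, a23],
    [a31,         a32,          beta_l + a33],
])

dispersion_relation = sp.Eq(sp.det(M), 0)       # the solvability condition
omega_root = sp.solve(dispersion_relation, omega)
print(sp.simplify(omega_root[0]))
```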
Equations (1), (2) and (18) enable us to express β l (ω), β s (ω) and β(ω) as follows: As before, the signs of β l , β s and β determine the direction of the perturbations propagating to the discontinuity interface from the solid or liquid phases. Now eliminating the perturbation amplitudes from Equations (8), (10), (22) and (23), we arrive at the following expression for ω: The sign of ω (or the real part of ω) in Equation (27) defines the process stability/instability when it comes to small dynamic perturbations (stability occurs at ω < 0 and instability occurs at ω > 0). Analyzing expression (27) for the isopropanol solution, we found only the real roots of this equation. They are shown in Figure 3 in the plane ω(u s ) for different temperatures θ ∞ . As is easily seen, the amplification rate can be either positive or negative for any given value of θ ∞ . Moving from left to right along the horizontal axis u s , we see that at first, ω is positive (solid line) up to the bifurcation point of the solution, which is marked by the symbol "black square, " (Figure 3a). Then, at the bifurcation point, the solution splits into two branches marked with dashed lines (the unstable branch continues in the region ω > 0 and the stable branch jumps into the region ω < 0). Then, both of these branches remain in their own semiplanes. As this takes place, the stable branch (ω < 0) asymptotically approaches zero from below (Figure 3b). An important point is that β s < 0, β l < 0 and β < 0 for all points in Figure 3. As before, it means that a perturbation appearing at the cooled wall at ζ < 0 propagates across the discontinuity interface to the liquid phase and leads to dynamic perturbations for a single solution to the problem (the solid line in Figure 3a). When the crystallization velocity u s is to the right of the bifurcation point, there are two possibilities: (i) the process jumps into the stable region ω < 0 or (ii) the process stays in the unstable region ω > 0. Which path the system chooses depends on convection, which can either (i) equalize the temperature and concentration fields in the liquid and lead to stability or (ii) create local irregularities in these fields near the discontinuity surface and lead to instability. In other words, the perturbation coming from the cooled wall can either attenuate (ω < 0, stability) or enhance (ω > 0, instability) due to convection. In the latter case, the discontinuity interface becomes dynamically unstable, and the quasistationary crystallization process with constant velocity u s is destroyed. Conclusions In summary, we study how small morphological/dynamic perturbations influence the directional crystallization process in a finite domain with allowance for vigorous convection in liquid. First of all, we investigate whether the planar solid/liquid interface moving in a steady-state manner is stable when it comes to small morphological perturbations in the temperature and concentration fields, as well as to perturbations in the fluid flow velocity. To perform this, we derive the dispersion relation defining the amplification rate (the frequency of perturbations) as a function of the perturbation wavenumber and other system parameters. Analyzing this relation for the isopropanol solution, we conclude that ω is positive in a broad range of wavenumbers. This solution corresponds to the negative values of the perturbation amplification/damping coefficients β s and β l . 
The negative signs of these coefficients show that the morphological perturbations propagate along the crystallization direction ζ. Namely, when the temperature fluctuates on a cooled wall (at ζ < 0) at some distance from the phase interface, this perturbation propagates to the liquid phase and makes the solid/liquid interface morphologically unstable. In addition, we show that the planar solid/liquid interface can be unstable when it comes to dynamic perturbations with a zero wavenumber (perturbations in the steady-state crystallization velocity u s ) in a broad range of u s . However, the unstable branch of solutions coexists with the stable one. As this takes place, the crystallizing melt/solution chooses one of them depending on the influence of convection, which can either strengthen or weaken a dynamic perturbation coming from the solid phase (cooled wall). This morphological/dynamic instability evolves with time and leads to the formation of a two-phase (mushy) layer between the purely solid material and the liquid (see the experiments in ref. [34]). The crystallization process then takes place in the presence of this mushy layer. To study the stability of such a process when it comes to dynamic perturbations (perturbations in the steady-state crystallization velocity with a mush), we carry out a linear dynamic stability analysis with convection. The result of this theory, where a mushy layer is replaced by a discontinuity interface reflecting its properties, is the equation for the amplification rate as a function of the process and physical parameters. Analyzing this equation, we see that the dynamic perturbations can propagate from the solid to the liquid material. Namely, if a perturbation appears on the cooled wall at ζ < 0, it decreases and propagates to the liquid phase at ζ > 0 with β s < 0, β l < 0 and β < 0. This perturbation can evolve with time and lead to dynamic instability in a wide range of crystallization velocities u s . In addition, a bifurcation of solutions occurs at a certain velocity u s , and the system has two branches of solutions (unstable and stable) that coexist simultaneously. The system chooses one of them depending on convection, which strengthens or weakens the perturbations in the cases of instability and stability, respectively. As a result, the discontinuity interface mimicking a mushy layer can be unstable when it comes to small dynamic perturbations in the crystallization velocity; i.e., the steady-state process with a mush breaks down and a more complex crystallization scenario with an unsteady velocity and a variable mushy layer thickness can occur. Let us summarize below the main assumptions used in the developed theory. (1) The steady-state distributions of the temperature and impurity concentration depend on only one spatial coordinate ζ directed along the crystallization axis. (2) The convective heat flux is defined by using the experimentally valid expression (5). (3) Mass transfer in the liquid phase is described by a convective-type equation of impurity diffusion (18), which is valid for slow local-equilibrium crystallization processes (at u s ≪ V D , where V D is the solute diffusion velocity in bulk liquid [47,48]). (4) The temperature of the phase transformation at the solid/liquid interface does not depend on atomic kinetics, which is the case for crystallization velocities that are not too high (u s ≪ V D ).
(5) All the thermophysical parameters of the solid and liquid phases are assumed to be constant near the phase transformation temperature. (6) The morphological/dynamic perturbations are sufficiently small compared to the characteristic steady-state solutions, which allows us to expand the required functions in a Taylor series near the solid/liquid interface. As a result of this work, the main conclusion can be summarized as follows. A planar solid/liquid interface (mushy layer) can be unstable when it comes to small morphological/dynamic perturbations in the presence of vigorous convection in the liquid when considering a finite crystallization domain. The evolution of the instability is schematically illustrated in Figure 4. As this takes place, the key factor here is the finite process domain with solid walls, which is characteristic of any real crystallization phenomenon occurring in nature, laboratory or industrial facilities. Namely, perturbations originate on the solid walls due to fluctuations in the heat and mass transfer equipment, and these perturbations lead to morphological/dynamic instability. If we were to carry out this study in an infinite domain, we would be forced to discard the solutions found here because of the requirement that the temperature and concentration perturbations remain bounded at an infinite distance from the solid/liquid interface in both phases. This allows us to conclude that the finiteness of the crystallization domain is one of the main factors influencing the morphological/dynamic instability. This means that a number of theories of stable/unstable crystallization with convection (and perhaps even in its absence) need to be revised to take a limited crystallization domain into account, such as the theory of the stable growth mode of a dendritic crystal tip, which allows for the selection of the tip velocity depending on the tip curvature and supercooling [49][50][51][52]. Another example is the morphological stability of the ice/ocean interface when it comes to morphological perturbations considering a finite depth of liquid [53][54][55]. Appendix A (fragment): the no-slip condition at the perturbed interface gives v ζ = v ζo + (∂v ζo /∂ζ) Σ + v ζ ′ = 0 at ζ = 0, or, equivalently, v ζ ′ = V ζ Σ at ζ = 0. By substituting the perturbations here, we come to expression (14) at ζ = 0.
A mock redshift catalogue of the dusty star-forming galaxy population with intrinsic clustering and lensing for deep millimetre surveys We present a new cosmologically motivated mock redshift survey of the Dusty Star-Forming Galaxy population. Our mock survey is based on the Bolshoi–Planck dark-matter halo simulation and covers an area of 5.3 deg^2. Using a semi-empirical approach, we generate a light cone and populate the dark-matter haloes with galaxies. Infrared properties are assigned to the galaxies based on theoretical and empirical relations from the literature. Additionally, background galaxies are gravitationally lensed by dark-matter haloes along the line-of-sight assuming a point-mass model approximation. We characterize the mock survey by measuring the star formation rate density, integrated number counts, redshift distribution, and infrared luminosity function. When compared with single-dish and interferometric observations, the predictions from our mock survey closely follow the compiled results from the literature. We have also directed this study towards characterizing one of the extragalactic legacy surveys to be observed with the TolTEC camera at the Large Millimeter Telescope: the 0.8 sq. degree Ultra Deep Survey, with expected depths of 0.025, 0.018 and 0.012 mJy beam^-1 at 1.1, 1.4 and 2.0 mm. Exploiting the clustering information in our mock survey, we investigate its impact on the effect of flux boosting by the fainter population of dusty galaxies, finding that clustering can increase the median boosting by 0.5 per cent at 1.1 mm, 0.8 per cent at 1.4 mm and 2.0 per cent at 2.0 mm, with a higher dispersion. INTRODUCTION A key piece to unravel the formation and evolution of galaxies is the study of the dusty star-forming galaxy (DSFG) population. These galaxies are highly obscured by dust, preventing the estimation of total star formation rates (SFRs) at UV-optical wavelengths. However, at submillimetre wavelengths, we can observe the emission of dust grains heated by stars. The SCUBA camera on the James Clerk Maxwell Telescope (JCMT) discovered a subset of DSFGs known as submillimetre galaxies (SMGs, e.g., Smail et al. 1997; Barger et al. 1998; Hughes et al. 1998). These galaxies are found mainly at high redshifts, with a peak at z ∼ 2-3 (Chapman et al. 2003; Aretxaga et al. 2003; Chapman et al. 2005; Aretxaga et al. 2005; Stach et al. 2018; Franco et al. 2020; Gómez-Guijarro et al. 2022), and they exhibit high infrared luminosities (L_IR > 10^12 L_⊙, Smail et al. 1997; Hughes et al. 1998; Magnelli et al. 2013; Gruppioni et al. 2020; Casey et al. 2013), corresponding to extreme SFRs of hundreds and even thousands of M_⊙ yr^-1 (e.g. Smail et al. 1997; Hughes et al. 1998; Barger et al. 2014; da Cunha et al. 2015). For a review of this population see Casey et al. (2014). The SFR density (SFRD) is highly obscured at z ≲ 3, with a contribution from DSFGs reaching approximately fifty per cent at z ≈ 3 (Bourne et al. 2017; Dudzevičiūtė et al. 2020). However, at z ≳ 4 their contribution is still not well constrained since the samples of DSFGs at these high redshifts remain incomplete (e.g. Casey et al. 2014; Gruppioni et al. 2020; Khusanova et al. 2020; Zavala et al. 2021; Fudamoto et al. 2021; Algera et al.
2022).Measuring reliable photometric or spectroscopic redshifts of DSFGs often requires counterpart associations across UV to midinfrared wavelengths.However, detecting high− galaxies at these wavelengths becomes challenging due to their flux densities rapidly dropping with redshift.Additionally, the relatively poor angular resolution and sensitivity of the single-dish (sub-)mm telescopes further complicate the association between the DSFGs and their UV/optical counterparts.The single-dish telescopes observe only the most luminous galaxies in large areas, e.g., SCUBA/SCUBA-2 on the JCMT (Geach et al. 2017), MAMBO on IRAM (Bertoldi et al. 2007), SPT (Vieira et al. 2010), and AzTEC on ASTE (Aretxaga et al. 2011).Interferometric telescopes as ALMA have shown that a considerable fraction of these luminous galaxies are actually the result of blending fainter sources within the larger beam of single-dish telescopes (e.g.Karim et al. 2013;Simpson et al. 2015;Stach et al. 2018): some cases being the result of projected sources in the line-of-sight, while others have been confirmed to be physically associated galaxies potentially tracing over-dense regions of the Universe.Due to their high angular resolution and sensitivity, interferometers allow more detailed studies of DSFGs.However, compared to single-dish observations, interferometric surveys are limited to mapping relatively smaller areas of the sky (hundreds of square arcminutes, e.g.Lindner et al. 2011;Fujimoto et al. 2015;Dunlop et al. 2017;Franco et al. 2018).To obtain a complete sample of DSFGs on both small and large scales and to understand their role not only in the SFRD (Madau & Dickinson 2014;Dunlop et al. 2017;Magnelli et al. 2019;Gruppioni et al. 2020) but also in their physical evolution, there is a need for a link between single-dish and interferometric telescopes. The new camera TolTEC1 (Wilson et al. 2020) installed in the 50m Large Millimeter Telescope Alfonso Serrano (LMT2 , Hughes et al. 2020) observes at three different wavelengths simultaneously (1.1, 1.4 and 2.0 mm), with angular resolutions of 5.0, 6.3 and 9.5 arcsec.TolTEC will image a series of Extragalactic Legacy Surveys, two of which have already been defined: the Ultra Deep Survey (UDS) and the Large Scale Structure Survey (LSS) (Montaña et al. 2019). The UDS survey will be sensitive to typical star forming galaxies (SFR > 10 M ⊙ yr −1 and IR > 10 11 L ⊙ ) with 1 rms depths of 0.025, 0.018 and 0.012 mJy beam −1 at 1.1, 1.4 and 2.0 mm.The goal is to study the history of stellar mass build-up and metal production in massive galaxies during the last 13 Gyr of cosmic time (Montaña et al. 2019).The total survey area will cover ∼ 0.8 deg 2 , observing the CANDELS fields in UDS, GOODS-S, and a 0.5 sq.degree area in COSMOS.On the other hand, the LSS survey will cover a larger area of the sky to investigate the spatial distribution of the starforming galaxies through the cosmic web.It will study galaxies with SFR > 100 M ⊙ yr −1 and IR > 10 12 L ⊙ at 1.1, 1.4 and 2.0 mm with depths of 0.25, 0.18 and 0.12 mJy beam−1 at 1 in rms.The observed areas will be of 3.5 to 10 deg 2 for target individual fields and a total of 40-60 deg 2 for the full survey (Montaña et al. 2019).These two extragalactic legacy surveys will close the gap between single-dish and interferometric observations.Works based on N-body cosmological simulations and different methods to populate galaxies within the dark matter halos (e.g., Cowley et al. 2014;Béthermin et al. 2017;Popping et al. 
2020) or in hydrodynamics simulations (e.g., Lovell et al. 2021), have attempted to recreate the IR properties of the DSFGs and consequently reproduce the integrated number counts.These simulations allow us to understand better the DSFG population, and can be used to study the impact of observational effects (flux boosting, confusion noise, source blending, completeness, cosmic variance, etc.) on the detection of galaxies and different observables, e.g., the measured number counts (Cowley et al. 2014;Béthermin et al. 2017) and angular correlation functions (Cowley et al. 2016(Cowley et al. , 2017)).The flux boosting effect is present when unresolved neighbouring galaxies fall within the same telescope beam and increase the measured flux density.The flux boosting corrections to individual galaxies are commonly estimated through simulations that randomly distribute the galaxies in the observed area (e.g., Coppin et al. 2005Coppin et al. , 2006;;Geach et al. 2017;Zavala et al. 2017;Chen et al. 2022).These latter simulations do not consider the clustering of the DSFGs.However including clustering potentially has an impact on the flux boosting effects since it depends on the likelihood of having nearby galaxies. In this paper, we present a new mock redshift survey of the DSFG population.This mock survey incorporates clustering properties and the gravitational lensing magnification caused by the dark-matter haloes in the line-of-sight on background galaxies, as it is built on a dark-matter cosmological simulation.As discussed above, previous mock surveys have overlooked both of these effects.We take advantage of this new mock redshift survey to characterize the TolTEC/UDS extragalactic legacy survey and to explore the impact that the clustering of DSFGs might have in the determination and correction of flux density boosting effects. In section 2 we describe our procedure to generate our mock redshift survey of the DSFG population.In section 3 we characterize our mock redshift survey and compare it with the SFRD, redshift distribution, number counts, and the luminosity function of DSFGs.In Section 4 we show our predictions for the UDS extragalactic legacy survey of TolTEC and we discuss the different observational effects present in this survey.In section 5 we analyze the impact in the corrections of flux boosting when the clustering of galaxies is taken into account.Finally, we present our conclusions in the Section 6. SIMULATION AND MOCK REDSHIFT SURVEY During the past years, the Simulated Infrared Dusty Extragalactic Sky (SIDES, Béthermin et al. 2017) has been the largest (2 deg 2 ) publicly available simulation of the DSFG population.SIDES accurately reproduces the FIR-mm number counts of the DSFGs and, since it is based on the Bolshoi-Planck cosmological simulation, it therefore includes their clustering properties.SIDES has been widely used to compare and analyze observations of the DSFG population (Béthermin et al. 2020;Chen et al. 2022;Bing et al. 2023;Traina et al. 2024) and has recently been updated to predict their emission line properties (CONCERTO, Béthermin et al. 
2022).The 2 deg 2 covered by SIDES, however, restricts its use to the analysis of relatively small area surveys, limits the predictions of cosmic variance effects, and misses the brightest and more extreme SMGs.The larger areas of future millimetre wavelength surveys, particularly those to be conducted with the TolTEC camera, has motivated the development of the new larger area (5.3 deg 2 ) mock redshift survey presented in this work. In this section we describe the procedure followed to build our mock redshift survey of the DSFG population.First we describe the construction of a lightcone of dark matter haloes and how these haloes are populated with galaxies.We latter explain how starforming galaxies are identified within the mock redshift survey, and the process followed to assign their infrared properties.Finally, we estimate gravitational lensing magnifications and the observed flux densities for each galaxy.In Appendix A1 we highlight the main differences between our model (described in §2.1 - §2.4) and that from the SIDES simulation. Lightcone We use the Bolshoi-Planck cosmological simulation (Klypin et al. 2016;Rodríguez-Puebla et al. 2016b), which has a side length of 250ℎ −1 Mpc, with 2048 3 dark matter particles and a mass resolution of 1.55×10 8 ℎ −1 M ⊙ .The periodic conditions of the Bolshoi-Planck simulation allows us to build a lightcone, covering a wide range of redshifts ( ≲ 7) by repeating the volume of the box in the direction as many times as necessary.The projection of the Bolshoi-Planck box results in a 5.3 deg 2 area.This size is large enough to generate multiple synthetic observations of the maps expected for the UDS.The maximum number necessary to cover a square area, with a side length , is given by: where Integer(x) rounds the value of to the nearest integer.Thus, for a square area of 5.3 deg 2 repetitions = 26, the corresponding comoving volume covered by the lightcone is ≈ 1.15 × 10 5 ℎ −3 Gpc 3 up to redshift 7. Due to the box repetitions, the light-cone projection on the sky can present false radial patterns.To avoid this, we include random displacements of the boxes no greater than 10% of the length of the box in one of the axis as well as several random rotations and permutations of each box.A visual inspection of the lightcone projection shows these are satisfactory solutions.To establish a connection between the galaxies and their host dark-matter haloes, we use an updated version to the semi-empirical galaxy-halo connection developed by Rodríguez-Puebla et al. (2016a, 2017).The general idea is to match the halo number density and the galaxy number density through a particular combination of their properties while at the same time modelling the mass assembly and star formation history of the galaxies.Here we use the maximum circular velocity attained along the main progenitor branch of the halo merger trees ( peak ) and the stellar mass ( ★ ) of the galaxies to make the connection.The above is well known that reproduces the auto-correlation functions of galaxies as a function of stellar mass (see e.g., Reddick et al. 2013;Dragomir et al. 2018;Calette et al. 2021) and when dividing into star-forming and quiescent galaxies (Kakos et al. 2024). 
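A minimal, zero-scatter sketch of the V_peak–M_★ matching described above is given below: (sub)haloes are ranked by V_peak and assigned stellar masses so that the cumulative number densities n(>V_peak) and n(>M_★) agree. The V_peak distribution and the cumulative stellar mass function in the sketch are toy placeholders, and the full Rodríguez-Puebla et al. connection additionally models scatter and the mass-growth histories.

```python
import numpy as np

rng = np.random.default_rng(42)

# Zero-scatter abundance-matching sketch: assign stellar masses to (sub)haloes
# by rank-ordering V_peak so that the cumulative number densities match,
# n(>V_peak) = n(>M_star). The V_peak values and the cumulative stellar mass
# function below are toy placeholders, not the distributions used in the paper.
n_halo = 100_000
box_volume = (250.0 / 0.678) ** 3                   # Bolshoi-Planck box volume, Mpc^3 (h assumed)

v_peak = 60.0 * (1.0 + rng.pareto(2.5, n_halo))     # toy V_peak distribution, km/s
log_mstar_grid = np.linspace(8.75, 12.0, 200)       # log10(M_star / M_sun)
log_cum_smf = -1.8 - 1.1 * (log_mstar_grid - 8.75)  # toy log10 n(>M_star), Mpc^-3

# Cumulative number density of haloes ranked by decreasing V_peak.
order = np.argsort(v_peak)[::-1]
log_cum_halo = np.log10((np.arange(n_halo) + 1.0) / box_volume)

# Invert n(>M_star) at each halo's cumulative density (np.interp wants ascending x).
log_mstar = np.empty(n_halo)
log_mstar[order] = np.interp(log_cum_halo, log_cum_smf[::-1], log_mstar_grid[::-1])

print(f"largest V_peak = {v_peak[order[0]]:.0f} km/s -> "
      f"log10(M_star/M_sun) = {log_mstar[order[0]]:.2f}")
```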
SFRs are determined following the growth in mass of the host haloes and their associated galaxies through the merger trees of the Bolshoi-Planck simulation, dividing the stellar mass growth into two components: 1) − star formation and 2) − star formation due to the merger of smaller galaxies.The history of stellar mass growth is calibrated to reproduce the evolution of the stellar mass function of all galaxies from ∼ 0.1 to ∼ 10, the fraction of quiescent galaxies from ∼ 0.1 to ∼ 4, and the mean of the star-forming main sequence of galaxies from ∼ 0.1 to ∼ 8 (Rodríguez-Puebla et al. in prep.).In this way, each dark-matter halo and subhalo in our lightcone hosts a galaxy with a defined ★ and SFR. It is important to note that the semi-empirical approach does not introduce any modelling of astrophysical processes (as in the case of semi-analytical models) or recipes to physically link halo evolution with galaxy evolution.The approach is based on a continuous statistical matching of the simulated halo and observed galaxy distributions over a wide redshift range.As a result, the average stellar mass growth (in-situ and ex-situ) and the SFR history of galaxies bound to their dark matter haloes in the simulation are predicted.However, despite the empirical nature of the approach, it contains some underlying assumptions, for example, that the IMF (Initial Mass Function) is universal.Remarkably, after introducing observed systematic and statistical uncertainties in the galaxy-halo connection modelling, the integration in time of the SFR histories of galaxies in the mock catalogues is fully consistent with the evolution of the galaxy stellar mass function, not being necessary to invoke a -and mass-dependent IMF. Figure 1 shows the predicted stellar-to-halo mass ratio at different redshifts, where this ratio quantifies the time-integrated efficiency of star formation in halos.As seen, the stellar-to-halo mass ratio is on average independent of redshift.In detail, however, it peaks around halo ∼ 10 12.0 M ⊙ at ∼ 0.25, increases slightly with up to halo ∼ 10 12.5 at ∼ 3, and then it decreases to smaller masses at even higher redshifts.At masses below the peak, the role of star formation feedback is supposed to be crucial, while at higher masses, AGN feedback and virial-shocked gas cooling are supposed to be relevant. Selection of Star-Forming Galaxies For the purposes of this work, it is necessary to make a selection of galaxies that are actively forming stars.To achieve this, we initially calculate the specific star formation rate (sSFR = SFR/ ★ , a measure of the star formation per unity mass) for each galaxy within our mock catalogue.Subsequently, we apply the criterion of Pacifici et al. 
(2016) to preselect star-forming galaxies. This criterion parametrizes the redshift evolution of the sSFR threshold between star-forming and quiescent galaxies, sSFR_min(z), in terms of the age of the Universe t(z) in Gyr at redshift z, with sSFR_min expressed in Gyr^-1. We calculate sSFR_min for each galaxy in our mock catalogue and compare this value with its intrinsic sSFR. If sSFR > sSFR_min, the object is selected as a candidate star-forming galaxy. We then group our mock galaxies in redshift and stellar mass bins, calculate the mean and standard deviation of the sSFR per bin, and make a linear fit to these values. The selected star-forming galaxies will be those that are above the fit minus 0.5 dex. For a better selection, the mean values are recalculated using this last selected subset of galaxies, and a new linear fit is performed. This sigma-clipping process continues until the linear fit converges and no longer exhibits significant variation (Δ < 0.05). The last fit defines the main sequence of our star-forming galaxies. Figure 2 shows the sSFR-M_★ plane for four different redshift bins, and the corresponding main sequences obtained following the procedure described above. We also include a projected density map (number of galaxies per bin of stellar mass and sSFR) corresponding to the galaxies selected as star-forming. For comparison, we show the main sequences by: Speagle et al. (2014), where the authors investigate its evolution in terms of M_★, SFR and time (up to z ∼ 6), using a compilation of 25 studies from the literature; Popesso et al. (2022), who followed the same procedure as Speagle et al. (2014) but with a more extensive collection of studies including SFR indicators from the UV, mid-infrared, and far-infrared; and Cardona-Torres et al. (2023), who used the HST-CANDELS data to constrain the main sequence in the Extended Groth Strip and characterize a sample of faint SMGs selected at 450 and 850 μm from the SCUBA-2 Cosmology Legacy Survey. Our predictions are in good agreement with the main sequences by Speagle et al. (2014) and Popesso et al. (2022), although the latter one drops at M_★ > 10^11 M_⊙, a region that is not well constrained by our mock catalogue. Our main sequence is steeper at z = 0.3-0.6 than that of Cardona-Torres et al. (2023), consistent at z = 1.2-1.5, and slightly shallower at z = 2.5-3.0. Infrared Properties of the Galaxies In this subsection we describe the methodology followed to establish the infrared emission of the galaxies. Throughout the paper we adopt a Chabrier (2003) IMF. Dust-obscured SFR and Infrared Luminosities To determine the fraction of SFR enshrouded by dust (the obscured fraction f_obs = SFR_IR/SFR) for galaxies at 0.5 < z < 2.5, we adopt the empirical relation by Whitaker et al. (2017), who derived f_obs for a mass-complete sample of star-forming galaxies with M_★ ∼ 10^8.7-10^11 M_⊙, selected from the 3D-HST treasury program and Spitzer/MIPS 24 μm maps in the CANDELS fields, finding a strong dependence between f_obs and M_★ and a negligible evolution with z. For z > 2.5 galaxies we use the results from Dunlop et al. (2017), where the authors combined ALMA 1.3 mm observations of the Hubble Ultra Deep Field and HST data in the UV to trace the IR and total SFRD as a function of z. We make a linear fit to their SFRD and SFRD_IR and estimate f_obs as the ratio between both fits (SFRD_IR/SFRD). This is later scaled to match the f_obs predictions from Whitaker et al. (2017) at z = 2.5 for the different stellar masses. Finally, for galaxies at z ≤ 0.5 we extrapolate the relation of Whitaker et al.
(2017), with an upper limit of f_obs = 0.75 for M_★ > 10^10 M_⊙ galaxies, motivated by the analytical approximation from Rodríguez-Puebla et al. (2020). The resulting recipe to estimate f_obs is given by equations (3)-(6), where the two fitted constants take the values (1.96 ± 0.14) × 10^9 and −2.277 ± 0.007. We use f_obs from equations (3)-(6) to estimate SFR_IR (= f_obs · SFR) and derive infrared luminosities for our galaxies following the relation of Kennicutt (1998) scaled to a Chabrier (2003) IMF. We impose a 2000 M_⊙ yr^-1 upper limit on the obscured SFRs. While in large-area surveys there are galaxies with apparent flux densities that suggest SFRs larger than this value, follow-up observations have confirmed that they are gravitationally amplified systems (e.g., Zavala et al. 2018; Vieira et al. 2010; Negrello et al. 2010; Vieira et al. 2013; Harrington et al. 2017). The intrinsically brightest galaxies found in these surveys have equivalent SFRs close to the limit we adopt (e.g., Karim et al. 2013; Barger et al. 2014). Table 1 summarizes the general properties of our mock redshift survey. Dust Temperature Once L_IR has been estimated, the corresponding dust temperature (T_dust) of the galaxies can be inferred. Casey et al. (2018) used a compilation of dusty star-forming galaxies to model the relation between the intrinsic IR luminosity of galaxies and the peak wavelength λ_peak of their SED, λ_peak = λ_0 (L_IR / L_t)^η, where λ_0 = 102.8 ± 0.4 μm, L_t = 10^12 L_⊙ and η = −0.068 ± 0.001. We follow the above relation to associate the intrinsic L_IR of our galaxies with their corresponding λ_peak and, finally, estimate their T′_dust using an approximation to Wien's displacement law (equation 9). The T′_dust obtained with equation (9) must be corrected for heating effects due to the Cosmic Microwave Background (CMB). The temperature of the CMB at z > 4 approaches and can exceed the temperature that the dust grains of these galaxies would have at z = 0, raising their temperature. This increase in T′_dust boosts the dust continuum emission, which implies that the flux densities of the galaxies will be higher than without considering this effect. Following da Cunha et al. (2013), the corrected temperature is T_dust = [ (T′_dust)^(4+β_E) + T_CMB^(4+β_E) ( (1+z)^(4+β_E) − 1 ) ]^(1/(4+β_E)), where T′_dust is the dust temperature of the galaxy in the absence of the CMB, T_CMB = 2.725 K is the CMB temperature at z = 0 and β_E = 1.8 is the dust spectral emissivity index (value adopted from Casey et al. 2018). Observed Flux Density To estimate the observed flux density (S_ν) of our mock galaxies we assume that their spectral energy distribution (SED) is well modelled by a modified black body, f_ν ∝ [1 − e^(−τ(ν))] B_ν(T_dust), where τ(ν) = (ν/ν_0)^β_E is the optical depth, ν_0 = 3 THz is the frequency at which the modified black body changes from optically thick to optically thin and B_ν(T_dust) is the Planck function. The observed flux density of our galaxies at frequency ν is then estimated through relation (12), where f_ν is the intrinsic SED of the galaxies (equation 11), L_IR is their infrared luminosity, z is their redshift and D_L is the luminosity distance. The total infrared luminosity is defined as the integral of the SED from 8 to 1000 μm. Gravitational Lensing To introduce gravitational lensing effects in our mock redshift survey, we follow Narayan & Bartelmann (1996) and consider amplifications produced by dark-matter haloes in the line of sight (under a point-mass model approximation) on the background galaxies.
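Before turning to the lensing geometry, the sketch below strings together the L_IR → λ_peak → T_dust (with the CMB correction) → modified-black-body → S_ν steps described in the previous subsections. The quoted parameter values (λ_0, L_t, η, ν_0, β_E, T_CMB) come from the text, while the Wien-like λ_peak-to-temperature conversion, the flux normalization and the choice of a Planck15 cosmology are standard forms assumed here and need not coincide with the paper's exact equations (9), (11) and (12).

```python
import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import Planck15   # placeholder cosmology, not the Bolshoi-Planck one

beta_E = 1.8                  # dust emissivity index (from the text)
nu_0 = 3.0e12 * u.Hz          # optically thick/thin transition frequency (from the text)
T_cmb0 = 2.725 * u.K          # CMB temperature at z = 0 (from the text)

def lambda_peak(L_IR):
    """Peak wavelength of the SED: lambda_0 (L_IR / L_t)^eta (Casey et al. 2018 values)."""
    return 102.8 * u.micron * (L_IR / (1e12 * u.Lsun)).decompose() ** (-0.068)

def T_dust(L_IR, z):
    """Dust temperature: Wien-like conversion (assumed) + da Cunha et al. (2013) CMB correction."""
    T0 = (const.b_wien / lambda_peak(L_IR)).to(u.K)
    p = 4.0 + beta_E
    return ((T0 ** p + T_cmb0 ** p * ((1 + z) ** p - 1)) ** (1 / p)).to(u.K)

def modified_blackbody(nu, T):
    """Unnormalised modified black body (1 - exp(-tau)) B_nu(T), tau = (nu/nu_0)^beta_E."""
    tau = (nu / nu_0).decompose() ** beta_E
    B = 2 * const.h * nu ** 3 / const.c ** 2 / (np.exp(const.h * nu / (const.k_B * T)) - 1.0)
    return (1.0 - np.exp(-tau)) * B

def observed_flux(L_IR, z, lam_obs):
    """S_nu at observed wavelength lam_obs, normalising the SED to L_IR over 8-1000 um
    (standard normalisation assumed here, not necessarily the paper's equation 12)."""
    nu_rest_grid = np.logspace(np.log10(3.0e11), np.log10(3.75e13), 2000) * u.Hz  # 1000-8 um
    T = T_dust(L_IR, z)
    sed = modified_blackbody(nu_rest_grid, T)
    norm = np.sum(0.5 * (sed[1:] + sed[:-1]) * np.diff(nu_rest_grid))             # integral of f_nu dnu
    f_nu = modified_blackbody((const.c / lam_obs).to(u.Hz) * (1 + z), T)
    D_L = Planck15.luminosity_distance(z)
    return ((1 + z) * L_IR.to(u.W) * f_nu / norm / (4 * np.pi * D_L ** 2)).to(u.mJy)

print(observed_flux(3e12 * u.Lsun, 2.5, 1.1 * u.mm))   # e.g. a bright DSFG at z = 2.5
```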
In order to identify halo–galaxy alignments, we set a search radius of 5 arcsec, with origin at the position of each halo. We find that background sources at angular distances larger than this value will have amplifications smaller than 20 per cent for the most massive haloes (M_halo > 10^14 M_⊙), and therefore do not have a strong impact on our results. For each halo with identified background galaxies, we can calculate its corresponding Einstein radius assuming a point-mass approximation (Narayan & Bartelmann 1996), θ_E = [ (4GM/c^2) D_ds / (D_d D_s) ]^(1/2), with G the gravitational constant, c the speed of light, M the halo mass, and D_d, D_ds and D_s the angular diameter distances between observer and lens, lens and source, and observer and source, respectively. The angular positions θ_± of the two lensed images are given by θ_± = [ β ± (β^2 + 4θ_E^2)^(1/2) ] / 2, with β being the angular separation between the optical axis defined by the observer and the lens, and the true source position. Combining equations (13) and (14), the magnification of each image is μ_± = [ 1 − (θ_E/θ_±)^4 ]^(−1), with the total magnification given by μ = |μ_+| + |μ_−|. TESTS OF THE MOCK CATALOGUE In order to characterize the performance of our mock redshift survey, we measure the predicted cumulative number counts, star-formation rate density, total infrared luminosity function and redshift distribution, and compare them with observational results. Number Counts Figure 3 shows the number counts estimated from our 5.3 deg^2 mock redshift survey. Given the current uncertainties and the dispersion between the different observational results, our mock catalogue accurately reproduces the measured number counts. At the faint end (S_1.1 ≲ 0.1 mJy), our predictions fall below the 1.2 mm ALMA results from Fujimoto et al. (2015). Their measurements, however, were derived from 120 pointings (mainly archival data) towards different astronomical sources, potentially biasing their results. Furthermore, the surveyed area available to estimate the number counts at S_1.1 ≲ 0.1 mJy was ∼ 2 sq. arcmin, which may introduce sampling variance uncertainties. On the other hand, at S_1.1 ∼ 0.1 mJy, the 1.2 mm ALMA number counts from González-López et al. (2020, 2022) identified faint sources around ALMA calibrator maps (ALMACAL project). Both studies are based on the analysis of small-area individual maps, and are also potentially affected by sampling variance effects. The bright end of the number counts has been mainly traced by single-dish observations, e.g. the 5 sq. degree SCUBA-2 Cosmology Legacy Survey including seven independent fields (0.07-2.22 sq. deg; Geach et al. 2017), the 1.6 sq. degree AzTEC JCMT-ASTE survey of seven blank fields (0.08-0.72 sq. deg) and 3.1 sq. degrees towards 37 potentially overdense regions (Scott et al. 2012), and the recent 0.32 sq. degree NIKA2 Cosmological Legacy Survey including GOODS-N and COSMOS. Interferometric observations have explored this bright end through follow-up observations of the brightest sources detected by single-dish telescopes. For example, Simpson et al. (2015) presented 870 μm ALMA pointed observations of 30 galaxies selected at 850 μm from the SCUBA-2 Cosmology Legacy Survey, finding that 61 per cent of the selected sources were the result of blending two or more fainter DSFGs. Karim et al.
(2013) reached a similar conclusion through 870m ALMA follow-up observations of 122 SMGs selected from the Large Apex BOlometer CAmera (LABOCA) Extended Chandra Deep Field South submillimetre survey (LESS), with all of the brightest sources ( 870 ≳ 12 mJy) comprising emission from multiple fainter galaxies.At the bright end ( 1.1 ≳ 5 mJy), our mock catalogue starts to deviate from the general trend of the observed number counts and from our predictions if no gravitational lensing effects are introduced.This transition therefore indicates where gravitational lensing effects starts to have a strong impact on the measured number counts, although a small contribution from intrinsically bright galaxies is also expected given the larger area of our mock redshift survey compared to the observational measurements (i.e., observing larger areas increases the probability of finding brighter sources and strongly lensed galaxies; Clements et al. 2010).The bottom panel of Figure 3 shows the ratio between our mock redshift survey with and without lensing: at 1.1 > 5 mJy, the number counts start to be dominated by gravitationally lensed galaxies, which can increase the un-lensed number counts by a factor of up to 26 at 1.1 ∼ 10 mJy. The number counts from our mock redshift survey are also in good agreement with those from SIDES (Béthermin et al. 2017) at 1.1 ≳ 0.05 mJy.The difference at lower flux densities can be the result of the different mass resolution between the mock catalogues: SIDES includes galaxies with ★ ≥ 10 8 M ⊙ whereas our new redshift survey has a resolution of ★ ≥ 10 8.75 M ⊙ .It is worth noting that while the Bolshoi-Planck simulation resolves haloes down to peak ∼ 60 km/s, which corresponds to haloes of ★ ∼ 4 × 10 7 M ⊙ , the scatter (the product of intrinsic evolution plus random errors) around the ★ − peak relation allows for a conservative completeness in stellar mass terms down to ★ ≥ 10 8.75 M ⊙ . In order to explore the relative contribution of galaxies with different stellar masses to the overall population of DSFGs, Figure 4 shows the 1.1, 1.4, and 2.0 mm number counts measured from our mock redshift survey for different ★ bins.As expected, a general trend is seen for the three wavelengths, with the faint end (e.g. 1.1 ≲ 0.1 mJy) being dominated by the lower mass log( ★ /M ⊙ ) = 9.5 − 10.0 population, and more massive galaxies increasingly contributing towards larger flux densities.Interestingly, the interval 1.1 ∼ 1 − 10 mJy, which has been mainly probed by typical single-dish surveys, has an almost equivalent contribution by the three stellar-mass bins in the range log( ★ /M ⊙ ) = 9.5 − 11.0, with a small contribution from log( ★ /M ⊙ ) = 11.0 − 11.5 galaxies and the most massive log( ★ /M ⊙ ) > 11.5 population having a negligible contribution to the number counts at all flux densities.Finally, Figure 4 indicates that the 1.1 ∼ 0.1 mJy detection limit of the TolTEC Ultra Deep Legacy Survey will allow us to probe the relatively low mass (9.5 ≲ log( ★ /M ⊙ ) ≲ 10) population of DSFG, and study their properties in a sample of a few tens of thousands of galaxies. SFR History The total SFRD (IR+UV) as a function of measured from our mock redshift survey is presented in Figure 5, showing the relative contributions from the IR and UV (including the effects of dust attenuation) SFR.Our predictions are consistence with different observational results and models from the literature (e.g., Zavala et al. 
2021), in particular with the fact that the dust-obscured star formation activity dominates the star formation history of the Universe for the last ∼ 12.4 Gyr (i.e.since ≈ 4.5).For comparison, Figure 5 Redshift Distribution To test the redshift distribution of DSFGs predicted by our mock catalogue, we reproduce, in terms of depth and total area, three recent millimeter wavelength surveys that probe the faint and bright population of DSFG: Brisbin et al. (2017), Gruppioni et al. (2020), andFranco et al. (2020).For each of these surveys, we generate ten independent realizations and estimate the average redshift distribution.Figure 6 presents our results and the comparison is discussed below.As an additional reference, the redshift distribution measured on the complete 5.3 sq.degree mock redshift survey and using the total area of SIDES are also shown. As it has been mentioned, Gruppioni et al. (2020) studied a sample of 56 serendipitously detected sources in the ALPINE survey of high redshift ( = 4.4 − 5.9) galaxies.ALPINE reached a sensitivity equivalent to that expected for the TolTEC UDS survey ( 1.1 ∼ 0.1 mJy), and the total area where serendipitous sources were detected (i.e.excluding the central region of the high- target) corresponds to 25 sq.arcmin.Gruppioni et al. (2020) estimated a median redshift of med = 2.3 ± 0.3 for their sample, and our mock redshift surveys predict med = 2.41±0.12(a similar value is measured over the whole 5.3 sq.deg mock redshift survey).While these median redshifts are in good agreement, it should be noted that the results from Gruppioni et al. (2020) may be biased by groups of galaxies gravitationally lensing the ALPINE targets or sources associated to these high- systems.These potential biases, combined with sampling variance effects due to the small survey area, can impact the overall shape of the observed redshift distribution.On the other hand, the SIDES simulation predicts a larger number of low- DSFGs resulting in a lower median redshift of med = 2.1. In the case of Franco et al. (2020), the authors took advantage of 3.6 and 4.5 m Spitzer/IRAC and 3 GHz Very Large Array data to extend the catalogue of sources from the 69 sq.arcmin GOODS-ALMA survey (Franco et al. 2018), reducing the detection limit down to 1.1 ≥ 0.63 mJy.The final catalogue includes 35 DSFGs with photometric and spectroscopic redshifts in the range = 0.65 − 4.73.Interestingly, including the fainter population, identified by means of the near-IR and radio data, decreased the median redshift of the sample from med = 2.73 to med = 2.40.In our case, the mean distribution of our mock redshift surveys has a median value of med = 2.67 ± 0.11, and med = 2.74 for the complete mock redshift survey.Our predictions are therefore slightly higher but still consistent with the observational results, particularly with the original catalogue of 1.1 ≥ 0.88 mJy ALMA sources.The SIDES mock catalogue, on the other hand, shows a larger population of lower redshift galaxies and predicts med = 2.44, in better agreement with the extended catalogue of Franco et al. (2020).As in the case of Gruppioni et al. (2020), the redshift distribution shows sensitivity to variations around the peak value due to the small map sizes, where the cosmic variance dominates. Finally, Brisbin et al. 
(2017) presented ALMA 1.25 mm follow-up observations of 129 sources detected with 1.1 ≳ 3.5 mJy in the 0.72 sq.degree AzTEC/ASTE maps of COSMOS (FWHM ∼ 30 arcsecond).The high sensitivity (1 1.25 ∼ 0.15 mJy) and angular resolution of the ALMA observations resulted in the detection of 152 galaxies, 143 of them with either spectroscopic or accurate photometric redshifts, and with a median redshift med = 2.48 ± 0.05. Our mock redshift survey predicts a considerably higher value of med = 2.95 ± 0.09 while the SIDES mock catalogue predicts a closer value of med = 2.66. Infrared luminosity function To measure the infrared luminosity function in our mock redshift survey we directly count the number of galaxies in IR and bins.Figure 7 shows the results compared to different observational studies.We find that our mock redshift survey predicts more luminous galaxies than the infrared luminosity function from Koprowski et al. (2017), which is based on the ∼1.5 sq.degree SCUBA-2 Cosmology Legacy Survey (Geach et al. 2017) and the deep 1.3 mm ALMA observations of the HUDF (covering 4.5 arcmin 2 ; Dunlop et al. 2017) to constrain the faint end.Gruppioni & Pozzi (2019), however, suggest that the single SED template used by Koprowski et al. (2017) to estimate luminosities may have biased their results.Furthermore, the 850 m SCUBA-2 observations are less sensitive to 'warm' galaxies, a population that tends to dominate the bright end of the luminosity function, as suggested by 70, 100 and 160 m Herschel observations (Gruppioni et al. 2013).On the other hand, compared to our predictions, the luminosity function derived from the analysis of serendipitous detections around the ALPINE targets (Gruppioni et al. 2020), shows a systematic excess of luminous galaxies ( IR ≥ 10 12 L ⊙ ) through all redshift bins.Towards fainter luminosities, however, their curve flattens and starts falling below our predictions, particularly at < 2.5.Finally, Rodríguez-Puebla et al. ( 2020) used a compilation of several observational results at < 4.2 (see Table 2 of that work) to constrain analytical models of the infrared luminosity function.Their compilation includes data from Spitzer (24 and 70m), AKARI (90m), Herschel (70, 100, 160, 250, 350 and 500m), and SCUBA-2 (450m).The luminosity function predicted by our mock redshift survey is in very good agreement, given the uncertainties and dispersion of the data, with this multi-wavelength compilation. To better constrain the evolution of the luminosity function, larger statistical samples are still required, particularly at the higher redshifts (i.e. > 2.5).In §4.3 we take advantage of our mock redshift survey to make predictions on the overall improvement that the TolTEC UDS will represent in our understanding of the evolution of the luminosity function. 
Clustering characterization To characterize the clustering properties of the DSFGs in our mock redshift survey we estimate their angular two-point auto-correlation function w(θ). This statistical tool measures the excess probability of finding a pair of galaxies separated by an angular scale θ, compared to a uniform, non-clustered distribution. To compute w(θ) we use the estimator presented in Landy & Szalay (1993): w(θ) = [DD(θ) − 2DR(θ) + RR(θ)]/RR(θ), where DD, RR and DR are the (normalized) numbers of galaxy pairs in the clustered, uniform and cross-matched distributions, respectively. To improve the efficiency of the calculations, considerably reducing computing time, the algorithm of He (2021) was adapted to analytically estimate RR. We also include integral constraint corrections (Roche & Eales 1999), which are particularly important to consider when w(θ) is measured at angular scales of the order of the map size. We compare our measurements with the results of An et al. (2019), based on a sample of SMGs and star-forming galaxies (M★ > 10^10.5 M⊙) identified in the 2 sq. degree COSMOS field using a machine-learning technique trained with deep SCUBA-2 and ALMA data. Due to the large mapped area, sample size, and available optical/IR photometric redshifts, this currently represents the most statistically robust clustering measurement of the DSFG population. Figure 8 compares the angular two-point auto-correlation functions measured by An et al. (2019) in the z = 2-3 redshift bin, and that of M★ > 10^10.5 M⊙ sources in a 2 sq. degree area of our mock redshift survey. The uncertainties in our measurements are estimated through a bootstrap analysis of 50 maps drawn from the larger 5.3 sq. degree mock redshift survey, therefore taking into account cosmic variance effects due to the finite size of the survey area. As a complementary comparison, we estimate the clustering of DSFGs from the SIDES simulation, using 1 sq. degree maps for the bootstrap analysis due to the smaller size of the simulation. Figure 8 shows that the clustering properties of the DSFGs in our mock redshift survey are in good agreement with those measured for the SMGs and SFGs identified in the COSMOS field. We further fit a power law of the form w(θ) = A θ^{−0.8} to the different data sets, and find equivalent clustering amplitudes (within the uncertainties) between them: A = 4.38 ± 0.67 for our mock catalogue, and A = 4.66 ± 0.59 and A = 5.52 ± 0.38 for the SFG and SMG samples in An et al. (2019). As mentioned above, our error bars are larger since they are estimated from 50 realizations of 2 sq. degree maps and hence consider cosmic variance. A similar agreement is found for the other two redshift bins (z = 1-2 and z = 3-5) presented in An et al. (2019), although these are affected by poorer statistics at the higher redshifts. PREDICTIONS FOR THE UDS/TOLTEC SURVEY In this section we take advantage of the mock redshift survey developed in §2 to make predictions of the constraints that the TolTEC UDS will place on the number counts, infrared luminosity function, and redshift distribution of the DSFG population. The UDS will consist of deep (1σ_1.1 ≈ 0.025 mJy) observations on a larger 0.5 sq. degree area of COSMOS and smaller (≳ 200 arcmin²) maps towards other CANDELS fields. We adopt these two map sizes and, for each area, generate 10 independent realizations in order to provide mean values and sampling variance for each observable. Considering a 4σ detection limit, the UDS will be sensitive to the faint end (S_1.1 ∼ 0.1 mJy) of the number counts, probing a regime that has only been sampled by interferometers (e.g. Fujimoto et al. 2015; Hatsukade et al. 2016; González-López et al. 2020; Zavala et al. 2021; Chen et al.
2022). Although these observations benefit from higher angular resolutions, and therefore a reduced confusion noise that makes them sensitive to a fainter population, their relatively small areas (≲ 100 sq. arcminutes) have limited the studied samples to ≲ 150 sources. Most of these surveys are therefore potentially affected by sampling variance effects. With a ∼30 times larger area, the UDS will detect thousands to tens of thousands of DSFGs with S_1.1 = 0.1-1.0 mJy, allowing us to place strong constraints on this faint regime of the number counts. UDS/TolTEC number counts On the other hand, although the total area of the UDS will not be able to impose statistically robust constraints on the brightest and more extreme population of SMGs (S_1.1 ≳ 10 mJy), it will sample and constrain the range of flux densities probed by typical single-dish surveys (S_1.1 ∼ 1 mJy). Furthermore, the improved angular resolution (FWHM_1.1 ≈ 5 arcsec) of the TolTEC observations will reduce the impact of source blending in the measured number counts, an effect that has been confirmed to bias high the measurements from lower angular resolution (FWHM > 10 arcsec) single-dish observations (e.g. Karim et al. 2013; Brisbin et al. 2017; Barger et al. 2022). The UDS will therefore provide an important link between the faintest population of DSFGs probed by the deepest small-area interferometric surveys (e.g. Fujimoto et al. 2015; González-López et al. 2020), and the brighter, gravitationally lensed SMGs identified in larger-area but lower angular resolution single-dish observations (particularly at the longer wavelengths, e.g. Vieira et al. 2010). The cosmic variance in the UDS/TolTEC surveys In Figure 10 we present the number counts at 1.1 mm for ten independent areas of 200 arcmin² and 0.5 deg² (representative of the smaller and larger areas considered for the UDS) along with the number counts for the total area of the mock redshift survey. We also add the residual fraction (Δ_N) obtained by subtracting the total number counts from each realization and normalizing by the total. Our mock surveys predict that, for the smaller 200 arcmin² maps, cosmic variance can introduce small relative variations of Δ_N < 0.3 at S_1.1 < 2 mJy, systematically increasing with flux density and reaching Δ_N ∼ 2 at 2 mJy < S_1.1 < 7 mJy. For higher flux densities the variation can be more significant, Δ_N ∼ 45. In the case of the 0.5 sq. degree map, cosmic variance starts to noticeably affect the counts (Δ_N ∼ 1) at ∼10 mJy, reaching Δ_N ∼ 4.3 at the highest flux densities. Figure 9 caption (continued): The shaded areas represent the 1σ dispersion from ten realizations, and the vertical lines indicate the expected 4σ detection limit of the UDS. As an additional reference, we include the total number counts of the mock redshift survey (black solid line). The empty squares and circles are observations from interferometric and single-dish telescopes, respectively. The solid circles correspond to SPT measurements, known to be dominated by a gravitationally lensed population of galaxies (Vieira et al. 2010). TolTEC/UDS will place strong constraints on the number counts from S_1.1 ∼ 0.1 mJy to a few mJy, linking the results from large-area single-dish surveys sensitive to the rare and brighter SMGs down to the fainter population of DSFGs probed by small-area deep interferometric observations.
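As a concrete illustration of the clustering measurement described in the 'Clustering characterization' section above, the following is a minimal sketch of the Landy & Szalay (1993) estimator for w(θ) on a small flat patch of sky. The brute-force pair counting, the flat-sky approximation for angular separations, and all variable names are our own simplifying assumptions; the actual analysis relies on the analytic RR estimate of He (2021), integral-constraint corrections and bootstrap errors, none of which are reproduced here.

```python
import numpy as np

def pair_counts(a, b, bins, same=False):
    """Normalized pair counts between two sets of (x, y) positions [deg].

    Small-field (flat-sky) approximation: angular separation ~ Euclidean distance.
    """
    d = np.hypot(a[:, None, 0] - b[None, :, 0], a[:, None, 1] - b[None, :, 1])
    if same:
        # keep each pair once and drop self-pairs
        iu = np.triu_indices(len(a), k=1)
        d = d[iu]
        norm = len(a) * (len(a) - 1) / 2.0
    else:
        d = d.ravel()
        norm = float(len(a) * len(b))
    counts, _ = np.histogram(d, bins=bins)
    return counts / norm

def w_theta_landy_szalay(data, randoms, bins):
    """w(theta) = (DD - 2DR + RR) / RR with normalized pair counts."""
    dd = pair_counts(data, data, bins, same=True)
    rr = pair_counts(randoms, randoms, bins, same=True)
    dr = pair_counts(data, randoms, bins, same=False)
    return (dd - 2.0 * dr + rr) / rr

# toy usage: 500 'galaxies' and 2000 randoms in a 1.4 x 1.4 deg patch
rng = np.random.default_rng(0)
gals = rng.uniform(0.0, 1.4, size=(500, 2))
rands = rng.uniform(0.0, 1.4, size=(2000, 2))
theta_bins = np.logspace(-3, -0.5, 11)   # degrees
w = w_theta_landy_szalay(gals, rands, theta_bins)
```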
UDS/TolTEC Luminosity Function Similar to the number counts, the different areas and depths of the TolTEC extragalactic legacy surveys will allow us to study different regions of the luminosity function.In particular, the UDS has an infrared luminosity lower limit of ∼ 10 11 L ⊙ , enabling the detection of Luminous Infrared Galaxies (LIRGs), with SFRs of dozens of M ⊙ yr −1 .Figure 11 shows our predictions of the luminosity function we expect to be sensitive to in the two characteristic areas of the UDS.The results are divided in four redshift bins, spanning from = 0.5 to = 4.5, and are compared against the same data sets included in Figure 7.The UDS will therefore constrain very well (i.e. with little cosmic variance) the shape of the luminosity function in the less luminous regime, addressing some discrepancies noted in previous studies (Koprowski et al. 2017;Gruppioni et al. 2020;Rodríguez-Puebla et al. 2020).At IR > 4 × 10 12 L ⊙ , the 200 arcmin 2 area exhibits more variance than the 0.5 deg 2 map, as it becomes challenging to capture brighter galaxies.Nevertheless, the 0.5 deg 2 area will provide more reliable luminosity function estimates up to ∼ IR ∼ 10 13 L ⊙ .While the UDS survey will contribute to probe part of this brighter end of the LF, its capabilities remain limited in this regard.More precise insights into this region of the luminosity function will be obtained with the shallower (1 1.1 ∼ 0.25 mJy) but ∼ 75 times wider LSS/TolTEC surveys. UDS/TolTEC redshift distribution Our predictions of the DSFG redshift distribution expected in the larger 0.5 sq.degree area of the UDS are presented in Figure 12.To explore the dependence of the redshift distribution with flux density, the population of DSFGs in our mock redshift survey is divided in two flux density intervals, a fainter sample with 0.1 < 1.1 [mJy] < 1.0 and a brighter sample with 1.1 ≥ 1.0 mJy.A clear difference between the redshift distributions can be seen, with the fainter population dominating at lower redshifts ( med = 2.16 ± 0.01) and brighter sources being more abundant at higher redshifts ( med = 2.81±0.01).This behavior, which is present in some of the observational results discussed in §3.3, can be attributed to the known cosmic downsizing.That is, the fact that more massive galaxies form earlier and faster than less massive ones (see e.g., Firmani & Avila-Reese 2010). The area and depth of the UDS, combined with the ∼ 5 arcsecond angular resolution of TolTEC on the 50-LMT and deep multiwavelength data available in the targeted fields, will result in a large sample of ∼ 10, 000 galaxies with robust Optical/IR counterparts and photometric redshifts.This will allow us to measure the redshift distribution of DSFGs, and study its dependence with different physical properties, at a level not yet possible with current data sets.The colored shadow areas correspond to the 1 error for the mean.The results for 5.3deg 2 is also presented (black solid line).The different areas with different depths allow us to study the luminosity function in a wide interval of luminosity.We compare our luminosity function with previous results (Koprowski et al. 2017;Gruppioni et al. 2020;Rodríguez-Puebla et al. 2020) mark as different colored symbols (brown squares, yellow circles and green triangles).The discontinuous lines with the same colors than the symbols are fits to the respective data.The vertical line indicates the detection threshold in luminosity for the UDS/TolTEC survey. 
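To illustrate how the flux-split redshift distributions discussed just above can be extracted from a mock catalogue, the sketch below separates sources into the faint (0.1 < S_1.1/mJy < 1.0) and bright (S_1.1 ≥ 1.0 mJy) samples and computes median redshifts with a simple bootstrap error. The array names, the toy catalogue and the use of a plain bootstrap (rather than the ten independent 0.5 sq. degree map realizations used in the paper) are our own assumptions.

```python
import numpy as np

def median_z_with_bootstrap(z, n_boot=1000, seed=1):
    """Median redshift and its bootstrap uncertainty for one flux-selected sample."""
    rng = np.random.default_rng(seed)
    meds = [np.median(rng.choice(z, size=z.size, replace=True)) for _ in range(n_boot)]
    return np.median(z), np.std(meds)

def split_by_flux(z, s11):
    """Faint (0.1-1.0 mJy) and bright (>= 1.0 mJy) subsamples at 1.1 mm."""
    faint = z[(s11 > 0.1) & (s11 < 1.0)]
    bright = z[s11 >= 1.0]
    return faint, bright

# toy catalogue: redshifts and 1.1 mm flux densities [mJy]
rng = np.random.default_rng(2)
z_cat = rng.gamma(shape=4.0, scale=0.6, size=20000)
s_cat = rng.lognormal(mean=-2.0, sigma=1.2, size=20000)

faint, bright = split_by_flux(z_cat, s_cat)
for name, sample in [("faint", faint), ("bright", bright)]:
    med, err = median_z_with_bootstrap(sample)
    print(f"{name}: z_med = {med:.2f} +/- {err:.2f}, N = {sample.size}")
```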
THE IMPACT OF GALAXY CLUSTERING ON FLUX BOOSTING The measured flux densities of sources in a real observation can sometimes be overestimated due to several factors, including the contribution from faint unresolved sources and the influence of neighbouring sources that may be blended together within the telescope beam. This effect, commonly referred to as 'flux boosting', is known to be stronger in sources detected at low signal-to-noise ratio (S/N), as discussed in previous studies (e.g. Hogg & Turner 1998). Most studies rely on simulations to estimate and correct for flux boosting effects (e.g. Zavala et al. 2017; Geach et al. 2017; Smail et al. 2021). Most of these simulations, however, randomly distribute sources on noise maps without taking into account any clustering information, which could potentially bias the flux boosting of sources. Taking advantage of the clustering information in our mock redshift survey, we investigate its impact on the estimation of flux boosting. We measure the effect of flux boosting at the three TolTEC wavelengths (1.1, 1.4 and 2.0 mm), adopting the corresponding angular resolution (FWHM = 5.0, 6.3 and 9.5 arcsec) and expected r.m.s. noise level (σ_r.m.s. = 0.025, 0.018, and 0.012 mJy) of the UDS survey. We then compare our results to a mock redshift survey with randomly distributed sources. We categorize the galaxies in our mock redshift survey into two distinct groups. The first group, referred to as 'candidates', comprises galaxies with flux densities exceeding the r.m.s. noise level in the UDS (i.e., S_1.1 > 0.025 mJy, S_1.4 > 0.018 mJy, S_2.0 > 0.012 mJy). Fainter galaxies, unlikely to be individually detected but still contributing to the background noise, are denoted as 'boosting sources' and are included in the second group. Each candidate source, with intrinsic flux density S_intrinsic, is convolved with a Gaussian telescope beam in order to incorporate the flux density contribution from surrounding fainter sources. The resulting flux density is denoted as S_blend. A noise contribution, S_noise, is then added, so that the resulting 'synthetic' flux density of each source is given by S_synthetic = S_blend + S_noise. Finally, the boosting factor is defined as the ratio between the synthetic and the intrinsic flux densities (S_synthetic/S_intrinsic). The same methodology is applied to the mock redshift survey with the galaxies randomly distributed (i.e. removing clustering information). Figure 13 shows the median boosting factor, with 1σ errors estimated through a bootstrapping analysis, as a function of synthetic flux density (S_synthetic) and signal-to-noise ratio (S/N). Additionally, to model the behaviour of the boosting factor in the mock redshift survey, we fit to our data a power law following Geach et al. (2017) (equation 17), with the fitted parameters for each data set given in Table 2. As expected, fainter galaxies (i.e. S/N ∼ 4) are more likely to be strongly boosted by unresolved neighbouring sources, with median boosting factors of 1.14 (+0.41/−0.23), 1.20 (+0.50/−0.27) and 1.26 (+0.63/−0.30) for the clustered map at the three wavelengths. The median values of the boosting factors are very similar for both clustered and non-clustered populations at 1.1 mm and 1.4 mm. At 2.0 mm, we observe departures at S/N > 7. The larger beam size allows for a larger chance of blending sources. This is the dominant effect in the abrupt departures from the non-clustering solution seen at S/N > 60 (i.e.
2.0 > 0.72 mJy, see Figure 13), where several sources of similar brightness are blended within the same beam.At the largest S/N there are no differences in the median values, since the surface density of bright galaxies is smaller. The errors of the median values of the boosting factors are systematically larger when clustering is considered, specially for the larger S/N values.This indicates the high uncertainty to correct for boosting individual high S/N sources.This is in contrast with the findings of previous studies (Zavala et al. 2017;Geach et al. 2017;Smail et al. 2021), where only faint galaxies were observed to be significantly affected by boosting in maps without clustering. To quantify our results, Figure 13 shows the ratio between the boosting factors estimated for a non-clustered and a clustered population of DSFGs.On average, clustering increases the boosting factor by 0.5±0.1 per cent at 1.1 mm, 0.8±0.1 per cent at 1.4 mm, and 1.6±0.2 per cent at 2.0 mm.Despite being a relatively small correction, including clustering in the flux boosting determinations will systematically improve the flux density measurements, particularly for the lower angular resolution observations. 2.0mm Ratio between errors Figure 13.Left: Median boosting factor vs signal-to-noise ratio for an area of 1 deg 2 obtained at 1.1, 1.4 and 2.0 mm (top, middle, and bottom panels).The magenta dots represent the median boosting factor measured in a map including galaxy clustering, while the blue dots correspond to the median boosting factor when sources are assigned random positions.The pink and blue solid curves are a fits to the data with and without clustering.The vertical bars represent the 1 error in the medians.Middle: ratio between the median boosting factor values without and with clustering of the DSFG population.The red horizontal lines represent the median value of the ratios, with the shaded area representing the 1 error of the median.Right: ratio between positive errors in the median boosting factor. SUMMARY AND CONCLUSIONS In this paper, we present a new cosmological mock redshift survey of the DSFG population based on a lightcone built from the Bolshoi-Planck dark matter halo simulation (Klypin et al. 2016;Rodríguez-Puebla et al. 2016b).The mock redshift survey covers a total area of 5.3 sq.degree and a redshift range = 0 − 7. The merger trees of dark matter haloes are used in order to model the mass assembly and star formation histories of galaxies based on the updated version of semi-empirical model by Rodríguez-Puebla et al. (2017).We used a sigma clipping process to define a star forming main sequence from our mock survey and found that it reproduces previous determina-tions from the literature (Speagle et al. 2014;Popesso et al. 2022;Cardona-Torres et al. 2023).The infrared properties of the galaxies are assigned based on empirical determinations of the dust-obscured fraction of SFR (Whitaker et al. 2017;Rodríguez-Puebla et al. 2020) and the SFRD (Dunlop et al. 2017), the Kennicutt (1998) relation between SFR and IR , the relation between the observed wavelength peak, peak , of the IR SED and IR (Casey et al. 2018) and the Wien's displacement law to obtain the dust temperature (along with the assumption of modified black body for the SED), which are corrected by heating effects due to the CMB (da Cunha et al. 2013).The alignment between dark matter haloes and background galaxies is used to incorporate gravitational lensing magnifications (Narayan & Bartelmann 1996). 
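Returning to the flux-boosting measurement described in the section above, the following is a minimal sketch of the per-source procedure (beam-weighted blending of faint neighbours plus map noise, with the boosting factor defined as S_synthetic/S_intrinsic). The Gaussian noise draw, the toy catalogue layout and all names are illustrative assumptions rather than the exact implementation behind Figure 13, and candidate-to-candidate blending, which matters at the highest S/N, is ignored here for brevity.

```python
import numpy as np

def gaussian_beam_weight(sep_arcsec, fwhm_arcsec):
    """Response of a Gaussian beam at a given angular separation."""
    sigma = fwhm_arcsec / 2.355
    return np.exp(-0.5 * (sep_arcsec / sigma) ** 2)

def boosting_factors(cand_xy, cand_flux, faint_xy, faint_flux,
                     fwhm_arcsec=5.0, rms_mjy=0.025, seed=3):
    """Boosting factor S_synthetic / S_intrinsic for each candidate source.

    cand_*  : positions [arcsec] and intrinsic fluxes [mJy] of detectable sources
    faint_* : positions and fluxes of sources below the r.m.s. ('boosting sources')
    """
    rng = np.random.default_rng(seed)
    # separation of every candidate from every faint source
    sep = np.hypot(cand_xy[:, None, 0] - faint_xy[None, :, 0],
                   cand_xy[:, None, 1] - faint_xy[None, :, 1])
    # beam-weighted contribution of the faint population to each candidate
    s_blend = cand_flux + (gaussian_beam_weight(sep, fwhm_arcsec) * faint_flux).sum(axis=1)
    # add one realization of map noise (assumed Gaussian with the survey r.m.s.)
    s_synthetic = s_blend + rng.normal(0.0, rms_mjy, size=cand_flux.size)
    return s_synthetic / cand_flux

# toy example: 200 candidates and 20000 faint sources in a 600x600 arcsec patch
rng = np.random.default_rng(4)
cand = rng.uniform(0, 600, (200, 2)); cand_s = rng.uniform(0.1, 5.0, 200)
faint = rng.uniform(0, 600, (20000, 2)); faint_s = rng.uniform(0.001, 0.025, 20000)
b = boosting_factors(cand, cand_s, faint, faint_s)
print("median boosting factor:", np.median(b))
```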
This new mock redshift survey accurately reproduces the mm wavelength number counts, from the faint end ( 1.1 ∼ 0.1 mJy) probed by small area interferometric observations to the bright end ( 1.1 ∼ 10 mJy), mostly explored by larger single-dish surveys.Although the total area of the mock redshift survey is not large enough to include a large sample of the brighter and more extreme SMGs ( 1.1 > 10.0 mJy), it does reproduce the expected flattening (at 1.1 ≳ 8 mJy) due to the strongly gravitationally lensed population of galaxies.Furthermore, although the evolution of the IR luminosity functions is still not well constrained by observations, particularly at > 2.5, our mock redshift survey predicts a luminosity function that falls well within the uncertainties and dispersion of the different observational results, from = 0.5 to 4.5 and over two orders of magnitude (log[ IR /L ⊙ ] = 11 − 13). Additionally, our mock redshift survey reproduces the total and IR measurements of the SFR density from = 0−7.Compared to recent studies that have estimated the redshift distribution of faint DSFGs ( 1.1 ≳ 0.2 mJy), our mock redshift survey systematically predicts slightly (≲ 10 per cent) higher median redshift values, suggesting that we might be missing a small fraction of faint low- DSFGs or overestimating the high- population.It should be noted, however, that most of these observational results are based on relatively small (< tens of sq.arcminutes) interferometric surveys, and can therefore be affected by cosmic variance effects.Our mock redshift survey further predicts the effect of cosmic downsizing, with brighter DSFGs being more abundant in the past and a fainter population dominating at lower redshifts. It should be noted that the predictions of our mock redshift survey for the fainter population of DSFGs (i.e. 1.1 ≲ 0.05 mJy and IR ≲ 10 11 L ⊙ ) are potentially underestimated and limited by the mass resolution of the dark matter cosmological simulation (1.55 × 10 8 ℎ −1 M ⊙ ).Similarly, the brighter population (i.e. 1.1 ≳ 10 mJy and IR ≳ 10 13 L ⊙ ) is poorly sampled by the total area of the survey (5.3 sq.degree).A larger mock redshift survey ( ∼ 100 sq.degree) to probe this brighter population and its clustering properties is under development and will be presented in a forthcoming paper. Mock redshift surveys like the one presented in this work, are an essential tool for the design of future extragalactic surveys and to predict the different properties of the population of galaxies that they will probe.Furthermore, they can be used to develop and test source extraction techniques and data analysis tools, characterize different biases affecting real observations, estimate the expected confusion noise, between others.In this work we take advantage of our mock redshift survey to make predictions for the TolTEC Ultra Deep Survey ( ≈ 0.8 sq.degree, 1 1.1 ≈ 0.025 mJy) and estimate the impact of clustering in flux boosting determinations.Below, we summarize our main results regarding these questions: • The UDS will detect ∼ 24, 000 DSFGs above a 4 threshold (i.e. 
1.1 ≳ 0.1 mJy).This sample will result in a comprehensive view of the 1.1, 1.4, and 2.0 mm number counts between 1.1 ∼ 0.1 and 30 mJy.The 1.1 ∼ 0.1 − 1.0 mJy regime will be strongly constrained, providing a robust connection between deep interferometric observations and the wider but shallower single-dish surveys.Towards the brighter end, cosmic variance will have a ∼ 20 per cent impact on the counts at 1.1 ∼ 5 mJy, increasing to > 40 per cent for 1.1 > 10 mJy. • The UDS sample will be dominated by relatively low mass DSFGs with ★ = 10 9.5 − 10 10.5 M ⊙ , with ∼ 17 per cent of the sample corresponding to ★ < 10 9.5 M ⊙ galaxies.This will allow us to explore a population an order of magnitude less massive than the typical SMGs identified with previous single-dish surveys. • The area and sensitivity of the UDS, combined with the ∼ 5 arcsecond angular resolution of TolTEC on the 50-LMT and deep multi-wavelength data available in the UDS fields, will result in a large sample of DSFGs with robust Optical/IR counterparts.This will allow us to estimate accurate physical properties and photometric redshifts, and study the evolution of the IR luminosity function in a wide redshift range (0.5 < < 4.5), particularly in the LIRG regime (10 11 < IR /L ⊙ < 10 12 ).Furthermore, the UDS will constrain the redshift distribution of relatively faint DSFGs and its dependence with different physical properties.This will provide important insights on the effect of downsizing in this population of galaxies. • The median flux boosting of sources detected with a S/N = 4 in the UDS will be 1.14 +0.41 −0.23 , 1.20 +0.50 −0.27 and 1.26 +0.63 −0.30 at 1.1, 1.4, and 2.0 mm.As expected, flux boosting has a smaller impact on brighter sources detected with a higher signal-to-noise, e.g. for 10 sources, the mean boosting factors and their dispersion are reduced to 1.04 +0.15 −0.10 , 1.05 +0.18 −0.10 and 1.08 +0.23 −0.12 .• Including clustering has a relatively minor impact on the determination of flux boosting factors, with no evident dependence on source S/N.Compared to a randomly distributed population, clustering increases flux boosting by 0.5±0.1,0.8±0.1 and 1.6±2.0 per cent at 1.1, 1.4, and 2.0 mm.The relative increase with angular resolution suggests that clustering might have a stronger impact on lower angular resolution observations (e.g.FWHM≳ 10 arcsec).Clustering, however, increases the uncertainties associated to the flux boosting factor determination, and should be propagated to the flux density uncertainties. Figure 2 . Figure 2. Main sequences of our mock catalogue (magenta solid line) obtained following the selection of star-forming galaxies presented in section 2.2.The projected density map represents the selected star-forming galaxies.For comparison, we include the main sequences derived by Speagle et al. (2014, orange dashed line), Popesso et al. (2022, green dashed line) and Cardona-Torres et al. (2023, blue dashed line). Figure 4 . Figure 4. Number counts at 1.1, 1.4, and 2.0 mm.The black solid line represents our measurement accounting for all the galaxies in the mock redshift survey.The color solid lines show the contributions to the number counts where we separate the galaxies by stellar mass ranges.The vertical lines are the 4 detection limits in the UDS surveys. includes: the compilation of UV and IR measurements fromMadau & Dickinson (2014), where the UV observations have been corrected for dust attenuation; the total SFRD presented inDunlop et al. 
(2017), derived by combining deep ALMA observations and rest-frame UV data of the 4.5 arcmin² HUDF; the results from Khusanova et al. (2020), who estimated the SFRD at z = 4.5 and 5.5 from the UV-selected galaxies in the ALMA Large Program to INvestigate CII at Early Times (ALPINE); and the Gruppioni et al. (2020) SFRD derived from serendipitous detections around the ALPINE targets. Also shown is the total SFRD from Koprowski et al. (2017), estimated by integrating their measured IR luminosity function and including data from Parsa et al. (2016) to incorporate the UV contribution. Figure 5. Total SFRD measured in our mock redshift survey (black solid line), and its relative contribution from the unobscured UV and dust-obscured IR components (blue and pink solid lines respectively). Our results are in good agreement with the observational data from Madau & Dickinson (2014, pink and cyan squares), Dunlop et al. (2017, olive green squares), Khusanova et al. (2020, yellow squares) and Gruppioni et al. (2020, orange squares). We note that the compilation of UV measurements from Madau & Dickinson (2014) has been corrected for dust attenuation in order to recover the total SFRD. We also plot the SFRD estimate by Koprowski et al. (2017, brown dashed-dotted line). Figure 7. Infrared luminosity function measured in our 5.3 deg² mock redshift survey (black solid line, with shaded regions indicating the 1σ Poisson error). The brown squares represent the sample by Koprowski et al. (2017) and the brown dashed line their fit. The yellow circles are the data by Gruppioni et al. (2020) and the yellow dotted line their fit. The green triangles are the compilation data by Rodríguez-Puebla et al. (2020) calibrated to the same cosmology. The Béthermin et al. (2017) simulation is indicated with the gray dashed-dotted line. The vertical line indicates the limit in luminosity that the UDS/TolTEC survey will reach at S/N = 4. Figure 10. Number counts for ten independent areas (colored lines) and their mean number counts for 200 arcmin² and 0.5 deg² (black dashed line in the left and right panels respectively), showing the effect of cosmic variance. The bottom panels show the residual fraction in the number counts obtained by subtracting the mean number counts from each independent realization and normalizing by the mean value. Figure 12. Redshift distribution at 1.1 mm for selected DSFGs in our mock redshift survey in an area of 0.5 sq. degree to be observed as part of the UDS/TolTEC survey. The cyan and golden distributions correspond to galaxies with flux densities of 0.1 < S_1.1 [mJy] < 1.0 and S_1.1 > 1 mJy respectively. The error bars represent the 1σ dispersion from the ten 0.5 sq. degree map realizations. The median values for each distribution are indicated by the upper vertical lines. The difference between the bright and faint DSFG redshift distributions can be attributed to the known downsizing effect. Table 1. Main properties and range of parameters of the galaxies in our mock redshift survey. Figure 9. Mean 1.1, 1.4 and 2.0 mm number counts predicted by our mock redshift survey for the areas of the UDS/TolTEC survey, i.e. 200 arcmin² (red) and 0.5 deg² (blue). Table 2. Parameters fitted to equation (17) to model the boosting factor in maps with and without clustering.
14,180.6
2024-03-22T00:00:00.000
[ "Physics" ]
CapsID: a web-based tool for developing parsimonious sets of CAPS molecular markers for genotyping Background Genotyping may be carried out by a number of different methods including direct sequencing and polymorphism analysis. For a number of reasons, PCR-based polymorphism analysis may be desirable, owing to the fact that only small amounts of genetic material are required, and that the costs are low. One popular and cheap method for detecting polymorphisms is by using cleaved amplified polymorphic sequence, or CAPS, molecular markers. These are also known as PCR-RFLP markers. Results We have developed a program, called CapsID, that identifies snip-SNPs (single nucleotide polymorphisms that alter restriction endonuclease cut sites) within a set or sets of reference sequences, designs PCR primers around these, and then suggests the most parsimonious combination of markers for genotyping any individual who is not a member of the reference set. The output page includes biologist-friendly features, such as images of virtual gels to assist in genotyping efforts. CapsID is freely available at . Conclusion CapsID is a tool that can rapidly provide minimal sets of CAPS markers for molecular identification purposes for any biologist working in genetics, community genetics, plant and animal breeding, forensics and other fields. Background DNA sequences from different varieties or accessions of a given species are becoming available through a number of sequencing projects. Single nucleotide polymorphisms (SNPs), together with insertion/deletions (InDels), are the most common type of polymorphism in the genomes studied so far. Large sets of predicted SNPs are publicly available for the human genome (SNP Consortium, http:/ /snp.cshl.org), and for some genetic model organisms, including Caenorhabditis elegans [1], Drosophila melanogaster [2], and Arabidopsis thaliana [3]. In addition, the generation of short stretches of DNA sequence information for non-model organisms, e.g. from ribosomal sequences, is becoming increasingly economical. Approximately 30-40% of SNPs alter restriction endonuclease recognition sites and these are commonly referred to as snip-SNPs [1]. Restriction enzyme digestion pattern polymorphism may be used to create cleaved amplified polymorphic sequence (CAPS) markers (also known as PCR-RFLP markers), which are codominant molecular markers that amplify a short genomic sequence around the polymorphic endonuclease restriction site [4]. They are easily detected by agarose gel electrophoresis. CAPS are thus affordable and practical for genotyping in positional or map-based cloning projects [4][5][6] and in molecular identification studies, where direct sequence-based identification is not necessary or practical. With the increasing availability of parallel genomic sequences from different varieties or accessions of genetic model species and/or the use of genetic fingerprinting methods for forensic and other work, there is a need for a web-based, user-friendly program that facilitates snip-SNP-based CAPS marker design. Currently there is no free software tool available for rapid detection of the restriction site polymorphisms in aligned sequences. 
Some related applications are for conceptually different markers (SNAPER [7]), are not freely available for bench scientists through a web interface (autoSNP [8]), have a very limited sequence size input and no sequence alignment option (dCAPS Finder [9,10]), or only identify potential CAPS markers [11] and do not automate the process of selecting the most useful and parsimonious set of CAPS markers for genotyping. To overcome these limitations we created CapsID, a web-based program intended for bench scientists, which identifies differential endonuclease restriction sites in multiple sequence alignments. CapsID then generates the appropriate primers to use, and selects the smallest number of CAPS markers to unambiguously identify a member of a set. Primers are generated automatically, if desired, with the Primer3 [12] program, and in addition virtual gel pictures for each species (sequence) are generated. Implementation Each CAPS marker splits a set of candidate species or strain sequences into two or more testable species or strain sequence sets. In the following example we assume that there is only one target snip-SNP within the amplified region, so: Set1 = {Sequences the restriction enzyme targeting the CAPS cuts in} Set2 = {Sequences the restriction enzyme targeting the CAPS does not cut in} The presence of many possible CAPS markers splits the candidates into many testable sets. Ideally we would like to split N candidates into N testable sets. Our preference for many small testable sets may be summarized by the equation: cost = Σ_{S ∈ Sets} |S|², where |S| is the number of species in the set S. This will assign a high cost to few large sets, and a low cost to many small sets. Consider, for example, three candidate CAPS markers: to minimize cost we choose CAPS 1, then 2, then 3, and not CAPS 2, then 1, then 3, because the former ordering reduces the cost function most quickly at each step. Algorithmically, for each possible CAPS marker, a cost is assigned based on the size of the sets created. The one with the lowest cost wins. Following this, the remaining potential CAPS markers are examined, in the context of the sets generated by the winner in the first step. Again the winner is chosen based on the smallest cost for the new sets created by the combined action of the 2 CAPS markers. The algorithm proceeds until the number of sets is equal to the number of candidate sequences. If this criterion cannot be met, then the algorithm alerts the user that there are not enough CAPS markers to unambiguously identify the candidates. If there is a tie in terms of the cost function, CapsID offers these to the user as alternate CAPS markers. CapsID is programmed in Python. Each sequence alignment is parsed using Biopython [13] into either a Bio.Clustalw.ClustalAlignment or Bio.Fasta.FastaAlignment object. Both of these types are subclasses of a more general Alignment class, allowing for both to be handled transparently. The alignments are subsequently processed by the Bio.CAPS module, which we created and have submitted to the Biopython repository [13]. This module makes use of the Bio.Restriction package to computationally "digest" each sequence in the alignment according to the user-specified enzymes. The algorithm keeps track of the sets generated and uses the cost function described to identify the most parsimonious number of CAPS markers. The Bio.CAPS package can also utilize Biopython's Bio.Emboss.Primer module, a wrapper for a primer design program called Primer3 [12], so that primers may be generated using a need_primers call.
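As an illustration of the greedy, cost-driven marker selection just described, the sketch below implements the Σ|S|² cost and the iterative choice of the lowest-cost CAPS marker over a toy cut/no-cut table. The data structures, function names and toy markers are our own simplifications and are not the actual Bio.CAPS implementation.

```python
def cost(partition):
    """Sum of squared set sizes: few large sets are expensive, many small sets are cheap."""
    return sum(len(s) ** 2 for s in partition)

def refine(partition, cuts):
    """Split every current set into the sequences the enzyme cuts and those it does not."""
    new_partition = []
    for s in partition:
        cut, uncut = s & cuts, s - cuts
        new_partition.extend(p for p in (cut, uncut) if p)
    return new_partition

def select_markers(sequences, marker_cuts):
    """Greedily pick CAPS markers until every sequence sits in its own set.

    marker_cuts maps a marker name to the set of sequences its enzyme cuts.
    """
    partition = [set(sequences)]
    chosen, remaining = [], dict(marker_cuts)
    while len(partition) < len(sequences) and remaining:
        best = min(remaining, key=lambda m: cost(refine(partition, remaining[m])))
        partition = refine(partition, remaining.pop(best))
        chosen.append(best)
    if len(partition) < len(sequences):
        print("warning: not enough CAPS markers to unambiguously identify the candidates")
    return chosen, partition

# toy example with four accessions and three hypothetical candidate markers
seqs = {"A", "B", "C", "D"}
markers = {"caps1": {"A", "B"}, "caps2": {"A", "C"}, "caps3": {"A", "B", "C"}}
picked, sets = select_markers(seqs, markers)
print(picked, sets)
```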
Finally, we have written a module called gel.spy that uses the Python Imaging Library to generate a virtual gel representation of the fragments on the fly. Results and discussion The sequence input page contains input boxes to paste or upload a set of alignments, as shown in Figure 1 in the top panel. The alignments can be in Fasta or Clustal format. The user can then select which enzymes should be considered for CAPS marker generation. When the user asks for CAPS markers to be created, many Biopython modules are employed in series to generate a marker set. The alignments are first parsed from their native file format into uniform alignment data structures. These data structures are then computationally analyzed for potential CAPS markers. The program then uses a cost function, explained in the Implementation section, to weight each potential set of CAPS markers, so that an optimal set can be found. This optimal set is presented to the user with all the information needed to use the CAPS markers. Of note are fields for enzyme, primer sequences and primer location. To aid in marker identification, a pseudo gel is also displayed using the Python Imaging Library, as seen in Figure 1 in the bottom panel. This pseudo gel is an approximation of what should be seen on the gel when a marker is tested. For a given set of sequences there may be more than one equally parsimonious set of CAPS markers that may be used for genotyping, and in this case the user may click through these sets and inspect the resultant pseudo gels for the 'best' set, in terms of fragment size differences. Conclusion In conclusion, CapsID is an easy-to-use web-based tool for generating the smallest set of CAPS markers to genotype an individual. It is expected that this tool will be of great use to biologists working in community genetics, plant and animal breeding, forensics and other fields. Authors' contributions JT carried out the implementation of the algorithm, designed the cost function to identify the most parsimonious sets, programmed the web interfaced and helped with the manuscript. NJP wrote and edited the manuscript, conceived the study and supervised JT. All authors read and approved the final manuscript. Figure 1 CapsID Screenshots. The top panel illustrates the input page of CapsID. Inputs allowed are the desired enzymes to be used for the CAPS markers, and a multiple sequence alignment(s) of sequences from the reference organisms. The bottom panel shows the CapsID output page summarizing the number of potential CAPS markers found, as the most parsimonious set to use to genotype organisms not in the reference set. Clicking on a given CAPS marker results in a page containing 'virtual' gel images for that CAPS marker in each reference organism to be generated, as well as the primers and the exact cut locations of the indicated enzyme.
2,109.8
2006-05-10T00:00:00.000
[ "Biology", "Computer Science" ]
Digging out Discrimination Information from Generated Samples for Robust Visual Question Answering , Introduction With the vigorous development of computer vision and natural language processing fields, it has promoted the vision-and-language (Gu et al., 2022;Wen et al., 2023b) field to take a forward step.As a typical task of the vision-and-language field, Visual Question Answering (VQA) (Anderson et al., 2018;Cadène et al., 2019a) requires an agent to fully comprehend the information of the questions and images, and then correctly answer the textual question according to the image.Although recent advances (Cadène et al., 2019a) have achieved impressive performance on the benchmark datasets * Corresponding author (e.g., VQA v2 (Goyal et al., 2017)), numerous studies (Agrawal et al., 2018;Kafle and Kanan, 2017) have shown that some VQA models tend to excessively rely on the superficial correlations (i.e., biases) between the questions and answers, instead of adopting reasoning ability to answer the questions.For example, the VQA models can easily answer "2" and "tennis" for the questions "How many . . ." and "What sports . . .", respectively, would obtain higher accuracy, since the corresponding answers "2" and "tennis" are occupied the most in the dataset.However, memorising the biases to answer the questions would signify flawed reasoning ability, resulting in poor generalisation ability. To mitigate the bias issues, many methods have been proposed, which can be roughly categorised into three types: 1) enhance visual attention (Selvaraju et al., 2019;Wu and Mooney, 2019), 2) directly weaken the biases (Cadène et al., 2019b;Niu et al., 2021), and 3) balance the dataset (Chen et al., 2020;Zhu et al., 2020).Previous studies have shown the methods that balance the dataset usually outperform other types of methods, since they dig out the natural distribution of the data, and then devise a suitable strategy to overcome the biases.Specifically, CSS (Chen et al., 2020) and Mutant (Gokhale et al., 2020) methods generate counterfactual samples by masking the critical objects or words in the images and questions, respectively.However, these methods require additional annotations that are hard to obtain.To get rid of the dependence on the additional annotations, MMBS (Si et al., 2022) constructs the positive questions by randomly shuffling the question words or removing the words of question types, which destroys the grammar and semantics of the original questions.Moreover, SimpleAug (Kil et al., 2021) and KD-DAug (Chen et al., 2022) build the new samples by re-combining the existing questions and images, which may be difficult to assign correct answers for the generated samples.SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) construct the negative samples by randomly sampling the images or questions in a mini-batch data.Nevertheless, these methods consider the negative samples only, but ignoring the generated positive samples would improve the diversity of the dataset and further promote the robustness of the VQA models. 
To overcome the above issues, we propose a method to Dig out Discrimination information from Generated samples (DDG).As pointed out by (Wen et al., 2021), the bias issues exist in both vision and language modalities, we thus construct positive and negative samples in vision and language modalities and devise corresponding training objectives to achieve unbiased learning.Concretely, we feed the samples to the UpDn (Anderson et al., 2018) model pre-trained on the VQA-CP (Agrawal et al., 2018) v2 training set, and select k objects based on the top-k image attention weights of the UpDn as the positive images.The positive questions can be constructed by using the translate-and-back-translate mechanism, e.g., English → French → English.We then combine the positive images and positive questions with original questions and original images, respectively, as positive image and question samples.Based on the positive samples, we adopt a knowledge distillation mechanism (Hinton et al., 2015) to help the learning of the original samples. Moreover, inspired by (Wen et al., 2021), we construct mismatched image-question pairs as negative samples.Generally speaking, one cannot answer the question correctly given the mismatched image-question pairs, since missing the supporting modality information.To promote the VQA models to focus on the vision and language modalities, we devise a training objective that aims to minimise the likelihood of predicting the ground-truth answers of the original samples when given the corresponding negative samples.Besides, we further introduce the corresponding positive samples to assist the training.Based on the above debiased techniques, our DDG achieves impressive performance on the VQA-CP v2 (Agrawal et al., 2018) and VQA v2 (Goyal et al., 2017) datasets, which demonstrates the effectiveness of our DDG. Our contributions can be summarised as follows: 1) We devise a novel positive image samples generation strategy that uses the image attention weights of the pre-trained UpDn model to guide the selection of the target objects.2) We introduce the knowledge distillation mechanism to promote the learning of the original samples by the positive samples.3) We adopt the positive and negative samples to impel the VQA models to focus on the vision and language modalities, to mitigate the biases. Overcoming biases in VQA Recently, researchers have proposed vast debiased techniques (Selvaraju et al., 2019;Niu et al., 2021;Zhu et al., 2020;Wen et al., 2023a) to alleviate the bias issues in VQA, which can be roughly categorised into three types: 1) enhance the visual attention, 2) directly weaken the biases, 3) balance the dataset.Methods that enhance visual attention.These methods seek to adopt human-annotated information to strengthen the visual attention of the VQA models.Specifically, Selvaraju et al. (Selvaraju et al., 2019) aligned the important image regions identified based on the gradient with the human attention maps to enhance the visual attention in the VQA models.Wu et al. (Wu and Mooney, 2019) introduced a self-critical training objective that matches the ground-truth answer with the most important image region recognised by human explanations.However, these methods require human annotations that are hard to obtain.Methods that weaken the biases.Ramakrishnan et al. (Ramakrishnan et al., 2018) adopted adversarial learning to inhibit the VQA models capture the language biases.Inspired by (Ramakrishnan et al., 2018), Cadene et al. 
(Cadène et al., 2019b) devised a question-only model to generate weight to re-weight the samples.Moreover, Han et al. (Han et al., 2021) forced the biased models to capture different types of biases, and removed them step by step.Different from the above, Niu et al. (Niu et al., 2021;Niu and Zhang, 2021) introduced the idea of cause-effect to help alleviate the biases.Nevertheless, these methods introduce additional parameters in training or inference phrases.Methods that balance the dataset.CSS (Chen et al., 2020) and Mutant (Gokhale et al., 2020) methods generated massive counterfactual samples by masking the critical objects and words in the images and questions, respectively.However, these methods require additional annotations to assign the answers for the generated samples.To get rid of the dependence on the annotations, KD-DAug (Chen et al., 2022) and SimpleAug (Kil et al., 2021) constructed the samples by re-composing the existing questions and images, which however, is hard to assign the correct answers for the generated samples.SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) constructed the negative samples by randomly sampling the images or questions in a mini-batch data.Nevertheless, these methods ignored the positive samples could improve the diversity of the dataset, which was helpful for improving the robustness of the VQA models.Moreover, Si et al. (Si et al., 2022) constructed the positive question samples by randomly shuffling the words or removing the words of question types, which destroys the semantics of the original questions. Different from the above methods, we seek to construct positive samples and negative samples in both vision and language modalities, and devise corresponding debiased strategies to achieve unbiased learning. Knowledge Distillation Knowledge Distillation (KD) (Hinton et al., 2015) is a universal model compression method that seeks to train a small student model guided by a large teacher model.Due to the effectiveness of KD, the idea has been applied to other tasks, e.g., longtail classification (He et al., 2021;Xiang et al., 2020), object detection (Chen et al., 2017;Wang et al., 2019), and video captioning (Pan et al., 2020;Zhang et al., 2020).Recently, some debiased VQA methods (Niu and Zhang, 2021;Chen et al., 2022) introduced KD to alleviate the bias issues.Specifically, Niu et al. (Niu and Zhang, 2021) devised two teachers (i.e., ID-teacher and OOD-teacher) to generate "soft" labels to guide the training of the student model (i.e., the baseline model) with the KD mechanism.Inspired by IntroD, Chen et al. (Chen et al., 2022) adopted a multi-teacher KD mechanism to help generate robust pseudo labels for all newly composed image-question pairs.In our DDG, we seek to improve the reasoning ability of the VQA models with the help of the generated positive samples, via the KD mechanism. Digging out Discrimination Information from Generated Samples As shown by (Agrawal et al., 2018;Kafle and Kanan, 2017), VQA models tend to capture the biases in a dataset to answer questions, instead of adopting the reasoning ability, resulting in poor generalisation ability.Moreover, bias issues exist in both vision and language modalities (Wen et al., 2021).To address the above issues, we seek to construct both positive and negative samples in vision and language modalities, and devise corresponding debiased strategies to achieve unbiased learning.The overall framework is shown in Figure .1. 
Preliminary Visual question answering (VQA) requires an agent to answer a textual question given a corresponding image. Traditional VQA methods (Anderson et al., 2018; Kim et al., 2018; Ben-younes et al., 2019; Cadène et al., 2019a) regard the VQA task as a multi-class classification problem, where each class corresponds to a unique answer. To be specific, given a VQA dataset D with N samples, where v_i ∈ V (image set) and q_i ∈ Q (question set) form the i-th sample in D, and a_i ∈ A (answer set) is the corresponding ground-truth answer, VQA methods seek to learn a multimodal mapping V × Q → [0, 1]^{|A|} to generate an answer distribution over the answer set A. Generally speaking, most VQA models contain four parts, namely, a vision feature encoder e_v(·), a language feature encoder e_q(·), a multimodal feature fusion module f(·, ·), and a classifier c(·). These modules can be composed into a traditional VQA model: P(A|v_i, q_i) = c(f(e_v(v_i), e_q(q_i))). (1) Formally, since the VQA task is regarded as a multi-class classification problem, the VQA models can be optimised by a binary cross-entropy loss L_vqa, which can be formulated as: L_vqa = −(1/N) Σ_i [â_i log σ(P(A|v_i, q_i)) + (1 − â_i) log(1 − σ(P(A|v_i, q_i)))], (2) where σ denotes the sigmoid activation function, and â_i is the target score obtained from the answer a_i that humans annotated for (v_i, q_i). Sample Generation Our method aims to adopt the generated samples to achieve unbiased learning. Hence we first present how to generate the positive and negative samples. As pointed out by (Wen et al., 2021), biases exist in both language and vision modalities. To overcome the bias issue, we seek to generate positive and negative samples regarding the vision and language modalities for each original sample, to assist the training process. Figure 1: Overview of our DDG method. We draw support from the pre-trained UpDn (Anderson et al., 2018) model and the translate-and-back-translate mechanism to generate the positive image and question samples, respectively, and then introduce a knowledge distillation mechanism to facilitate the learning of the original samples. Moreover, we construct the mismatched image-question pairs by randomly sampling the images or questions in a mini-batch of data as negative samples. By using these negative samples, we enhance the attention of VQA models towards both vision and language modalities when answering the questions, thereby mitigating biases. Positive samples generation. To mitigate the bias issue over vision and language modalities in VQA, we build two types of positive samples, i.e., a positive question sample and a positive image sample. Specifically, to generate the positive image samples, we seek to draw support from the image attention weights of a pre-trained baseline VQA model. We have empirically found that although the baseline models (e.g., UpDn (Anderson et al., 2018)) achieve unsatisfactory performance on the out-of-distribution (OOD) test set (e.g., the VQA-CP v2 (Agrawal et al., 2018) dataset), they still obtain promising performance on the independent and identically distributed (IID) dataset (e.g., the VQA v2 dataset (Goyal et al., 2017)). In other words, the baseline models can identify the target objects in the images referred to by the questions during the training process, regardless of whether they capture the biases. Hence, the image attention weights of the pre-trained UpDn (Anderson et al., 2018) model can help find target objects as positive image samples, which can exclude the background information of the images.
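A minimal sketch of the positive-image construction just described (keeping the k objects with the highest image-attention weights from a pre-trained baseline) is given below. The tensor shapes, the use of torch.gather and the variable names are illustrative assumptions rather than the authors' code.

```python
import torch

def select_positive_objects(obj_feats, att_weights, k=9):
    """Keep the top-k attended objects as the positive image sample.

    obj_feats   : (B, num_objs, feat_dim) region features from the object detector
    att_weights : (B, num_objs) image attention weights of the pre-trained UpDn model
    returns     : (B, k, feat_dim) features of the k most attended objects
    """
    topk_idx = att_weights.topk(k, dim=1).indices                     # (B, k)
    idx = topk_idx.unsqueeze(-1).expand(-1, -1, obj_feats.size(-1))   # (B, k, feat_dim)
    return obj_feats.gather(dim=1, index=idx)

# toy usage: a batch of 4 images with 36 detected objects of dimension 2048
feats = torch.randn(4, 36, 2048)
att = torch.softmax(torch.randn(4, 36), dim=1)
v_pos = select_positive_objects(feats, att, k=9)
print(v_pos.shape)  # torch.Size([4, 9, 2048])
```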
Given the sample (v_i, q_i) from the VQA-CP v2 training set, we first feed it to the UpDn model pre-trained on the VQA-CP v2 training set and obtain the image attention weights of the UpDn model over the objects in image v_i. Note that we select k objects based on the top-k image attention weights as the positive image sample, where k is a hyper-parameter. To generate the positive question samples, previous methods (Si et al., 2022) adopt data augmentation strategies to expand the data, e.g., randomly shuffling the question words or removing the words of the question type. However, these methods severely destroy the grammar and semantics of the original question, changing its semantic information. To mitigate this issue, inspired by (Tang et al., 2020), we adopt the translate-and-back-translate mechanism to generate the positive question samples. Specifically, we first use pre-trained English-to-French and English-to-German translation models to translate the original question to French and German, respectively. Then we use the corresponding pre-trained back-translation models to translate them back into English. Moreover, we further adopt a pre-trained sentence similarity model to choose the back-translated question sample that has the highest similarity score with the original question as the positive question sample. Note that some simple questions would remain unchanged even after the translate-and-back-translate process. To generate positive question samples for these questions, we substitute the words in the question with synonyms based on a pre-trained synonym word substitution model. In this way, we obtain the positive question samples (v_i, q_i^+). Based on the above, we obtain two types of positive samples (i.e., (v_i^+, q_i) and (v_i, q_i^+)) for each sample (v_i, q_i), in which the positive image samples keep the foreground information of the image and the positive question samples are semantically equivalent to the original question. Negative samples generation. Inspired by (Wen et al., 2021), we construct the negative samples over the language and vision modalities by randomly sampling one question and one image in a mini-batch of data for each sample. Specifically, given a mini-batch of data {(v_b, q_b)}_{b=1}^B, for each sample (v_i, q_i), we randomly sample one image v_i^− and one question q_i^− from {(v_b, q_b)}_{b=1}^B to form the negative samples, namely, the negative question sample (v_i, q_i^−) and the negative image sample (v_i^−, q_i).
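The in-batch negative sampling described above can be written compactly as in the sketch below. The use of a random permutation with a simple collision check, and all names, are our own illustrative choices; the paper only specifies that one image and one question are randomly drawn from the mini-batch for each sample.

```python
import torch

def build_negative_pairs(images, questions, seed=None):
    """Create mismatched (image, question) pairs from a mini-batch.

    images    : (B, ...) image features
    questions : (B, ...) question features
    returns   : v_neg to pair with the original questions, q_neg to pair with the original images
    """
    g = torch.Generator()
    if seed is not None:
        g.manual_seed(seed)
    B = images.size(0)

    def derangement(n):
        # random permutation with no fixed points, so a sample never gets itself back
        perm = torch.randperm(n, generator=g)
        while bool((perm == torch.arange(n)).any()):
            perm = torch.randperm(n, generator=g)
        return perm

    v_neg = images[derangement(B)]      # negative image sample: (v_i^-, q_i)
    q_neg = questions[derangement(B)]   # negative question sample: (v_i, q_i^-)
    return v_neg, q_neg

# toy usage with a batch of 8 pooled features
v = torch.randn(8, 2048)
q = torch.randn(8, 1024)
v_neg, q_neg = build_negative_pairs(v, q, seed=0)
```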
Generated Samples Driven Robust VQA

Positive samples driven robust VQA. We attempt to achieve robust VQA with the help of the generated positive samples. Specifically, given an original sample (v_i, q_i) and its counterpart positive samples (v_i^+, q_i) and (v_i, q_i^+), we first feed them into the VQA model to obtain the predictions P(A|v_i, q_i), P(A|v_i^+, q_i), and P(A|v_i, q_i^+). Generally speaking, ensemble predictions usually perform better than the individual predictions before the ensemble. We thus adopt a simple ensemble strategy (i.e., averaging these predictions) to obtain the ensemble prediction P_ens = (P(A|v_i, q_i) + P(A|v_i^+, q_i) + P(A|v_i, q_i^+))/3. One intuitive way to make the VQA model benefit from the positive samples is to adopt a knowledge distillation mechanism (Hinton et al., 2015). Concretely, we regard the ensemble prediction P_ens and the original prediction P(A|v_i, q_i) as a teacher and a student, respectively, and then introduce a Kullback-Leibler (KL) divergence L_dis as the objective to optimise the VQA model:

L_dis = KL(P_ens ‖ P(A|v_i, q_i)).  (3)

² The pre-trained model is from the GitHub repository.

By minimising the KL divergence, the VQA model can extract discriminative information from the positive samples to better answer the original question q_i correctly based on the image v_i. To guarantee that the teacher (i.e., the ensemble prediction P_ens) performs better than the student (i.e., the original prediction P(A|v_i, q_i)), we also apply the binary cross-entropy loss L_ens on P_ens to further optimise the VQA model.

Negative samples driven robust VQA. Besides adopting the positive samples to assist the training process, we also introduce a debiasing strategy on the negative samples to alleviate the bias issues. As shown by (Agrawal et al., 2018), the bias issue usually means that VQA models tend to capture superficial correlations between one modality and the answers when making a prediction. To mitigate this issue, one direct solution is to increase the attention paid to both the language and vision modalities when the VQA model answers the questions. We thus adopt the negative samples to achieve this aim.

Intuitively, given a mismatched image-question pair, neither a VQA model nor a human can make a correct prediction. Drawing on this insight, given the original samples and their counterpart negative samples, we can alleviate the biases by assigning contrary training objectives to the negative samples. This encourages the VQA model to answer the questions by paying more attention to the information of each modality. Concretely, inspired by (Wen et al., 2021), given an original sample (v_i, q_i, a_i) and its counterpart negative samples (v_i^-, q_i) and (v_i, q_i^-), the VQA model should not answer correctly when fed the negative samples, which can be achieved by minimising the probability of predicting the ground-truth answer:

L_neg = δ(P(A|v_i^-, q_i))[x] + δ(P(A|v_i, q_i^-))[x],  (4)

where x is the index of the ground-truth answer a_i in the answer set A, and δ is the softmax activation function. Minimising the training objective L_neg discourages the VQA model from giving the ground-truth answer when fed the mismatched image-question pairs. Thus, the VQA model has to consider both the image and question information before making a prediction, which implicitly alleviates the bias issue.
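A compact PyTorch sketch of the two objectives above follows. It assumes softmax-normalised prediction distributions (the recovered text does not spell out how the sigmoid-based classifier outputs are normalised here), so it should be read as an illustration of Eqs. (3)-(4), not the authors' implementation.

```python
import torch.nn.functional as F

def distillation_and_negative_losses(logits_orig, logits_pos_img, logits_pos_q,
                                     logits_neg_img, logits_neg_q, gt_index):
    """Positive-sample distillation loss (Eq. 3) and negative-sample loss (Eq. 4).
    All logits: (B, num_answers); gt_index: (B,) indices of the ground-truth answers."""
    # Teacher: average of the predictions on the original and the two positive samples.
    p_ens = (F.softmax(logits_orig, dim=1) +
             F.softmax(logits_pos_img, dim=1) +
             F.softmax(logits_pos_q, dim=1)) / 3.0
    # Student: prediction on the original sample; minimise KL(teacher || student).
    log_p_student = F.log_softmax(logits_orig, dim=1)
    l_dis = F.kl_div(log_p_student, p_ens, reduction="batchmean")
    # Negative samples: push down the softmax score assigned to the ground-truth answer.
    gt = gt_index.unsqueeze(1)
    p_neg_img = F.softmax(logits_neg_img, dim=1).gather(1, gt).squeeze(1)
    p_neg_q = F.softmax(logits_neg_q, dim=1).gather(1, gt).squeeze(1)
    l_neg = (p_neg_img + p_neg_q).mean()
    return l_dis, l_neg
```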
Moreover, to further enhance the attention of VQA models towards both the vision and language modalities, we also bring the positive samples into this part of the training. Specifically, given a positive image sample (v_i^+, q_i, a_i) and a negative image sample (v_i^-, q_i), when feeding them to the VQA model one hopes the model answers correctly with high confidence on the positive sample, while having low prediction confidence on the ground-truth answer for the negative sample. This can be formulated as:

max δ(P(A|v_i^+, q_i))[x] · (1 − δ(P(A|v_i^-, q_i))[x]).  (5)

Maximising this objective encourages the VQA model to make accurate predictions for the matched image-question pairs, while discouraging it from generating the ground-truth answer when provided with negative image samples. This impels the VQA model to allocate more attention to the vision modality.

By leveraging the monotonicity of the logarithmic function, we convert the maximisation problem into an equivalent minimisation problem. This transformation can be mathematically formulated as follows:

L_img = −log δ(P(A|v_i^+, q_i))[x] − log(1 − δ(P(A|v_i^-, q_i))[x]).  (6)

Moreover, with regard to the negative samples, the prediction score assigned to the ground-truth answer of the corresponding positive sample can serve as an indicator of the extent to which the VQA model captures biases. The higher the prediction score δ(P(A|v_i^-, q_i))[x], the greater the degree of bias it represents; therefore, it should be subject to a higher penalty in the loss L_img. Inspired by the focal loss (Lin et al., 2017), we treat the prediction score δ(P(A|v_i^-, q_i))[x] as a measure of the degree of bias, and reformulate the loss as a weighted variant L_weight_img. We obtain L_weight_que in the same way. The weighted loss is then formulated as:

L_weight = L_weight_img + L_weight_que.  (7)

Overall Training Objective

In total, our overall training objective combines the losses above and can be formulated as:

L = L_vqa + L_dis + L_ens + L_neg + λ L_weight,  (8)

where λ is a hyper-parameter.

Datasets

We evaluate our DDG on the OOD dataset VQA-CP v2 (Agrawal et al., 2018) and the validation set of the IID dataset VQA v2 (Goyal et al., 2017), using the standard evaluation metric (Antol et al., 2015). Due to the page limitation, we put the implementation details and compared methods into the Appendix.

Quantitative results

We report the experimental results on the VQA-CP v2 and VQA v2 datasets in Table 1. From these results, we have the following observations: 1) On the whole, the methods that balance the datasets outperform the other two types of methods, i.e., those that enhance visual attention and those that directly weaken the biases. This demonstrates that alleviating the biases by paying more attention to the natural distribution of the data yields higher performance. 2) Our DDG outperforms most compared methods. Specifically, our DDG surpasses SCR (Wu and Mooney, 2019), GGE-DQ (Han et al., 2021), SSL-VQA (Zhu et al., 2020), and KDDAug (Chen et al., 2022) by approximately 12%, 3%, 3%, and 1%, respectively. These results demonstrate the effectiveness of our DDG. 3) Although our method performs slightly worse than Mutant (Gokhale et al., 2020) and D-VQA (Wen et al., 2021) on VQA-CP v2, our DDG achieves higher performance on the VQA v2 dataset. Moreover, Mutant constructs its counterfactual samples by relying heavily on additional annotations, while our method builds the samples without introducing additional annotations. Meanwhile, compared to the D-VQA method, our method performs better when the data is limited, as shown in Table 2. These results further demonstrate the effectiveness of our DDG.
Benefiting from the training process based on the positive samples, our DDG performs better than all the compared methods on the VQA v2 dataset. Specifically, our DDG outperforms SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) by around 1.8% and 0.6%, respectively, which demonstrates that our DDG is able to improve model performance on both the IID (i.e., VQA v2) and OOD (i.e., VQA-CP v2) datasets, further implying the superiority of our DDG.

Table 1: I-IV denote methods that enhance visual attention, directly weaken the biases, balance the dataset using additional annotations, and balance the dataset without introducing additional annotations, respectively.

Qualitative results

To further demonstrate the effectiveness of our DDG in alleviating the biases, we provide qualitative results on the VQA-CP v2 dataset in Figures 2 and 3. From the results in Figure 2, UpDn (Anderson et al., 2018) and SSL-VQA (Zhu et al., 2020) fail to find the target objects mentioned in the question within the image, leading to erroneous predictions. In contrast, our DDG demonstrates a remarkable ability to accurately localise the target objects with a high degree of confidence, resulting in precise answers to the posed questions.

From Table 2, we find that our method performs better when the training data is limited, which is more practical and suitable for the real world. Specifically, our method performs better than SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) at any proportion of the training data; in particular, when only 20% of the training data remains, our DDG outperforms SSL-VQA and D-VQA by around 3%. These results demonstrate the superiority of our DDG in data-limited scenarios.

Effect of each component of our DDG. We conduct ablation studies on the VQA-CP v2 dataset to evaluate each component of our DDG, and show the experimental results in Table 3. From these results, we have the following observations: 1) When introducing the ensemble binary cross-entropy loss L_ens with the positive samples, the model performance improves by around 5% compared with the UpDn (Anderson et al., 2018) model (i.e., 41.53% vs. 46.63%), which demonstrates that the positive samples are able to assist the training process in alleviating the bias issue. 2) By incorporating the KL loss L_dis, the performance is further improved (i.e., 46.63% vs. 47.77%), which highlights that the ensemble prediction is able to guide the training of the original prediction. 3) Upon introducing L_neg and L_weight, which leverage the negative samples, the performance improves substantially (i.e., 47.77% vs. 61.14%). This significant enhancement underscores the importance of promoting the attention of VQA models towards both the vision and language modalities when answering the questions.

Table 3: Effect of each component of our DDG on the model performance. We show the results on the VQA-CP v2 dataset in terms of Accuracy (%). We use UpDn (Anderson et al., 2018) as the backbone model.

These results further demonstrate the effectiveness of each component in our DDG.

Effect of k.
k denotes the number of target objects selected to form the positive image sample, as described in Section 3.2. To demonstrate the effect of k on the model performance, we conduct experiments on the VQA-CP v2 dataset with different values of k. From the results in Table 4, we have the following observations: 1) As k increases (e.g., from 3 to 10), the model performance exhibits a gradual improvement. The results indicate that a higher value of k increases the likelihood that the positive image samples indeed encompass the target objects mentioned in the corresponding questions. 2) Once k exceeds 10, the model performance starts to drop, illustrating that an excessively high value of k introduces extraneous background information that adversely affects the model's performance. These results demonstrate that an appropriate k helps our DDG obtain the best performance.

Evaluation of λ. λ is the weight of the loss L_weight. We conduct ablation studies on the effect of different λ on the model performance, and show the experimental results in Table 5. From the results, the best performance of our DDG is obtained when λ = 0.05, and the performance drops whenever λ is higher or lower than 0.05. These results demonstrate that a suitable weight for the loss L_weight helps to obtain better performance.

Table 6: Effect of different backbones (i.e., SAN (Yang et al., 2016) and UpDn (Anderson et al., 2018)) on the model performance. We report the experimental results on the VQA-CP v2 dataset in terms of Accuracy (%). † denotes the re-implementation of the baseline.

Evaluation of different backbones. Our DDG is model-agnostic. To demonstrate the effectiveness of our DDG on different backbones (i.e., SAN (Yang et al., 2016) and UpDn (Anderson et al., 2018)), we conduct experiments on the VQA-CP v2 dataset and show the results in Table 6. From the results, our DDG consistently achieves a substantial improvement in model performance, regardless of the backbone. These results further embody the superiority of our DDG.

Conclusion

In this paper, we have proposed a novel method named DDG to alleviate the bias issues in VQA from the vision and language modalities. Specifically, we construct both positive and negative samples in the vision and language modalities without using additional annotations, in which the positive questions have similar semantics to the original questions, while the positive images contain the foreground information. Based on the positive samples, we introduce a knowledge distillation mechanism to facilitate the training of the original samples through guidance from the positive samples. Moreover, we put forth a strategy that encourages VQA models to focus more on the vision and language modalities when answering the questions, aided by the negative samples. Extensive experiments on the VQA-CP v2 and VQA v2 datasets show the effectiveness of our DDG.

Datasets

We conduct experiments on the VQA-CP v2 (Agrawal et al., 2018) and VQA v2 (Goyal et al., 2017) datasets. Specifically, the training set of VQA-CP v2 contains approximately 121k images and 483k questions, while the test set contains around 98k images and 220k questions.
Implementation Details

Following existing VQA methods (Anderson et al., 2018; Cadène et al., 2019b; Zhu et al., 2020), we extract the top-36 object features, each with a dimension of 2048, from every image using the Faster R-CNN (Ren et al., 2015) model pre-trained by (Anderson et al., 2018). Moreover, each question is first truncated or padded to the same length (i.e., 14), and then encoded with GloVe (Pennington et al., 2014) embeddings with a dimension of 300. The dimension of the question encoder (i.e., a single-layer GRU (Cho et al., 2014)) is 1280. Inspired by SSL-VQA (Zhu et al., 2020), we introduce one Batch Normalisation (Ioffe and Szegedy, 2015) layer before the classifier of UpDn (Anderson et al., 2018). We train our method for 30 epochs with the Adam (Kingma and Ba, 2015) optimiser. Specifically, we adopt L_ens and L_dis to train the baseline model for 12 epochs, and introduce L_neg and L_weight at the 13th epoch. The learning rate is set to 1e-3 and is halved every 5 epochs after 10 epochs. The batch size is set to 256. We set k and λ to 10 and 0.05, respectively. We implement our method in PyTorch (Paszke et al., 2019), and the model is trained on one Titan Xp GPU. Moreover, our method does not introduce additional parameters beyond the backbone model.

Note that our method is model-agnostic and can be applied to different VQA backbones. To better demonstrate the effectiveness of our DDG, we conduct experiments with different backbones, including UpDn (Anderson et al., 2018) and SAN (Yang et al., 2016), under the same settings. Moreover, we perform experiments over three rounds with varying seeds, and present the results in terms of mean values. The source code and the pre-trained models are available at DDG.

Training Method

We provide the training method of our DDG in Algorithm 1. Specifically, when the training epoch is lower than the threshold τ, we forward the base model M_b with the original and positive samples to calculate the knowledge distillation loss L_dis and the binary cross-entropy loss L_ens. When the training epoch is higher than the threshold τ, we additionally construct the negative image and question samples without introducing additional annotations, forward M_b with the negative samples, and obtain the losses L_neg and L_weight. Finally, we update M_b based on the overall loss L.

Algorithm 1: Training method of our DDG.
Require: base model M_b, batch size b, threshold τ.
1: Randomly initialise the parameters of M_b.
2: while not converged do
3: Randomly sample a mini-batch {(v_i, q_i, a_i)}_{i=1}^{b} from the training data, and obtain the corresponding positive samples {(v_i^+, q_i, a_i)}_{i=1}^{b} and {(v_i, q_i^+, a_i)}_{i=1}^{b}.
4: Forward M_b with the training data and the positive samples, and obtain the predictions P(A|v_i, q_i), P(A|v_i^+, q_i), P(A|v_i, q_i^+), and the ensemble prediction P_ens.
5: Calculate the binary cross-entropy loss L_vqa for P(A|v_i, q_i) by Eq. (2).
Calculate the knowledge distillation loss L_dis based on P(A|v_i, q_i) and P_ens via Eq. (3).
9: Calculate the binary cross-entropy loss L_ens for P_ens by Eq. (2).
Randomly sample images and questions from the mini-batch {(v_i, q_i, a_i)}_{i=1}^{b} to form the negative samples {(v_i^-, q_i, a_i)}_{i=1}^{b} and {(v_i, q_i^-, a_i)}_{i=1}^{b}.
13: Forward M_b with the negative samples, and obtain the predictions P(A|v_i^-, q_i) and P(A|v_i, q_i^-).
14: Calculate the loss L_neg based on the predictions of the negative samples via Eq. (4).
15: Calculate the loss L_weight based on the predictions of both the positive and negative samples by Eqs. (6) and (7).
Update M_b by minimising the overall loss L (obtained via Eq. (8)).
18: end while
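A condensed PyTorch sketch of this two-stage schedule is given below. The batch layout, the softmax normalisation, and in particular the focal-style form used for L_weight are assumptions made for illustration, since the exact weighted loss is not recoverable from the text above; this is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def train_ddg(model, loader, num_epochs=30, tau=12, lam=0.05, lr=1e-3):
    """Two-stage schedule from Algorithm 1: positive-sample losses throughout,
    negative-sample losses switched on after epoch `tau`. Each batch is assumed to
    provide (v, q, soft targets, positive image, positive question, gt answer index)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(1, num_epochs + 1):
        for v, q, target, v_pos, q_pos, gt_idx in loader:
            logits = model(v, q)
            logits_pi = model(v_pos, q)           # positive image sample
            logits_pq = model(v, q_pos)           # positive question sample
            l_vqa = F.binary_cross_entropy_with_logits(logits, target)        # Eq. (2)
            p_ens = (F.softmax(logits, 1) + F.softmax(logits_pi, 1)
                     + F.softmax(logits_pq, 1)) / 3.0                          # teacher
            l_ens = F.binary_cross_entropy(p_ens, target)                      # BCE on P_ens
            l_dis = F.kl_div(F.log_softmax(logits, 1), p_ens,
                             reduction="batchmean")                            # Eq. (3)
            loss = l_vqa + l_ens + l_dis
            if epoch > tau:                        # negative samples enter at epoch tau + 1
                perm = torch.randperm(v.size(0))
                logits_ni = model(v[perm], q)      # mismatched image
                logits_nq = model(v, q[perm])      # mismatched question
                gt = gt_idx.unsqueeze(1)
                s_ni = F.softmax(logits_ni, 1).gather(1, gt).squeeze(1)
                s_nq = F.softmax(logits_nq, 1).gather(1, gt).squeeze(1)
                l_neg = (s_ni + s_nq).mean()                                   # Eq. (4)
                s_pi = F.softmax(logits_pi, 1).gather(1, gt).squeeze(1)
                s_pq = F.softmax(logits_pq, 1).gather(1, gt).squeeze(1)
                # One plausible focal-style weighting: the bias score of the mismatched
                # pair scales the penalty on the matched pair (assumed form).
                l_weight = (-(s_ni.detach()) * torch.log(s_pi + 1e-8)
                            - (s_nq.detach()) * torch.log(s_pq + 1e-8)).mean()
                loss = loss + l_neg + lam * l_weight
            opt.zero_grad()
            loss.backward()
            opt.step()
```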
More Ablation Studies

Effect of the training strategy. As shown in Algorithm 1, we adopt the knowledge distillation loss L_dis and the ensemble binary cross-entropy loss L_ens to train the base model for 12 epochs. To demonstrate the effectiveness of this training strategy, we conduct experiments with different training strategies on the VQA-CP v2 dataset, and the results are shown in Table 7. From the results, the strategy that trains the model with the KL loss over the whole training process performs worse than training with it for 12 epochs. We infer that the KL loss may accelerate the fitting of the training dataset, which hinders the training process of the negative-sample losses L_neg and L_weight.

The objective of the KL loss is to bring two distributions close together. In our method, we seek to make the predictions of the original samples approach the ensemble predictions. Thanks to the generated high-quality positive samples, the KL loss can, to some extent, improve the robustness of the VQA models (refer to lines 1-2 of Table 7). However, if we adopt the KL loss over the whole training process, the VQA models fit the data distribution excessively, which may hinder the training process of the negative-sample losses. The experimental results in Table 7 also confirm this. For example, our DDG with the training strategy that introduces the KL loss over the whole training process still performs better than training using only L_dis and L_ens, but performs worse than training the model with the KL loss for 12 epochs.

Table 8: Comparison with the state-of-the-art data augmentation based methods (e.g., SimpleAug and KDDAug) on the VQA-CP v2 dataset.

As shown in Table 8, the VQA-CP v2 dataset comprises 438k training samples, while KDDAug, SimpleAug, and our DDG generate an additional

More Visualisation Results

Qualitative results. We provide more visualisation results in Figure 4 to present the effectiveness of our DDG. From the results, our DDG localises the target objects more accurately than the UpDn (Anderson et al., 2018) model and the SSL-VQA method, and thus makes more correct predictions than the compared methods. These visualisation results demonstrate the effectiveness of our DDG.

Visualisation of the generated samples. As described in Section 3.2, we generate both positive image and question samples. To evaluate the generation methods, we provide some visualisation results of the augmented questions and the selected target objects in Table 9 and Figure 5, respectively. From the results in Table 9, our augmented questions have similar semantics to the original questions, which demonstrates that our generated questions are reasonable positive samples. Moreover, although the baseline model UpDn (Anderson et al., 2018) trained on the VQA-CP (Agrawal et al., 2018)

Figure 2: Qualitative comparison among UpDn (Anderson et al., 2018), SSL-VQA (Zhu et al., 2020), and our DDG on the VQA-CP v2 test set. For each example, we put the bounding box with the highest attention weight in the image and show the answers with the top-5 predictions. The bold, red answer is the ground-truth answer.
Figure 5: We provide some visualisation results of the augmented images based on our generation technique described in Section 3.2. We put the bounding boxes with the top-3 attention weights of the pre-trained UpDn (Anderson et al., 2018) model in the image.

Table 2: Effect of different scales of the training data of VQA-CP v2 on the model performance. We report the results in terms of Accuracy (%).

Table 4: Effect of different k. We report the experimental results in terms of Accuracy (%).

Table 5: Effect of different λ. We report the experimental results in terms of Accuracy (%).

Table 7: Effect of the training strategy. We report the experimental results in terms of Accuracy (%). "L_dis + L_ens (all epochs)" denotes that we additionally introduce the L_dis and L_ens losses to train the UpDn model over the whole training process. "DDG (KL for all epochs)" means we train the UpDn model with the DDG method, where the KL loss is used over the whole training process. "DDG (KL for 12 epochs)" denotes that we train the UpDn model with the DDG method, where the KL loss is used only in the first 12 epochs.
A Rietveld quantitative XRD phase-analysis of selected compositions of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system

Qualitative XRD phase-analysis of four selected compositions (x = 0.10, 0.20, 0.30, 0.40) of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 (0 < x < 0.50) system was undertaken. The XRD data show the absence of a continuous solid solution. In fact, each composition is composed only of a mixture of the two end members, the Sr0.50SbFe(PO4)3 (R3 space group) and SrSb0.50Fe1.50(PO4)3 (R3c space group) type-phases. The Rietveld refinement method, using the XRPD technique, has been used for a quantitative phase-analysis of these compositions. In order to evaluate the relative errors of this experimental result, a set of standard phase-mixtures of both end compositions of the system was also quantified by the Rietveld method. The obtained results show the usefulness of this method for quantitative phase-analysis, particularly in geology and for other classes of materials such as clays and cements.

INTRODUCTION

The Nasicon-type family has been the subject of intensive research due to its potential applications as solid electrolytes, electrode materials, low thermal expansion ceramics and as storage materials for nuclear waste [1][2][3][4][5][6]. The structure of such materials, with the general formula AxXX′(PO4)3, consists of a three-dimensional network made up of corner-sharing X(X′)O6 octahedra and PO4 tetrahedra, in such a way that each octahedron is surrounded by six tetrahedra and each tetrahedron is connected to four octahedra. Within the Nasicon framework there are interconnected interstitial sites, usually labelled M1 (one per formula unit) and M2 (three per formula unit), through which the A cation can diffuse, giving rise to fast-ion conductivity [1,3,7] (Fig. 1). The four such sites per formula unit can be represented by the crystallographic [M2]3[M1]XX′(PO4)3 formula. Each M1 cavity is situated between two X(X′)O6 octahedra along the c-axis. Six M2 cavities with eightfold coordination are located between the [O3X(X′)O3 M1 O3X(X′)O3 O3X(X′)O3]∞ ribbons and surround the M1 cavity (Fig. 1). Recently, the structural characteristics of the A0.50SbFe(PO4)3 (R3 space group) and ASb0.50Fe1.50(PO4)3 (R3c space group) phases (A = Ca, Sr) were determined by powder X-ray diffraction (XRPD) using the Rietveld method [8,9]. In all A0.50SbFe(PO4)3 samples, the A2+ cations occupy principally one-half of the M1 3a sites and the Sb5+ and Fe3+ cations are orderly distributed within the SbFe(PO4)3 framework. In the ASb0.50Fe1.50(PO4)3 phases, the A2+ cations are located in the M1 sites whereas the Sb5+ and Fe3+ cations are statistically distributed within the Nasicon framework. In the Ca(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 (0 < x < 0.50) system, a qualitative phase-analysis of the XRPD patterns has clearly shown the absence of a continuous solid solution [8]. In fact, the XRD spectra of selected compositions of this system are composed principally of the two end members, Ca0.50SbFe(PO4)3 (R3 space group) and CaSb0.50Fe1.50(PO4)3 (R3c space group). In a continuation of our research concerning Nasicon-like structures, in this paper we report the results of the structural characterisation of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system. As will be shown in the following, this system is principally composed of a mixture of the two end members, the Sr0.50SbFe(PO4)3 and SrSb0.50Fe1.50(PO4)3 type-phases (abbreviated as [Sr0.5] and [Sr1], respectively). To our knowledge, no investigation of XRD
Rietveld quantitative phase-analysis (RQXRD) within such a biphasic system has been reported. Herein, the Rietveld refinement method was used as a tool for a quantitative phase-analysis of the four selected compositions (x = 0.10, 0.20, 0.30, 0.40) of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system.

RESULTS AND DISCUSSION

If one wants to quantify the components of a complex mixture of solids, one of the resources that may be used is Rietveld quantitative X-ray diffraction (RQXRD). Application of the Rietveld method to quantitative phase-analysis provides many advantages over traditional methods that utilise a small pre-selected set of integrated intensities. The power of this method for quantitative phase-analysis is illustrated by a large number of papers on many different classes of materials (e.g., minerals, clays, cements, ...) [11][12][13][14][15][16][17][18]. In fact, even though the Rietveld method offers superior performance over other methods such as the RIR (Reference Intensity Ratio) method [14], a recent report on the IUCr Round Robin study on quantitative phase-analysis indicates that accurate results may not always be obtained [19,20]. These latest studies conclude that it is of the utmost importance that the preparation of the sample and specimen be appropriate for the task at hand. The second important point is the data collection, including the choice of an appropriate wavelength.

The experimental and calculated XRD profiles of both end members, Sr0.50SbFe(PO4)3, abbreviated [Sr0.5], and SrSb0.50Fe1.50(PO4)3, abbreviated [Sr1], are shown in Figure 2. Since the positions of the main diffraction lines of the two XRD spectra of [Sr0.5] and [Sr1] are close, for the four selected compositions of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system we show only the representative area of the spectrum between 10° and 50° (2θ) (Fig. 3).

The composition x = 0.40 is isostructural to the [Sr1] phase but, unexpectedly, the XRPD patterns show that for compositions with 0 ≤ x < 0.40 only a biphasic system composed principally of the two end-member type-phases, [Sr0.5] and [Sr1], exists (Fig. 3). It should be noticed that similar results were already obtained in the case of the Ca(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 (0 ≤ x ≤ 0.50) system [8]. In order to clarify this assumption, a quantitative phase-analysis using the Rietveld method (RQXRD) of selected compositions of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system has been undertaken. In all cases, Rietveld refinement was carried out using a two-phase model consisting of [Sr0.5] and [Sr1]. Initial starting parameters for the Rietveld refinements of the four selected compositions of the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system were based upon the crystallographic results mentioned above [9]. During the course of refinement, all variable parameters, such as atomic coordinates and cell parameters, were allowed to vary. A relatively slight variation of the crystallographic parameters was observed and the obtained reliability factors were acceptable. The molar phase fractions of the [Sr0.5] and [Sr1] type-phases obtained from the RQXRD analysis are indicated in Figure 3 for each composition.
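For readers unfamiliar with how refined Rietveld scale factors are converted into phase fractions, the short sketch below applies the standard Hill-Howard relation W_p = S_p(ZMV)_p / Σ_i S_i(ZMV)_i; molar fractions then follow by dividing each weight fraction by the formula mass and renormalising. The numerical values in the example are purely hypothetical and are not taken from this work.

```python
def rietveld_weight_fractions(phases):
    """Relative weight fractions from refined Rietveld scale factors (Hill-Howard relation).
    `phases` maps a phase label to (S, Z, M, V): scale factor, formula units per cell,
    formula mass (g/mol) and unit-cell volume (A^3)."""
    szmv = {name: s * z * m * v for name, (s, z, m, v) in phases.items()}
    total = sum(szmv.values())
    return {name: val / total for name, val in szmv.items()}

def molar_fractions(weight_fractions, formula_masses):
    """Convert weight fractions to molar phase fractions."""
    moles = {name: w / formula_masses[name] for name, w in weight_fractions.items()}
    total = sum(moles.values())
    return {name: n / total for name, n in moles.items()}

# Hypothetical example with made-up scale factors and cell data for the two end members:
mix = {
    "[Sr0.5]": (1.2e-5, 6, 641.0, 1450.0),   # illustrative values only
    "[Sr1]":   (0.8e-5, 6, 655.0, 1460.0),
}
w = rietveld_weight_fractions(mix)
print(w)
print(molar_fractions(w, {"[Sr0.5]": 641.0, "[Sr1]": 655.0}))
```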
It should be noticed that there is no direct relation between the molar phase fractions obtained from RQXRD and the value of the composition x in the Sr(0.5+x)Sb(1−x)Fe(1+x)(PO4)3 system. In fact, the obtained molar phase fractions should depend principally on the preparation route of each composition. Since the present study investigates the accuracy and reliability of the RQXRD analysis at various molar phase fractions of the two end-member type-phases, [Sr0.5] and [Sr1], and in order to demonstrate the power of this method for analysing multiphase samples, a set of standard phase-mixtures of the two materials was also examined. Three standard binary mixtures of [Sr0.5] and [Sr1], with molar fractions of (33% [Sr1], 66% [Sr0.5]) in (a), (50% [Sr1], 50% [Sr0.5]) in (b), and (66% [Sr1], 33% [Sr0.5]) in (c), were prepared and then analysed to assess the precision of the Rietveld quantitative analysis. Note that synthetic materials were chosen for ease of assessment of the results, since the true concentration of each phase within the mixture is known from the molar phase fractions. Results of the RQXRD analysis for the three standard selected mixtures of [Sr0.5] and [Sr1] are gathered in Table 1. Observed and calculated XRPD patterns, in the 10-33° (2θ) range, of the three standard binary mixtures are shown in Figure 4.
Second order topology in a band engineered Chern insulator

The Haldane model is a celebrated tight binding toy model of a Chern insulator in a 2D honeycomb lattice that exhibits quantized Hall conductance in the absence of an external magnetic field. In our work, we deform the bands of the Haldane model smoothly by varying one of its three nearest neighbour hopping amplitudes (t_1), while keeping the other two (t) fixed. This breaks the C_3 symmetry of the Hamiltonian, while the M_x*T symmetry is preserved. The symmetry breaking causes the Dirac cones to shift from the K and the K′ points in the Brillouin zone (BZ) to an intermediate M point. This is evident from the Berry curvature plots, which show a similar shift in the corresponding values as a function of the deformation parameter, namely t_1/t. We observe two different topological phases, of which one is a topological insulator (TI) and the other is a second order topological insulator (SOTI). The Chern number (C) remains perfectly quantized at a value of C = 1 for the TI phase and it goes to zero in the SOTI phase. Furthermore, the evolution of the Wannier charge center (WCC) as the band is smoothly deformed shows a jump in the TI phase, indicating the presence of conducting edge modes. We also study the SOTI phase and diagonalize the real space Hamiltonian on a rhombic supercell, which shows the presence of in-gap zero energy corner modes. The polarization of the system, namely p_x and p_y, is evaluated along the x and the y directions, respectively.
We see that both p_x and p_y are quantized in the SOTI phase owing to the presence of the inversion symmetry of the system. Finally, we establish the SOTI phase as an example of a topological phase with zero Berry curvature and provide an analogy with the two dimensional Su-Schrieffer-Heeger model.

quantized charge transport across the bulk of a system due to adiabatic transformation of the Hamiltonian along a closed loop in the parameter space 28. This, in turn, can be obtained by integrating the Berry phase over the same closed loop. Previous studies on the Haldane model reported a transition from a topological to a trivial phase under a smooth deformation of its bandstructure, brought about by tuning the amplitude of one of the nearest neighbour (NN) hopping parameters (say t_1) while keeping the other two NN parameters (say t) fixed 29. This deformation breaks the C_3 symmetry of the hexagonal lattice. In our work, however, we show that this takes the system from a TI to an HOTI phase, or more specifically a second order topological insulator (SOTI) phase. States localised at the corners of a rhombic supercell of the hexagonal lattice are found when t_1/|t| exceeds a certain critical value. These localized states in dimension d − 2 are symbolic of a higher order topological phase. We evaluate the Chern numbers corresponding to both the TI and the SOTI phase. The Chern number in the SOTI phase is zero, owing to the absence of conducting modes at the edges of the system. Further, we also study the evolution of the Wannier charge center along a closed one dimensional loop in the Brillouin zone, both for the TI and the SOTI phase. The Wannier charge center, which represents the average position of charge in the unit cell of a crystal lattice, bears the same information as is carried by the surface energy bands 30,31. Furthermore, to characterize our SOTI phase we calculate the polarization along the x and y directions, which are bulk topological invariants obtained as integrals of the Berry connection 10,32. Interestingly, we show here that the second neighbour hopping t_2, which breaks time reversal symmetry and is of prime importance in the TI phase, holds no significance in the SOTI phase. This is in contrast to the work done in Ref. 32, where both the nearest neighbour and next nearest neighbour hopping amplitudes are modified. We show that the insignificance of t_2 in the SOTI phase is of prime importance in establishing that the SOTI phase is an example of a topological phase with zero Berry curvature. A study on the topological phases of strained pristine graphene has also been done in Ref. 33.
The paper is structured as follows. In "The Hamiltonian", we discuss the tight binding Hamiltonian for the Haldane model and the deformation of the bandstructure as a function of the ratio of the two NN hopping amplitudes (t_1/t). We define the Chern number as a topological invariant for the TI phase and show that it goes to zero as the system transcends to a higher order topological insulator. We also calculate the Wannier charge center of the system, which is equivalent to calculating the Berry phase along one particular direction (say k_x) of the 2D BZ, and study its evolution over the momentum in the other direction (k_y) for both the TI and the SOTI phases. In "Second order topological phase", we plot the real space eigenvalue spectra of a rhombic supercell of the Haldane model in the SOTI phase and show the presence of zero energy modes that are distinctly separate from the bulk. Further, the probability densities of these states are shown, which confirm that they are localized at two corners of the rhombic supercell. We define the bulk polarization as a topological invariant for the SOTI phase, and show that it is quantized owing to the inversion symmetry of the system. Finally, in "Analysis of the second order topological phase", we analyse the SOTI phase and establish an analogy with the two dimensional Su-Schrieffer-Heeger (SSH) model. We also establish that the second neighbour hopping amplitude t_2 plays no role in the higher order topology of the deformed Haldane model, and hence it can be considered a classic example of a topological phase with zero Berry curvature.

The Hamiltonian

The Haldane model is defined on a honeycomb lattice as represented in Fig. 1. The vectors connecting the nearest neighbours are denoted δ1, δ2 and δ3 (see Fig. 1), where a_0 is the nearest neighbour distance, and the lattice vectors are a1 and a2. In our model, the NN hopping along the direction δ1 is assumed to be t_1, while it is given by t in the directions δ2 and δ3. The time reversal symmetry of the system is broken by the next nearest neighbour (NNN) hopping, which has the form t_2 e^{iφ}, where φ is the phase associated with the hopping. φ is positive (negative) for hopping along the anticlockwise (clockwise) direction. For our purpose, we keep φ fixed at π/2. The hexagonal lattice has two sublattices, denoted by A and B in the figure. The Semenoff mass m is positive (negative) for the sublattice A (B). The NNN vectors, that is, the vectors connecting nearest neighbour A-A (or B-B) atoms, are shown as dashed lines in Fig. 1. The real space Hamiltonian for the Haldane model can be written as

H = t_1 Σ_{⟨i,j⟩ ∥ δ1} c_i† c_j + t Σ_{⟨i,j⟩ ∥ δ2,δ3} c_i† c_j + t_2 Σ_{⟨⟨i,j⟩⟩} e^{iφ_{ij}} c_i† c_j + m Σ_i ε_i c_i† c_i,

where c_i (c_i†) represent annihilation (creation) operators at lattice site i, the NN and NNN sums run over both orientations of each bond with φ_{ij} = ±φ for anticlockwise (clockwise) hopping, and ε_i = +1 (−1) on sublattice A (B). We Fourier transform this Hamiltonian to obtain the tight binding Bloch Hamiltonian for the deformed Haldane model,

H(k) = h_0(k) σ_0 + h_x(k) σ_x + h_y(k) σ_y + h_z(k) σ_z,

where σ_x, σ_y and σ_z denote the Pauli matrices. The bond in the direction δ1 is referred to as the strong bond, while the bonds in the other two directions are weak when compared with the former. The Hamiltonian here has been written in the sublattice basis (|c_k^A⟩, |c_k^B⟩). It is to be noted that in the original Haldane model the hopping is uniform along all three nearest neighbour vectors, that is, t_1 = t. Furthermore, the deformed Haldane model breaks the C_3 symmetry which was otherwise present in the original counterpart. It however
preserves the product of M_x and T, where M_x represents the mirror symmetry about x = 0 and T is the time reversal operator. The action of the M_x T symmetry on the Hamiltonian H(k) combines the mirror operation with complex conjugation; for the Haldane Hamiltonian, the M_x T symmetry is represented by σ_0 K, where K denotes complex conjugation and σ_0 the identity. The action of the operator σ_0 is understood as follows. For an infinite lattice, the action of M_x does not change the sublattice structure and resembles the identity. The entire operation of M_x T hence boils down to complex conjugation. Physically, the M_x operator changes the direction of the second neighbour hopping. The amplitude of the second neighbour hopping being direction dependent, this changes the physical structure of the Hamiltonian and causes the clockwise hopping to carry a positive phase. However, time reversal changes the sign of the phase and re-establishes the previous structure of the Hamiltonian.

We first discuss the bandstructure of the deformed Haldane model as a function of the quantity ξ = t_1/t. The system shows two Dirac nodes, at the K (−2π/(3√3 a_0), 2π/(3a_0)) and K′ points in the BZ, at ξ = 1 when both t_2 and m are zero. Spectral gaps open up at these points when t_2 is rendered finite. For our purpose, we keep the value of t_2 fixed at 0.1t and the phase φ at π/2. The Semenoff mass m is also fixed at zero. To aid our understanding, the BZ of the honeycomb lattice and the path in reciprocal space used for the construction of the bandstructure are shown in Fig. 2. As the value of ξ is increased from 1, the gaps in the bandstructure diminish and shift away from the K and K′ points, as shown in Fig. 3. As t_1 becomes equal to 2t (ξ = 2), the gap closes at the M point (0, 2π/(3a_0)) in the BZ even when the time reversal symmetry remains broken. On increasing ξ further, the gap reopens. As reported in previous studies, this gap closing was considered to induce a transition of the system from a TI to a trivial phase 34. We however show that beyond ξ = 2 the system enters a SOTI phase.

Next, we study the bandstructure of a zig-zag semi-infinite ribbon-like configuration of the model with periodic boundary conditions (PBC) along the direction â1 and open boundary conditions (OBC) along the direction â2, to show the presence of edge modes connecting the conduction and valence bands in the TI phase (Fig. 4). The width of the ribbon along the y direction is given by a_0(3N/2 + 1), where N denotes the number of unit cells in the y direction. We have chosen N equal to 127. It can be observed that for ξ < 2 the system shows the presence of modes that traverse the bulk energy gap (Fig. 4a,b). Exactly at ξ = 2 (Fig. 4c), the gap closes (the tiny gap seen in the figure at k_x = 0 will vanish for large longitudinal system dimensions). For ξ > 2, the bandstructure still shows the presence of in-gap modes. However, they are completely detached from the bulk and carry no topological significance (Fig. 4d). These modes correspond to counterpropagating edge states that cancel each other at each boundary of the ribbon and give rise to a trivial insulating phase.
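A minimal numerical sketch of this gap closing is given below. The nearest-neighbour vector convention (δ1 along +y) and the explicit form of the t_2 term are assumptions, since the paper's explicit vectors and Bloch Hamiltonian are not recoverable from the text; the qualitative behaviour (gap at M closing at ξ = 2) does not depend on these choices.

```python
import numpy as np

# Assumed conventions: NN vectors with delta1 along +y; NNN vectors as their differences.
a0 = 1.0
d1 = np.array([0.0, a0])
d2 = np.array([-np.sqrt(3) / 2 * a0, -a0 / 2])
d3 = np.array([np.sqrt(3) / 2 * a0, -a0 / 2])
nnn = [d2 - d3, d3 - d1, d1 - d2]                 # second-neighbour vectors

def bloch_hamiltonian(k, t1, t=1.0, t2=0.1, phi=np.pi / 2, m=0.0):
    """2x2 Bloch Hamiltonian of the deformed Haldane model in the (A, B) sublattice basis."""
    f = t1 * np.exp(1j * k @ d1) + t * (np.exp(1j * k @ d2) + np.exp(1j * k @ d3))
    g = 2 * t2 * np.cos(phi) * sum(np.cos(k @ v) for v in nnn)   # identity part
    h = 2 * t2 * np.sin(phi) * sum(np.sin(k @ v) for v in nnn)   # sigma_z (Haldane) part
    return np.array([[m + g + h, f],
                     [np.conj(f), -m + g - h]])

def band_gap_at_M(t1):
    """Direct gap at the M point (0, 2*pi/(3*a0)); it closes at t1 = 2t."""
    kM = np.array([0.0, 2 * np.pi / (3 * a0)])
    evals = np.linalg.eigvalsh(bloch_hamiltonian(kM, t1))
    return evals[1] - evals[0]

for xi in (1.0, 1.5, 2.0, 2.5):
    print(xi, round(band_gap_at_M(xi), 4))        # gap ~ |2t - t1|, zero at xi = 2
```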
To characterize this topological insulator phase, we evaluate the Chern number, which is a topological invariant obtained by integrating the Berry connection over a closed loop in the BZ. It is directly related to the conductivity of the edge modes. To calculate the Chern number, we first calculate the Berry curvature of the system. The Berry curvature of the n-th band is given as 35,36

Ω_n(k) = −2 Im Σ_{m≠n} [⟨u_n(k)|∂_{k_x}H|u_m(k)⟩⟨u_m(k)|∂_{k_y}H|u_n(k)⟩] / (E_m(k) − E_n(k))².

The Chern number is then calculated using

C = (1/2π) ∫_BZ Ω(k) d²k.

We first plot the Berry curvature of the system over the entire BZ for different values of ξ (Fig. 5). A non-zero Berry curvature is a direct consequence of the presence of Dirac nodes in the system. Our plot shows that at ξ = 1 the Berry curvature is concentrated around the K and K′ points in the BZ (Fig. 5a). As ξ is increased, the Berry curvature slowly shifts towards the M point (Fig. 5b-d). This is because, with the increase in ξ, the C_3 symmetry which kept the Dirac nodes pinned to the K and K′ points is broken. We integrate this Berry curvature over the BZ to obtain the Chern number C. We plot C as a function of ξ in Fig. 6. It can be clearly seen that C is perfectly quantized at 1 when ξ < 2. Furthermore, for ξ > 2, C goes to 0.

Now, we look at the evolution of the Wannier charge center (WCC) over a closed loop in the BZ (Fig. 7) and study its behaviour as a function of ξ. For a 2D system, this is equivalent to calculating the Berry phase along one direction (say k_x) and evolving it as a function of the momentum in the other direction (say k_y) 30,37. This plot bears information about the bulk topology. In our case, we calculate the WCC along the x direction and look at its evolution as a function of k_y. The mathematical expression for the WCC (or equivalently the Berry phase) is given as 38

x̄_n(k_y) = (i/2π) ∮ dk_x ⟨u_n(k)|∂_{k_x}|u_n(k)⟩,

where n is the band index. In the TI phase, we see the WCC winding non-trivially as k_y is increased from 0 to 2π (Fig. 7a,b). This indicates a shift in the charge center of the reference unit cell over a closed loop in k_y. Charge conservation in this case demands the presence of edge states that traverse the bulk energy gap and transfer an electron from the valence to the conduction band.

Figure 6: The Chern number is perfectly quantized at 1 for ξ < 2, which is the TI phase. For ξ > 2, the Chern number goes to 0. This indicates the vanishing of the edge modes in the system, which is evident from the bandstructure plots (Fig. 4) as well. The parameters t, t_2, φ and m are fixed at 1, 0.1, π/2 and 0, respectively.

We also plot the behaviour of the WCC in the region ξ > 2. It can be clearly seen that the WCC shows no winding here and oscillates about zero. This hints at a topologically trivial phase.
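The quantization of C can be checked with a standard lattice (Fukui-Hatsugai-Suzuki) calculation. The sketch below uses the same assumed lattice convention as above, rewritten in a periodic gauge (sublattice positions absorbed into the unit cell) so that the Bloch states can be identified across Brillouin-zone boundaries; it is an independent illustration, not the authors' code.

```python
import numpy as np

a0 = 1.0
a1 = a0 * np.array([np.sqrt(3) / 2, 1.5])                   # lattice vectors (assumed convention)
a2 = a0 * np.array([-np.sqrt(3) / 2, 1.5])
b1 = 2 * np.pi / (3 * a0) * np.array([np.sqrt(3), 1.0])     # reciprocal lattice vectors
b2 = 2 * np.pi / (3 * a0) * np.array([-np.sqrt(3), 1.0])
nnn = [a2 - a1, -a2, a1]                                    # second-neighbour (lattice) vectors

def lower_band_state(k, t1, t=1.0, t2=0.1, phi=np.pi / 2, m=0.0):
    # Periodic gauge: H(k + b) = H(k) exactly, so states on the grid can be wrapped around.
    f = t1 + t * (np.exp(1j * k @ a1) + np.exp(1j * k @ a2))
    hz = m + 2 * t2 * np.sin(phi) * sum(np.sin(k @ v) for v in nnn)
    H = np.array([[hz, f], [np.conj(f), -hz]])
    return np.linalg.eigh(H)[1][:, 0]

def chern_number(t1, N=60):
    """Lattice (Fukui-Hatsugai-Suzuki) Chern number of the occupied band."""
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            u[i, j] = lower_band_state((i / N) * b1 + (j / N) * b2, t1)
    total = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            plaquette = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                         np.vdot(u3, u4) * np.vdot(u4, u1))
            total += np.angle(plaquette)
    return total / (2 * np.pi)

for xi in (1.0, 1.5, 2.5):
    print(xi, round(chern_number(xi), 3))   # magnitude 1 for xi < 2, 0 for xi > 2
```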
Second order topological phase

We now focus on the region where ξ > 2. The Chern number and the WCC both hint at this region being topologically trivial. We, however, take a suitably formed rhombic supercell (Fig. 8) and diagonalize the real space Hamiltonian for this structure. The energy spectrum of the real space Hamiltonian for ξ < 2 does not show any specific feature with regard to higher order topology (or, more specifically, second order topology). However, beyond the TI phase (ξ ≥ 2), we see the presence of modes pinned at zero energy and distinctly separated from the bulk (Fig. 9). We further plot the probability density of these states, which are found to be localized at two corners of the rhombus (Fig. 10). We had kept the value of the Semenoff mass m fixed at 0, which keeps the inversion symmetry of the system intact. On increasing the Semenoff mass, the inversion symmetry of the system gets broken. The degeneracy of the states is lifted and the states drift away from zero energy. They are, however, no longer cleanly separated from the bulk. We also plot the fractional charge accumulation of the system at half filling (Fig. 11). It can be seen that the corners host a fractional charge density which tends to 0.5 on increasing ξ. The charge accumulation is also more precise for larger system sizes.

Figure 11: Fractional charge accumulation of 0.5 at the corners of the system for ξ = 2.5 is shown. The accumulation is more noticeable for higher values of ξ.

The scenario corresponding to a rectangular supercell with armchair edges has also been considered by us. The supercell has alternating zig-zag and armchair edges, with the localization occurring at sites that are connected to the bulk of the system through weak bonds. We find a series of states around zero energy which are well separated from the bulk and localized at the "weak" sites along the zig-zag edges. However, this does not denote a conventional HOTI phase and hence we skip further discussion of it.

To characterize the corner states of the rhombic supercell, we calculate the bulk polarization, which is equivalent to the position of charge in a unit cell. In a crystal lattice it is difficult to unambiguously define a well defined unit cell that hosts a periodic distribution of charge. Such a definition is however important for the evaluation of polarization under the Clausius-Mossotti picture 39. Hence, in bulk solids, we rely on the Wannier functions for the calculation of the bulk polarization. Wannier functions are localized wave functions formed by a suitable superposition of the delocalized Bloch wave functions. The expectation value of the position operator with respect to these Wannier functions is known as the Wannier charge center and is used to define the polarization of a crystalline system. The WCC, and hence the polarization in a direction α, is given as 10,40

p_α = ⟨w_n| r_α |w_n⟩ = (1/S) ∫_BZ d²k ⟨u_n(k)| i ∂_{k_α} |u_n(k)⟩,

where |w_n⟩ is the Wannier function corresponding to the n-th band. Here we have taken the electronic charge e to be 1. S is the area of the 2D BZ of the honeycomb lattice, which in this case is equal to 8π²/(3√3 a_0²). It can be seen that the polarization is directly related to the Berry connection of the system. Owing to the gauge dependence of the Berry connection, the polarization is defined modulo a lattice vector. We calculate p_{α=x,y} as a function of ξ for our deformed Haldane model (Fig. 12). p_y is found to be perfectly quantized at a_0/2 for ξ > 2. For ξ < 2 (the TI phase), the value of p_y is not quantized. For the polarization along the x direction, we find that it is uniformly zero both in the TI and the SOTI phase. The quantized value of p_y at a_0/2 indicates a discrepancy between the
positions of the WCC and the lattice site. This indicates a non-trivial bulk topology. The quantization of the polarization p_y is due to the presence of inversion symmetry in the system. Inversion symmetry (which is only present when m = 0) allows an interchange of the A and B sublattices while keeping the center of the strong bond invariant. This causes the center of charge to be localized at the center of the strong bond and results in p_y being quantized. The presence of the M_x * T symmetry, which maps the corners of the rhombus (at which the localization occurs) onto themselves, further augments the robustness of the higher order topological states.

Analysis of the second order topological phase

It is interesting to note that while the second neighbour hopping t_2 plays an important role in deciding the fate of the topology in the TI phase (ξ < 2), this is not the case for the SOTI phase (ξ > 2). The bandstructure corresponding to the TI phase shows a distinct gap opening transition in going from t_2 = 0 to t_2 ≠ 0. The bandstructures in the SOTI phase show no such transition, indicating that the Hamiltonian with t_2 = 0 is adiabatically connected to the Hamiltonian with t_2 ≠ 0 in the SOTI phase (Fig. 13). The real space eigenvalue spectra for the rhombic supercell also show that the SOTI phase is unaffected by the presence of t_2 (Fig. 14). This allows us to put t_2 = 0 and hence make the system time reversal symmetric. The Hamiltonian (Eq. 2) can now be written as

H(k) = [0, G(k); G*(k), 0],

where G(k) collects the three nearest neighbour hopping phases.

Figure 14: The real space eigenvalue spectra of the Hamiltonian for the rhombic supercell for (a) t_2 = 0 and (b) t_2 = 0.1. Evidently, the SOTI phase remains unaffected in the presence of the second neighbour hopping. Other parameters are kept the same as previously mentioned.

The eigenvalues of this Hamiltonian are ±|G| and the ground state eigenvector is given by |ψ_g⟩ = (−e^{iθ}, 1)/√2, where θ = −i ln(G/|G|). The expression for the polarization thus becomes

p_α = −(1/(2S)) ∫_BZ d²k ∂_{k_α} θ(k).

Now, following the argument that the topological invariant does not change unless there is a gap closing transition, we explicitly calculate the polarization for the special situation t = 0, such that t_1 remains greater than 2t. Thus,

H(k) = [0, t_1 e^{−i k_y a_0}; t_1 e^{i k_y a_0}, 0].

The ground state for this Hamiltonian is given by |ψ_g⟩ = (−e^{−i k_y a_0}, 1)/√2. Evidently, the polarization vector assumes the value P = (p_x, p_y) = (0, a_0/2). It is interesting to note that the topology of the SOTI phase bears an analogy with the topology of the Su-Schrieffer-Heeger (SSH) model. Beyond ξ = 2, the rhombic supercell behaves like a two dimensional SSH model, where the sites boxed in Fig. 15 act as a single chain and the ones parallel to these constitute sites in the other dimension. There is one subtle difference from the traditional one dimensional SSH model, where a trivial to topological transition occurs at ξ = 1. In this case, however, the transition from a TI (trivial SOTI) to an SOTI phase occurs at ξ = 2. The sites marked as A and B as shown in Fig. 15 are connected to the bulk of the system by weak bonds, causing the zero energy states to localize there.
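The limiting case worked out above (t_2 = 0, m = 0, t = 0) can be verified with a short Wilson-loop (Zak-phase) calculation using the ground state quoted in the text; the discretisation below is only a sketch of that check, with k_x held fixed along the loop.

```python
import numpy as np

a0 = 1.0

def ground_state(ky):
    """Lower-band eigenvector quoted in the text for the t = 0, t2 = 0 limit:
    |psi_g> = (-exp(-i*ky*a0), 1)/sqrt(2)."""
    return np.array([-np.exp(-1j * ky * a0), 1.0]) / np.sqrt(2)

def wcc_y(N=1000):
    """Discrete Zak/Berry phase of the occupied band along a closed ky loop,
    converted to a Wannier charge center in units of the period a0."""
    kys = np.linspace(0.0, 2 * np.pi / a0, N, endpoint=False)
    states = [ground_state(ky) for ky in kys]
    states.append(states[0])                     # the gauge is periodic, so the loop closes exactly
    overlap = 1.0 + 0.0j
    for u1, u2 in zip(states[:-1], states[1:]):
        overlap *= np.vdot(u1, u2)
    zak = -np.angle(overlap)
    return (zak / (2 * np.pi)) % 1.0

print(wcc_y())    # -> 0.5, i.e. p_y = a0/2: charge centered on the strong delta_1 bond
```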
Further, this second order topological phase is an example of a topological phase with zero Berry curvature. The absence (presence) of time reversal symmetry, that is, a non-zero t_2 (t_2 = 0), is not crucial for the SOTI phase, as is evident from Fig. 13c,d, owing to the adiabatic connection of the Hamiltonians. Therefore, we may assume time reversal symmetry to be present in the system. The combined action of time reversal symmetry (TRS) and inversion symmetry (IS) forces the Berry curvature to vanish uniformly across the Brillouin zone 3. However, the Berry connection is non-zero, yielding the signature of a topological phase with zero Berry curvature.

Conclusion

In this work, we have studied a band deformed Haldane model obtained by smoothly tuning one of the three nearest neighbour hopping amplitudes of a honeycomb lattice. This band deformation breaks the C_3 symmetry of the original Haldane model and causes the Dirac nodes at the K and K′ points in the BZ to shift towards an intermediate M point as t_1 (the nearest neighbour hopping amplitude along δ1) approaches 2t (where t is the nearest neighbour hopping amplitude along δ2 and δ3). We show that this causes a corresponding shift in the concentration of the Berry curvature in the BZ. We calculate the Chern number and plot it as a function of ξ (= t_1/t) to show that it has a value of 1 for ξ < 2, while it becomes zero beyond that. We establish that the region ξ > 2 is a higher order topological phase, contrary to earlier works which reported this phase as trivial owing to the absence of edge states in the system. We further calculate the bulk polarization (p_{α=x,y}) for this model, which serves as a topological invariant. p_y acquires a constant value of a_0/2 while p_x remains zero throughout the SOTI phase. This quantization is a consequence of the inversion symmetry, which remains intact in the system as long as the Semenoff mass is zero. Finally, we show that the SOTI phase bears an analogy with the two dimensional SSH model and establish that the presence or absence of time reversal symmetry is inconsequential to the SOTI phase, making it a classic case of a topological phase with zero Berry curvature.

Figure 1: Schematic diagram of the honeycomb lattice. The two sublattices are denoted as A and B. The nearest neighbour vector directions are shown as δ1, δ2 and δ3. The hopping along the direction δ1 is given by t_1 (shown in green), while it is given by t in the other two directions (shown in black). The second neighbour hoppings are given by dashed lines and have an amplitude t_2.

Figure 2: Brillouin zone for the honeycomb lattice with the path (K→M→K′) shown along which the bandstructure is calculated.
Figure 3: The bandstructure of the deformed Haldane model is plotted as a function of ξ = t_1/t along the Γ→K→M→K′→Γ path in the BZ of the honeycomb lattice for (a) ξ = 1, (b) ξ = 1.5, (c) ξ = 2.0, (d) ξ = 2.2. We observe that with increasing ξ the band minima slowly shift from the K and K′ points towards the M point of the BZ. At ξ = 2 the band gap vanishes at the M point, indicating a possible topological phase transition. Beyond this point, at ξ > 2, the band gap reopens. The parameters t, t_2, φ and m have been fixed at 1, 0.1, π/2 and 0.

Figure 4: The bandstructure for a zig-zag ribbon-like configuration of the Haldane model is shown. For ξ < 2, in (a,b), we see edge modes crossing the bulk gap. The system is in the TI phase in this region. (c) At ξ = 2, the bandstructure undergoes a gap closing transition. The closure is not perfect due to the limited number of lattice sites taken along â2. (d) Beyond ξ = 2 we still see the presence of in-gap modes. However, they do not traverse the bulk gap and can easily be removed by an adiabatic deformation of the Hamiltonian. We have kept the values of the parameters t, t_2, φ and m fixed at 1, 0.1, π/2 and 0. The number of unit cells along â2 (see text) is 127.

Figure 5: The Berry curvature for different values of ξ = t_1/t is shown over the Brillouin zone of the honeycomb lattice. For (a) ξ = 1, the Berry curvature is confined near the K and K′ points of the BZ. With increasing values of ξ in (b-d), the region of high concentration shifts towards the M point. This indicates a shift in the point of minimum band gap with increasing ξ. The values of the parameters t, t_2, φ and m are fixed at 1, 0.1, π/2 and 0.

Figure 7: The Wannier charge center (WCC) along the x-direction has been plotted as a function of the momentum along the y-direction, namely k_y. (a,b) TI phase, where a non-trivial winding of the WCC is noted as k_y traverses a closed loop in the BZ. This is equivalent to having a non-trivial Chern number and hence a non-trivial bulk. The jump in the WCC indicates a shift in the average position of the electrons in a unit cell. (c,d) The system here is no longer in the TI phase. The WCC trivially oscillates about zero as a function of k_y. We have taken t, t_2 and φ as 1, 0.1 and π/2, respectively. The Semenoff mass m is taken to be close to zero; specifically, we have taken m = 0.00001.

Figure 9: The real space Hamiltonian of the band deformed Haldane model has been diagonalized on the rhombic supercell. The resulting eigenvalues for (a) ξ = 2.2 and (b) ξ = 2.5 are shown when the Semenoff mass m = 0. The plots show the presence of zero energy eigenvalues which are distinctly separated from the bulk. It is to be noted that the system is no longer in the TI phase for both plots. In (c,d), the modes that were initially at zero energy shift in the presence of a non-zero Semenoff mass. Again, the parameters t, t_2 and φ are fixed at 1, 0.1 and π/2, respectively.

Figure 10: The probability density plots for the states at zero energy corresponding to the parameter values (a) ξ = 2.2 and (b) ξ = 2.5. The system is in the SOTI phase and demonstrates the presence of corner modes, shown by deep blue colour. The other relevant parameters are kept the same as mentioned in Fig. 9. The Semenoff mass is kept at 0.
Figure 12. The polarization px, py as a function of ξ has been shown. (a) px vanishes uniformly both in the TI and the SOTI phase. (b) py is quantized in the SOTI phase at a0/2, which is the center of the strong bond. The value of the Semenoff mass is kept infinitesimal (specifically m = 0.00001). Other parameters are fixed as previously mentioned.

Figure 13. Band gaps calculated for the fermionic dispersion in the TI phase and the SOTI phase are shown. (a) Here ξ = 1.5 and t2 = 0. This shows that there are Dirac nodes at the K and K′ points in the BZ. (b) Here ξ = 1.5 and t2 = 0.1. Clearly the gap has now opened. In (c,d) the system is in the SOTI phase. Here ξ = 2.5 and t2 = 0 and 0.1, respectively. Clearly, no topological phase transition occurs. Thus the two Hamiltonians corresponding to these parameters can be adiabatically connected. Here t, φ and m are fixed at 1, π/2 and 0, respectively.

Figure 15. The rhombic supercell in the SOTI phase bears an analogy with the two-dimensional SSH model. (a) Two of the edges (upper and lower) hosting an uncompensated weak bond at one end are marked. (b) The upper edge (marked) of the rhombus is shown. The dominant bonds with hopping amplitude t1 are shown by a solid line and the weak bonds are shown by dashed lines. This edge ends at a site A, which is connected via weak bonds to the rest of the system, making it a host to the localized higher-order topological state.
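The Chern-number calculation summarized in the conclusion is straightforward to reproduce numerically. Below is a minimal sketch that builds a two-band Bloch Hamiltonian for the band-deformed Haldane model and evaluates the Chern number of the lower band with the Fukui-Hatsugai-Suzuki lattice method. The vector conventions, the sign of the flux φ and all function names are our own assumptions (they can flip the sign of the result); with these choices the magnitude of the computed Chern number should reproduce the |C| = 1 → 0 change across ξ = 2 described above.

```python
import numpy as np

# Deformed Haldane model: nearest-neighbour hopping t1 along delta_1, t along
# delta_2 and delta_3, complex second-neighbour hopping t2*exp(±i*phi), Semenoff mass m.
t, t2, phi, m = 1.0, 0.1, np.pi / 2, 0.0

# Nearest-neighbour vectors (a0 = 1), Bravais vectors and second-neighbour vectors.
d1, d2, d3 = np.array([0.0, 1.0]), np.array([-np.sqrt(3) / 2, -0.5]), np.array([np.sqrt(3) / 2, -0.5])
a1, a2 = d3 - d2, d1 - d2
b = [d2 - d3, d3 - d1, d1 - d2]

# Reciprocal vectors G1, G2 with G_i . a_j = 2*pi*delta_ij.
G = 2 * np.pi * np.linalg.inv(np.array([a1, a2])).T

def bloch_hamiltonian(k, xi):
    t1 = xi * t
    f = t1 * np.exp(1j * k @ d1) + t * (np.exp(1j * k @ d2) + np.exp(1j * k @ d3))
    gA = 2 * t2 * sum(np.cos(k @ bi - phi) for bi in b)
    gB = 2 * t2 * sum(np.cos(k @ bi + phi) for bi in b)
    return np.array([[m + gA, f], [np.conj(f), -m + gB]])

def chern_number(xi, N=60):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki lattice method."""
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            k = (i / N) * G[0] + (j / N) * G[1]
            _, vecs = np.linalg.eigh(bloch_hamiltonian(k, xi))
            u[i, j] = vecs[:, 0]                      # lower-band eigenvector
    total = 0.0
    for i in range(N):
        for j in range(N):
            # Berry flux through one plaquette from the four link overlaps.
            u00, u10 = u[i, j], u[(i + 1) % N, j]
            u11, u01 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            prod = (np.vdot(u00, u10) * np.vdot(u10, u11)
                    * np.vdot(u11, u01) * np.vdot(u01, u00))
            total += np.angle(prod)
    return total / (2 * np.pi)

for xi in (1.0, 1.5, 2.2, 2.5):
    print(f"xi = {xi}: C = {chern_number(xi):+.2f}")
```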
Signal enhancement in supercritical fluid chromatography‐diode‐array detection with multiple injection Abstract To circumvent the detrimental effects of large‐volume injection with fixed‐loop injector in modern supercritical fluid chromatography, the feasibility of performing multiple injection was investigated. By accumulating analytes from a certain number of continual small‐volume injections, compounds can be concentrated on the column head, and this leads to signal enhancement compared with a single injection. The signal to noise enhancement of different compounds appeared to be associated with their retention on different stationary phases and with type of sample diluent. The diethylamine column gave the best signal to noise enhancement when acetonitrile was used as sample diluent and the 2‐picolylamine column showed the best overall performance with water as the sample diluent. The advantage of multiple injection over one‐time large‐volume injection was proven with sulfanilamide, with both acetonitrile and water as sample diluents. The multiple injection approach exhibited comparable within‐ and between‐day precision of retention time and peak area with those of single injections. The potential of the multiple injection approach was demonstrated in the analysis of sulfanilamide‐spiked honey extract and diclofenac‐spiked ground water sample. The limitations of this approach were also discussed. INTRODUCTION As a good complement to traditional reversed-phase liquid chromatography, SFC has drawn more and more attention in the past decade [1]. Compared with reversed-phase liquid chromatography, SFC offers a wider range of stationary phase chemistry selectivities with a lower consumption of organic solvent [2]. Major application fields of SFC include pharmaceutical analysis, biological sample analysis, natural and food product analysis [3][4][5][6]. Revolutionary development can become more concentrated with larger volume of sample injected [11]. However, for a specific sample diluent, the injection volume can only be increased to a certain amount before it brings unacceptable negative effects on chromatography performances [12,13]. The trade-off between detectability and peak resolution by selection of sample diluent and injection volume is especially crucial in SFC [14][15][16][17]. Compared with HPLC, SFC demands special designs of injection. The classical variable-loop autosampler is not appropriate for SFC use as the metering device can be cavitated by the expansion of the mobile phase [18]. Instead, most of the modern SFC systems use fixed-loop injection to isolate the metering device from the loop and rotor. Even though extra work is demanded, different injection volume is possible by installing injection loops of different volumes. However, the injection volume is set to below 10 µL in most analytical SFC applications reported so far [6,19,20], which is the consequence of three major factors: strong solvent effect, viscous fingering, and pressure consideration. The rule-of-thumb of dissolving the sample in a solvent that is as similar as or even weaker than the mobile phase starting composition is clearly not applicable in SFC, since compressed CO 2 is not possible to be used as sample diluent. The sample diluent in SFC often consists of a single or a mixture of organic solvents, which unavoidably leads to a strong solvent effect, especially for poorly retained compounds [14,16]. 
Deformation of the injection plug can also take place due to the large viscosity difference between the injection plug and the mobile phase [21]. Although being among the least suitable SFC sample diluent, water has been proven to offer beneficial effects in several studies, especially when acetonitrile is used to dilute the aqueous sample [22][23][24][25]. One particular obstacle associated with water-containing injection is that it can cause a huge pressure spike right after the injection, possibly due to the pumping of a water plug through the column at high SFC flow rates [26,27]. In order to decrease the negative effects of large volume injection in SFC and strive for higher detectability without sacrificing the injection flexibility, this work evaluated the possibility of performing multiple accumulative injections in modern analytical packed-column SFC using sub-2 µm particles. The multiple injection approach was briefly introduced to enhance detectability in the early days of SFC when capillary columns were used [28]. When the mobile phase starts at a composition that has very limited elution strength, some analytes from multiple injections can be effectively trapped and accumulated on the head of the column. In this study, a number of rapid small-volume injections were made continually with low-elution strength 100% CO 2 as the mobile phase. This was then followed by a final one-time gradient elution with co-solvent and separation of the sample. A schematic elucidation of the approach is shown in Supporting Information Figure S1. The gain in detectability can be measured by the S/N enhancement ratio of the analyte peak. Figure 1 is a simplified drawing that shows how the S/N enhancement ratio is calculated for a multiple injection analysis. In this work, the signal enhancement arising from different number of injections (up to 20) was first investigated on different stationary phases with compounds of a wide variety of physiochemical properties dissolved in both acetonitrile (ACN) and water (Supporting Information Table S1). The multiple injection approach was then applied for the analysis of sulfanilamide in spiked honey extract and diclofenac in spiked ground water as proofs of concept. One important fact that is important to note is that with increasing proportion of methanol in the mobile phase, the critical points of the mobile phase change drastically. Consequently, the separations in the latter period are technically in the liquid state. Even though the change from supercritical to liquid state does not cause significant change in mobile phase properties, as long as the fluid is in a singlephase, the inaccuracy of definition should still be pointed out for better clearance [1]. Reagents Caffeine was purchased from Merck (Darmstadt, Germany). Fluoranthene, ibuprofen, p-hydroxybenzaldehyde, p-coumaric acid, 3-methoxycinnamic acid, ferulic acid, sulfanilamide, and diclofenac were purchased from Sigma Chemical (St. Louis, Missouri, USA). Carbon dioxide of 5.3 purity grade was purchased from Linde (Guildford, UK). Methanol and acetonitrile (HPLC grade) used were purchased from VWR (Radnor, Pennsylvania, United States). Milli-Q water was used in all experiments. The compound concentration in all standard solutions was 50 µg/mL dissolved in both acetonitrile and water (fluoranthene and ibuprofen in water were not studied because of poor solubility). 
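To make the S/N enhancement ratio introduced above (Figure 1) concrete, the following is a minimal sketch of how it could be computed from exported chromatogram traces. The noise definition (standard deviation of a flat baseline segment), the synthetic data and all variable names are illustrative assumptions of ours; the paper itself defines the ratio only graphically in Figure 1.

```python
import numpy as np

def signal_to_noise(trace, peak_window, baseline_window):
    """S/N of one peak: baseline-corrected peak height divided by baseline noise.

    trace           : 1-D array of detector response (e.g., mAU at 280 nm)
    peak_window     : (start, stop) indices bracketing the analyte peak
    baseline_window : (start, stop) indices of a peak-free baseline segment
    Defining noise as the standard deviation of a flat baseline segment is an
    assumption for illustration; other conventions (e.g., peak-to-peak) exist.
    """
    baseline = trace[slice(*baseline_window)]
    peak = trace[slice(*peak_window)]
    return (peak.max() - baseline.mean()) / baseline.std(ddof=1)

def enhancement_ratio(trace_multi, trace_single, peak_window, baseline_window):
    """Ratio of S/N after n accumulated injections to S/N of a single injection."""
    return (signal_to_noise(trace_multi, peak_window, baseline_window)
            / signal_to_noise(trace_single, peak_window, baseline_window))

# Synthetic example: a Gaussian peak that grows with the number of injections.
t = np.linspace(0, 5, 6000)                       # 5 min sampled at 20 Hz
rng = np.random.default_rng(0)
noise = lambda: rng.normal(0.0, 0.05, t.size)
peak = lambda area: area * np.exp(-0.5 * ((t - 3.2) / 0.04) ** 2)

single = peak(1.0) + noise()
multi8 = peak(6.5) + noise()                      # 8 injections rarely give the full 8x gain

print(f"S/N enhancement ratio: "
      f"{enhancement_ratio(multi8, single, (3700, 4000), (500, 1500)):.1f}")
```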
Approximately 3 mL honey purchased from a local market was extracted with 3 mL acetonitrile and spiked with sulfanilamide to 500 ng/mL. Ground water sample was collected from a small lake near the lab, filtrated and then spiked with diclofenac to 1 µg/mL. Apparatus The experiments were conducted on an Agilent Infinite SFC system (Santa Clara, CA, USA) consisting of a SFC controlling module (G4301A), one binary pump (G4302A), one diode array detector (G1315C), an autosampler (G4303A), a degasser (G4225A), and a thermostated column compartment (G1316C). The system was controlled by an Agilent Openlab CDS Chemstation C.01.07 software. Data processing was conducted with Agilent Chemstation software. Five columns of a variety of stationary chemistries were used in the study: Systematic study of effect of multiple injection on different stationary phases All five columns (1-AA, DIOL, DEA, 2-PIC, and octadecyl bonded high strength silica) were studied in this part. The accumulative injections before the final analysis were made with neat CO 2 as mobile phase at a flow rate of 3 mL/min. The column temperature was set at 40 • C and the back-pressure was set at 110 bar. The interval between two injections was as short as permitted by system settings (around 1 min). The mobile phase gradient after the last injection started with neat CO 2 , ramped up to 5% methanol in 0.01 min and to 25% methanol in 4.99 min, after which the mobile phase went back to neat CO 2 for column conditioning. The flow rate of the gradient elution was 3 mL/min. The column temperature was maintained at 40 • C and the back-pressure was set at 110 bar. Both ACN and water solutions of the model compounds selected were injected. Multiple injections consisting of different numbers of single injections (1, 2, 4, 6, 8, 10, 14, and 20) were made with all standard solutions on all columns. Each multiple injection experiment was performed with three replicates. The single injection was done with a 3 µL injection loop. UV signal at 220 (for ibuprofen) and 280 nm (for other compounds) were recorded with a diode-array detector at a sampling frequency of 20 Hz. Within-day precision was assessed by performing six continual multiple injection (1, 4, 8, and 12 times) of 3-methoxycinnamic acid, sulfanilamide, ferulic acid, and p-coumaric acid ACN solutions using the DEA column. Between-day precision was assessed by performing multiple injection (1, 4, 8, and 12 times) of the same solutions with the DEA column in three non-consecutive days. The comparison experiments of one-time large-volume injections were carried out under the same conditions as those of the final analysis in the multiple injection experiments, except that injection loops of different volumes were used. Analysis of sulfanilamide-spiked honey extract and diclofenac-spiked ground water The DEA column and the 2-PIC column were used for the analysis of sulfanilamide in honey extract and diclofenac in spiked ground water, respectively. For both analysis, the accumulative injections before the final analysis were made with 0% MeOH in CO 2 and were terminated immediately after the final accumulative injection. Then the gradient started with neat CO 2 , ramped up to 5% in 0.01 min and to 25% in 4.99 min, held at 25% for 1 min after which the mobile phase went down to neat CO 2 . A 3 µL injection loop was used in both analysis. 
The column temperature was set at 45 and 40 °C, and the back-pressure regulator pressure at 120 and 110 bar, respectively, for the analysis of honey extract and ground water.

Multiple injection using acetonitrile as sample diluent

For analytical SFC, acetonitrile has been proven to be a good polar sample diluent due to its aprotic feature and relatively low viscosity [14]. Thus, ACN was employed as sample diluent to evaluate the performance of the multiple injection technique. In this study, the S/N enhancement of different analyte peaks varied hugely with multiple injection. Hypothetically, if an analyte band does not move or diffuse between two continual injections, the final signal enhancement ratio obtained from multiple injection compared with a single injection should be equal to the number of accumulative injections performed. However, the actual signal enhancement is most likely affected by three major factors: (1) band broadening during the intervals between injections; (2) migration of the bands during the accumulating process; (3) column overloading. There were four different scenarios observed in this work, as demonstrated in Supporting Information Figure S2. Some compounds have poor retention on the column and their bands moved very quickly out of the column even when the mobile phase was 100% CO2 during the injection intervals. Consequently, the peaks for poorly retained compounds were almost of the same size as those obtained with a single injection and the S/N enhancement ratio was around 1 (Scenario 1). For compounds that have relatively poor retention, the final peak appears split or resembles a wide volume-overloaded peak (Scenario 2). This is the worst situation, as there is no enhancement of S/N and the broad peak can interfere with the analysis of closely eluting adjacent peaks. Some compounds had moderate retention on the column, and the final peaks appeared to be symmetric but broadened, with some gain in S/N (Scenario 3). For compounds that are strongly retained, the final peaks presented a relatively high S/N enhancement ratio, with limited peak broadening (Scenario 4). Signal to noise enhancement was assessed for all compounds until the final peak started to show shouldering or splitting after a certain number of accumulated injections. As expected, multiple injection led to different S/N enhancement performances depending on the properties of the analyte and the stationary phase. Table 1 and Figure 2A show the S/N enhancement ratios of the studied analytes when the 2-PIC column was used. In general, the longer the compound was retained, the better the S/N was enhanced. This mainly results from less analyte band migration during the injection intervals, as the compounds can be accumulated in an overall narrower band. However, p-coumaric acid exhibited the best S/N enhancement as compared to sulfanilamide, although the latter had the longest retention time. This might be the result of faster migration of sulfanilamide bands under 100% CO2 mobile phase flow. As the mobile phase during the accumulation process did not contain any protic co-solvent, most of the sulfanilamide molecules remained in neutral form, which could minimize the ionic interactions between the molecule and the stationary phase. In contrast, p-coumaric acid could interact strongly with the amine moieties on the 2-PIC stationary phase regardless of the composition of the mobile phase.
Although both ibuprofen and 4-hydroxybenzaldehyde showed poor S/N enhancement with multiple injection, the final peak of ibuprofen did not start to display shouldering until six injections had been accumulated. In contrast, the 4-hydroxybenzaldehyde peak already split after two accumulated injections, even though 4-hydroxybenzaldehyde had a slightly longer retention time than ibuprofen. It can also be observed that the enhancement curve for some of the more strongly retained compounds started to show a tendency to level out after a certain number of injections. This is the consequence of mass overloading as more and more analytes are accumulated on the column head. With the active sites at the very beginning of the column occupied by the accumulated compounds, the later-eluting compounds were flooded forward to bind to free active sites. Consequently, the molecules spread more widely in the column with a shortened final retention time, which can be clearly observed in Scenario 4 in Supporting Information Figure S2.

Besides the properties of the compounds, the stationary phase chemistry also plays an important role in the degree of enhancement of the S/N. As can be seen in Figure 2, S/N enhancement varies greatly on different columns. As discussed above, S/N enhancement by multiple injection generally works better for compounds that elute relatively late. The DEA column provided the best overall S/N enhancement. The S/N values increased around 10 times compared with a single injection for sulfanilamide, ferulic acid, 3-methoxycinnamic acid, and p-coumaric acid after 14 injections had been accumulated.

FIGURE 2 Plots of S/N enhancement ratio vs. number of accumulated injections with ACN as sample diluent (the retention time of each compound is written beside the compound name)

In contrast, the 1-AA and DIOL columns did not retain the compounds as strongly as the DEA column. Consequently, the S/N enhancement performances were worse for early-eluting compounds, i.e., peaks already split after very few accumulated injections. To further confirm the dependence of S/N enhancement on analyte retention, the retention times of all 8 compounds studied on the four Torus columns (DEA, 2-PIC, 1-AA, and DIOL) are displayed in Supporting Information Figure S3. Combined with the observations in Figure 2, a strong general correlation between how well the compounds are retained and the S/N enhancement through multiple injection can be established. Figure 2E shows the signal enhancement on the C18 column. Only fluoranthene, caffeine, and 3-methoxycinnamic acid were plotted, as the other compounds appeared as shouldering or split peaks even with a single injection. Multiple injection only provided moderate S/N enhancement of caffeine. Interestingly, even though fluoranthene and 3-methoxycinnamic acid were both weakly retained with similar retention times, the S/N enhancement from multiple injection was significantly better with 3-methoxycinnamic acid than with fluoranthene. This might be caused by neat CO2 having different elution strengths for the two compounds. During the accumulative injections with only neat CO2 as the mobile phase, fluoranthene is likely less adsorbed to the stationary phase and partitions more to the CO2 as compared to 3-methoxycinnamic acid, since fluoranthene presents only a ring structure with no polar moiety. As the DEA column provided the overall best signal enhancement with no peak shouldering or splitting of any compound, the S/N enhancement ratio of each compound was plotted against their retention times to illustrate the correlation.
FIGURE 3 S/N enhancement and compound retention on the DEA and the 2-PIC columns for different numbers of accumulated injections

As can be seen in Figure 3A, even though the dependence of S/N enhancement on the compound retention is obvious, there seems to be a retention time threshold between 0.8 and 1.1 min after which the S/N enhancement ratio does not increase much with retention time. A similar trend was also observed with the 2-PIC column (Figure 3B); the retention time threshold on this column appeared to be between 1.1 and 2.2 min.

Multiple injection using water as sample diluent

The use of water has long been regarded as detrimental to SFC analysis. However, some recent studies have proven the possibility and beneficial effects of water as sample diluent [22][23][24][25][26]. Water was also investigated in this work as sample diluent with multiple injection, as compared to acetonitrile. Fluoranthene and ibuprofen were excluded because of solubility issues. As expected, peak shouldering or splitting occurred already with a single injection for certain compounds on all columns. As the accumulation was achieved with 0% co-solvent in the mobile phase, the huge viscosity difference between the water injection plug and the mobile phase can cause severe viscous fingering. Also, the poor miscibility of water and CO2 could contribute to the peak distortion. For the peaks which had a good peak shape with single injections, multiple injection experiments led to very different S/N enhancement (Figure 4). Here too, the 2-PIC and DEA columns provided apparently better enhancement than the other three columns. The 2-PIC column had the best overall performance, taking into consideration the number of peaks that did not split with a single injection. Interestingly, it was observed that multiple injection led to apparently increased S/N enhancement for the late-eluting compounds on the 2-PIC column when the sample diluent was changed from ACN to water, despite very little change in the retention times of the compounds. The main cause of this could be that water demonstrates different solvent effects for analytes as compared to ACN, and it might also change the properties of the analytes. For example, sulfanilamide can be in charged form when dissolved in water and interacts strongly with the stationary phase through ionic interaction, which is not the case with ACN as the sample diluent. Furthermore, the poor miscibility of water with neat CO2 enables the analyte to be injected in a relatively narrower plug compared to that with ACN as sample diluent. This demonstrates that the property of the sample diluent also plays an important role in determining how well the signal is enhanced through multiple injection. Based on all the aforementioned experimental findings, our hypothesis is that the concentrating effect of analyte on the column head through multiple injection is dependent on three major factors: compound retention, elution strength of neat CO2 and the type of sample diluent used.

Comparison with large-volume single injection

Two comparatively late-eluting compounds, p-coumaric acid and sulfanilamide, were selected for large-volume single injections as compared to the multiple injection approach. Large-volume single injections of p-coumaric acid and sulfanilamide standards in both ACN and water were conducted with the DEA column.
As can be seen in Table 2, similar S/N enhancement ratios were obtained for p-coumaric acid with both injection approaches, regardless of the sample diluent. However, the advantage of multiple injection over large-volume single injection was observed with sulfanilamide. With similar total injection volumes, multiple injection provided 2 to 3 times higher S/N enhancement than large-volume single injection. The advantage of multiple injection over large-volume single injection might be the result of different degrees of sample diluent plug dilution. Sample diluent plugs experience dilution by the mobile phase following the plug during the injection process, which can decrease their strong solvent effect and improve the concentrating effect of analyte molecules on the column head [29]. Even though the dilution can happen quickly, significant band broadening can take place when the injection volume is large enough. However, when this large-volume injection was divided into several continual injections of a much smaller volume, each small sample diluent plug could be diluted by the mobile phase during each injection much more thoroughly than one intact large-volume plug. Consequently, the plugs displayed a lower solvent effect when passing through the analyte bands. Thus, the accumulated analyte bands possibly migrated more slowly and spread less in a multiple injection process than in a large-volume single injection. Further experiments are of course needed to validate this hypothesis. Experiments with a one-time 40 µL injection of aqueous solutions were not feasible, as a huge pressure spike took place right after the injection and quickly surpassed the upper pressure limit of the instrument, which caused system shut-down. This phenomenon has been reported and attributed to the high pressure needed to push the viscous water sample diluent plug, which is not readily soluble in the mobile phase, through the column under SFC flow rates [26]. Further experiments with the 2-PIC column showed that an apparent pressure spike occurred already with a 20 µL single injection of aqueous solutions (data not shown).

Repeatability of multiple injection

The repeatability of the multiple injection approach was evaluated with six experiments performed within the same day and three experiments during three non-consecutive days. Single, 4-time, 8-time, and 12-time injections were carried out for 3-methoxycinnamic acid, sulfanilamide, ferulic acid, and p-coumaric acid ACN solutions on the DEA column. The results are summarized in Table 3. In terms of retention time, multiple injection yielded repeatability comparable to that of single injection regardless of the number of accumulated injections. In terms of peak area, the within-day RSD values from multiple injection also resembled those from a single injection. Also, no obvious trend can be observed with different numbers of accumulated injections. Interestingly, multiple injection seemed to provide better repeatability than single injection based on the RSD values obtained on different days. Furthermore, the between-day RSD exhibited a clear descending trend with an increasing number of accumulated injections.

FIGURE 5 Multiple injection of sulfanilamide-spiked (500 ng/mL) honey extract in ACN

Proof of concept: two application examples

Sulfanilamide is a sulfonamide that is used in some parts of the world for treating bee diseases caused by bacteria [30].
Contamination of this type of sulfonamides in the honey product is a potential threat to human health because of toxicity and human allergy. Sensitive analytical methods are therefore needed in order to confirm its residue [31]. An ACN extract of honey was spiked with sulfanilamide and was used to test the potential of the multiple injection approach. Based on the S/N enhancement results of different columns in the previous sections, the DEA column was selected to perform the test. As can be seen in Figure 5, a single injection of 3 µL did not provide any recognizable peak. While more and more injections were accumulated, the sulfanilamide peak started to emerge and gradually became apparent. In contrast, the one-time injection of a large-volume of sample did not give the same enhancement. The repeatability of analysis of sulfanilamide in honey extract using multiple injection approach was also evaluated. Eight-time multiple injection of Diclofenac belongs to the category of nonsteroidal antiinflammatory drugs and is widely used for treating inflammation and alleviating pain [32]. As the public concern of pharmaceutical contamination in the environment is growing, studies have been made to investigate the influence of diclofenac on living creatures and have proven its harmful effects [33]. For the determination of diclofenac in environmental samples that may be present in very low concentrations, an analytical method with high detectability is a necessity. In this study, a ground water sample spiked with diclofenac (1 µg/mL) was used to further prove the usefulness of the multiple injection approach. The same five columns screened in Section 3.2 were compared in terms of their S/N enhancement of diclofenac in ground water, which is depicted in Figure 6A. Peak shouldering happened after two injections accumulated on the C18 column and 10 injections accumulated on the DIOL column. The other three columns provided similarly good S/N enhancement. 2-PIC was chosen to perform the multiple injection of spiked ground water sample, considering that it had the highest S/N enhancement ratio and the least severe peak shouldering and splitting as described in Section 3.2. Figure 6B displays the S/N enhancement of diclofenac on the 2-PIC column. The diclofenac peak from the single injection of 3 µL of the water sample was hardly visible, while later the peak became identifiable and even quantifiable with S/N enhancement from multiple injection. As aforementioned, one-time injection of large-volume water sample led to a pressure spike and noisy baseline in the chromatogram. Limitations and future aspect Even though the multiple injection technique has been proven to improve the detectability of compounds in analytical SFC when fixed-loop injection is used, it is unavoidably associated with certain apparent limitations. First of all, relatively strong retention of the targeted analytes is a key factor to consider during column selection, which does not necessarily lead to the best resolution of the various compounds in the sample. Also, in an SFC chromatogram of a complex sample, the analyte peak of interest can be closely surrounded by adjacent peaks. Consequently, the broadening of these peaks during the accumulation process can diminish the resolution and cause difficulty in quantification. In such cases, a more selective detector like a mass spectrometer should be used. 
As this work aims at demonstrating the potential and usefulness of multiple injection in modern analytical SFC, detailed optimisation of the various parameters was not performed. In future applications of this technique, parameters such as sample diluent, column temperature, injection volume, and number of injections should be optimised. Due to system equilibration and the needle wash pre-programmed in the software before every injection, the interval between the injections was approximately one minute, which could not be altered. This could unavoidably cause peak broadening for the relatively less retained compounds. If this interval could be freely programmed, it would be one important parameter to optimise, which could potentially improve the signal enhancement even more.

CONCLUDING REMARKS

In modern analytical SFC with fixed-loop injection, the pursuit of higher detectability by increasing the injection volume is often associated with poor chromatographic performance. This issue can be partially circumvented by the use of a multiple injection technique, as demonstrated in this work. In general, the S/N enhancement was dependent on the retention time of the compounds and the type of sample diluent. The DEA column showed the best S/N enhancement of various analytes with ACN as sample diluent. The 2-PIC column provided the best overall S/N enhancement with water as sample diluent, considering also peak shape. The advantage of multiple injection over one-time large-volume single injection was proven with sulfanilamide. With similar total injection volumes, multiple injection provided two to three times higher S/N enhancement than that of large-volume single injection. Multiple injection yielded similar repeatability in terms of retention time and peak area in comparison to single injection, regardless of the number of accumulated injections. The usefulness of the multiple injection approach was further demonstrated in the analysis of a sulfanilamide-spiked honey ACN extract and a diclofenac-spiked ground water sample.
OpenMindat: Open and FAIR mineralogy data from the Mindat database The open data movement has brought revolutionary changes to the field of mineralogy. With a growing number of datasets made available through community efforts, researchers are now able to explore new scientific topics such as mineral ecology, mineral evolution and new classification systems. The recent results have shown that the necessary open data coupled with data science skills and expertise in mineralogy will lead to impressive new scientific discoveries. Yet, feedback from researchers also reflects the needs for better FAIRness of open data, that is, findable, accessible, interoperable and reusable for both humans and machines. In this paper, we present our recent work on building the open data service of Mindat, one of the largest mineral databases in the world. In the past years, Mindat has supported numerous scientific studies but a machine interface for data access has never been established. Through the OpenMindat project we have achieved solid progress on two activities: (1) cleanse data and improve data quality, and (2) build a data sharing platform and establish a machine interface for data query and access. We hope OpenMindat will help address the increasing data needs from researchers in mineralogy for an internationally recognized authoritative database that is fully compliant with the FAIR guiding principles and helps accelerate scientific discoveries. | INTRODUCTION Data science is changing the style of research in geosciences.Workflow platforms, data cleansing, statistics, machine learning, data mining and data visualization are often seen in the general work of geoscientists in nowadays.Exciting data-driven scientific discoveries have been increasingly made in mineralogy (Hazen et al., 2019), geochemistry (Keller et al., 2015), geophysics (Bergen et al., 2019), palaeobiology (Fan et al., 2020), palaeoclimatology (Ma, Hinnov, et al., 2022), plate tectonics (Merdith et al., 2021) and more.Geoscientists and data scientists have seen the big opportunities made available by building, sharing and exploring the 'big Earth data'.In recent years, many initiatives and programs were built to accelerate that process, such as the EarthCube program initiated by the National Science Foundation (NSF) in the United States (Richard et al., 2014), the Deep-time Data Driven Discovery (4D) Initiative started by the Carnegie Institution for Science (4D Initiative, 2019) and the Deeptime Digital Earth (DDE) big science program started by the International Union of Geological Sciences (IUGS) (Wang et al., 2021).Practitioners in the field of geoinformatics have also summarized methodologies and approaches, and shared their best practices, such as abductive analysis (Hazen, 2014), data exploration (Ma et al., 2017), knowledge-rich intelligent systems (Gil et al., 2019) and context-aware machine learning in geosciences (Reichstein et al., 2019), just to name a few. 
Among the active works of data science for geoscience, mineral informatics (Prabhu et al., 2022) rises as a unique topic.It incorporates many of the methods and best practices mentioned above and highlights the advantages of shared open data, multidisciplinary working teams and iterative datathon activities.In recent years, the work of mineral informatics has proven that many scientific discoveries can be made when the data are in place and coupled with advanced analytical techniques and data science expertise.The research outputs on mineral evolution and ecology (Cleland et al., 2021;Hazen et al., 2019;Hystad et al., 2015) are good examples illustrating the key components and steps in the methodology of mineral informatics.Nevertheless, the practitioners, including the authors of this paper, also found that accessing adequate reliable open data often remains a challenge for mineral informatics (Ma, 2022).Through community activities (Golden et al., 2019;Prabhu et al., 2021;Wyborn et al., 2021), although researchers are gradually improving the FAIRness (Findable, Accessible, Interoperable and Reusable) (Wilkinson et al., 2016) of open data in the field of mineralogy, more work still needs to be done to fully meet the needs of mineral informatics, particularly for machine-to-machine interactions (Musen, 2022;Wyborn & Brownlee, 2022). The purpose of this paper was to present our design and development of OpenMindat, a project that aims to build machine access interface for the Mindat database.Mindat, one of the largest mineral databases in the world, has been widely used by many geoscientists, especially those in the field of mineralogy, but the machine interface for bulk data download has never been fully established.Through a research project funded by NSF, the authors of this project have designed the technical structure for enabling machine readability of Mindat, established the open data service, and are now working on the technical extension.We hope OpenMindat will help meet the data-intensive research needs of many geoscientists.The remainder of the paper is organized as follows.Section 2 gives a brief introduction to the background of Mindat and the needs for better machine access from it.Section 3 presents the designed structure of OpenMindat, the established machine interface for open data service and the progress of other technical developments.Section 4 discusses the experience and best practices from OpenMindat and the work on mineral informatics, and it also lists a few items for future work.Finally, Section 5 concludes the paper. 
STATUS OF MINDAT Mindat was started as a private database of minerals and their localities around the world by Jolyon Ralph in December 1993.In October 2000, its website (www.mindat.org) was launched as a free-to-use resource targeted primarily at mineral collectors and amateur mineralogists based around a 'crowdsourcing' model where verified members can contribute new data for the database.Unlike the Wikipedia system where anonymous edits are possible, all contributions to Mindat must be approved by registered users and are peer-reviewed by regional experts.By July 23 of 2022, Mindat has 64,726 registered users and 6,865 of them have the 'data contributor' right.A management team of 53 mineralogical experts from around the world was selected to oversee the review process and to manage the direction of the site.Mindat is by far the most widely used mineral database in the world.In 2021 alone, it received 44,333,302 page views from 14,506,574 sessions of 10,148,136 unique visitors.Those website visitors are from 246 countries and territories, and about 32% of usage is from the United States.More impressively, Mindat has underpinned massive numbers of scientific discoveries in mineralogy, petrology, economic geology, geochemistry, planetary science and other related disciplines.As evidence, the number of scientific publications citing Mindat has greatly increased in recent years (Figure 1). In contrast to those significant scientific merits and social impacts, the infrastructure development and maintenance of Mindat is struggling to keep pace with the overwhelming data needs, to take advantage of new technologies and to scale its base physical infrastructure.Similar to its 'crowdsourcing' data, the infrastructure development and maintenance of Mindat rely entirely on 'crowdsourcing' donations and sponsorships.Although all the data on the Mindat website are free for users to browse through well-organized GUI (Graphical User Interface), the machine interface for data access and download has never been fully established.An experimental simple API (Application Programming Interface) has been running on the Mindat web server for nearly a decade.However, it has not been made public because of the risk of overloading the server and bringing the website down for all other users.In the past decade, Jolyon Ralph has received numerous requests for sharing Mindat data.Very often, what Ralph could do is to design specific queries though the experimental API to download a part of the Mindat data, and then share with the data requester.This case-by-case data sharing mechanism is tedious while the data sharing needs are continuously increasing: clearly, this method will not scale.In worst-case scenarios, copies of Mindat have been downloaded and then republished as new vocabularies online without recognition of Mindat, and sometime with new Uniform Resource Identifiers that were given to the same terms.Not only is this practice unethical, but the mineral names/definitions may also be changed and hence a growing number of online mineral name vocabularies create confusion among users as to which is the most authoritative mineral database and which ones are likely to be sustained over the long term.To better serve both scientific and social needs, it is time to develop a fully open access and interoperable architecture for Mindat (i.e.OpenMindat), and make it an active node in the geoscience cyberinfrastructure ecosystem that is recognized as an international authoritative resource. 
| Major data components and work plans to implement FAIR principles By July 2022, the Mindat website hosts more than 18 million pages, and it uses about 25.8 TB disk space (Ralph, Martynov, et al., 2022).Mineral species, locality descriptions, mineral occurrence information and photographs of specimens are the most highlighted records available on the site.Each mineral occurrence is a single named mineral species reported from a single named locality (Figure 2).Mindat provides detailed attributes for each mineral species, such as those subjects listed in the right part of Figure 2. In recent years, Mindat has also set up interfaces with many other open geoscience data resources and imported records about other data subjects, such as rocks and fossils.On the GUI of Mindat website, geological map services from other data portals such as Macrostrat (Peters et al., 2018) can also be loaded to illustrate the geological environment of localities and mineral occurrences.Geological age of mineral occurrences has recently drawn attention in Mindat as it is an essential attribute for the study of mineral evolution as well as the co-evolution of geosphere and biosphere (Golden et al., 2019;Hazen & Ferry, 2010).From a broad perspective, we can map locality and geological age into the classes Spatial Object and Temporal Entity in GeoSPARQL (Battle & Kolas, 2011) and the Time Ontology (Cox & Little, 2020) respectively (see top part of Figure 2).The locality records show diverse patterns, such as literal address written in a hierarchal form, coordinates of a point and coordinates of a region and more.Similarly, the age records can also show in literal and/or numeric forms.In Mindat, significant efforts have already been spent to cleanse and reconcile the locality and age records into standardized forms.The comparison to community-level ontologies and data models such as GeoSPARQL and Time Ontology has provided 'food for thought' for us to address further topics of data interoperability in Mindat and work towards the FAIR principles. Comparing with Figure 2, the detailed records in Table 1 give a more comprehensive view of the data components in Mindat as well as their properties, data sources, community standards and progress and concerns in data curation.The work on OpenMindat also provides an opportunity for us to take stock of the current records in Mindat and draw plans for cleansing and extension.With current Mindat as the foundation, OpenMindat will connect to a range of authoritative geoscience databases, collect properties to enrich the specification of mineral data, leverage community-level standards and crowdsourcing efforts to cleanse the records, and share data with other databases and the broad community of users. 
The designed structure of OpenMindat is shown in Figure 3.Besides leveraging existing data resources in the geoscience community, OpenMindat will also incorporate several state-of-the-art theories, platforms and technologies in the development.FAIR principles (Wilkinson et al., 2016) will be used to guide the data curation and service construction to increase the findability, accessibility, interoperability and reusability of data for both humans and machines.In the detailed technical development, Schema.org(Noy et al., 2019) will be used in the OpenMindat architecture to increase the findability, through the EarthCube GeoCODES and the Google Dataset Search, which will help increase the visibility of OpenMindat data to a much broader community and make it easier to be integrated with other open geoscience data.The database structure and nomenclature will adopt disciplinary standards such as those ratified by the International Mineralogical Association (IMA) and IUGS, thus, to help link OpenMindat to the Open Knowledge Network (Guha & Moore, 2016).A Service-Oriented Architecture (Fielding, 2000) will be applied to support the machine interface for data access.Packages in Python and R will be developed to enable data query and access directly from workflow platforms, such as Jupyter and RMarkdown, to facilitate reproducible workflows and open science.Several cloud-based facilities, such as PanGeo (Arendt et al., 2018) and Geoweaver (Sun et al., 2020), will be reused to tackle big data processing in workflows.We will also leverage the popular Mindat website to publish updates about OpenMindat and facilitate data science activities among the geoscience community. | Progress of technical development and results Following the structure and roadmap planned in Figure 3, we have made solid progress on two major topics: (1) Cleanse data and improve data curation and (2) Build a data-hosting platform and establish the machine interface.The subsections below will give details about the development activities and the current outputs. Cleanse data and improve data curation. As a large crowdsourcing database, Mindat has inherent biases within the data due to the sources of data and the areas of interest of people willing to contribute (Geldmann et al., 2016;Kosmala et al., 2016;Ralph, Martynov, et al., 2022).For example, the most common rock-forming minerals are vastly under-represented.The feldspar group represents more than 50% of crustal volume, yet they are not among the most reported minerals on Mindat.Amusingly, the relatively rare mineral gold is among the very most cited minerals, because every 'gold showing the major data components in Mindat, with more details on mineral species, mineral occurrence and locality. A B L E 1 Key data subjects in OpenMindat and their connections to the geoscience cyberinfrastructure ecosystem (Records collected in July 2022).mine' must have the mineral gold.The same is not true for most other commodities: most copper mines do not have the mineral copper, for instance.Other biases are simply due to the lack of available information on many geographic areas worldwide.Fortunately, these biases are well understood and can be dealt with by statistical models (Gonsamo & D'Odorico, 2014;Robinson et al., 2018;Xue et al., 2016).We have conducted a thorough review detailing the known biases and areas of doubt within the Mindat database and have used that as a guide to cleanse the records before releasing them on the API. 
Subject To facilitate ethical data reuse, we have planned a framework of data access licence to different data types in Mindat to support OpenMindat.The Creative Commons License (i.e.CC-BY-NC-SA 4.0) is a popular choice for licencing open data.However, as part of this procedure, we have discussed with our existing partners and source databases about the impact of such a licence.We are creating a forked copy of the CC-BY-NC-SA 4.0 licence tailored specifically for these needs.Such a licence will allow free-of-charge access and use of the OpenMindat data for non-commercial purposes with as few restrictions as possible.The share-alike directive in the proposed licence ensures that any database derived significantly from OpenMindat would also need to be licensed under a similar permissive licence.This protocol will encourage the growth of more open data and open science projects (cf.Rule 2 in Cox et al., 2021).For example, the information about a locality, it's coordinates, recorded mineral list and reference list would fall under this new licence.If a locality contains a GeoJSON polygon that has been taken from OpenStreetMap, this falls under OpenStreetMap's ODbL licence (which allows us to redistribute their data with attribution).In comparison, some other data types in Mindat should use different licences.For example, a photograph of a mineral specimen remains the copyright of the photographer and OpenMindat does not have any right to distribute this image without permission. 2. Build a data-hosting platform and establish the machine interface. In order to provide a robust data hosting platform for the data access and downloads, OpenMindat has a new server physically separate from the existing Mindat web server (Ma, Ralph, et al., 2022;Ralph, Ma, et al., 2022).The hosting platform is an Infra-2 Infrastructure Server from OVHCloud with 64 GB of RAM, 3× 3.84 TB SSD disks in Soft RAID configuration and 1 Gbps unmetered public bandwidth.The current Mindat website has 20 years of experience in running a highly popular data-intensive website from dedicated servers.We have collaborated closely with the Mindat server provider to build the platform for OpenMindat, to ensure fast server-to-server connectivity for keeping OpenMindat up-to-date with the primary Mindat database.The Mindat website is built on PHP/ MySQL technology.The OpenMindat hosting platform will be connected with a synchronized MySQL database to the live Mindat server so the data on OpenMindat is always up to date.MySQL synchronization has a low impact on server performance and network bandwidth (Pohanka & Pechanec, 2020). 
The OpenMindat data API has been established and made open.It deploys a RESTful API (Richardson & Ruby, 2008) structure and allows programmatic access to the data on demand.Any current registered users of Mindat with the 'data contributor' status can find an API token on their user profile.A newly registered user of Mindat can also request the 'data contributor' status and obtain the API token.A tutorial about user registration and API token request was shared by Zhang (2023), which also includes links to shared Jupyter workflows of data query as well as documentation of the API parameters (Also see below the section 'Data availability statement').With the API, various data queries are now available.The following is a list of representative use cases based on the requests and feedback from Mindat users: (1) Retrieve a full list of all IMA-approved mineral species with detailed properties.( 2) Retrieve a list of mineral species matching certain chemical criteria, such as 'mineral species containing nickel or cobalt, with sulphur but without oxygen'.(3) Validate alternative mineral/rock names.For example, if the name 'amethyst' is sent, then it would return that the correct mineral species is quartz, and that 'amethyst' is a varietal name.(4) Provide a hierarchical taxonomy of petrological names and their definitions.This function will enable other geoscience data repositories, portals and platforms to readily utilize the taxonomy.( 5) Provide a list of recorded mineral localities within a given area, such as an address, a polygon or a buffer zone from a point.Except item (3), for which the data are still under cleansing, all the other use cases in the above list can now be realized through the OpenMindat data API. | Ongoing work to further improve the FAIRness of OpenMindat data Together with the OpenMindat server, we are working on two other approaches that help users access the open data.The first is developing packages in Python and R to enable data query and access directly from workflow platforms such as Jupyter and RMarkdown.The functions in those packages will wrap the detailed technical process of the RESTful API data access, and enable capabilities of interest to geoscientists, such as data query, pre-processing and integration.We will apply a use case-driven iterative approach (Ma et al., 2014) to analyse the needs of geoscientists and develop the functions accordingly.The second is building copies of datasets on certain topics and enabling bulk data downloads.These pre-packaged downloads would be rebuilt approximately once every 2 weeks depending on server usage.These will be available in JSON, CSV and MySQL format.Data downloads will not include the big chunks of data imported from other databases as this should instead be accessed directly from their original source. 
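As a rough illustration of how such a token-authenticated REST API might be called from a script, consider the sketch below. The base URL, endpoint path, query parameters and response fields used here are hypothetical placeholders chosen for illustration only; they are not the documented OpenMindat API, whose actual parameters are described in the tutorial and shared Jupyter workflows referenced above.

```python
import requests

# Hypothetical values for illustration only: the real base URL, endpoint names,
# query parameters and response fields must be taken from the OpenMindat API documentation.
API_BASE = "https://api.example-openmindat.org/v1"
API_TOKEN = "YOUR-API-TOKEN"  # issued to registered Mindat users with data-contributor status

def query_minerals(params, page_limit=5):
    """Fetch mineral records matching `params`, following paginated responses."""
    headers = {"Authorization": f"Token {API_TOKEN}"}
    url = f"{API_BASE}/minerals"
    records = []
    for _ in range(page_limit):
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload.get("results", []))
        url, params = payload.get("next"), None   # hypothetical cursor-style pagination
        if not url:
            break
    return records

# Loosely modelled on use case (2) above; the parameter names and their exact
# semantics (AND vs OR element matching) are invented for this sketch.
hits = query_minerals({
    "elements_inc": "Ni,Co,S",
    "elements_exc": "O",
    "ima_status": "approved",
})
print(f"{len(hits)} matching species")
```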
OpenMindat has joined the open data activities in the geoscience community to further increase FAIRness of the share data.GeoCODES (McHenry et al., 2021) has been successfully aggregating standardized metadata from many geoscience data repositories and facilitates and building a centralized index portal to allow researchers to quickly find data of interest.Another advantage of GeoCODES is that, since it adopts the technical approaches of Schema.org, the metadata released on each individual geoscience data repository as well as the assembled metadata at GeoCODES will all be indexable by Google Dataset Search (Noy et al., 2019).This feature will further increase the discoverability of the data, as Google has a huge user community.Mindat has already deployed a framework to host a unique webpage for each mineral species, together with a schema for the metadata.We will further refer to the IMA mineral name list and the mineral semantics discussed in Prabhu et al. (2021) to refine the elements in the metadata schema.The metadata recommendations of GeoCODES will be adapted to make our metadata schema compatible with Schema.org.Then, we will develop a program to automatically generate a piece of JSON-LD code for the metadata of each mineral species and make them indexable by GeoCODES and Google Dataset Search.This feature will help users quickly find webpages of interest on the Mindat website.For example, when they use the Google Dataset Search engine to find a certain mineral, the results will show links to Mindat.Such functionalities will be complementary to the OpenMindat data API.That is, although users can query and download datasets from the API, the mineral species webpages are still able to provide complementary information to human readers, and the inserted metadata can increase their visibility to search engines and improve the efficiency of search. | DISCUSSION Mindat has demonstrated thriving sustainability, mainly due to the mechanism of crowdsourcing in both data collection and data quality control and an increasing demand for accurate, trusted information.This experience can be shared with many other open data portals in various geoscience disciplines.Mindat has a very small team working on the technical development of the database software and the website in comparison to the large numbers of data contributors and data curators (See Section 2).While the data contributors across the world scale up the data input significantly, the mineralogical experts in the data curation and management team ensure the correctness, completeness and consistency of the uploaded records before they are made public.The unique data subjects in Mindat, such as mineral occurrence (i.e.species-locality pair) make the database a useful resource for exploring cutting-edge research topics, such as mineral ecology and mineral evolution (Hazen et al., 2019).The comprehensive mineral species name list, detailed attributes, integrated links to other data resources and friendly user interface make it a popular reference among both geoscience professionals and the general public for mineralogical information and knowledge. 
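The JSON-LD embedding described above could, for example, be generated directly from the species records. The sketch below shows one possible shape of such a generator; the choice of schema.org type and properties, the record fields it reads, and the licence URL are illustrative assumptions rather than the metadata schema that will actually be adopted.

```python
import json

def species_jsonld(record):
    """Build a schema.org JSON-LD description for one mineral-species page.

    `record` is assumed to be a dict with keys like 'name', 'ima_formula',
    'url' and 'description'; the field names and the schema.org property
    choices below are illustrative, not the final OpenMindat schema.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": record["name"],
        "description": record.get("description", ""),
        "url": record["url"],
        "identifier": record["url"],
        "variableMeasured": record.get("ima_formula", ""),
        # Placeholder licence URL; the text describes a tailored fork of CC-BY-NC-SA 4.0.
        "license": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
        "includedInDataCatalog": {
            "@type": "DataCatalog",
            "name": "Mindat",
            "url": "https://www.mindat.org/",
        },
    }

record = {
    "name": "Quartz",
    "ima_formula": "SiO2",
    "url": "https://www.mindat.org/min-3337.html",
    "description": "Crystalline silica; the most common silica mineral.",
}

# The <script type="application/ld+json"> block to embed in the species page.
print(json.dumps(species_jsonld(record), indent=2))
```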
Nevertheless, comparing with the guidelines in the FAIR data principles, Mindat still faces many challenges, which are the driving force for us to propose and build OpenMindat.We need to note that, although OpenMindat has a focus on mineralogy, it will not be a data island only limited to a single discipline.Instead, OpenMindat will refer to a list of existing data portals, resources and software (Table 2) to facilitate data interoperability, enrich data subjects and records and establish new functions.For instance, locality and age are the two data subjects that are in urgent needs from many geoscientists.For the heterogeneous locality records, we will leverage local experts to merge duplicate textural records and apply OpenStreetMap's nomination service and similar tools to get latitude and longitude records.Among the current Mindat locality records (Table 1), about 41,000 of them are with margin of error on estimated coordinates between 3 and 500 km.Improving locality information is thus a priority; we hope to get at least 50% of these to under 3 km margin of error in the next 3 years.The age information of mineral compositions is invaluable to the study of mineral evolution.There are currently over 5,500 age records in Mindat.We will import all the 21,000 age records from the RRUFF Mineral Evolution Database (Table 2) and continue to add age records by exploring other resources. We have scheduled a two-phase development plan for the OpenMindat data API.What we have established so far is the first phase, in which the API can search any of the datasets in OpenMindat and is able to combine them where possible.Also, the API is able to specify which variables are needed in the output.For example, a complex query could be described as 'Return all minerals and their references where the mineral contains cobalt but not oxygen and is found in South Africa or Zambia'.Results are returned in the format specified by the user.The second phase of the API, which we plan to build, will allow OpenMindat to accept new data from users into the system.Such data will need to come from registered users and will need to go through a process of review and verification before they become part of the public data.Data uploads will be limited to a number of subjects in the existing data structure of OpenMindat, including locality, mineral occurrence, specimen analysis and photograph, and those new data will be synchronized with Mindat. 
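For the locality clean-up described above, one possible workflow is to push the textual locality strings through a geocoder and keep candidate coordinates for expert review. The sketch below uses the OpenStreetMap Nominatim service via the geopy package as an example; the choice of library, the example locality strings and the review step are our assumptions, not the exact procedure Mindat will adopt.

```python
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

# Nominatim usage policy requires a descriptive user agent and throttled requests.
geolocator = Nominatim(user_agent="openmindat-locality-cleanup-example")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

localities = [
    "Tsumeb Mine, Tsumeb, Oshikoto Region, Namibia",
    "Sweet Home Mine, Alma District, Park County, Colorado, USA",
]

for text in localities:
    hit = geocode(text)
    if hit is None:
        print(f"NO MATCH : {text}")
        continue
    # Keep the candidate coordinates for expert review rather than writing them
    # directly into the database.
    print(f"{hit.latitude:9.4f} {hit.longitude:10.4f}  {text}")
```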
Mindat has proven to be a unique and invaluable resource for scientific research, and we expect OpenMindat will further facilitate the growth of new hypotheses and discoveries. For instance, in the study of mineral ecology, the mineral/locality data from Mindat have been used to document the diversity/distribution of minerals on Earth today and predict the as yet undocumented diversity of minerals (Hystad et al., 2015, 2019). In the study of mineral evolution, based on analysis of more than 100,000 dated mineral-locality pairs, it has been discovered that Earth's mineralization has been episodic, with close associations of diverse mineral formation to intervals of supercontinent assembly, and intervening periods of reduced mineral formation (Hazen et al., 2014). Recently, Hazen and Morrison have introduced a new 'evolutionary system of mineralogy' that attempts to place all of Earth's minerals in their temporal and paragenetic contexts (Hazen & Morrison, 2020). These examples are just a small part of the innovative discoveries enabled by Mindat in various disciplines. Such data-intensive scientific pursuits will be significantly leveraged by the open data provided by OpenMindat. In our previous studies, we have also summarized experience and best practices of applying data science in geoscience, including the abductive process (Hazen, 2014), visual exploratory analysis (Ma et al., 2017) and Agile-style datathon activities (Ma, 2022). Other scientists using OpenMindat data may, if needed, adapt that experience and those best practices in their own studies.

The open data community has been promoting the concept of an 'ecosystem', which means a virtuous circle of interacting factors such as data, tools, researchers and scientific discoveries. To establish OpenMindat as an active node in the open data ecosystem, we have allocated tasks in the above-mentioned technical developments. Data quality improvement, a stable data API and direct data search and access from workflow platforms are the key items. Those outputs, once fully established, will support fluent data flow and reduce the burden of data access and cleansing for scientists. Additionally, we have a few other thoughts related to the work on OpenMindat. One of them is the semantics of mineral data. For instance, mineral species, locality, age and rock are among the key objects in OpenMindat, and all of them face the challenges of machine-readable semantics, such as a shortage of community-level data models, varied data representations and formats and heterogeneity in terminology. Geoinformatics researchers have recognized those challenges and built some initiatives (Prabhu et al., 2021). Another thought is to facilitate the usage of OpenMindat and accelerate new scientific discoveries in more areas. We plan to collaborate with communities such as the American Geophysical Union, the Geochemical Society and the Geological Society of America to organize workshops and conference sessions to provide training opportunities for geoscientists and students. Overall, the aim of OpenMindat is to establish an open access data portal for the popular Mindat database, to boost new scientific discoveries in the evolutionary system of mineralogy. A key part of the activities is to establish a FAIR framework to curate scientifically important properties in mineralogy and develop machine-friendly interfaces for data access. We have enriched and cleansed a massive number of records related to all the mineral species recorded in Mindat and have established an open data API to
enable data query and download. Moreover, we will establish OpenMindat's connection to EarthCube GeoCODES, and develop packages in Python and R for querying and accessing OpenMindat data from workflow platforms. Based on our previous successful discoveries in data-intensive geoscience studies, we firmly believe that opening the Mindat data for free academic use will encourage a new generation of mineralogical research, including projects that we would never have imagined ourselves. We are also optimistic that, by making Mindat easier to access, particularly for machines, there will be less copying and republishing of Mindat data on multiple disconnected websites that do not automatically update as changes are made to Mindat. We hope that OpenMindat will encourage greater involvement from the wider geoscience community, with more people willing to share their data for the benefit of all, rather than creating their own local resource online. We are convinced that multi-dimensional analysis of these mineralogical data from a recognized, authoritative OpenMindat server will lead to important new revelations.

[Figure: Number of scientific publications citing 'mindat.org' during the period 2006-2021 (records from Google Scholar).]
[Table column headers: Number of records; Properties & description; Community standards, data sources and curation concerns.]
[Figure 3: OpenMindat's technical structure and its approach to leverage and reuse existing data resources, tools and infrastructure.]
[Table 2: Identified data resources for complementing OpenMindat records and functions.]
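Relatedly, the locality-coordinate improvement described earlier in this section involves geocoding heterogeneous textual locality strings. A minimal sketch of such a lookup, using the public Nominatim service through the geopy library, is shown below; the locality string, user agent, and workflow are illustrative only and do not represent OpenMindat's actual tooling.

```python
from geopy.geocoders import Nominatim

# Illustrative lookup of a free-text, Mindat-style locality string; the actual
# OpenMindat workflow (expert merging of duplicates, error margins) is richer.
geolocator = Nominatim(user_agent="openmindat-locality-demo")  # hypothetical user agent

locality = "Tsumeb Mine, Tsumeb, Oshikoto Region, Namibia"     # example string
match = geolocator.geocode(locality)

if match is not None:
    print(f"{locality} -> lat {match.latitude:.4f}, lon {match.longitude:.4f}")
else:
    print(f"No match for: {locality}")
```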
6,616.4
2023-05-29T00:00:00.000
[ "Geology", "Computer Science" ]
An ultraviolet dyegraph for measuring the chemical disturbances of sinking particles and swimming plankton

Sinking particles and swimming plankton may deform ambient chemical gradients, leaving long trails of chemical disturbance that dissipate on a diffusive timescale. Imaging this phenomenon in a laboratory setting requires a large field of view, a broad depth of field, and a passive chemical substance that does not influence the hydrodynamics. These requirements obviate the use of traditional schlieren optical systems, which rely on density stratification and their consequent buoyancy forces, and modern planar laser-induced fluorescence, which has a narrow depth of field. Further, the light used to induce plankton movement must be kept separate from the recorded signal. Here, we describe a new ultraviolet "dyegraph" technique that quantifies dye disturbance caused by sinking spheres and swimming zooplankton. An experimental tank is filled with seawater and a gradient of an ultraviolet-absorbing pigment. Using modified shadowgraph optics, ultraviolet light spreads from a pinhole to a collimating lens, passes through the tank to a telecentric lens, and is recorded by a camera fitted with an ultraviolet bandpass filter. The dye is transparent in visible light but appears dark in the imaging system. The 3D chemical disturbance is reconstructed from images using tomographic inversion. After outlining the method, we present examples of chemical deformation trails generated by sinking spheres and live, swimming copepods.

Microscale gradients of chemicals often constrain the rates of growth, grazing, predation, and mating in planktonic communities (Pasciak and Gavis 1974; Dodson 1988; Ploug et al. 1999; Long et al. 2007; Yen and Lasley 2010; Seymour et al. 2017). The primary sources of chemical microstructure are thought to be excretions from plankton and turbulent stirring of large-scale gradients (Azam 1998; Stocker 2012). While turbulence is pronounced near boundaries, the pelagic habitat of plankton can be relatively quiescent in certain regions and seasons (Peters and Marrase 2000; Waterhouse et al. 2014; Fuchs and Gerbi 2016). In addition to stirring by ambient turbulence, there is mounting evidence that swimming plankton and sinking or floating particles can also stir ambient chemical gradients. Katija and Dabiri (2009) observed jellyfish dragging dye injected ahead of their swimming path in a highly stratified lake. Noss and Lorke (2014) measured increased scalar dissipation rates of a dye gradient in the vicinity of swimming daphnia in an experimental tank. Recent simulations of a sphere sinking through a linear chemical gradient indicate that the gradient is deformed and enhanced in a long trail that persists on a diffusive time scale, much longer than the velocity disturbance that created it (Inman et al. 2020). Further experiments in a controlled laboratory setting are needed to quantify the structure and extent of gradient enhancement by sinking particles and swimming plankton. Imaging trails of deformed chemicals ("gradient deformation trails") presents a number of challenges that are not met by current methods. Traditional shadowgraph or schlieren techniques rely on the refraction of light by disturbances in density stratification (Yick et al. 2007). Biologically relevant chemicals in the ocean such as nitrate or dissolved organic matter generally do not influence the density of the fluid, making these methods unsuitable for characterizing the stirring of gradients of these chemicals.
Planar laser-induced fluorescence (PLIF) has been used to measure scalar dissipation by swimming zooplankton (Noss and Lorke 2014), but the narrow imaging plane does not capture the full trail of chemical disturbance and the laser may affect swimming behavior. Both a large observation window and a large depth of field are required to capture the trail. Here, we describe a new imaging system to quantify a vertical dye gradient stirred by sinking particles and swimming zooplankton in controlled laboratory conditions. Calanoid copepods are ubiquitous in the world's oceans; many species perform diel vertical migrations, making them an ideal choice for zooplankton-swimming experiments (Mauchline 1998). Such phototactic zooplankton can be induced to swim in a desired direction using a light stimulus. To prevent the stimulating light from interfering with the imaging, the wavelength of light used to guide plankton swimming must be separated from the imaging wavelength. Wilhelmus and Dabiri (2014) used a blue-green laser to initiate vertical migrations of Artemia salina and tracked microparticles using a red planar laser. We have determined that Calanus pacificus copepods swim in response to white light but do not react to ultraviolet wavelengths. To conduct imaging of the trails created by swimming zooplankton, we therefore utilize a pigment that absorbs ultraviolet light but is transparent in the visible spectrum. The modified shadowgraph optical system described here measures the concentration field of an ultraviolet-absorbing dye stirred by spheres and copepods in a tank. The dye represents an ambient nutrient such as nitrate that typically occurs as a vertical gradient through the base of the euphotic zone. Parallel-light shadowgraphs afford a broad field of view, large depth of field, and negligible magnification through the focal plane. This means that an object in the imaging volume of the tank is imaged at essentially the same size, and is in focus regardless of its position. While shadowgraphs are much simpler than schlieren optical systems, they are not often used in modern flow visualization because the shadow cast by a density anomaly is not a 1 : 1 representation of the flow that caused it (Settles 2001). We overcome this limitation in our apparatus by modifying the shadowgraph to image the specific wavelengths absorbed by the dye, rather than the density gradients alone. Because the light rays are collimated, the light intensity recorded by this "dyegraph" is the integrated dye concentration along a ray path, projected onto the imaging plane of the camera. A 3D disturbance of the dye field can then be reconstructed using tomographic inversion, assuming axisymmetry. Images acquired by the dyegraph include light that is refracted by density gradients; however, this interference is minimal. To our knowledge, this is the first system designed to record 3D disturbances by moving objects of a 1D ambient chemical field (an assertion supported by G. S. Settles, pers. comm.). The dyegraph allows quantification of variables relevant to gradient deformation, such as the maximum distance an isosurface of chemical is displaced.

Materials and procedures

Optics

Illumination for a shadowgraph consists of a pinhole light source that projects onto a collimating lens.
Collimated light then passes through the imaging volume to a second lens, which focuses the light onto a camera sensor. Objects or dye in the imaging volume obstruct the light and appear dark in shadowgraph images. Because the light is collimated, the field of view is determined by the size of the lenses, and there is no magnification with distance through the imaging volume. In our system, the light source is an ultraviolet 3 W LED (LED Engin LZ1-10U600, 0.6 A/4.2 V) with peak emission at 365 nm that passes through a 1 mm pinhole (Fig. 1). A diffuser placed between the LED and the pinhole blurs the diode pattern, which would otherwise be visible in the image. The UV LED is attached to a heatsink mounted on a translation stage to allow fine adjustments. Ultraviolet light is collimated by a 20 cm diameter, 40 cm focal length plano-convex lens (Edmund Optics) positioned in line with the light source using a custom-designed adjustable lens mount. After passing through the imaging volume, light rays are received by a bi-telecentric lens (Opto Engineering TC23192) that reduces distortion from oblique light rays. A 15 cm diameter circle of light travels through the UV-transmitting acrylic tank to the telecentric lens, which focuses the image onto the camera sensor. A visible-spectrum LED is positioned above or below the tank to stimulate phototactic organisms. A UV bandpass filter attached to the camera prevents visible wavelengths from interfering with the image. All optical components and the tank are fastened to an optical breadboard. The illumination, with a resolution of 73 μm, is recorded at 5 frames s^-1 by a 5 megapixel CMOS camera with a 1.69 cm detector (Point Grey GS3-U3-51S5M-C, 1.3 ms shutter/1.25 gamma). Because white light is used to induce zooplankton phototaxis, a 275-375 nm ultraviolet bandpass filter is fixed to the camera to prevent visible wavelengths from interfering with the image. The shadowgraph optics are calibrated with images of a checkerboard pattern and processed using the Matlab Computer Vision System Toolbox (The MathWorks 2016). Within the field of view, 75% of the area displays less than 1 pixel of radial distortion, with a maximum of 1.8 pixels of radial distortion near the edges of the lens.

Experimental tank

The inner dimensions of the experimental tank are 30 cm tall, 12 cm wide, and 5 cm long in the direction of light rays, containing a 1.8-liter volume (Fig. 2). The image produced by the dyegraph is then a circular section 12 cm wide and 15 cm tall, integrating 5 cm through the tank interior. The tank is constructed out of 9.5-mm thick UV-transmitting acrylic, typically used in tanning beds (Plexiglas G-UVT). The tank rests on 8 cm tall supports that provide access to a single port for filling from the bottom, as well as space to place a visible-spectrum LED for driving plankton phototaxis. The base of each tank is fastened to an optical breadboard on a vibration-damped table. Four regularly spaced ports in each side wall of the tank permit water samples to be extracted with a syringe for density or chemical measurements. The side ports consist of a nylon bulkhead fitting capped with 3-mm thick silicone septa, allowing a needle to be inserted into the center of the tank.

Dye

The dye is a commercially available ultraviolet-absorbing pigment produced by Lycus for use in cosmetics; benzophenone-9 (Maxgard 1800) is water soluble, nontoxic, and absorbs 365 nm wavelengths.
Dissolved in seawater, the dye is colorless in the visible spectrum but appears dark when imaged using the ultraviolet shadowgraph. The attenuation of light by dye pigments is described by

I = I_0 exp(-αCx),

where I is the measured pixel intensity, I_0 is the pixel intensity with no dye present, α is the attenuation coefficient, C is the concentration of dye, and x is in the direction of the light rays. A calibration curve for dye concentration vs. average pixel intensity was created by filling the experimental tanks with known concentrations of dye dissolved in seawater (Fig. 3). The equation for dye concentration as a function of pixel intensity in a tank of thickness X is

C = (α^-1/X) ln(I_0/I),

where the first coefficient is the attenuation α^-1 = 0.002 kg m^-2 and the standard deviation is ±0.002 kg m^-3. The value of the molecular diffusivity of benzophenone-9 is required for determining nondimensional fluid parameters, but has not previously been determined. We estimate the temperature-dependent molecular diffusivity of benzophenone-9 using the Stokes-Einstein relationship and verify it by observing diffusion of a layer of dye in the experimental tank. The Stokes-Einstein equation defines the diffusivity of a solute in terms of the viscous drag on a single spherical molecule. Edward (1970) adapted the relation for nonspherical molecules:

D = kT / (n π ν r (f/f_0)),

where k is the Boltzmann constant, T is the absolute temperature, n is an empirical correction factor, ν is the dynamic viscosity, r is the van der Waals radius of the molecule, and f/f_0 is the frictional coefficient for ellipsoids. The radius of the benzophenone-9 molecule is estimated as r = (3V/4π)^(1/3) = 4.9227 Å, where V is the sum of the incremental atomic volumes (Bondi 1964). For T = 291.65 K (18.5°C), n = 5.97, and f/f_0 = 1.05, the diffusivity of benzophenone-9 is D = 3.7141 × 10^-10 m^2 s^-1. Recording the diffusion of a dye layer in the experimental tank at 18.5°C, the analytical solution to the diffusion of a delta function is fit to the vertical gradient of dye at two time points, resulting in a mean diffusivity of D = 3.2914 × 10^-10 m^2 s^-1 with a standard deviation of ±1.775 × 10^-10 m^2 s^-1 (see Inman 2018). The two estimates agree, and the Stokes-Einstein relation is used hereafter to accommodate varying temperatures in the experiments.

Stratification

The tank is filled with salinity-stratified seawater using the double-bucket method (Oster 1965). The first beaker (A) of saltier seawater is pumped into a second beaker (B) of fresher seawater that is mixed on a stir plate (Fig. 4). A second pump operating at twice the rate of the first pump transports the mixed seawater into the bottom of the tank, creating a linear salinity gradient. A negative vertical dye gradient is superimposed on the salinity stratification by adding dye to beaker A. The magnitude of the dye gradient is increased by adding the dye to the first beaker after filling begins. To create an isolated dye layer, dye is injected inline through a three-way stopcock midway during filling. The salinity stratification causes the dye to flatten into a layer that is pushed up to the middle of the tank from below by the pump. We tested several configurations of peristaltic pumps and gear pumps during preliminary tank fillings. Peristaltic pumps produce a pulsating flow that can cause mixing in the bottom of the tank, but the average flow rate is fairly steady with increasing back pressure from the water level in the tank or beaker.
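Before returning to the pump comparison, the two relations above can be evaluated directly; the short Python sketch below does so. The seawater viscosity is an assumed representative value, and the calibration coefficient is the one quoted above.

```python
import numpy as np

# Beer-Lambert attenuation, I = I0 * exp(-alpha * C * x), inverted over the
# tank thickness X to recover concentration from pixel intensity.
def dye_concentration(I, I0, X=0.05, alpha_inv=0.002):
    """alpha_inv is 1/alpha in kg m^-2 (calibration value quoted above)."""
    return (alpha_inv / X) * np.log(I0 / I)

# Edward-modified Stokes-Einstein estimate, D = k*T / (n * pi * nu * r * f/f0),
# with the benzophenone-9 values quoted above; the viscosity is an assumed
# representative value for seawater at 18.5 degrees C.
k = 1.380649e-23      # Boltzmann constant, J K^-1
T = 291.65            # absolute temperature, K
n = 5.97              # empirical correction factor
nu = 1.12e-3          # dynamic viscosity, Pa s (assumed)
r = 4.9227e-10        # van der Waals radius, m
f_ratio = 1.05        # frictional coefficient for ellipsoids

D = k * T / (n * np.pi * nu * r * f_ratio)
print(f"Estimated diffusivity: {D:.3e} m^2/s")   # close to the 3.7e-10 m^2/s quoted above

# Example concentration readout for an arbitrary pixel intensity ratio.
print(f"C at I/I0 = 0.8: {dye_concentration(0.8, 1.0):.4f} kg/m^3")
```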
Gear pumps, on the other hand, generate smoother flow than peristaltic pumps, but the average flow rate decreases with back pressure. As such, a Labconco peristaltic pump is used to transfer water between the first and second beakers, while a Cole-Parmer gear pump with a Micropump head moves fluid between the second beaker and the experimental tank. The gear pump speed is adjusted while filling the tank to compensate for back pressure. Heating of the fluid by the stir plates and pumps can cause convection in the tank. We regulate the temperature using a Haake K20 water bath connected to the two water-jacketed beakers and a segment of water-jacketed 316 stainless steel tubing. After filling, the tank is covered and allowed to stand for 8 h to relax any residual flows. We confirm the linearity of the density gradient by using a syringe to take 10 mL samples from the center of the tank through each of the four side ports after the conclusion of an experiment. The exact sampling locations are determined from images by measuring the distance from the tip of the needle to a reference pixel in a mark on the tank wall. The densities of the samples are measured with a Rudolph Research Analytical densitometer thermostated at 20°C and precise to 0.01 kg m^-3. Images of the linear dye gradients are also used to verify the linearity of the density gradient. The Brunt-Väisälä frequency is calculated using

N^2 = -(g/ρ_0) (dρ/dz),

where g is the gravitational acceleration, ρ is the measured density, and ρ_0 is the mean density in the tank. Salinity and kinematic viscosity are calculated from the average density and temperature for each experiment using the Gibbs-SeaWater Oceanographic Toolbox and the Seawater Thermophysical Properties Library (McDougall and Barker 2011; Nayar et al. 2016).

Spheres

To study the deformation of dye gradients by moving objects, we performed experiments with either sinking spheres or swimming copepods. For the first set of experiments, we used polystyrene spheres produced by Cospheric with a diameter of 4.8 mm and densities ranging from 1023 to 1030 kg m^-3. Spheres were wetted with seawater and placed in the top of the tank with tweezers. We quantified the sinking speed by tracking the centroid of the sphere from one image to the next at 5-12 positions within the 15 cm vertical field of view. Because the tanks were density stratified, the spheres slowed as they descended. The mean sinking rates W of 23 separate sphere drops varied from 5.1 ± 0.4 to 20.8 ± 0.5 mm s^-1. The Reynolds (Re), Froude (Fr), and Péclet (Pe) numbers were calculated using the average velocity of the sphere, W. These nondimensional numbers are Re = 2aW/ν, Fr = W/aN, and Pe = 2aW/D, where 2a is the sphere diameter, ν is the kinematic viscosity, N is the buoyancy frequency (Eq. 4), and D is the molecular diffusivity of the dye (Eq. 3). The Reynolds number between experiments ranged from 28 to 100, the Froude number from 12 to 45, and the Péclet number from 4.2 × 10^4 to 1.5 × 10^5. The presence of tank walls can increase the drag on the sphere above that predicted for an unbounded medium. Following the empirical correlations compiled by Clift et al. (1978), the increase in drag caused by the walls of the tank is expected to be less than 10% for 28 ≤ Re ≤ 100 (see Inman 2018).

Copepods

We conducted net tows 2.5 km offshore of the Scripps Institution of Oceanography at 150-m depth over the Scripps submarine canyon. The contents of the cod ends were brought back to a cold room on shore and sorted under a dissecting microscope. We separated adult C.
pacificus by sex into beakers and placed them in a 16°C incubator with a lamp set for a 12 h light/dark cycle. Air bubblers kept the seawater oxygenated at all times. We grew Rhodomonas salina, Dunaliella salina, Pseudo-nitzschia delicatissima, and Pseudo-nitzschia hasleana in seawater with proline or F/2 media. A "salad" of these phytoplankton was fed to the copepods daily. We used adult female C. pacificus in experiments based on their phototactic behavior and relatively large size; the typical body length is 3 mm and the prosome diameter is 1 mm. The strongest phototactic response was qualitatively determined to occur at night, with copepods moving away from a white light source. During experiments, we used a pipette to carefully place copepods in the top of the tank. Using LED flashlights positioned around the tank, outside the imaging volume, we induced the copepods to swim vertically through the tank and away from the walls. The copepods typically swam upward, rarely in a straight line, and passively sank downward. The copepods did not appear to react to the presence of the dye or the UV light source.

Image analysis

The dyegraph imaging system records the 2D projection of the 3D dye field. Assuming that the disturbance of the dye field by a moving object is axisymmetric, we can reconstruct the 3D dye field using an inverse Abel transform (Pretzler 1991). First, image pixel intensity is converted to dye concentration using the calibration above. Then, an average of up to 60 frames of undisturbed fluid is subtracted from images of the moving object to give the dye anomaly. Five consecutive images (one second's worth) tracking the object are averaged and processed with a Wiener filter to reduce high-frequency noise caused by air moving between the tank and the lenses. From the dye anomaly image F(y, z), the radially symmetric original distribution of dye anomaly f(r, z) can be calculated using the Abel inversion,

f(r, z) = -(1/π) ∫_r^R [∂F(y, z)/∂y] / sqrt(y^2 - r^2) dy,

where r = 0 is the axis of symmetry, R is the edge of the domain, and F(R, z) ≈ 0. Note that this transformation changes the coordinate system from Cartesian (x, y, z) to cylindrical (r, φ, z). The integral is evaluated numerically using the series expansion method described in Pretzler (1991) and coded by Killer (2016). The code does not allow for a solid object to block the axis of symmetry, and so pixels occupied by the sphere are replaced with an average of five horizontal pixels adjacent to the sphere. Adding the background dye concentration to the dye anomaly from the Abel inversion yields a 2D vertical slice through the axis of the disturbed dye field.

[Figure 4: The double-bucket system used to fill the tank with dye gradients and salinity stratification. Initially, beaker A is filled with higher salinity seawater and dye, and beaker B with lower salinity seawater and no dye. A peristaltic pump transfers higher density fluid from beaker A to beaker B, where a stir plate mixes the two fluids. A gear pump running at twice the speed of the peristaltic pump moves the mixed fluid from beaker B to the experimental tank. Temperature is regulated with a water bath that runs distilled water through the two jacketed beakers and a section of jacketed tubing between the gear pump and the tank.]

We process images of passively sinking copepods in a similar manner. Trails left by swimming copepods, however, are more difficult to analyze because small variations in swimming direction prevent image averaging following the copepod: bends in the trail would not align.
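The processing chain just described (background subtraction, short-window frame averaging, Wiener filtering, and row-by-row Abel inversion) can be sketched compactly, as below. This version is illustrative only: it uses a direct-quadrature inverse Abel transform rather than the Pretzler (1991) series expansion used here, and it assumes each image row begins at the axis of symmetry.

```python
import numpy as np
from scipy.signal import wiener

def inverse_abel(F_row, dy):
    """Direct-quadrature inverse Abel transform of one projection row F(y),
    with y >= 0 measured from the axis of symmetry; a simple stand-in for the
    Pretzler (1991) series-expansion method used in the paper."""
    n = F_row.size
    y = np.arange(n) * dy
    dFdy = np.gradient(F_row, dy)
    f = np.zeros(n)
    for i, r in enumerate(y):
        ys, dF = y[i + 1:], dFdy[i + 1:]     # skip y == r (integrable singularity)
        if ys.size:
            f[i] = -np.trapz(dF / np.sqrt(ys**2 - r**2), ys) / np.pi
    return f

def dye_anomaly_slice(frames, background, dy):
    """Average five consecutive concentration images, subtract the still-fluid
    background, smooth with a Wiener filter, and Abel-invert each row (one row
    per height z). `frames` has shape (5, nz, ny); rows start at the axis."""
    anomaly = frames.mean(axis=0) - background
    anomaly = wiener(anomaly, (5, 5))          # suppress high-frequency noise
    return np.array([inverse_abel(row, dy) for row in anomaly])
```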
For swimming copepods, we instead averaged images in the reference frame of the tank to preserve the trail. Unfortunately, detail around the copepod is lost in this averaging. Copepod trails are not fully axisymmetric because the swimming appendages are located in two longitudinal rows on the ventral side of the body. However, particle image velocimetry (PIV) experiments of free-swimming calanoid copepods indicate that the downstream velocity field for each row of appendages is roughly axisymmetric during cruising, if the organism is not rotating (Catton et al. 2007, 2012). We can then assume that the downstream dye field disturbance is axisymmetric aligned with a row of appendages, and tomographically invert these image segments separately. Based on the orientation of the copepod with respect to the camera, the segment is chosen so that the disturbance caused by one row of appendages does not overlap with the other.

Numerical simulations

To assess the accuracy of the dyegraph method, numerical simulations of a sphere sinking through a chemical gradient are run for comparison. A full description of the model is available in Inman et al. (2020). In brief, the nondimensionalized Navier-Stokes equations are solved on a curvilinear grid using finite differences and the successive over-relaxation method. Simulations are conducted in the reference frame of the sphere with the outer boundary located 1200 sphere diameters away from the origin in all directions. Using the same nondimensional parameters as a given experiment, the model is run until the sphere has moved an equivalent distance from an impulsive start.

Assessment

To demonstrate the method, we present images of a 4.8 mm sphere sinking through linear dye and salinity gradients (Fig. 5). Both dye concentration and salinity increase toward the bottom of the tank. The velocity of the sphere decreased from 8.2 mm s^-1 at the top to 7.6 mm s^-1 at the bottom of the field of view. Average fluid parameters calculated for the sinking sphere were Re = 38.4, Fr = 27.9, and Pe = 5.73 × 10^4. Lower concentrations of dye dragged down by the sphere are visible as a trail that spans the 15 cm vertical field of view (Fig. 5a). The dye anomaly in the trail and near the sphere is more apparent after subtracting the (stationary) background dye concentration (Fig. 5b). This view represents the integrated dye anomaly projected onto the camera, and accordingly the values go to zero on either side of the trail. The dim feathered pattern surrounding the trail is caused by irregularities in the underlying dye gradient. The Abel inversion of the dye anomaly is shown in Fig. 5c. Adding the background dye concentration to the Abel inversion represents a 2D slice through the axis of the trail, as shown by the contours in Fig. 5d. The 3D dye concentration field is recreated by rotating the 2D slice about the axis of symmetry. The dye concentration throughout the trail is less than at the top of the background gradient, indicating that dye is dragged from far above the field of view. Smoothed contours of dye concentration separated by 5.5 × 10^-3 kg m^-3 are in excellent agreement with contours produced by a numerical simulation with the same Re, Fr, and Pe numbers (Fig. 5d). As with the model, the experimental contours are contiguous between their background location and their attachment point on the sphere. Close to the centerline, noise amplified by the Abel inversion makes the structure of dragged dye contours more difficult to determine.
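For reference, the nondimensional numbers quoted for this drop can be approximately reproduced from the definitions given in the Spheres subsection. In the sketch below, the density gradient, viscosity, and diffusivity are assumed illustrative values rather than the measured ones, and the final line checks the contour-displacement figure discussed next.

```python
import numpy as np

g = 9.81          # gravitational acceleration, m s^-2
rho0 = 1025.0     # mean tank density, kg m^-3 (illustrative)
drho_dz = -1.5    # vertical density gradient, kg m^-4 (assumed; z positive upward)
N = np.sqrt(-g / rho0 * drho_dz)   # buoyancy (Brunt-Vaisala) frequency, s^-1

a = 2.4e-3        # sphere radius, m (4.8 mm diameter)
W = 7.9e-3        # mean sinking speed, m s^-1 (midpoint of 8.2 and 7.6 mm/s above)
nu = 1.0e-6       # kinematic viscosity, m^2 s^-1 (assumed)
D = 3.7e-10       # dye diffusivity, m^2 s^-1 (Stokes-Einstein estimate above)

Re = 2 * a * W / nu
Fr = W / (a * N)
Pe = 2 * a * W / D
print(f"N = {N:.3f} s^-1, Re = {Re:.0f}, Fr = {Fr:.0f}, Pe = {Pe:.2e}")
# With these assumed values, Re ~ 38 and Fr ~ 28, matching the Re = 38.4 and
# Fr = 27.9 reported for this drop; Pe lands at the same order as 5.73e4.

# Maximum isosurface displacement = max dye anomaly / background gradient;
# a displacement of 47.3 sphere diameters corresponds to 47.3 * 4.8 mm.
print(f"47.3 diameters = {47.3 * 2 * a * 100:.1f} cm")   # ~22.7 cm, as quoted below
```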
Despite this noise near the centerline, the dye anomaly near the sphere on the axis of the trail corresponds to the farthest a dye contour was dragged by the sphere. Dividing the maximum dye anomaly by the background gradient gives the maximum distance a contour is dragged, in this case 47.3 sphere diameters (22.7 cm). The farthest a contour is dragged in the equivalent simulation is 54.3 sphere diameters. Given that the experimental and simulated contours have the same shape, it is likely that the 14% lower value derived from the experiment is caused by smoothing of noise along the centerline. Because the refractive index of a fluid depends on its density, disturbing the density field causes fluctuations in the illumination recorded by the shadowgraph. The differences in light intensity are proportional to the Laplacian of the density anomaly (Settles 2001). The trail seen in Fig. 5b is thus a combination of the dye anomaly and the second spatial derivative of the density anomaly. Because one is the result of absorbing light and the other of bending light, it is difficult to predict how the two interact. To quantify the relative contributions of the dye and density disturbances to the imaged signal in the trail, spheres were dropped through tanks filled to the top with a salinity gradient but only midway with a dye gradient (i.e., the dye was added to the first bucket after filling began). The density anomaly alone is visible at the top of the image, while the bottom displays the combined dye and density anomalies (Fig. 6a). Figure 6b,c shows horizontal cross sections through the Abel-inverted trails of two sphere descents through similar dye gradients but different salinity gradients. The Re and Pe numbers are roughly the same and the Fr numbers differ, but not enough to change the flow dramatically (Inman et al. 2020). The experiment with the greater background density gradient has a stronger signal in the absence of dye (Fig. 6b). If the refraction and absorption effects were additive or multiplicative, the experiment with the stronger density gradient would also display a proportionately larger signal in the presence of dye. However, this is not the case (Fig. 6c): the higher density stratification case shows a slightly lower combined dye and density anomaly signal than the weaker stratification case. Therefore, as long as the dye gradient is sufficiently large, refraction caused by the density anomaly does not greatly interfere with the magnitude of the dye anomaly signal.

[Figure 6 caption fragment: In a section without dye (e.g., above the dashed line in (a)), the Laplacian of the density anomaly is recorded, and the higher stratification signal maximum is more than double that of the lower stratification case. (c) With dye present (e.g., below the dashed line in (a)), the magnitude of the dye anomaly is slightly lower for the higher stratification case, showing that the Laplacian of the density anomaly does not amplify the dye anomaly signal.]

Trails of dye disturbance left by swimming copepods also spanned the vertical field of view, although rarely in a straight line. Subtracting the background dye gradient, the trail is visible behind an adult female copepod swimming upward through a tank configured with higher dye concentration at the bottom of the tank (Fig. 7a). The body is in three-quarter view with the ventral side facing left, leaving the trail downstream of the left row of appendages roughly unobstructed by the right.
The trail appears broader and more diffuse in relation to the body of the copepod when compared to the trail left by a sphere. Choosing the left row of appendages as the axis of symmetry, the reconstructed dye concentration field reveals that dye is pushed downward, in reverse of the swimming direction (Fig. 7b). This is consistent with the "breaststroke" motion of the cephalic appendages observed in PIV studies (van Duren and Videler 2003; Jiang and Osborn 2004; Kiørboe et al. 2014).

Discussion

In both sphere and copepod experiments, the dyegraph optical system captures trails of dye anomaly anticipated by numerical simulations. Despite the simplicity of the optics, this modified shadowgraph provides a 15-cm diameter field of view with little radial distortion, ample depth of field with no change in magnification, and resolution primarily limited by the sensor of the camera. Similar characteristics would be difficult to achieve with schlieren or PLIF techniques. Comparison with numerical simulations indicates that the 3D dye field could be accurately reconstructed using an Abel inversion. Due to the large field of view and the resolution of the camera, the dye field near the sphere or copepod was not as well resolved as has been demonstrated with schlieren or PLIF systems (Yick et al. 2009; Noss and Lorke 2014). In our dyegraph system, the Laplacian of the density anomaly, the intended subject of traditional shadowgraphs, could interfere with dye concentration measurements; however, weak stratification and strong dye gradients mitigate this potential problem. An additional benefit of imaging dye is that integration is not required after tomographic inversion, as is the case with the schlieren technique. As opposed to density stratification, the dye can be configured as gradients or thin layers that do not influence the hydrodynamics of the experiment. The dyegraph technique is well suited for recording chemical disturbances by swimming organisms that occur over much larger spatial scales than the organism. The method can be modified to record dye disturbances by < 1 mm particles and plankton by simply increasing the resolution of the camera. A smaller dye disturbance signal can be enhanced by reducing the length of the tank in the direction of the light rays, so that there is less fluid to image through. Another potential extension of the dyegraph method is measuring dye as a proxy for chemicals released by sinking organic matter or zooplankton. In such an experiment, copepods and their fecal pellets would be cultured in a known concentration of dye, rinsed, and placed in a tank of pure seawater. If the dye is not metabolized, copepod excretions or plumes released by sinking fecal pellets could be recorded. The 3D dye field can be reconstructed by the dyegraph if the signal is axisymmetric, but not if it is asymmetric. In the case of asymmetry, the integrated 2D dye concentration recorded by the camera could be useful for determining the rate of release and subsequent diffusion of the chemical. Further, an array of 5-15 smaller dyegraphs arranged in a semicircle around a cylindrical tank may provide enough overlapping views for stochastic tomographic reconstruction of an arbitrary 3D dye field (Gregson et al. 2012).
7,161
2020-10-19T00:00:00.000
[ "Environmental Science", "Physics" ]
Non-Tethered Understanding and Scientific Pluralism I examine situations in which we say that different subjects have ‘different’, ‘competing’, or ‘conflicting understandings’ of a phenomenon. In order to make sense of such situations, we should turn our attention to an often neglected ambiguity in the word ‘understanding’. Whereas the notion of understanding that is typically discussed in philosophy is, to use Elgin’s terms, tethered to the facts, there is another notion of understanding that is not tethered in the same way. This latter notion is relevant because, typically, talk of two subjects having ‘different’, ‘competing’, or ‘conflicting understandings’ of a phenomenon does not entail any commitment to the proposition that these subjects understand the phenomenon in the tethered sense of the word. This paper aims, first, to analyze the non-tethered notion of understanding, second, to clarify its relationship to the tethered notion, third, to explore what exactly goes on when ‘different’, ‘competing’, or ‘conflicting understandings’ clash and, fourth, to discuss the significance of such situations in our epistemic practices. In particular, I argue for a version of scientific pluralism according to which such situations are important because they help scientific communities achieve their fundamental epistemic goals—most importantly, the goal of understanding the world in the tethered sense. Introduction There is an ambiguity in the notion of understanding that is not often examined by epistemologists and philosophers of science. Consider the following examples: (a) Creationists have a mistaken understanding of evolution; (b) Plato's understanding of the transmigration of souls differed significantly from Pythagoras'. The notion of understanding that appears in (a) and (b) is obviously not the same as the notion typically discussed in contemporary epistemology and philosophy of science. It seems to be possible to have some understanding of a phenomenon such as evolution (in One of the reasons why the notion of understanding NT is more interesting than previously recognized is that it features in sentences of the form "S1 and S2 understand P differently", "S1 has a different understanding of P than S2", or "S1 and S2 have different/ competing/conflicting understandings of P". Consider the following concrete examples: (c) Copernicus and Ptolemy understood planetary motion differently; (d) Different physicists had (and have) competing understandings of empty space; (e) Priestley and Lavoisier developed conflicting understandings of combustion. Each of these sentences can be uttered without any commitment to whether the subjects in question understand T or understood T planetary motion, empty space, or combustion. To be sure, at least some of these sentences may be compatible with the fact that at least some of the subjects have an understanding T of the phenomena mentioned. But the situation is complex, and in order to disentangle it, we need a better grasp of the structure of the notions of understanding T and understanding NT , as well as the relationships between them. The situations that we can describe by sentences of the form "S1 and S2 have different/ competing/conflicting understandings of P", or at least some of them, are akin to disagreements. 
Perhaps they should not be characterized as disagreements in the strict sense, 3 but whether or not this is so, for our purposes it suffices to say that they are 'disagreementlike', or that they are instances of what Audi (2013) calls 'cognitive disparity'-"a kind of difference-usually also yielding a tension-between cognitive elements" (Audi 2013, 204). This paper is motivated by two core ideas: The first idea is that the relationship between the notions of understanding NT and understanding T resembles, in some crucial senses the relationship between belief and knowledge. The second idea is that there is a parallel between the way in which doxastic disagreements, i.e., the presence of opposing beliefs, in an epistemic (e.g., scientific) community can foster the acquirement of knowledge in this community, and the way in which the presence of opposing understandings NT can foster the acquirement of understanding T . The latter idea amounts to a special version of epistemic or scientific pluralism, which I develop in detail in Sect. 5. Before advocating this pluralist position, however, I first need to clarify the conceptual resources necessary to its formulation-in particular, the distinction between understanding T and understanding NT and the idea of different understandings NT being in tension or opposition to one another, which, unlike the notions of belief, knowledge, or doxastic disagreement, are not part of the standard philosophical discourse. This is what I do in Sects. 2 to 4. In Sect. 2, I start by briefly reviewing the various definitions of understanding T that have been offered in the recent philosophical literature. I do not do this for the purpose of evaluating these definitions or identifying what I take to be the most promising ones. In contrast, I remain deliberately neutral as to which of them is better and which is worse, and this neutrality is indeed an important part of this paper's strategy-namely, to characterize understanding NT 1 3 by demarcating it from understanding T (not from understanding T as defined by a particular existing theory), and then to explore the relationship between these two concepts. Therefore, the reason why I shall discuss the various existing definitions of understanding T is to identify something similar to their least common denominator and to formulate a rather general definitional scheme of understanding T , which can then (in Sect. 3) be used to introduce understanding NT , demarcate the latter from the former, and explore the relationship between them. In Sect. 4, I then suggest an account of what exactly happens in situations in which one subject is said to have a certain understanding NT of some phenomenon, whereas another subject is said to have a different, competing, or conflicting understanding NT of it, before I finally examine the epistemological significance of such situations and their relevance for science in Sect. 5. The Notion of Understanding in Contemporary Epistemology and Philosophy of Science Recent epistemology and philosophy of science have seen a remarkable shift of interest from traditional topics such as the concept of knowledge to that of understanding. The philosophers in this debate 4 usually distinguish between different kinds of understanding, most importantly objectual understanding, i.e. understanding of a certain subject matter or phenomenon P (such as evolution or climate change), and understanding-why, i.e. 
understanding why a certain event occurred or why a certain state of affairs obtains (for example, why the stock market crashed in 2008, or why the Moon orbits the Earth). For convenience sake, in this paper I shall concentrate on objectual understanding, but it should be relatively easy to apply mutatis mutandis my considerations to explanatory understanding as well. 5 My contention in this paper is that there is, in addition to the concept of objectual understanding that has been studied by epistemologists and philosophers of scienceunderstanding T as I have referred to it-, another, equally relevant concept of (objectual) understanding that, unlike the first one, neither has nor needs a tether to the facts. One of my aims is to analyze this concept-understanding NT -and explore its relationship to that of understanding T . But first we need some clarity about the notion of understanding T . The problem is, of course, that much of the debate concerns exactly the question of how this concept should be defined, and a considerable number of alternative definitions have already been proposed. Given this situation, and since my primary focus is on understanding NT , the best strategy, in my view, is to use an account of understanding T that is as neutral as possible with regard to the various definitions that have been proposed in the literature, while still being informative enough to allow its demarcation from understanding NT . In other words, I shall here neither develop a new concrete definition of understanding T nor commit myself to any one of the existing proposals. Rather, my strategy is to use a sort of general definitional scheme or template that is intended to be compatible with a broad variety of concrete definitions as proposed in the philosophical literature (though I shall not commit myself to the view that all existing proposals are compatible to it). The definitional scheme that I am going to suggest presupposes that a subject who has understanding T of a phenomenon needs some mental representation of that phenomenon, which can be a theory, model or similar entity (I take this assumption to be relatively uncontroversial; a similar assumption is made, for example, in the overview article by . Furthermore, I take it that any comprehensive account of understanding T should make some assertions about, first, the subject's attitude towards the representation; second, the subject's epistemic warrant for having her attitude towards the representation; third, the cognitive tasks that the subject needs to be able to exercise in relation to the representation; and fourth, the relationship between the representation and the world. Accordingly, my definitional scheme has four components, each of which corresponds to a fundamental dimension or aspect of the concept of understanding T : an epistemic-pro-attitude component, a warrant component, a cognitive component, and a tether component: (UND T ) Subject S has an understanding T of phenomenon P iff (1) S has a suitable epistemic pro-attitude towards a representation R of P (epistemic-proattitude component), (2) S's epistemic pro-attitude has the right sort of warrant (warrant component), (3) S is able to perform a characteristic set of cognitive tasks with regard to R (cognitive component), (4) R stands in a suitable relation to P (i.e., R is correct) (tether component). 
As already indicated, UND T is deliberately unspecific as to what exactly a suitable epistemic pro-attitude, the right sort of warrant, a set of characteristic cognitive abilities, and a suitable relation between R and P are. The scheme is meant to be a sort of smallest common denominator about which a wide variety of contributors to the debate on understanding T can agree. 6 But in order to better grasp how the scheme is supposed to work, it is useful to briefly consider how the four components could possibly be specified. As for the epistemic-pro-attitude component, one possibility is to require that a subject who understands T P needs to believe that R is correct (this would be in line with, e.g., Kvanvig 2003or Pritchard 2009). Instead of full belief, it is also possible to require an acceptance of R's correctness (for defenses of acceptance-based views of understanding T , see Dellsén 2016a; Elgin 2017). A very weak position is defended by Wilkenfeld (2017), who only requires a positive degree of belief. The term 'epistemic proattitude' is supposed to cover all these suggestions. Most accounts of understanding T further demand the subject's epistemic pro-attitude to be justified or warranted in some sense. For example, for those theories that claim that understanding T is a species of knowledge (such as Grimm 2006;Khalifa and Gadomski 2013;or Kelp 2017), it would be natural to require that the subject's belief in the correctness of R is justified and not 'gettierized'. 7 Other accounts reject the notion that understanding T is a species of knowledge, but still maintain that the subject's belief in or acceptance of R's correctness requires some sort of justification and/or the absence of certain forms of epistemic luck (see, for example, Kvanvig 2003;Pritchard 2009;. Wilkenfeld (2017) also requires the subject's positive degree of belief to be justified. 8 Depending on which concrete approach is assumed to be adequate, the cognitive component can be specified by using such concepts as 'manipulating', 'grasping', or 'inferring'. If S understands T P, then S can manipulate R in the right sorts of ways (Grimm 2010;Wilkenfeld 2013), and/or grasp certain dependence relations (Kvanvig 2003;Strevens 2013;Gijsbers 2013), and/or draw certain inferences (Hills 2009;Newman 2012), and/or give relevant explanations in S's own words and follow the explanations given by someone else (Hills 2009). Finally, as for the fourth component, the spectrum of positions defended in the literature ranges from a 'strong', to a 'moderate', to a 'non-factivity'. 'Strong factivity' means that R needs to be completely true. By contrast, proponents of 'moderate factivity' (such as Kvanvig 2003;Mizrahi 2012) argue that only certain essential elements of R need to be true, while the falsity of other elements are compatible with the subject's understanding T . Accordingly, a theory whose central assumptions are true may qualify as a suitable representation, even though some of its more peripheral assumptions are false. Finally, according to the 'non-factivity' view, R does not need to be true at all. Proponents of this position (e.g., Elgin 2007;) regularly invoke, for example, scientific models, which can be highly idealized and still facilitate understanding T . Nevertheless, even proponents of this weak position maintain that R needs to stand in some suitable relationship to reality. As Elgin puts it: "understanding somehow answers to the facts" (Elgin 2007, 33). 
I shall also speak of a 'correct representation', which means that the representation satisfies the conditions of the fourth component, regardless of whether it is specified in terms of strong, moderate, or non-factivity. 9 In the next section I examine non-tethered understanding and argue that its relationship to tethered understanding resembles, in several crucial ways, the relationship between belief and knowledge. In particular, I argue that understanding T is tethered, while understanding NT is not, much like knowledge is factive, while belief is not. And just as belief is an ingredient of knowledge, understanding NT is an ingredient of understanding T . 10 Non-Tethered Understanding As we have seen in the introduction, there is a kind of understanding that can be attributed to a subject without any commitment to whether the subject understands T the phenomenon in question. How should this concept be defined? As already indicated, one crucial feature is that it does not need any sensitivity to the facts, and therefore a definitional scheme of understanding NT will lack a tether component. Furthermore, I suggest that a definitional scheme of understanding NT does not require a warrant component either. While a subject who understands T needs to justifiably believe or justifiably accept that R is correct, it seems to be possible to attribute an understanding NT of a phenomenon to a subject even if the subject's belief in or acceptance of R's correctness is not justified (however, the fact that the subject's belief or acceptance is justified is of course compatible with attributing an understanding NT to her). Therefore, while interpreting understanding T as a species of knowledge can perhaps be a viable theoretical option, it is not a viable option to interpret understanding NT as a species of knowledge, since, in the case of understanding NT , neither the subject's epistemic pro-attitude needs to be justified, nor does the representation she uses have to be correct. As far as the other components are concerned, it seems that the two concepts are quite similar. If we say that a subject S has an understanding NT of a phenomenon P, we seem to say, first, that S has an epistemic pro-attitude toward a representation R of P, and therefore a definitional scheme of understanding NT needs an epistemic-pro-attitude component, and second that S is able to perform a number of characteristic cognitive tasks with regard to R, and therefore it also needs a cognitive component. These considerations suggest the following definitional schema for understanding NT : (UND NT ) Subject S has an understanding NT of phenomenon P iff (1) S has a suitable epistemic pro-attitude towards a representation R of P (epistemic-proattitude component), (2) S is able to perform a characteristic set of cognitive tasks with regard to R (cognitive component). 10 Note that the terms 'tethered' and 'correct', as I shall use them, are not synonymous. If we compare the relationship between understanding T and understanding NT to that between knowledge and belief, we could say that the tethered/non-tethered distinction plays a similar role with respect to the former, as does the factive/non-factive distinction with respect to the latter. Just as belief is non-factive, understanding NT is non-tethered. However, this does not exclude the possibility that a belief is true or that an instance of understanding NT involves a correct representation. 
(Footnote 9, continued: ... accepts R 2 ); (ii) S's epistemic pro-attitudes have the right sort of warrant; (iii) S is able to perform a characteristic set of cognitive tasks with regard to R 1 and R 2 ; (iv) R 1 stands in a suitable relation to P (e.g., R is correct in terms of strong factivity) and R 2 stands in a suitable relation to P (e.g., R is correct in terms of non-factivity).)

That UND NT needs a cognitive component might not be obvious (could a subject's understanding NT not be a mere representation of a phenomenon?, one might ask), but it can be made plausible by recognizing the parallel between how understanding T can be (or rather, cannot be) transmitted through testimony, and how understanding NT can be (or cannot be). Suppose that S knows nothing about phenomenon P and understands T nothing about it. Then, an expert on P tells S an isolated proposition p about P (or a set of isolated propositions that includes p as a member). Given that propositional knowledge can be transferred through testimony, it seems that S now is in a position to know p and the other propositions (if they are true). If these propositions qualify as a correct representation of P, and if S has a warranted epistemic pro-attitude towards them, she fulfills three of the four conditions of UND T . However, unless S grasps any connections between them (or is able to fulfill whatever cognitive tasks are necessary for understanding T ), it would not be correct to attribute any understanding T of P to S. Given that the ability to perform these cognitive tasks cannot easily be transferred from one subject to another, it follows that, as Gordon puts it, "understanding [i.e., understanding T ] can't simply be given to another in the way knowledge can" (Gordon 2017, 298; emphasis in original). I think that a similar case can be made for non-tethered understanding as well. Suppose again that S is initially completely uninformed about P and has neither an understanding T nor an understanding NT of P, and is now told by another subject that p and the other propositions about P are true. To better capture our intuitions about the cognitive component, we may now assume that these propositions are incorrect. However, let us suppose that S nevertheless has a suitable epistemic pro-attitude towards them. Now, if S is able to fulfill a suitable set of cognitive tasks with regard to these propositions, S could, in a conversation about P, naturally say something like "As I understand it, p". It would also be possible to say things like "On S's understanding, p", or "According to S's understanding, p". However, if S is not able to fulfill the necessary cognitive tasks (if the propositions are nothing but isolated bits of information for her), it would be very strange to utter such sentences. Thus, intuitively, we would not attribute any understanding NT of P to S, unless S is able to fulfill a suitable set of cognitive tasks with regard to the propositions. Another concern one might have about the cognitive component is the following: Strevens (2013) has proposed that the 'grasping' that many philosophers in the debate around understanding T consider to be of central importance is already factive.
If this be the case, using the notion of grasping might well be a natural way of specifying the cognitive component of UND T , but it could not possibly be used to specify the cognitive component of UND NT because in that case grasping would imply (or presuppose) that R is correct, and the absence of a correctness requirement (i.e., the absence of the tether component) is exactly what is distinctive about understanding NT . However, as Strevens also argues, it is easy to decompose the notion of grasping-provided that it really is factive-into a purely (non-factive) cognitive part (which Strevens refers to as 'grasping*') and a condition that ensures the correctness of what is grasped. For now, I shall not take a position on whether the 'ordinary' notion of grasping is really factive or not. Instead, I wish to argue that if 'grasping' should be factive, and provided that one wants to use it to analyze the concepts of understanding T and understanding NT , a natural way to do this is to use the notion of grasping* within the cognitive components of UND T and UND NT , while the tether component of UND T does the work of ensuring that what is grasped really is correct. In other words, the cognitive components of UND T and UND NT would be identical insofar as they are specified using the notion of grasping*: when a subject S has an understanding NT of a phenomenon P, S grasps* something about a representation of P; and when S has understanding T of P, S similarly grasps* something about a representation of P, but, in addition, we require this representation to be correct (whereby this requirement is stated in the tether component, not in the cognitive component). One might still worry about how the cognitive component is actually supposed to work in UND NT , given that UND NT does not require R to be a correct representation of P. In my view, a plausible answer to this question is the following. The characteristic set of cognitive tasks mentioned in UND NT is the set X of cognitive tasks for which the following holds: if R were a correct representation of P, S would need to be able to perform the tasks included in X in order to achieve understanding T of P. For example, in order to attribute S an understanding NT of combustion based on phlogiston theory, S needs to be able to perform those cognitive tasks that, if phlogiston theory were correct, S would need to be able to perform in order to achieve understanding T of combustion. With these considerations in mind, how should the relationship between understanding NT and understanding T be described? I think it is instructive to compare this relationship with that between belief and propositional knowledge. Looking at UND T and UND NT , one might say that understanding NT is an ingredient of understanding T in a similar way in which belief is an ingredient of knowledge. Knowledge is belief plus something else, and understanding T is understanding NT plus something else. In particular, a belief must be true in order to amount to knowledge; similarly, the representation's correctness is something that is required for understanding T , but not for understanding NT . Exactly what else is needed to turn a true belief into knowledge is a source of dispute among epistemologists (well-known proposals are, of course, that the belief must be justified and nongettierized, or the result of a reliable belief-forming process). 
As for understanding T /understanding NT , an additional requisite of understanding T is whatever is stated in the warrant component of UND T . 11 Note that it is possible to say that a subject S has an understanding NT of P without understanding T it at all. This would be the case if S uses an incorrect representation (i.e., a representation that does not stand in a suitable relationship to P) or if S's epistemic proattitude is not warranted. For example, S arguably can achieve no understanding T of combustion on the basis of phlogiston theory (assuming that phlogiston theory is not a correct representation of combustion), but S can develop some understanding NT of it on the basis of phlogiston theory. It is probably in such situations that we say the subject has a 'mistaken understanding' of the phenomenon. 12 11 If understanding T should not require any form of justification at all (see footnote 8), it seems that understanding T is nothing more than an understanding NT that includes a correct representation. 12 Note, however, that even an incorrect representation can be the object of someone's understanding T . For example, it is perfectly legitimate to say that "S understands T phlogiston theory", even if it is not the case that S understands T combustion. As Elgin (2017, 46) puts it: "it is possible for an account [i.e., a representation] to be the object of understanding [i.e., understanding T ] even if, owing to its untenability, it affords no understanding [i.e., understanding T ] of its purported subject matter." Similarly, de Regt emphasizes that both correct and incorrect theories can be intelligible to scientists (where a theory is 'intelligible' to a scientist if she understands T it; see de Regt 2017, 40). While I fully agree with Elgin, de Regt, and other authors that having an understanding T of a phenomenon P should not be confused with having an understanding T of a (perhaps incorrect) representation of P, I maintain that in order to obtain a complete picture, we should also take into account non-tethered understanding and distinguish four different aspects: (1) having an understanding T of phenomenon P, (2) having an understanding T of a representation R of P, (3) having an understanding NT of P, and (4) having an understanding NT of a representation R of P. (1), but It is also worth noting that it is possible to say that two subjects S1 and S2 have different understandings NT of P, without implying anything about whether anyone, both, or none of them understand T P. Conversely, however, it is not possible that a subject understands T P without having some understanding NT of P. Different, Competing, and Conflicting Understandings NT Now, what happens in cases in which two subjects are said to have different, competing, or conflicting understandings NT of a phenomenon P? On reflection, I think it is plausible to explicate such situations by saying that the essential feature of them is that the subjects use different, competing, or conflicting representations of P. This suggests the following definitional scheme: (DIFF) S1 and S2 have different/competing/conflicting understandings NT of P iff (1) (a) S1 has a suitable epistemic pro-attitude towards a representation R1 of P, (b) S1 is able to perform a characteristic set of cognitive tasks with regard to R1, (2) (a) S2 has a suitable epistemic pro-attitude towards a representation R2 of P, (b) S2 is able to perform a characteristic set of cognitive tasks with regard to R2, (3) R1 and R2 are different/competing/conflicting. 
One might object that differences between S1 and S2's representations are not the only possible differences between their understandings NT . For example, if S1 and S2 share the same representation R, they could still differ with regards to their epistemic pro-attitude towards R or the warrant they have for their attitudes. Of course, both need to have a proattitude that is sufficiently strong. For example, if one takes a positive degree of belief to be the epistemic pro-attitude required for understanding NT , both S1 and S2 need a positive degree of belief towards the proposition that R is correct. Nevertheless, it is possible that one of them has a stronger epistemic pro-attitude than the other. For example, while S1 might simply have an (unjustified) positive degree of belief towards the proposition that R is correct, S2 might (justifiably) accept or believe that R is correct. Another possibility is that S2 may be able to perform a larger number of cognitive tasks regarding R than S1. Again, S1 needs to be able to perform a sufficient number of characteristic cognitive tasks as well, otherwise we could not attribute any understanding NT to S1 at all. Nevertheless, S1 and S2 may differ with regard to how well they are able to perform the cognitive tasks. Arguably, however, differences of this sort are not what speakers have in mind in typical situations where they attribute different, competing, or conflicting understandings NT to two subjects. For example, suppose that Plato and one of his pupils both have suitable epistemic pro-attitudes towards Plato's theory of the transmigration of souls, and are able to fulfill a characteristic set of cognitive tasks with regards to it, but that Plato has a stronger Footnote 12 (continued) not (3), requires R to be correct; similarly, (2), but not (4), requires the subject's representation of R to be correct. At this point an important difference between de Regt's notion of intelligibility and the notion of understanding NT also becomes apparent. While the former is a property of theories or representations more generally, the latter is directed at the phenomena themselves. For example, phlogiston theory may be intelligible to phlogiston theorists, while they have an understanding NT of combustion. pro-attitude towards his pro-attitude, a better justification for it, or is better able to fulfill the cognitive tasks than the pupil. Would we, in this case, say that Plato and the pupil have different (or even competing or conflicting) understandings NT of the transmigration of souls? Probably not, I suppose. It seems that, at least typically, talk of different, competing, or conflicting understandings NT implies the presence of different, competing, or conflicting representations. 13 Let us now turn to the fact that, obviously, it makes a difference whether the representations are said to be 'conflicting', 'competing', or merely 'different'. Accordingly, it makes a difference whether S1's and S2's understandings NT are said to be 'conflicting', 'competing', or 'different'. Speakers can articulate slight nuances by describing the relationship between two subjects' understandings NT with either of these adjectives. 14 If two representations are merely different, they need not necessarily be in tension. Both can be correct and equally adequate representations of P. 
For example, R1 and R2 could focus on different aspects of P in the following way: R1 makes some assumptions about certain aspects of P and remains silent about those aspects about which R2 makes some assumptions; and R2, conversely, remains silent about those aspects about which R1 makes some assumptions. 15 In other cases, however, there is a tension between two understandings NT /representations. We could call the strongest form of such tension 'conflict'. If R1 and R2 are said to be 'conflicting', then the correctness of one representation excludes the correctness of the other. In other words, if R1 is correct, R2 is incorrect, and vice versa-they cannot both be correct at the same time, although they can be incorrect at the same time. As a consequence, it is not possible for both S1 and S2 to understand T P if they have conflicting understandings NT of P. Note that the question of whether two representations are conflicting in this sense partially depends on what it means for a representation to be 13 However, could the differences between Plato and the pupil's ability to perform the necessary cognitive tasks not be more profound? Instead of being mere quantitative nuances (i.e., differences in how well the subjects perform the cognitive tasks), could they not be be the source of more substantial differences or tensions? I am inclined to think not; unless there are differences between their representations, there are no substantial differences or tensions between the subjects. For suppose that the different performance of the cognitive tasks would result in S1 believing (or accepting) a proposition p about the phenomenon that S2 does not believe (or accept). Then that belief (acceptance) would be a part of S1's representation of the phenomenon, but not a part of S2's. On the other hand, suppose that their different performances of the cognitive tasks do not result in the subjects having different beliefs (acceptances) about the phenomenon. In this case, it seems to me, there is no substantial difference or cognitive tension between them. So, either the subjects believe (accept) the same things about the phenomenon, in which case there is no substantial difference or tension, or they believe different propositions about it, in which case there may be a substantial difference or tension, but it is due to their having different representations. 14 There are further adjectives that can be used to describe the relationship between two peoples' understandings NT or representations that seem to be more or less equivalent to the three that are mentioned in DIFF. For example, the phrase 'alternative understandings NT /representations', arguably, is more or less equivalent to 'different understandings NT /representations'; 'contesting understandings NT /representations' is more or less equivalent to 'competing understandings NT /representations'; and 'incompatible understandings NT /representations' is more or less equivalent to 'conflicting understandings NT /representations'. 15 For example, think of two theories of a phenomenon, one of which focuses on its causes, while the other focuses on its effects. Or think of theories that give different (but mutually compatible) accounts of some type of human behavior, either in social, or psychological, or neurological terms, respectively. It might be worth noting that similar things apply to explanatory understanding NT as well. 
One can have different but compatible explanatory understandings NT of why a person behaved the way she did (such as neurophysiological, psychological, or sociological understandings NT ), or different but compatible explanatory understandings NT of why a house burned down (such as an understanding NT in terms of the physical/technical details or an understanding NT in terms of an arsonist's motivation). 1 3 correct (recall that I use the phrase 'correct representation' in the sense of 'representation that affords tethered understanding'). For suppose that a representation's correctness implies that it is completely true. In this case, R1 and R2 cannot be correct at the same time if there is a single contradiction between them (i.e., a proposition such that R1 assumes its truth while R2 assumes its falsity). On the other hand, if it should be the case that a representation can be correct without being completely true, single contradictions between R1 and R2 do not exclude the possibility that they are both correct (for example, R1 and R2 could be consistent with respect to their essential assumptions, but inconsistent with respect to some of their peripheral assumptions). Thus, although UND NT does not have a tether component, the way the tether component is specified in UND T has an indirect influence on what it means for two understandings NT to be conflicting. While a conflict between two representations is the strongest form of tension between them, there can be tensions that do not require conflict. I suggest that two representations are 'competing' when there is tension between them, which leaves it open whether they are conflicting or not. If R1 and R2 are competing but non-conflicting, then there is tension between them, but it is possible for them to be both correct at the same time. Accordingly, it is possible for both S1 and S2 to have some understanding T of P if they have competing understandings NT of it. How such non-conflicting tensions can be explained again depends on what it means for a representation to be correct. If a representation's correctness does not require the truth of its peripheral assumptions, then both R1 and R2 can be correct, despite possible inconsistencies between their peripheral assumptions. Even if a representation's correctness does require its complete truth, I think it is possible that R1 and R2 can be competing but non-conflicting. In general, two representations of P may be said to be competing if each claims to provide a better, more adequate, account of P. Thus, for example, a representation may claim to be better than another, empirically equivalent representation on the grounds that it is simpler, more elegant, more fruitful, or broader in scope (that is, if it scores better with respect to some epistemic value(s) in Kuhn's (1977) sense). A conflict between two representations is the strongest form of discrepancy between them. Conflicting representations are always competing representations (but not vice versa), and competing representations are always different representations (but not vice versa). The same logical ordering results in different, competing, and conflicting understandings NT . Some but not all cases of different understandings NT should be described as being 'disagreement-like'. As I said, there are cases in which there is no tension between different understandings NT. If we take disagreement-likeness to necessarily involve some sort of tension, it follows that these situations are not disagreement-like. 
However, if two understandings NT are competing or conflicting, there is a tension between them, so it seems natural to describe them as disagreement-like. In my view, DIFF captures what is typically meant when one says that two people "have different, competing, or conflicting understandings" of a phenomenon or subject matter. Similarly, we can make sense of what we typically mean when we say that S1 and S2 "share the same understandings" of a phenomenon P. In this case, S1 and S2 have suitable epistemic pro-attitudes towards an identical representation R of P and are able to perform the characteristic cognitive tasks with regard to it. However, as already indicated, this does not imply that there are no discrepancies whatsoever between S1 and S2's understandings NT . If two people share the same understandings NT of P, one of the subjects could have a stronger-than-necessary epistemic pro-attitude towards the representation or be able to perform more of the characteristic cognitive tasks than the other subject. Different, Competing, and Conflicting Understandings NT and Scientific Pluralism The achievement of understanding T is a fundamental intrinsic epistemic goal in science, perhaps even its most important goal (see, for example, Salmon 1998; Kvanvig 2003; Dellsén 2016b; Elgin 2017). Arguably, developing understandings NT is a less important intrinsic scientific goal, if it is an intrinsic goal at all. Typically, scientists do not develop understandings NT of some phenomenon for their own sake, but because they want to understand T that phenomenon. As a result, they favor those understandings NT that they assume involve correct representations (much like they favor those beliefs that they take to be true). If understanding NT is an ingredient of understanding T , in a similar way that belief is an ingredient of knowledge, we may say that developing understandings NT is first and foremost a precondition or requirement for achieving understanding T . In a similar way in which scientists need to form beliefs if they aim to acquire knowledge, they need to develop understandings NT if they aim to achieve understanding T . The analogy between the understanding NT -understanding T relationship and the belief-knowledge relationship goes even further. From a socio-epistemological point of view, there are good reasons to suppose that doxastic disagreement is valuable for science because it helps scientific communities acquire knowledge about the world (see, for example, De Cruz and De Smedt 2013). Similarly, as I shall argue in this section, a diversity of different, competing, and conflicting understandings NT within scientific communities is valuable because it helps them achieve their goal of understanding T the phenomena within their fields of research. Indeed, I think that some of the reasons supporting this claim are familiar from the debate about scientific pluralism. In his discussion of why epistemic diversity is good for science, Chang (2012) distinguishes between 'benefits of toleration' and 'benefits of interaction'. This distinction can be applied rather straightforwardly to understanding NT and its relationship to understanding T . The idea behind benefits of toleration is that we should welcome epistemic diversity even in the absence of interactions between scientists from opposing camps. When applied to belief, one of the main arguments is that, typically, it is uncertain whether a scientific belief shared by a number of scientists is true. 
Even beliefs that seem to be well justified at a certain time can later turn out to be false. Something similar can be put forward for understanding T . Phenomena that appeared to be well understood T can turn out to be poorly understood T . In response, scientific pluralists will recommend that scientists should hedge their bets. Applied to understanding T , scientists (or more appropriately, scientific communities) should aim at a diversity of understandings NT in order to be better prepared for situations in which an understanding NT that seemed to be best suited for making sense of a certain phenomenon turns out to be misguided. If there are alternative understandings NT of the same phenomenon on the market, scientists can draw on them and explore their respective prospects of providing better accounts. If there are interactions between proponents of different understandings NT , additional benefits-benefits of interaction-are possible. In particular, proponents of different understandings NT can subject other understandings NT of the same phenomenon to critical scrutiny and identify their weaknesses. As a consequence, proponents of these other understandings NT can learn from this criticism and improve their own understandings NT . One can ask which of the definitional components of understanding NT are involved in producing the epistemic benefits (be they benefits of toleration or of interaction). According to UND NT , understanding NT has an epistemic-pro-attitude component and a cognitive component. This implies that, if there is a diversity of understandings NT of a phenomenon P, there are different scientists who, first, have epistemic pro-attitudes towards different representations R1, R2, … or Rn of P, and second, are able to perform the characteristic cognitive tasks with regard to either R1, R2, … or Rn of P. So which of these aspects is relevant for producing the epistemic benefits? I submit that the answer is that both can be relevant. The first aspect is relevant, for example, because if different scientists have epistemic pro-attitudes towards different representations, they are motivated to pursue their own favored understanding NT and to criticize their rivals' understandings NT . The second aspect is relevant because one understanding NT may require performing different cognitive tasks than another understanding NT . Even if one does not take a theory to be an adequate representation of a phenomenon, dealing with that theory or learning how it works can enhance one's cognitive capacities. And an enhancement of one's capacities may further improve one's understanding T . It may well be that a healthy scientific discourse requires not only a sufficiently large set of diverging beliefs and ideas, but also a sufficiently large set of cognitive skills that the scientists are able to perform. A diversity of understandings NT has the potential to enlarge this set and thus has significant value for that discourse. In sum, a diversity of understandings NT promotes the achievement of understanding T both because it helps scientists to improve their representations and because it helps them to increase the set of cognitive tasks they can perform. 
A further relevant question is the following: given a certain phenomenon P, should scientific communities aim at incorporating a diversity of understandings NT of P that are conflicting, or a diversity of understandings NT of P that are competing without being conflicting, or a diversity of understandings NT of P that are merely different without being competing? I think the answer is that scientific communities can benefit from a diversity of different, non-competing understandings NT , they can benefit in further ways from the presence of competing, non-conflicting understandings NT , and there are additional benefits that can accrue from the presence of conflicting understandings NT . For example, if two understandings NT are compatible because, say, they focus on different aspects of a phenomenon, proponents of the understandings NT can learn from each other that none provides a full account of the phenomenon. And if two understandings NT are competing or conflicting, studying the other understanding NT gives their proponents the opportunity to identify potential weaknesses in their own accounts. Allow me finally some brief remarks on a question that has preoccupied many epistemologists in recent years. The central question in the debate on peer disagreement is this: if two epistemic peers learn from each other that they hold contradictory beliefs about a certain proposition p, how should they rationally react? It seems obvious that we can ask a similar question with regard to different understandings NT . If two scientific peers learn that they advocate different understandings NT of P, how should they reasonably react? I shall not here attempt to provide a comprehensive answer to this question. I shall, however, briefly explore some of the challenges facing any attempt to provide a comprehensive answer. Note, first, that in paradigmatic cases of doxastic peer disagreements there are arguably three possible reactions for someone who learns that there is an epistemic peer who disagrees about p: remaining steadfast, i.e. maintaining one's doxastic stance towards p; adopting a neutral or agnostic stance; or adopting the other peer's doxastic stance. 16 In cases in which S1 learns that an epistemic peer S2 has a different understanding NT of a phenomenon, the situation is more complex. For one, which reaction is rational depends on whether S1 and S2's understandings NT are (i) merely different without being competing, (ii) competing without being conflicting, or (iii) conflicting. S1 and S2's understandings NT could be different without being competing because the representations underlying their understandings NT concern different aspects of the phenomenon in question. One possible reason why S1 and S2 prefer their representations is because their epistemic interests differ. S1 might be interested in certain aspects of P, while S2 is interested in other aspects. So, given their interests, it might well be reasonable for the subjects to 'remain steadfast' (if we want to apply this terminology here). If the understandings NT are competing without being conflicting, there seem to be situations in which it is also reasonable for the subjects to remain steadfast, as well as other situations in which it might be reasonable for them to reach some conciliation. As I argued in Sect. 4, understandings NT can be competing because they account for different epistemic or non-epistemic values. 
Given that different subjects may have divergent preferences for these values, and given that any particular weighting of them is not (or need not necessarily be) more or less rational than any other one, it may well be rational for the subjects to keep to their understandings NT . However, it is also possible that the subjects share the same pattern of preferences for the relevant values. In this case, some conciliation may be called for because one of the understandings NT may objectively be better suited than the other to account for the pattern of preferences that both subjects share. Finally, if the understandings NT are conflicting, the situation arguably is similar to cases of doxastic disagreement. So whatever reasons one might have in favor of either steadfast or conciliatory views about doxastic disagreements will be applicable in this type of situation. Another reason why analyzing cases of differing understandings NT may be more complex than analyzing doxastic disagreements is that the notion I tried to capture in DIFF is more complex than the notion of a doxastic disagreement. Doxastic disagreements involve just one proposition on which the subjects adopt different doxastic stances. By contrast, cases of differing understandings NT involve two different representations towards which the subjects may have a variety of epistemic stances. For example, if belief or acceptance is the epistemic pro-attitude required for understanding NT , S1 might believe/accept that R1 is correct, and S1 might believe, accept, or merely have a positive degree of belief that R2 is incorrect; and so on for S2 and her stances towards R1 and R2. And the situation's complexity increases if we consider the various ways that subjects might change their epistemic stance on the representations (from belief to agnosticism, from belief to acceptance, and so on). Furthermore, possible reactions might also include the cognitive component. S1 might take R2 not to be a correct representation, while still benefiting from dealing with S2's understanding NT . The reason is that, as argued above, one's cognitive capacities can be enhanced even by dealing with an incorrect theory or learning how an incorrect theory is supposed to work. Finally, let me point to yet another difficulty. When we ask which reaction is rational in the face of a peer's different understanding NT of a phenomenon P, we should take into account the possibility of a discrepancy between individual and social rationality. Individually, a certain reaction may be rational for the subject in the sense that the reaction increases the chances of her achieving her individual epistemic goal of understanding T P. For example, suppose that the peer's understanding NT is better suited for understanding T P than the understanding NT she previously favored. In this light, it seems to be rational for her to adopt the peer's understanding NT . On the other hand, I have argued in this section that a diversity of understandings NT within a scientific community is epistemically beneficial because it may increase the probability that the community as a whole achieves its epistemic goals. From the perspective of this social rationality, it may be good if she is steadfast and sticks to her previous understanding NT . This may seem to be the rational behavior in a more social sense of the word, that is, in the sense that it helps the community achieve its epistemic goals. 
Conclusion I have argued that, typically, sentences of the form "S1 and S2 have different/competing/ conflicting understandings of P" can be uttered with no commitment to whether any of the two subjects understand the phenomenon in the sense of 'understanding' that philosophers usually have in mind. This observation can be explained by distinguishing a tethered notion of understanding, understanding T , from a non-tethered notion, understanding NT . If someone has an understanding NT of some phenomenon P, she has a suitable epistemic proattitude towards a representation R of P and is able to perform a characteristic set of cognitive tasks with respect to R. But unlike understanding T , R need not be correct. A situation in which two subjects are said to have different, competing, or conflicting understandings NT of P is a situation in which one subject has a certain understanding NT of P and the other subject has another understanding NT of P based on a different, competing, or conflicting representation. I have argued that in scientific practice the relevance of a diversity of different, competing, or conflicting understandings NT derives from the fact that understanding T is an important (or maybe the most important) goal in science and that a diversity of different, competing, or conflicting understandings NT can help scientific communities achieve this goal. Specifically, the benefits of such diversity can result (1) from there being different scientists who have epistemic pro-attitudes towards different, competing, or conflicting representations of a certain phenomenon P, and (2) from there being different scientists who are able to perform the characteristic set of cognitive tasks with regard to different, competing, or conflicting representations of P. Moreover, the benefits can be benefits of toleration and of interaction. Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Deep Learning Based Pavement Inspection Using Self-Reconfigurable Robot The pavement inspection task, which mainly includes crack and garbage detection, is essential and carried out frequently. The human-based or dedicated system approach for inspection can be easily carried out by integrating with the pavement sweeping machines. This work proposes a deep learning-based pavement inspection framework for self-reconfigurable robot named Panthera. Semantic segmentation framework SegNet was adopted to segment the pavement region from other objects. Deep Convolutional Neural Network (DCNN) based object detection is used to detect and localize pavement defects and garbage. Furthermore, Mobile Mapping System (MMS) was adopted for the geotagging of the defects. The proposed system was implemented and tested with the Panthera robot having NVIDIA GPU cards. The experimental results showed that the proposed technique identifies the pavement defects and litters or garbage detection with high accuracy. The experimental results on the crack and garbage detection are presented. It is found that the proposed technique is suitable for deployment in real-time for garbage detection and, eventually, sweeping or cleaning tasks. Introduction The development of urban pavement infrastructure systems is an integral part of modern city expansion processes. Every year, the pavement infrastructure has been growing multiple folds due to developing new communities and sustainable transport initiatives. Maintaining a defects free, clean, and hygienic pavement environment is a vital yet formidable Pavement Management System (PMS) task. Pavement inspection, i.e., identifying defects and litter or garbage with cleaning, are mandatory to achieve a defects-free and hygienic pavement environment. Generally, in PMS, human inspectors are widely used for defect and cleanness inspection. However, this method takes a long inspection time and needs a qualified expert to systematically record the severity of defects and mark defects' spatial location. Furthermore, routine cleaning of lengthy pavement is a tedious task for sanitary workers. Autonomous robots are suited for repetitive, dull, dirty, tedious, and time-consuming tasks. The emphasis on automation in construction using robots is reported in [1] where a detailed study on how robots potentially value add to the construction workflow, quality of work and project timeline in less explored areas of construction robotics. Tan et al. [2] introduced robot inclusive framework targeting robots for construction sites, that proposes a measure of robot-inclusiveness, different categories for robot interaction, design criteria and guidelines to improve robot interaction with the environment. with transverse cracks. Another study reported on Sobel and Canny edge detection approach for detecting pavement cracks and attain an Classification Accuracy Rate (CAR) as 79.99% reported in [30]. However, most of the defect inspection scheme was used offline, and very few works have reported on the pavement cleanness inspection using a deep learning scheme. The CNN based approaches were reported in the literature for its effectiveness in recognizing garbage, cleanness inspection, and garbage sorting. Chen et al. proposed a computer vision-based robot grasping system for automatically sorting garbage. Here, Fast Region Convolution Neural Network (F-RCNN) is employed for detecting different objects in a given scene [31]. Gaurav et al. 
[32] developed a smartphone application called SpotGarbage to detect and locate debris outdoors. A pre-trained AlexNet CNN model was used to detect the garbage in images captured outdoors. The training images were obtained using Bing's image search API. The model has achieved a classification accuracy of 87% for this application. However, it only reports on the garbage detected or not detected as a heap, while not considered the type of objects in the garbage which is also the focus of present work in context of pavements. An alternative approach that involves the use of a Support Vector Machine (SVM) with Scale Invariant Feature Transform (SIFT) functionality to identify the recyclable waste is provided in [33]. Images of solid waste are used in this method for classification, but it fails to identify the location of garbage. However, 94% of accuracy is achieved for the given task. Rad et al. [34] has proposed a model using overfeat-googlenet to outdoor garbage detection. In this work, 18,672 images of various types of garbage are used to train a Convoluted Neural Network (CNN) in the identification of solid waste from outdoor environments, such as newspapers, food containers, cans, etc. The network reached an accuracy of 68.27% for the detection of debris in this application. Similar work was carried out for for identification and classification of solid and liquid debris using the MobileNet V2 Single Shot Detector (SSD) framework and the SVM model was used to estimate the size of liquid spillage [35]. Recently, Fulton et al. [36] have proposed a deep-learning framework based debris detector for underwater vehicles. As an outcome of their study, CNN and SSD have better performance metrics when compared with YOLOV2 and Tiny-YOLO. The above mentioned study ensure that deep learning framework is an optimal method for pavement inspection task, i.e., crack and garbage detection. Objectives Taking account of the above facts,the objectives of present paper fixed as: (a) Incorporating deep learning-based vision system for pavement segmentation on Panthera, (b) Detection of the pavement cracks, (c) Geo tagging of the pavement cracks after detection for effective monitoring, and (d) Deep Convolution Neural Network (DCNN) based vision system for cleanliness inspection task. The rest of the paper is organized in five sections as follows: Section 2 gives the brief overview of the Panthera robot mechanical, sensory, and electrical components. Section 3 describes the defects and garbage detection framework. Experimental results are discussed in Section 4. Conclusions and future work are finally presented in Section 5. Panthera Robot Architecture The design and specifications of the the self-reconfigurable pavement inspection and cleaning robot Panthera are discussed here for brevity. Figure 1 shows the reconfiguration on pavement and the robot specifications are listed Table 1. b) Panthera on pavement in extended state a) Panthera on pavement in contracted state The proposed architecture has adopted the following key features include • Self-reconfigurable pavement sweeping robot Panthera can reconfigure its shape with contracted and extended state shown in Figure 1a,b respectively. This feature enables it to travel on different pavement widths, avoid static obstructions, and response to the pedestrian density. • With reconfigurable mechanism, Panthera can access variable pavement width. • Panthera has omnidirectional locomotion, which helps in taking sharp turns, avoiding the defects, potholes, etc. 
• The Panthera is equipped with vision sensors to detect the garbage or litters on the pavement and also the cracks present on it using the images taken during daylight conditions. Figure 2 shows the mechanical system and key functional components of Panthera robot. It comprises of (a) reconfigurable mechanism unit (b) locomotion and steering mechanism unit and, (c) sweeping and suction unit. Reconfigurable Mechanism The reconfiguration unit consists of the expanding and contracting mechanism as shown in Figure 2a. The central beam contains a machined shaft with a single-lead Acme threaded screw. The shaft has right-handed threads in one half and left-handed thread in another. The dimension of the Panthera in its full extended and retracted state are 1.75 × 1.70 × 1.65 meters and 1.75 × 0.80 × 1.65 meters respectively and is shown in Figure 1. The power sources, vacuum units suction drum were accommodated on the central, left, and right beam or frames. Locomotion and Steering Units The locomotion and steering action is attained using four steering units having two in-wheel motors in each resulting in the thrust for locomotion provided by eight powered wheels. Each steering units have two in-wheeled motors resulted in a differential wheel, as shown in Figure 2b (iii). The rotation sequence of the eight wheels will result in the steering or locomotion. When all the eight wheels are synchronized to rotate in one direction, then forward or backward locomotion is obtained. For sideways locomotion, the steering units are turned by 90 • first by the rotating the two wheels in each steering unit using differential drive pivot turn. Hence the omnidirectional feature of the platform is achieved. The suspension unit is shown in Figure 2b (iv) which is pivoted to move about an axis as shown in Figure 2b (v). Figure 2c shows the sweeping and vacuum units. The suction motor attached to the collecting box with the inlet opening of the diverging section is shown in Figure 2c (vi). The sweeper brushes, as shown Figure 2c (vii), move the materials from the pavement towards the vacuum cleaner inlet duct with an opening dimension of 20 × 14 cm, which is the height of a typical cool-drinks can. The scrubbing brushes, that are actuated using the rotary motors are made to engage and disengage against the ground by the screw action, as shown in Figure 2c (viii). Note that the identification of the garbage on streets becomes essential for twofold reasons (a) For effective cleaning of dirty areas (b) With the limited dimension of vacuum inlet of these machine objects bigger in dimension should be avoided else it may jam the vacuum inlets. Sensory Units Panthera sensory system comprise of various sensing devices include mechanical limit switch, Intel Realsense depth sensor, absolute encoder and GNSS tracking module. A short description for the sensory units are as follows: • Vision system: Intel Realsense D435 depth camera is utilized in panthera vision sensor as shown in Figure 3a (i). The camera has wide field of view (85.2 • × 58 • × 94 • ) and high pixel resolution 1920 × 1080 which is fixed in center of the front panel of Panthera robot. • Global Navigation Satellite System (GNSS) receiver was used for getting the geographical location of the defect region, as shown in Figure 3a (ii). The GNSS device namely NovAtel's PwrPak7D which is robust and accuracy of 2.5 cm to few meters A 16 GB internal storage device is used to log the locations of the robot. The device is shown in Figure 3a (ii). 
The accuracy can further be improved by combining the wheel odometry data along with the filters. • Mechanical limit switches: In order to to limit the the reconfiguration of the robot between 0.8 to 1.7 meters the mechanical switch was used Figure 3 a (iii) . Limit switch with roller type plunger is attached at the end of the lead screw to limit the movement of the re-configuring frame safely. The limit switch, when triggered, results in an immediate stop in the rotation of the lead screw shaft. • Absolute encoders: Absolute encoders were used to get the feedback for the steering rotation achieved by differential action of the wheels, as shown in Figure 2b (iii) and Figure 3a (iv). The A2 optical encoder from US Digital was mounted on top of the four steering units and communicated using RS-485 serial bus utilizing US Digital's Serial Encoder Interface (SEI) was used. Electrical Units The electrical unit block with the traction batteries mounted on the frame of the robot is shown in Figure 3b (i). The traction batteries of 24 Volts connected in parallel to power the sweeping robot Panthera. The differential wheels attached with the hub is connected directly to the DC brushed geared motor Figure 3b (ii) of 24 Volts and 130 rpm. To power these actuators, Roboclaw as shown in Figure 3b (v) of 24 and 30 Amperes rating was used to provide electrical pulses as per the control velocity set in logic written for micro-controller board based on the ATmega328P, here Arduino Mega shown in Figure 3b (iii). Figure 4 shows the block diagram of the Panthera control system hardware architecture. The hardware architecture comprises of five control blocks includes the primary control system, Deep Neural Network Processing (DNNP) unit, localization control, reconfiguration control, and cleaning module control unit. The primary control system is built with an industrial PC, which comprises 8 core CPU,16GB RAM, and separate GPU cards for running deep neural network function in real-time. Here, the primary system uses GPU for running the SegNet framework and NVIDIA Jetson nano GPU embedded board for run the inspection CNN module. The NVIDIA Jetson nano comprises of ARM A57 CPU and 128 Core maxwell GPU with 4GB memory and running on Ubuntu 18. The primary control system contains the Robot Operating System (ROS) master function, which generates the control message to the Jetson nano GPU, localization, reconfiguration, and cleaning module. The central processing unit utilizes a server-client communication model to communicate with other modules. The real sense vision module is connected to the primary control system unit using a USB 3.0 communication interface. The TensorFlow an open-source deep learning framework is configured in both units to run the deep learning function. The other control modules include localization, reconfiguration, and cleaning device control unit are powered with Arduino-Mega microcontroller and configured as ROS slave for communicating with the primary control system. The slave units handle the various sensor interface and generate the required control and Pulse Width Modulation (PWM) signal to motor drivers. Defect and Garbage Detection Framework Functional block diagram of pavement inspection framework is shown in Figure 5. It comprise of pavement segmentation module, defect and garbage detection and defect localization module. Input Layer The input layer takes in the raw image obtained after sampling the video into images. 
The input layer converts each frame into an image of a specific size; here, it resizes the extracted images to 640 × 480 pixels. Figure 6 shows the SegNet [37] based pavement segmentation module. SegNet is a Deep Learning (DL) based semantic image segmentation framework that is widely used in autonomous driving, industrial inspection, medical imaging, and satellite image analysis. In this work, the SegNet DCNN module was adopted to segment the pavement region from other objects. The SegNet architecture comprises a deep convolutional encoder and a corresponding set of decoder layers, followed by a pixel-wise classification layer. The encoder and decoder parts consist of thirteen convolution layers, Rectified Linear Unit (ReLU) activation functions, and max-pooling layers with 7 × 7 kernels. On the encoder side, convolution and max-pooling operations are performed; similarly, up-sampling and convolution operations are executed on the decoder side. While performing the max-pooling operation on the encoder side, the corresponding max-pooling indices (locations) are stored for use in the decoding operation. Finally, a K-class softmax classifier is connected to the decoder output to compute the class probabilities for every pixel individually. To retain higher-resolution feature maps, fully connected layers are removed at the deep encoder output. Furthermore, a Stochastic Gradient Descent (SGD) algorithm is used to train and optimize the SegNet framework with a learning rate of 0.002 and a momentum of 0.9, respectively. A training set with 20 samples is used for training the CNN. The model with the best performance on the validation data set in each epoch was selected. Pavement Defects and Garbage Detection The multi-layer CNN model for the defect and garbage detection task is shown in Figure 7. Here, the Darkflow framework, a Python-based network builder with a TensorFlow backend, has been used to build the multi-layer CNN model. The difference between a Darkflow-based network and a traditional CNN is that, in a traditional CNN, the classifier uses the entire image to perform classification, whereas in a Darkflow-based network the image is split into multiple grids and multiple bounding boxes are generated within each grid. A trained model outputs the probability that an object is present in a bounding box, and if that probability exceeds a specified value in a specific bounding box, then the algorithm extracts features from that specific part of the image to locate the object. In short, in a Darkflow-based network, the network searches for the desired objects in regions whose probability is higher than the threshold value, as opposed to searching throughout the entire image, which is more exhaustive and prone to error. This makes it a much cleverer CNN for performing object classification and localization, and it is faster than many other model builders because it only carries out prediction in selected grid cells, which makes it a good candidate for real-time use. The multi-layer CNN (Figure 7) consists of 9 convolutional layers, 6 pooling layers, and an output layer with a softmax activation function. The numbers below the curved braces give the information of all the layers, such as padding, and the numbers inside, i.e., 32, 64, 192, etc., represent filter size and stride. 
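To make the detection network's structure concrete, the following sketch builds a nine-convolution, six-pooling classifier of the kind described above using the Keras API of TensorFlow (the framework used in this work). Only the first few filter sizes (32, 64, 192) are reported in the text, so the remaining filter counts, kernel sizes, padding, and the activation slope are illustrative assumptions rather than the deployed Panthera configuration.

```python
# A minimal sketch (not the deployed Panthera model) of the multi-layer CNN
# described above: nine convolutional layers, six max-pooling layers, and a
# softmax output for the two inspection classes. Filter counts beyond
# 32/64/192, kernel sizes, and padding are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_inspection_cnn(num_classes=2, input_shape=(480, 640, 3)):
    def act(x):
        # Leaky ReLU; the activation choice is discussed in the
        # Activation Function subsection below.
        return tf.nn.leaky_relu(x, alpha=0.01)

    model = models.Sequential([layers.Input(shape=input_shape)])
    # Six convolution + max-pooling blocks (six pooling layers in total).
    for filters in (32, 64, 192, 256, 256, 512):
        model.add(layers.Conv2D(filters, 3, padding='same', activation=act))
        model.add(layers.MaxPooling2D(pool_size=2))
    # Three further convolutional layers (nine convolutions in total).
    for filters in (512, 256, 128):
        model.add(layers.Conv2D(filters, 3, padding='same', activation=act))
    # Softmax output layer giving class probabilities (defect vs. garbage).
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation='softmax'))
    return model

model = build_inspection_cnn()
model.summary()
```

In the actual pipeline the network head produces grid-wise bounding-box outputs rather than a single image-level class, as described in the Bounding Box subsection below; the classification head here is deliberately kept simple.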
Convolutional Layers The convolutional layer repeatedly performs the operation of convolution between the input image and a chosen filter. The operation of convolution involves an element-by-element multiplication of a sub-array of the input image with the chosen filter; the results of these element-by-element multiplications are summed, which corresponds to a single element in the output of the convolutional layer. The size of the sub-array is equal to the size of the filter. Once the result is obtained from one sub-array of the input image, a different sub-array is chosen to perform the same operation. The new sub-array is selected based on the stride length (s), i.e., the new sub-array is chosen by shifting across a specified number of rows and columns from the previous sub-array, and this convolution operation is continued until the entire input image is covered. There are various hyperparameters involved in this operation: stride length (s), padding (p), and filter size (K). The stride length determines the number of units by which the filter is shifted at one time; in other words, the stride is the amount by which the filter shifts to perform the convolution. The padding determines the number of zero-padding layers applied to the input data before performing the convolution operation. In this work, valid padding was used with no zero padding (p = 0). The convolution operation is represented by Equation (1): (f * g)[m, n] = Σ_i Σ_j f[i, j] g[m − i, n − j], (1) where f denotes the input function and g denotes the filter function of the convolution. Pooling Layer The general rule of thumb in constructing a CNN involves a convolutional layer followed by a pooling layer. This is done to enhance the feature extraction process while reducing the spatial dimension of the input from the preceding layer. This process of reducing the spatial dimension is known as down-sampling, and it improves the efficiency and speed of the network. Moreover, it helps generalize the model by reducing overfitting. The pooling layer applies non-linear down-sampling to the activation maps, which, in turn, reduces the spatial size of the representation. This layer also reduces computation time by reducing the number of parameters required. In this paper, max pooling was used as the filter for this layer. In a max pooling layer, the maximum value within the region covered by the filter is taken and assigned as the output value. SoftMax Layer The final layer in the proposed network is the softmax layer. The softmax layer is usually used for classification, as it provides a probability distribution over the classes; the class with the highest probability is provided as the result. It can also be viewed as an activation function. Activation Function In the proposed network, the leaky Rectified Linear Unit (ReLU) is used as the activation function for the convolutional layers and a linear activation for the final convolutional layer. The need for non-linearity in the network requires us to pick leaky ReLU as the activation function. Equations (2) and (3) give the underlying mathematical operation corresponding to leaky ReLU: f(x) = x, if x > 0 (2); f(x) = ax, if x ≤ 0 (3); where a = 0.01. Equation (4) gives the mathematical operation for the linear function: f(x) = x (4). 
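The layer operations just described can be written out directly. The following NumPy sketch mirrors the definitions above (a valid, p = 0, convolution with stride s as in Equation (1), 2 × 2 max pooling, and the leaky ReLU of Equations (2) and (3)); it is meant only as an illustration, not as the optimized TensorFlow kernels actually used on the robot.

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    """Valid (p = 0) 2-D convolution of Equation (1): slide the flipped filter
    over the image, multiply element by element, and sum each window."""
    k = np.flipud(np.fliplr(kernel))        # flip the filter for true convolution
    kh, kw = k.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(window * k)
    return out

def max_pool2d(feature_map, size=2):
    """Non-linear down-sampling: keep the maximum value in each size x size region."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def leaky_relu(x, a=0.01):
    """Equations (2)-(3): f(x) = x for x > 0, f(x) = a*x otherwise, with a = 0.01."""
    return np.where(x > 0, x, a * x)

# Tiny usage example on a random 6x6 'image' with a 3x3 averaging filter.
img = np.random.rand(6, 6)
kern = np.ones((3, 3)) / 9.0
act = leaky_relu(conv2d_valid(img, kern, stride=1))
pooled = max_pool2d(act, size=2)
print(pooled.shape)   # (2, 2)
```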
Bounding Box This work is focused on object localization. A customized CNN network is used to find the Region of Interest (RoI), or exact location, in the color image. The output from the above process is wrapped inside a bounding box. The bounding box is constructed by dividing the image into segments of equal area and generating a target vector for them using Equation (5). The bounding box target vector for training is as follows: y = [P, X_min, Y_min, X_max, Y_max, C_1, C_2], (5) where P is the binary value which determines whether an object of interest is present in the image, and X_min and Y_min are the upper-left X- and Y-coordinates of the bounding box. Similarly, X_max and Y_max are the lower-right X- and Y-coordinates of the bounding box. The value of the argument C_1 will be one if the object belongs to class 1, else zero. Similarly, the value of C_2 is equal to one if the object belongs to class 2, else zero. Furthermore, the Intersection Over Union (IOU) method of Equation (6) is utilized to eliminate overlapping bounding boxes. IOU is the ratio of the area of overlap to the area of union. In this work, the IOU threshold is fixed to 0.5. If the IOU calculated between the predicted and actual bounding boxes is equal to or greater than 0.5, then the obtained output is considered correct; otherwise, the predicted bounding box coordinates are considered a false prediction. Table 2 shows the average IOU matching of all bounding boxes in the test set. Mobile Mapping System for Defect Localization The Mobile Mapping System (MMS) [38,39] is the final component of the proposed system, which is used for tracking the location of the defects. The MMS technique stamps the geo-referencing parameters into the defect-detected images and forwards them to a remote monitoring unit or PMS for finding and fixing a pothole or other defects. The MMS function runs in the primary control unit and uses three kinds of data to accomplish defect localization: the physical distance d estimated by the realsense rs-measure API function, and data from two hardware modules, namely the wheel encoder, which provides the odometric distances, and the Global Navigation Satellite System (GNSS), which provides latitude and longitude information. The generated MMS data is forwarded to a remote monitoring unit through a 4G wireless communication module. Figure 8 shows one of the independent test trials, i.e., one carried out without the Panthera robot while being moved along the public park connectors. The image highlights the location of the defects on the map, which will assist in faster maintenance of the pavement. Experiments and Results This section describes the experimental results of the proposed system. The experiment has been performed in three phases: data set preparation, training the SegNet and inspection CNN (defects and garbage detection), and validating the trained models. Generally, the detection or segmentation performance of a DNN relies on various parameters, including the size of the training data set, data-set class balance, illumination conditions, and hyperparameters (learning rate, batch size, momentum, and weight decay). These parameters play a key role in network performance and computational cost in both the training and testing phases. The learning rate directly affects the time taken for the training to converge: a small learning rate makes the training longer, whereas a high learning rate may lead to large weight updates and overshoots. The batch/sample size affects the computational cost and should be chosen according to the hardware memory. Hence, these parameters need to be tuned appropriately. After some trials, it is found that a learning rate of 0.02, a momentum of 0.9, and a training set of 20 samples provide satisfactory results in our application. 
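Before turning to the data-set preparation, the IOU test of Equation (6) introduced in the Bounding Box subsection above can be made concrete with a short sketch. Boxes are given as (X_min, Y_min, X_max, Y_max), matching the target vector of Equation (5), and the 0.5 threshold is the value fixed in this work; the surrounding suppression logic of the deployed detector is not detailed in the text, so only the basic check is shown.

```python
def iou(box_a, box_b):
    """Intersection over union (Equation (6)) of two axis-aligned bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_correct_detection(predicted_box, ground_truth_box, threshold=0.5):
    """A prediction counts as correct when its IOU with the ground truth reaches
    the threshold; otherwise it is treated as a false prediction."""
    return iou(predicted_box, ground_truth_box) >= threshold

# Usage example with two strongly overlapping boxes (IOU is about 0.82).
print(is_correct_detection((10, 10, 50, 50), (12, 12, 52, 52)))  # True
```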
Mobile Mapping System for Defect Localization
The Mobile Mapping System (MMS) [38,39] is the final component of the proposed system and is used for tracking the location of the defects. The MMS technique stamps georeferencing parameters into the defect-detected images and forwards them to a remote monitoring unit or pavement management system (PMS) for finding and fixing a pothole or other defects. The MMS function runs in the primary control unit and uses three data sources to accomplish defect localization: the physical distance d estimated by the RealSense rs-measure API function, and data from two hardware modules, namely a wheel encoder, which provides odometric distances, and a Global Navigation Satellite System (GNSS) receiver, which provides latitude and longitude information. The generated MMS data are forwarded to a remote monitoring unit through a 4G wireless communication module. Figure 8 shows one of the independent test trials, carried out with the Panthera robot in the public park connectors. The image highlights the locations of the defects on the map, which will assist in faster maintenance of the pavement.

Experiments and Results
This section describes the experimental results of the proposed system. The experiment was performed in three phases: data-set preparation, training the SegNet and the inspection CNN (defect and garbage detection), and validating the trained models. Generally, the detection or segmentation performance of a DNN relies on various parameters, including the size of the training data set, data-set class balance, illumination conditions, and hyperparameters (learning rate, batch size, momentum, and weight decay). These parameters play a key role in the network performance and computational cost in both the training and testing phases. The learning rate directly affects the time taken for training to converge: a small learning rate makes training longer, whereas a high learning rate may lead to large weight updates and overshooting. The batch/sample size affects the computational cost and should be chosen according to the available hardware memory. Hence, these parameters need to be tuned appropriately. After some trials, it was found that a learning rate of 0.02, a momentum of 0.9, and a batch size of 20 samples provide satisfactory results in our application.

Data Set Preparation and Training the Model
The data-set preparation, training, and testing flow diagram for the proposed system is shown in Figure 9. Here, the data-set preparation process involves collecting pavement images, pavement-defect images, and garbage images with different pavement backgrounds. The data set consists of 3000 image samples of pavement collected from different locations in Singapore. Image acquisition was done from the robot's perspective under different lighting conditions. The collected images are balanced across two classes: (1) pavement defects (cracks and damage) and (2) garbage (tissue paper, food-packaging paper, polythene covers, metal bottles, and plastic). For training and testing the model, a fixed image resolution of 640 × 480 was used throughout the experimental trials. In order to enhance CNN learning and to control over-fitting, data-augmentation techniques such as rotation, scaling, and image flipping were used in the training phase. K-fold cross-validation was used for model assessment (in this case, K = 10 was fixed). The data set is divided into 10 sections; one of the 10 sections is used for testing the model and the remaining 9 are used for model training. To eliminate bias due to a particular training or testing data set, this process was repeated 10 times. Likewise, the results of the performance metrics were averaged over the 10 repetitions, and the mean results are provided. The resulting images from the highest-accuracy models are given here. The SegNet and inspection CNN models were realized using the TensorFlow 1.9 module on the Ubuntu 18.04 operating system. The models were trained on a computer with an Intel Core i7-8700k, 64 GB of RAM, and an NVIDIA GeForce GTX 1080 Ti graphics card. To assess the performance of the proposed scheme, standard statistical measures such as accuracy, precision, and recall were adopted. Equations (7)-(10) give the accuracy, precision, recall, and F-measure, respectively (a short computational sketch of these measures is given below):

Accuracy = (tp + tn) / (tp + fp + tn + fn)    (7)
Precision = tp / (tp + fp)    (8)
Recall = tp / (tp + fn)    (9)
F-measure = 2 × Precision × Recall / (Precision + Recall)    (10)

As per the standard confusion matrix, the variables tp, fp, tn, and fn are the true positives, false positives, true negatives, and false negatives, respectively. η_miss represents the target objects not recognized by the network, and η_false represents non-target objects detected as target objects.

Validation of the Defect and Garbage Detection Framework
After training the model, the effectiveness of the trained CNN framework was tested in offline and real-time mode with Panthera. To carry out the offline experiments, the trained model was loaded onto the Jetson Nano and tested in the laboratory with 1200 locally collected defect and garbage images. Figure 10 shows the offline detection results for defects and garbage. In this experiment, the detection model detected most of the defects and garbage with a high confidence level, with a mean average precision (mAP) of 92% for defects and 95% for garbage. In order to evaluate real-time defect and cleanliness inspection, a six-hundred-meter pavement located near our institution was selected as the test bed. Before carrying out the experiment, the pavement defect regions were manually noted and later used to compare against the detection results of the proposed system. For the cleanliness inspection, different types of garbage were randomly dropped on the test bed and their detection by the model was tested. Figure 10. Offline test results for defect detection (a-c) and garbage detection (d-f). In this experiment, the images were captured from the Panthera robot operating on the university campus pavement. The vision module runs at 10 frames per second (fps), and the image resolution was set to 640 × 480. The robot was moved at a slow speed on the pavement, and the detection results were recorded from a remote monitoring unit.
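Returning to the evaluation protocol above, the following sketch illustrates how a 10-fold split and the confusion-matrix measures of Equations (7)-(10) could be computed. The use of NumPy, the toy labels, and the function names are our own assumptions for illustration and are not part of the paper's TensorFlow pipeline.

import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k folds; each fold serves once as the test set."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

def detection_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F-measure of Equations (7)-(10)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f_measure

# toy example: 1 = object of interest present, 0 = absent
print(detection_metrics([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 1, 0, 0, 1, 1, 0]))
for train_idx, test_idx in kfold_indices(n_samples=3000, k=10):
    pass  # train on train_idx, evaluate on test_idx, then average the metrics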
Figure 11 shows the segmentation and real-time detection of garbage on the pavement. Figures 12 and 13 show the defects detected on the pavement along with the geotagging and Google-mapped results. Furthermore, Figure 14 shows the garbage detection results of the inspection framework along with their geotagging information. The pavement defects detected are marked by sky-red rectangular boxes, and the garbage detected is marked by green rectangular boxes. In this analysis, the detection model detects defects with a mean average precision (mAP) of 88% to 92% and garbage with a confidence level of 91% to 96%, respectively, on the data set used. Furthermore, statistical measures were computed to estimate the robustness of the detection model in both the online and offline experiments. Table 3 shows the statistical results for the online and offline experiments. Figure 11. Pavement segmentation and garbage detection results in (a-d). Figure 12. Defects with geotagging information in (a-h). The statistical measures indicate that the trained CNN model detects defects with 93% precision in the offline test and 89.5% precision in the online test. Furthermore, the model miss rate is 2% in the offline test and varies from 4% to 7% in the online test due to the different lighting conditions. Similarly, garbage is detected with a precision of 96% in the offline test and 91.5% in the online test. However, its missed-detection ratio under different lighting conditions is quite low compared with that for defect detection; this is due to the higher visibility of garbage compared with defects. The study confirms that the proposed system is not heavily affected by environmental factors such as varying lighting conditions and shadows, as indicated by the missed-detection metrics: in this study, the difference in missed-detection ratio under environmental changes is less than 4%. However, about 4% of the missed detections are due to defects that are barely visible or whose regions are heavily covered by shadows. Further, the computational cost of the models was assessed by the time taken to process one 640 × 480 image frame on the execution hardware. Here, the SegNet model was executed on the primary control system and the detection model was executed on the Jetson Nano embedded GPU board. The experiment was run on 100 images and the detection times were recorded. The results show that the SegNet model takes an average of 120 ms to segment the pavement region from a 640 × 480 image frame. Furthermore, the detection model and the MMS function took an average of 12.2 ms to detect the defects and garbage. The experimental results show that the trained models process seven to eight frames per second on average. Tables 4 and 5 show the performance comparison of the proposed semantic segmentation and object detection frameworks with other popular frameworks. The FCN-8 and UNet frameworks are considered for the semantic segmentation comparison; similarly, Faster RCNN ResNet and SSD MobileNet are taken as the object detection models for comparison. All models were trained using the same data set of 3000 images, with similar training time, on the same GPU card. The comparison is detailed in Tables 4 and 5 and is made on the basis of standard performance metrics. The experimental analysis indicates that SSD-MobileNet and the proposed system have comparable accuracies of 94.64% and 95.00%, respectively. However, in terms of execution time and detection accuracy, Faster RCNN ResNet has the upper hand over SSD-MobileNet and the proposed system.
Here, the trade-off between the models' computational expense and accuracy was the key consideration in choosing the DarkFlow-based proposed system as the candidate for the trash-detection task, considering the possibility of further enhancing its detection accuracy with a lower training rate. Similarly, in the performance comparison of the segmentation frameworks, SegNet yields better pixel-wise classification than the other networks. Besides, the accuracy of FCN-8 and UNet drops significantly for object classes with small pixel areas. The effectiveness of the defect and cleanliness inspection model was also tested with the crack forest data set of Cui et al. [40], the Fan Yang pavement crack data set [41], the Hiroya Maeda road damage data set [42], TACO [43], and the Mindy Yang TrashNet image data set [33]. The defect image databases [41,42,44] contain annotated pavement and road crack images captured on different pavement and urban road surfaces. The TACO and Mindy Yang data sets contain various kinds of paper trash, plastic, and metal cans photographed in diverse environments.

Performance Comparison with Other Semantic Segmentation and Object Detection Frameworks
The experimental results for the defect and garbage image databases are shown in Figures 15-19, and their statistical measures are reported in Table 3. Over 150 images were taken from each database to perform the statistical analysis. Figure 15. Defect detection results for the Fan Yang cracks data set in (a-d). Figure 18. Garbage detection results for the TACO trash image data set in (a-d). The statistical results, given in Table 6, show an average confidence level of 94.6% for detecting defects and 95.5% for detecting garbage.

Comparison with Existing Schemes
This section describes the comparative analysis of the proposed system with existing pavement-defect and garbage-detection case studies in the literature. The comparison is made on the basis of the deep-learning frameworks used for both pavement-defect and garbage (dirt, garbage, marine debris) detection tasks. The detection results of the various defect- and garbage-detection schemes are shown in Tables 7 and 8. The difference analysis is reported in terms of CNN topology and detection accuracy.

Case Study                          Algorithm                   Detection Accuracy (%)
Garbage detection on grass [51]     SegNet + ResNet             96.00
Floor trash detection [35]          MobileNet V2 SSD            95.00
Garbage detection on marine [52]    Faster RCNN Inception v2    81.00
Garbage detection on marine [52]    MobileNet v2 with SSD       69.00
Proposed system                     16-layer CNN                95.00

In this comparison analysis, we try to provide a fair comparison between the proposed system and the existing schemes based on some key differences. From the above tables, Crack U-net [50] and the deep encoder-decoder [25] are based on pixel-wise crack-detection architectures and were trained with 3000 and 600 defect images, respectively. Here, Crack U-net, which follows FCN-style CNN layers, obtained a detection accuracy of 99.01%. Similarly, the deep encoder-decoder framework was constructed with residual-network convolution layers and achieved a considerably lower precision of 59.65% than Crack U-net. However, both models require larger computing resources and are less suitable for in-situ inspection. The implementations in [25,48] were developed for real-time remote defect inspection: a multi-layer CNN and a drone were used by Naddaf-Sh et al., and their model processes 5 frames per second and achieves 96.00% detection accuracy [25]. On the other hand, an FCN-Gaussian conditional random field combination was used by Zheng Tong et al.
That model was tested on a high-end computing device, an NVIDIA GeForce GTX 1080 GPU fitted on a multi-function testing vehicle, with an inference time of 0.162 and a precision of 82.20% [48]. In contrast with the above implementations, the proposed model works for real-time pavement defect detection, and its average detection accuracy is 93.3%. The proposed CNN model was constructed with a low number of convolution layers (16), which helps to reduce the computational demand and enables real-time processing. The number of hidden layers after the convolution and max-pooling layers was also reduced. In addition, the training data in each class were made equivalent during training to avoid data skew. Furthermore, the MMS-based localization of defects is an added advantage of the proposed framework and assists the pavement management system (PMS). Table 8 gives an overview of CNN-based garbage detection schemes developed for various garbage-cleaning applications. Here, the SegNet and ResNet combination [51] was developed for a garden-cleaning robot to detect garbage on grass and obtained 96% detection accuracy. MobileNet V2 SSD was used for floor trash detection and achieved 95.0% detection accuracy; however, that implementation was not tested on any cleaning robot. The other two implementations are based on marine garbage detection using two different frameworks, Faster RCNN Inception V2 and MobileNet V2 with SSD. Here, an underwater vehicle was used to capture the marine garbage, the models were tested offline, and they obtained moderate detection accuracies of 81.0% and 69.0%, respectively. In contrast with the above-mentioned schemes, the proposed model detects garbage with an average detection accuracy of 95%. The present framework is limited to the inspection of pavements under daylight conditions. The detection model is limited to very few object classes, namely tin cans, paper, and plastic bottles. Also, the inspection is limited to crack detection and cannot differentiate between potholes, bumps, etc. In addition, the speed at which the robot carries out the detection task is limited to 0.1 m/s, with vision feedback decimated at a rate of 10 frames per second. Furthermore, the cleaning module has some generic limitations: it cannot pick up garbage larger than the dimensions of the vacuum inlet opening, as indicated in the section "Sweeping and vacuum units," and it is unable to clean up heavy objects that are not classified as garbage, such as small but heavy stones. The weight that the vacuum can lift depends on the vacuum motor power; however, it is noted that the vacuum motor can easily be changed for a more powerful one.

Conclusions and Future Work
In this article, pavement defect and cleanliness inspection using a deep-learning-based framework was proposed and implemented in the self-reconfigurable pavement-sweeping robot Panthera. A lightweight DCNN model was developed and trained with 6000 defect and garbage images. The framework was deployed on a Jetson Nano NVIDIA GPU and took approximately 132.2 milliseconds to detect both pavement defects and garbage. Moreover, geotagging of the pavement defects was presented during locomotion on the pavement. The experimental results show that the proposed method identifies pavement defects and garbage with 93.3% and 95.0% detection accuracy, respectively. The framework is an initial attempt to use a DCNN framework on pavement-cleaning robots for inspection, including defect- and garbage-detection tasks.
The application of this work is in a pavement management system, where an existing sweeping vehicle can use the sensory feedback from its vision system for the inspection task using a machine-learning framework. Our ongoing efforts focus on incorporating depth-sensor feedback into the pavement monitoring system for further classification and localization of garbage and crack detection. Also targeted is an enhanced detection and segmentation framework obtained by training the network on richer data sets taken under different lighting conditions. Furthermore, schemes for crossing or avoiding the detected cracks and for collecting or not collecting the detected garbage are being developed for Panthera. The framework is also to be scaled to identifying defects in drains using a drain inspection robot. Acknowledgments: We sincerely thank the Temasek Junior College students Aasish Mamidi, Ethan Ng, Oliver Kong Tim Lok, Shi Qilun, Toh Ee Sen and Izen for collecting the trash and defect data set. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations: Lists the abbreviations and nomenclature used in this paper.
8,709.4
2021-04-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Differential regulation of muscarinic acetylcholine receptor-sensitive polyphosphoinositide pools and consequences for signaling in human neuroblastoma cells. In this study we have quantitatively assessed the basal turnover of phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P2) and M3-muscarinic receptor-mediated changes in phosphoinositides in the human neuroblastoma cell line, SH-SY5Y. We demonstrate that the polyphosphoinositides represent a minor fraction of the total cellular phosphoinositide pool and that in addition to rapid, sustained increases in [3H]inositol phosphates dependent upon the extent of receptor activation by carbachol, there are equally rapid and sustained reductions in the levels of polyphosphoinositides. Compared with phosphatidylinositol 4-phosphate (PtdIns(4)P), PtdIns(4,5)P2 was reduced with less potency by carbachol and recovered faster following agonist removal suggesting protection of PtdIns(4,5)P2 at the expense of PtdIns(4)P and indicating specific regulatory mechanism(s). This does not involve a pertussis toxin-sensitive G-protein regulation of PtdIns(4)P 5-kinase. Using wortmannin to inhibit PtdIns 4-kinase activity, we demonstrate that the immediate consequence of blocking the supply of PtdIns(4)P (and therefore PtdIns(4,5)P2) is a failure of agonist-mediated phosphoinositide and Ca2+ signaling. The use of wortmannin also indicated that PtdIns is not a substrate for receptor-activated phospholipase C and that 15% of the basal level of PtdIns(4,5)P2 is in an agonist-insensitive pool. We estimate that the agonist-sensitive pool of PtdIns(4,5)P2 turns over every 5 s (0.23 fmol/cell/min) during sustained receptor activation by a maximally effective concentration of carbachol. Immediately following agonist addition, PtdIns(4,5)P2 is consumed >3 times faster (0.76 fmol/cell/min) than during sustained receptor activation which represents, therefore, utilization by a partially desensitized receptor. These data indicate that resynthesis of PtdIns(4,5)P2 is required to allow full early and sustained phases of receptor signaling. Despite the critical dependence of phosphoinositide and Ca2+ signaling on PtdIns(4,5)P2 resynthesis, we find no evidence that this rate resynthesis is limiting for agonist-mediated responses. Phosphatidylinositol 4,5-bisphosphate (PtdIns(4,5)P 2 ) 1 is a minor membrane-associated phospholipid that is a substrate for enzymes involved in important cellular signal transduction pathways (1). Thus, PtdIns(4,5)P 2 is a substrate for both phospholipase C (PLC) and phosphoinositide 3-kinase (PI3-K) activities. The importance of the signaling pathway initiated by PI3-K and resulting in the generation of phosphatidylinositol 3,4,5-trisphosphate (PtdIns(3,4,5)P 3 ) is rapidly emerging (2,3). However, all estimates so far suggest that the PI3-K pathway utilizes only a small fraction of the PtdIns(4,5)P 2 pool compared with that hydrolyzed by PLC in a signaling cascade (4,5). The latter enzyme liberates both inositol 1,4,5-trisphosphate (Ins(1,4,5)P 3 ) and 1,2-diacylglycerol, which mobilize Ca 2ϩ from intracellular stores and activate several isoforms of protein kinase C, respectively (6). Both Ins(1,4,5)P 3 and diacylglycerol are recycled to provide the substrates (myo-inositol and CMPphosphatidate) necessary for phosphatidylinositol (PtdIns) resynthesis. 
PtdIns(4,5)P 2 can then be regenerated by the sequential phosphorylation of PtdIns and phosphatidylinositol 4-phosphate (PtdIns(4)P) by phosphatidylinositol 4-kinase (PtdIns 4-kinase) and phosphatidylinositol 4-phosphate 5-kinase (PtdIns(4)P 5-kinase), respectively (6). This resynthesis is vital in maintaining an agonist-sensitive PtdIns(4,5)P 2 pool, as the cellular content of PtdIns(4,5)P 2 is small in comparison to the rate at which it may be consumed during receptor-mediated activation of PLC (7). Although it is possible that changes in substrate/product concentrations, particularly the dramatic changes that might occur in the immediate vicinity of activated PLC, may play a role in regulating PtdIns(4,5)P 2 supply, the pathway is likely to possess more sophisticated regulatory features that enable supply to be matched to demand under agonist-stimulated conditions. In this context a number of potential regulatory mechanisms for PtdIns 4-kinase and/or PtdIns(4)P 5-kinase activities have been proposed (8 -15), although a true understanding of the mechanisms and roles of such regulation remains elusive (2). Furthermore, although the resupply of PtdIns(4,5)P 2 must occur to allow sustained or repetitive phosphoinositide signaling, there is little information to indicate the extent to which its resynthesis contributes to regulatory aspects of signaling mediated by PLC-coupled receptors. It is unclear, for example, whether resynthesis of PtdIns(4,5)P 2 is required during acute agonist-mediated responses and indeed whether depletion of this substrate can contribute to the rapid receptor desensitization that is an almost universal feature of receptors activating this signal transduction pathway (16,17). Cells of the human SH-SY5Y neuroblastoma cell line have many features characteristic of fetal sympathetic ganglion cells and have been used extensively in studies from signal transduction to neurotransmitter release (18 -22). Our laboratory has used this neuroblastoma extensively to examine mechanistic and regulatory aspects of muscarinic receptor-mediated phosphoinositide and Ca 2ϩ signaling (19 -22), and these have proved to be representative of other PLC-linked receptor types in differing cellular backgrounds (16,17). In the current study we have, therefore, used SH-SY5Y cells to examine quantitatively the regulation of PtdIns(4,5)P 2 hydrolysis and resynthesis during stimulation of their muscarinic acetylcholine receptors which are predominantly of the M 3 subtype (23). In addition we have sought to assess the impact of reduced PtdIns-(4,5)P 2 supply on transmembrane signaling via PLC and to determine whether agonist-mediated depletion of PtdIns(4,5)P 2 contributes to the rapid desensitization of muscarinic receptormediated phosphoinositide signaling. EXPERIMENTAL PROCEDURES Cell Culture-Experiments were performed on SH-SY5Y cells (originally a gift from Dr. J. Biedler, Sloan-Kettering Institute, New York) between passages 70 and 90. Cells were maintained in minimum essential medium supplemented with 50 IU ml Ϫ1 penicillin, 50 g ml Ϫ1 streptomycin, 2.5 g ml Ϫ1 amphotericin B, 2 mM L-glutamine, and 10% (v/v) newborn calf serum. Cultures were maintained at 37°C in 5% CO 2 Buffer was then aspirated, and the cells were challenged with 200 l of buffer (Ϯ agonist). Where carbachol was removed to examine recovery, the agonist-containing buffer was aspirated, each well washed (2 ϫ 1 ml Krebs/HEPES), and the incubation continued in the presence of 200 l of buffer. 
In antagonist reversal experiments, agonist incubation was in 100 l followed by addition of 100 l of atropine (final concentration, 10 M). Reactions were terminated with an equal volume of ice-cold 1 M trichloroacetic acid. After 15 min on ice the aqueous phase was removed and 100 l of 10 mM EDTA added. After vortexing with 0.5 ml of a 1:1 (v/v) freshly prepared mixture of tri-n-octylamine and 1,1,2-trichlorotrifluoroethane, 20 l of 250 mM NaHCO 3 was added to a 300-l aliquot of the aqueous phase. This was applied to a Dowex (AG1-X8) formate column which was then washed with 20 ml of water and 10 ml of 25 H]GroPIns(4,5)P 2 ) as indices of PtdIns, PtdIns(4)P, and PtdIns(4,5)P 2 , respectively, were prepared from cell monolayers based upon previously described methods (24). After removal of the acidified aqueous phase for the determination of [ 3 H]InsP x as described above, lipids were extracted into 0.94 ml of acidified chloroform/methanol (40:80:1 v/v, 10 M HCl). Chloroform (0.31 ml) and 0.1 M HCl (0.56 ml) were then added to induce phase partition. A sample of the lower phase (400 l) was removed, dried in a stream of N 2 , and stored at Ϫ20°C prior to further processing. These samples were dissolved in 1 ml of chloroform and 0.2 ml of methanol and hydrolyzed by addition of 0.4 ml of 0.5 M NaOH in methanol/water (19:1, v/v). Samples were vortex mixed at regular intervals during a 20-min incubation at room temperature. Chloroform (1 ml), methanol (0.6 ml), and water (0.6 ml) were then added, and the samples were mixed and centrifuged (3,000 ϫ g, 10 min). A 1-ml aliquot of the upper phase was neutralized using 1-ml bed volume Dowex-50 (H ϩ form) columns that were washed with 2 ϫ 2 ml of water. The pooled eluate was brought to pH 7 by addition of NaHCO 3 and applied to a Dowex (AG1-X8) formate anion exchange column. The [ 3 H]GroPIns, [ 3 H]GroPIns(4)P, and [ 3 H]GroPIns(4,5)P 2 were then eluted as described elsewhere (24) and quantified by liquid scintillation spectrometry. Measurement of PtdIns(4,5)P 2 Mass-PtdIns(4,5)P 2 mass was determined by assay of Ins(1,4,5)P 3 released by alkaline hydrolysis following a previously described protocol (25). Briefly, dried lipid extracts, prepared as described above from cells not labeled with [ 3 H]inositol, were dissolved in 0.25 ml of 1 M KOH and heated to 100°C for 15 min during which time they were vortex mixed at regular intervals. Tubes were then placed on ice for 15 min and then samples added to 0.5-ml bed volume Dowex-50 (H ϩ form) columns. Columns were washed (3 ϫ 0.25 ml) with water. NaHCO 3 (100 l, 60 mM) and EDTA (100 l, 30 mM) were then added to the pooled column eluates that were stored at 4°C. The Ins(1,4,5)P 3 which had been released from the PtdIns(4,5)P 2 was assayed as described below within 48 h. Recoveries from each processing step were assessed (25) to allow levels of Ins(1,4,5)P 3 determined to be extrapolated to the amount of PtdIns(4,5)P 2 . Generation and Measurement of Ins(1,4,5)P 3 -Cell monolayers were preincubated in Krebs/HEPES, challenged, and the reaction terminated as described above. Experiments examining recovery following termination of carbachol action by addition of atropine were performed as described above. A series of experiments were also designed to examine the potential desensitization and resensitization of the peak (10 s) Ins(1,4,5)P 3 response. Cells were treated as described above for experiments examining the recovery of inositol phosphates and phosphoinositides. 
However, following aspiration of the initial carbachol challenge and washing of the monolayer (2 ϫ with 1 ml Krebs/HEPES), incubation was continued for the required recovery time in 1 ml of buffer before aspiration and rechallenge with carbachol (200 l) for 10 s. Reactions were again stopped by the addition of an equal volume of 1 M trichloroacetic acid. A 160-l aliquot of the acidified aqueous phase was removed, processed, and assayed for Ins(1,4,5)P 3 by a radioreceptor assay (26). Measurement of Intracellular [Ca 2ϩ ]-The intracellular [Ca 2ϩ ] ([Ca 2ϩ ] i ) was determined in suspensions of fura-2-loaded cells. Briefly, confluent cells were harvested, washed with Krebs/HEPES, and resuspended in 2.5 ml of the same buffer. A 0.5-ml aliquot was removed and manipulated as below but with the exclusion of the acetoxymethyl ester of fura-2 (fura-2-AM) thereby allowing the determination of cellular autofluorescence. Fura-2-AM was added to the remaining cells at 5 M and the cells left with gentle stirring for 40 -60 min at room temperature. Supernatant containing extracellular fura-2-AM was removed following gentle centrifugation of 0.5-ml aliquots. Cells were resuspended in 1 ml of buffer and incubated at 37°C for 10 min prior to further centrifugation and resuspension in 3 ml of buffer at 37°C. With emission at 509 nm, the 340/380 nm excitation ratio was recorded every 3.8 s as an index of [Ca 2ϩ ] i . Cells were challenged by the addition of 50 l of carbachol to give a final concentration of 1 mM. The 340:380 ratio was converted to [Ca 2ϩ ] i as reported previously (27) using 0.1% Triton X-100 in the presence of 1.3 mM Ca 2ϩ to determine R max followed by the addition of 6.7 mM EGTA to determine R min . Effects of Li ϩ on Carbachol-mediated Phosphoinositide Signaling-The effects of Li ϩ on muscarinic receptor-mediated phosphoinositide signaling was determined as described above with the exception that the preincubation and incubation buffers were inositol-free and contained 10 mM Li ϩ . Effects of Wortmannin on Carbachol-mediated Phosphoinositide Signaling-Although wortmannin is better known for its ability to inhibit PI3-K (28), this fungal metabolite has recently been demonstrated to inhibit some isoforms of PtdIns 4-kinase (29,30). We have, therefore, used wortmannin in an alternative strategy to Li ϩ to examine the immediate consequences of blocking the provision of PtdIns(4)P and PtdIns(4,5)P 2 under basal and agonist-stimulated conditions. For experiments in which cells were pretreated with wortmannin, this was added during the final 10 min of incubation and incorporated with agonist additions. In experiments investigating the time course of the effects of wortmannin on inositol phosphates and phosphoinositides, wortmannin was added in a 10-l volume. Materials-Reagents of analytical grade were obtained from suppliers listed previously (20,21,24 (1,4,5)P 3 were averaged to give a single value representative of one experiment, whereas for [ 3 H]phosphoinositides and PtdIns(4,5)P 2 mass, duplicates were pooled at the lipid extraction stage. All data are presented as mean Ϯ S.E. with the number of experiments given in parentheses. Concentration-response curves were fitted by GraphPad Prism (GraphPad Software, Inc., San Diego, CA) using a standard four-parameter logistic equation. EC 50 and IC 50 values are presented as log 10 M. 
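As an illustration of the concentration-response analysis described above, the following is a minimal sketch of fitting a standard four-parameter logistic equation with SciPy and reporting the fitted EC50 on a log10 M scale. The response values are hypothetical, and the code is only a sketch of the approach, not the authors' GraphPad Prism analysis.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, bottom, top, log_ec50, hill):
    """Four-parameter logistic (Hill) equation on a log10[agonist] axis."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - log_c) * hill))

# hypothetical data: log10[carbachol] (M) versus response (% of basal)
log_c = np.array([-8.0, -7.0, -6.0, -5.5, -5.0, -4.5, -4.0, -3.0])
resp = np.array([102.0, 110.0, 160.0, 210.0, 255.0, 280.0, 290.0, 292.0])

(bottom, top, log_ec50, hill), _ = curve_fit(four_pl, log_c, resp, p0=[100.0, 300.0, -5.5, 1.0])
print(f"EC50 (log10 M) = {log_ec50:.2f}, Hill slope = {hill:.2f}")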
Statistical comparisons were by Student's two-tailed paired or unpaired t test or, where multiple comparisons were required, by one-way analysis of variance followed by Duncan's multiple range test at p < 0.05 and p < 0.01.

Agonist-stimulated Changes in Polyphosphoinositides
Incubation of SH-SY5Y human neuroblastoma cells with [3H]inositol resulted in marked changes in the absolute and relative amounts of radioactivity incorporated into the phosphoinositides over the first 20 h (data not shown). However, there were little or no differences between cells labeled for either 44 or 48 h under basal or agonist-stimulated (1 mM carbachol, 5 min) conditions (data not shown), and the phosphoinositide pools were judged to be in equilibrium. In all subsequent experiments, cells were therefore labeled for 48 h, and under these conditions [3H]PtdIns comprised the major fraction (94.3 ± 0.3% (n = 4)) of the inositol phospholipid pool, whereas [3H]PtdIns(4)P and [3H]PtdIns(4,5)P2 represented only minor fractions (2.5 ± 0.1% (n = 4) and 3.2 ± 0.2% (n = 4), respectively). Upon addition of a concentration of carbachol that is maximal for the muscarinic receptor-mediated phosphoinositide-linked responses in these cells (1 mM), there was a rapid and marked accumulation of [3H]InsPx (Fig. 1a). We emphasize that this accumulation of [3H]InsPx is in the absence of a Li+ block of inositol monophosphatase activity. Accumulation therefore represents the net result of both generation and metabolism to free inositol. Over the first 60 s of stimulation with carbachol there was an increase in [3H]InsPx to 291 ± 23% (n = 4) of basal levels that was sustained throughout the remaining period of agonist stimulation. In the same cell monolayers, there were rapid and marked decreases in the levels of [3H]PtdIns(4)P and [3H]PtdIns(4,5)P2 upon agonist challenge, to 32.4 ± 2.9% (n = 4) and 24.7 ± 2.8% (n = 4) of basal levels, respectively, by 60 s (Fig. 1b). These reductions were again sustained throughout the period of agonist challenge. In contrast, the level of [3H]PtdIns decreased to only 85.8 ± 4.2% (n = 4) of basal levels over the experimental period (900 s) (Fig. 1b). Given this relatively small reduction of [3H]PtdIns, it is unlikely that the specific activities of the 3H-polyphosphoinositides change greatly during this period of stimulation, and therefore the changes in radioactivity are likely to accurately reflect the changes in mass. The ability of carbachol to mediate the accumulation of [3H]InsPx and depletion of the phosphoinositides was concentration-related (Fig. 2). EC50 values (log10 M, determined at 60 s following agonist addition) were −5.43. Reversal of carbachol-mediated changes occurring after washing the cells indicated a dependence of the sustained alterations on persistent muscarinic receptor occupation. This was confirmed by the ability of the muscarinic antagonist, atropine, to reverse these changes. Addition of 10 μM atropine, 1 min following addition of a maximally effective concentration of carbachol (1 mM), reduced [3H]InsPx accumulation from 378 ± 20% (n = 4) to 229 ± 11% (n = 4) of basal over a 100-s period. During this time [3H]PtdIns(4,5)P2 levels were restored from 23.6 ± 2.4% (n = 4) to 81.8 ± 4.1% (n = 4) of basal. However, the rate and extent of recovery of [3H]PtdIns(4)P was less, returning from 25.7 ± 4.4% (n = 4) to 49.5 ± 5.4% (n = 4) of basal over the same period.
The addition of atropine during challenge with carbachol had no effect upon the small agonistinduced reduction of [ 3 H]PtdIns (data not shown). That changes in [ 3 H]inositol-labeled phospholipids reflect changes in mass levels was further indicated by the measurement of PtdIns(4,5)P 2 mass during carbachol stimulation and following addition of atropine. Thus, challenge with carbachol for 60 s resulted in an approximately 80% reduction in the mass of PtdIns(4,5)P 2 which was approximately restored within 100 s of atropine addition (Table I). These experiments also allowed the measurement of Ins(1,4,5)P 3 mass from the same cells. This demonstrated the characteristic "peak and plateau" response characteristic of muscarinic receptor activation in these cells (19 -22) and the return of Ins(1,4,5)P 3 levels to basal by 100 s following atropine addition (Table I, Extending the period of exposure to carbachol from 1 min (as above) to 5 min before washing had no effect on the rate or extent of recovery of either the [ 3 H]inositol-labeled inositol phosphates or phosphoinositides (data not shown and Fig. 8). Effect of Pertussis Toxin on the Extent and Rate of Recovery of Inositol Phosphates and Phosphoinositides Pretreatment of cells for 24 h with 100 ng/ml pertussis toxin had no effect on either the extent of depletion or the rate and extent of recovery of the [ 3 H]inositol-labeled inositol phosphates or phosphoinositides following 5 min exposure to 1 mM carbachol ( Fig. 4 and data not shown). Effects of Li ϩ on Agonist-stimulated Inositol Phosphate and Phosphoinositide Levels Incubation of cells with carbachol in the presence of 10 mM Li ϩ resulted in an accumulation of [ 3 H]InsP x that was rapid over the 1st min of stimulation (increase of 90% of the basal value which was 13,877 Ϯ 1495 (n ϭ 4) dpm/well). Following this there was a slower but sustained accumulation over a further 19 min of carbachol stimulation (increase of 50% of basal/min). The sustained linear increase between 1 and 20 min of stimulation demonstrate an effective block of inositol monophosphatase activity that will prevent the recycling of inositol back into the phospholipids. Despite this the carbacholmediated depletion of the phospholipids was identical in the presence and absence of Li ϩ (data not shown). We were, therefore, unable to use Li ϩ to manipulate cellular levels of the phosphoinositides, and an alternative strategy using inhibitors of PtdIns 4-kinase (wortmannin and LY-294002) was employed. Effects of Wortmannin and LY-294002 on Basal and Agonist-stimulated Inositol Phosphate and Polyphosphoinositide Levels Basal Levels-Addition of 10 M wortmannin to unstimulated cells produced a decrease in [ 3 H]InsP x over a 10-min time course to 81.6 Ϯ 20.8% (n ϭ 3) of initial values (Fig. 5a). Over this time frame there was no effect of wortmannin on [ 3 H]Pt-dIns levels (data not shown), but [ 3 H]PtdIns(4,5)P 2 fell to 84.9 Ϯ 7.3% (n ϭ 3) of basal levels (Fig. 5c). In contrast there was a more dramatic decrease in [ 3 H]PtdIns(4)P to 25.4 Ϯ 1.1% (n ϭ 3) of basal levels (Fig. 5b). The ability of wortmannin to induce these changes over a 10-min period was concentrationrelated ( Fig. 6, a and Agonist-stimulated Levels-When wortmannin was added 60 s after 1 mM carbachol there was an immediate reduction in the agonist-mediated accumulation of [ 3 H]InsP x which approached unstimulated levels by 10 min (Fig. 
5a), whereas the carbachol-depleted levels of [3H]PtdIns(4)P and [3H]PtdIns(4,5)P2 were decreased further, below the levels seen in the presence of agonist alone (Fig. 5, b and c). There was no significant effect on the carbachol-mediated depletion of [3H]PtdIns levels (data not shown). We also employed LY-294002 in an identical experimental protocol on the basis that this compound had been reported to inhibit PI3-K but not PtdIns 4-kinase (31) and would, therefore, enable a distinction between the inhibitory effects of wortmannin on PtdIns 4-kinase and PI3-K. However, it has emerged that this compound also inhibits the wortmannin-sensitive PtdIns 4-kinase (30). Indeed, the addition of 100 μM LY-294002 in the presence or absence of 1 mM carbachol produced patterns of change identical to those caused by wortmannin (data not shown). The ability of wortmannin to influence carbachol-mediated changes in [3H]InsPx and the 3H-polyphosphoinositides was concentration-related (Fig. 6, a and c), and wortmannin appeared to be equipotent with respect to all responses (similar IC50 values). Wortmannin has been shown previously to inhibit partially purified PtdIns 4-kinase but not PtdIns(4)P 5-kinase isolated from bovine adrenal cortical cells (29). This suggests that its ability to deplete cellular levels of PtdIns(4,5)P2 in the current study was dependent upon a reduced supply of PtdIns(4)P and not an inhibition of PtdIns(4)P 5-kinase. This was supported by experiments in β-escin-permeabilized SH-SY5Y cells in which we demonstrated that 10 μM wortmannin reduced the incorporation of 32P from [32P]ATP into PtdIns(4)P but not PtdIns(4,5)P2 over a 10-min period under basal conditions (data not shown). Addition of 10 μM wortmannin 10 min prior to challenge with 1 mM carbachol markedly attenuated the transient peak of Ins(1,4,5)P3 accumulation and abolished the sustained component of the response (Fig. 7a). Challenge of cells with 1 mM carbachol also resulted in a biphasic elevation of [Ca2+]i consisting of a rapid transient peak (901 ± 59 nM (n = 4)) followed by a lower but sustained elevation (367 ± 43 nM (n = 4)) (Fig. 7b). Addition of 10 μM wortmannin, 10 min prior to carbachol challenge, markedly attenuated the transient peak of the [Ca2+]i elevation (500 ± 12 nM (n = 4)) and abolished the sustained phase (Fig. 7b). This sustained phase of [Ca2+]i elevation is dependent upon the influx of extracellular Ca2+ via an as yet undefined mechanism, but one which most likely involves capacitative entry (20). Addition of 1 μM thapsigargin to these cells also resulted in capacitative Ca2+ entry and a sustained elevation of [Ca2+]i of similar magnitude to that mediated by 1 mM carbachol, but this was unaffected by 10 μM wortmannin (data not shown). Thus, a direct block of Ca2+ entry does not underlie the ability of wortmannin to inhibit Ca2+ signaling (and potentially also phosphoinositide signaling through a reduction in the Ca2+ feed-forward activation/facilitation of PLC).

Desensitization and Recovery of Carbachol-mediated Ins(1,4,5)P3 Responses
By using a similar experimental protocol to that outlined above, in which cells underwent a potentially desensitizing challenge with 1 mM carbachol for 5 min, we sought to examine the recovery of the carbachol-mediated peak (10 s) Ins(1,4,5)P3 responses. This peak response showed desensitization with washout and recovery periods of less than 2 min, which approximately followed the time course of recovery of the PtdIns(4,5)P2 levels.
By 2 min the response was fully restored (Fig. 8). Desensitization is often reflected in a reduction in agonist potency rather than a reduction in the maximal response, and we therefore conducted an identical series of experiments in which cells were rechallenged with a concentration of carbachol (25 M) equivalent to the EC 50 for the peak Ins(1,4,5)P 3 response in these cells (20). Compared with rechallenge with 1 mM carba- chol, these experiments demonstrated a more prolonged delay in the recovery of the peak Ins(1,4,5)P 3 responses which took greater than 5 min to fully recover and clearly lagged behind the recovery of PtdIns(4,5)P 2 (Fig. 8). Experiments in which the initial carbachol challenge was for 1 min rather than 5 min gave similar results (data not shown). Derivation of Quantitative Aspects of PtdIns(4,5)P 2 Turnover during Sustained Agonist Activation Under the pseudo-steady-state conditions that exist during sustained muscarinic acetylcholine receptor activation, the rate of PtdIns(4,5)P 2 synthesis must be equivalent to the rate of its breakdown. The initial rate of [ 3 H]PtdIns(4,5)P 2 recovery following removal of carbachol therefore provides an index of the rate of PtdIns(4,5)P 2 breakdown which pertained during sustained receptor activation. From the initial recovery rate (10,962 dpm/15 s/well ϭ 43,848 dpm/min/well, see Fig. 3), we calculated that the PtdIns(4,5)P 2 present during sustained receptor activation (8877 dpm/well) would be completely turned over every 12 s. This assumes that breakdown of PtdIns(4,5)P 2 (by either PLC or PtdIns(4,5)P 2 5-phosphatase activities) is negligible compared with synthesis immediately following agonist removal (any contribution of breakdown resulting in an underestimation of utilization) and that all of the PtdIns(4,5)P 2 exists within an agonist-sensitive pool. However, a fraction of PtdIns(4,5)P 2 (approximately 15% of basal) is resistant to depletion by wortmannin during carbachol stimulation (Fig. 5c) and is unable to support [ 3 H]InsP x accumulation (Fig. 5a). This fraction may not, therefore, be accessible to agonist-stimulated PLC and constitutes a temporary or permanent agonist-insensitive pool. If 15% (5177 dpm/well) of basal (34,510 dpm/well) PtdIns(4,5)P 2 represents an agonist-insensitive fraction, then the remaining pool (8877-5177 ϭ 3700 dpm/well) must turn over every 5 s under maximal sustained receptor activation. Quantitative Estimates of PtdIns(4,5)P 2 Utilization Immediately following Receptor Activation A number of studies have demonstrated that the M 3 -muscarinic receptor which is primarily responsible for carbachol-mediated activation of PLC in these cells (23) undergoes a rapid, but partial desensitization within seconds of agonist exposure. This is indicated by a biphasic accumulation of [ 3 H]InsP x in cells in which inositol monophosphatase activity has been blocked with Li ϩ to trap all products of PLC-mediated phosphoinositide hydrolysis (16,20,22,32). In addition, such rapid desensitization is reflected in the biphasic profile of Ins(1,4,5)P 3 accumulation following carbachol challenge as seen in this (Fig. 7a) and other studies (19 -22). It should be noted, therefore, that the utilization of PtdIns(4,5)P 2 during sustained receptor activation, as calculated above, represents that of a partially desensitized receptor. The rate of PtdIns-(4,5)P 2 utilization by non-desensitized receptors immediately upon agonist addition may well be far greater than this. 
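The turnover estimates above are simple ratios of pool size to resynthesis rate. The short script below merely reproduces that arithmetic with the values quoted in the text; it is an illustrative check, not part of the original analysis.

# turnover of PtdIns(4,5)P2 during sustained receptor activation (values from the text)
recovery_rate = 10962 * 4                  # dpm/15 s/well -> 43,848 dpm/min/well
pool_during_stimulation = 8877             # dpm/well present during sustained activation
basal_pool = 34510                         # dpm/well under basal conditions
insensitive = 0.15 * basal_pool            # ~5,177 dpm/well agonist-insensitive fraction

turnover_whole = pool_during_stimulation / recovery_rate * 60        # ~12 s
agonist_sensitive = pool_during_stimulation - insensitive            # ~3,700 dpm/well
turnover_sensitive = agonist_sensitive / recovery_rate * 60          # ~5 s

print(f"whole pool turns over every {turnover_whole:.0f} s")
print(f"agonist-sensitive pool turns over every {turnover_sensitive:.0f} s")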
Indeed the initial rate of decrease of PtdIns(4,5)P 2 following addition of carbachol (24,363 dpm/10 s/well ϭ 146,176 dpm/min/well) is 3.3-fold greater than during the sustained phase. This is in agreement with the 2-4-fold difference in the rates of [ 3 H]InsP x accumulation over the 1st min of agonist stimulation and the subsequent sustained phase accumulation in Li ϩblocked SH-SY5Y cells (22,32). However, our calculation assumes that the contribution of resynthesis is minimal during the initial fall of PtdIns(4,5)P 2 . As we demonstrate that resynthesis of PtdIns(4,5)P 2 is required even for the first 10 s of PLC activity (see below), this initial rate, and the difference between acute and sustained phases, is likely to be in excess of that calculated. Quantitative Estimates of Basal PtdIns(4,5)P 2 Utilization and Relative Increases during Receptor Activation PtdIns(4,5)P 2 is hydrolyzed in the absence of added agonist, and we have estimated this basal turnover using wortmannin. Addition of wortmannin under basal conditions caused an immediate reduction in [ 3 H]PtdIns(4)P (Fig. 5b) but no reduction in [ 3 H]PtdIns(4,5)P 2 for at least 2 min (Fig. 5c). Thus, during this 2-min period, although synthesis of [ 3 H]PtdIns(4)P was blocked due to inhibition of a PtdIns 4-kinase by wortmannin, there was sufficient [ 3 H]PtdIns(4)P to maintain the levels of [ 3 H]PtdIns(4,5)P 2 . The initial rate of decrease of [ 3 H]PtdIns-(4)P may, therefore, provide an index of the basal consumption of PtdIns(4,5)P 2 . This rate of 6839 dpm/min/well infers that an amount of PtdIns(4,5)P 2 equivalent to the total cellular pool (34,510 dpm/well) turns over every 5 min under basal (unstimulated) conditions. The rate of basal PtdIns(4,5)P 2 utilization is 6.4-fold less than the estimated rate of consumption during sustained receptor activation and at least 21.4-fold less than that occurring immediately upon agonist addition (see above). These determinations assume that recycling of PtdIns(4)P to PtdIns (by a PtdIns(4)P 4-phosphatase) and hydrolysis of PtdIns(4)P by PLC are negligible following addition of wortmannin. Any contribution would lead to an overestimation of basal activity and an underestimation, therefore, of the relative increase in the consumption of PtdIns(4,5)P 2 following agonist addition. As a consequence of the potential overestimation of basal hydrolysis, our values of 21-and 6-fold over basal stimulation of PtdIns(4,5)P 2 hydrolysis during the immediate and sustained components of maximal muscarinic-receptor activation, respectively, are likely to be underestimates. Indeed these values contrast with 150-and 60-fold stimulations reported previously (32) in these cells in which basal PtdIns(4,5)P 2 hydrolysis was assessed by the determination of [ 3 H]InsP x accumulation in Li ϩ -blocked cells. However, due to the uncompetitive nature of Li ϩ action, complete block may not occur under conditions of low flux through the enzyme (33) resulting in an underestimation of basal activity. It is likely that the true extent of stimulation lies between these values. Absolute Changes in Levels of PtdIns(4,5)P 2 The measured basal level of PtdIns(4,5)P 2 was 87.7 pmol/ well, whereas cells labeled to equilibrium with [ 3 H]inositol had [ 3 H]PtdIns(4,5)P 2 present at 34,510 dpm/well. This relationship has enabled us to derive estimates of PtdIns(4,5)P 2 levels and utilization in mass units. 
Furthermore, our measurement of the cell number and protein content of a typical well (4.9 × 10^5 cells, 250 μg of protein) has allowed conversion of these estimates to a per-cell or per-unit-cellular-protein basis. Thus, the basal level of PtdIns(4,5)P2 is 0.18 fmol/cell (1.1 × 10^8 molecules/cell), of which 0.15 fmol/cell exists in an agonist-sensitive pool. During sustained receptor activation with a maximally effective concentration of carbachol, PtdIns(4,5)P2 levels fall to 0.05 fmol/cell, of which 0.02 fmol/cell exists in an agonist-sensitive pool. During such sustained receptor activation, PtdIns(4,5)P2 turns over at a rate of 0.23 fmol/cell/min (1.4 × 10^8 molecules/cell/min), while during the acute phase of receptor activation this rate is at least 0.76 fmol/cell/min (4.6 × 10^8 molecules/cell/min). From previous estimates of the catalytic activity of PLC-β (200 μmol/min/mg protein (34)), we can estimate that the equivalent of approximately 15,000 molecules of PLC would need to be maximally activated to achieve this hydrolytic rate.

DISCUSSION
PtdIns(4,5)P2 is considered to be the physiological substrate for both PLC and PI3-K activities. Despite the dependence upon PtdIns(4,5)P2 as the substrate for Ins(1,4,5)P3, diacylglycerol, and PtdIns(3,4,5)P3 synthesis, this polyphosphoinositide constitutes a minor fraction (~5%) of the cellular inositol phospholipid content under basal conditions. The increasing realization that PtdIns(4,5)P2, and perhaps PtdIns(4)P, can modulate other aspects of cellular homeostasis (1, 35-37) might provide one explanation for the relatively low cellular levels of the polyphosphoinositides. Irrespective of what sets the cellular PtdIns(4,5)P2 concentration, it is clear that efficient regulatory mechanisms must exist if supply and demand are to be matched, particularly during agonist-stimulated PLC and/or PI3-K activation. The current study demonstrates that marked and sustained depletions of both PtdIns(4)P and PtdIns(4,5)P2 are caused by a maximally effective concentration of carbachol. In SH-SY5Y cells PtdIns(4,5)P2 is present at 360 ± 27 (n = 4) pmol/mg protein, and as shown here, this level can be decreased by ≥70% for prolonged periods under conditions of maximal agonist activation of M3-muscarinic acetylcholine receptors. The precipitous fall in the cellular levels of these phospholipids corresponds temporally with the peak of Ins(1,4,5)P3 accumulation in these cells and indicates that conversion of PtdIns(4)P to PtdIns(4,5)P2 is required to support both the initial and sustained components of transmembrane signaling via PLC. The sustained depletions of the polyphosphoinositides are related to the extent of agonist stimulation, although carbachol was more potent at depleting PtdIns(4)P than PtdIns(4,5)P2. This implies the presence of mechanisms that serve to protect the pool of PtdIns(4,5)P2 at the expense of PtdIns(4)P. Such protection is also indicated by the markedly different rates of recovery of PtdIns(4)P and PtdIns(4,5)P2 following abrupt termination of receptor activation. Thus, PtdIns(4,5)P2 not only recovers to basal levels with a shorter half-time but also recovers by >60% before PtdIns(4)P recovery commences. Such protection of PtdIns(4,5)P2 is in accord with studies in other systems (24,38,39) and suggests that substrate availability and product inhibition are unlikely to be the only factors governing the supply of PtdIns(4,5)P2.
Thus, mechanisms exist that coordinate polyphosphoinositide synthesis in accordance with de-mand, and the current data indicate that these mechanisms are of relevance in intact cells. Regulation of polyphosphoinositide supply could potentially occur via an activation of the kinases responsible for their generation and/or a reduction in the activities of the phosphatases that appear to be involved in substrate cycling (13). However, it seems unlikely that the massive increase in demand for PtdIns(4,5)P 2 could be met by changes in polyphosphoinositide-phosphatase activities alone as this would require a persistently high level of substrate or "futile" cycling (13). While coordinated changes in phosphatase and kinase activities may occur, there is little information on the regulation of phosphatase activities. There is, however, a variety of experimental evidence to implicate regulation of both PtdIns 4-kinase and PtdIns(4)P 5-kinase (8 -15). Of particular relevance to the current study is the suggestion that a G i -mediated activation of rho is responsible for the regulation of PtdIns(4)P 5-kinase (15,40). Thus, in HEK cells expressing recombinant muscarinic M 3 receptors, PtdIns(4,5)P 2 recovers to above basal levels in a pertussis toxin-sensitive fashion following removal of agonist, and it has been argued that this allows an enhanced phosphoinositide response upon rechallenge (40). Such results are in contrast to those of the current study in which we observed a desensitization of the Ins(1,4,5)P 3 response and a complete lack of effect of pertussis toxin on the rate or extent of recovery of PtdIns(4,5)P 2 following muscarinic receptor activation. Whether the difference in these studies reflects cell specificity or a promiscuous coupling of M 3 receptors to G i at high expression levels in HEK cells is unclear. However, the current study demonstrates that at physiologically relevant levels of muscarinic receptor expression (300 fmol/mg protein (32)), pertussis toxin-sensitive G-proteins play no significant role in regulating resynthesis of PtdIns(4,5)P 2 or setting the absolute level of this substrate in SH-SY5Y cells indicating that this is not a universal phenomenon. By the use of wortmannin we have demonstrated that approximately 15% (of basal levels) of PtdIns(4,5)P 2 is unavailable to agonist-stimulated PLC. Similar values have been obtained using Li ϩ block in Chinese hamster ovary cells expressing recombinant muscarinic M 1 receptors (24). The reason for the inaccessibility of this fraction is unclear but may well relate to localization within the cell (41,42), and this has the potential to impart regulatory control on the signaling pathway. Taking the agonist-insensitive pool into account, we calculate that the agonist-sensitive pool of PtdIns(4,5)P 2 turns over at a rate of approximately 12 times per min under sustained stimulation with a maximally effective concentration of carbachol. In the absence of resynthesis, the existing substrate pool would be consumed in less than 2 s of agonist addition. This contrasts somewhat with a previous estimate for PtdIns-(4,5)P 2 turnover in these cells of 13-19 s (32) and emphasizes the critical dependence of peak (at 10 s) Ins(1,4,5)P 3 responses on PtdIns(4,5)P 2 resynthesis. Small changes in the absolute basal level of PtdIns(4,5)P 2 will, therefore, have little impact on phosphoinositide responses. 
Wortmannin treatment of SH-SY5Y cells results in the loss of muscarinic receptor-mediated phosphoinositide and [Ca 2ϩ ] i responses demonstrating the critical dependence of this signaling pathway on a wortmannin-(and LY-294002-) sensitive PtdIns 4-kinase as reported in bovine adrenal glomerulosa cells (29,43). A recent report has indicated that phosphorylation of PtdIns(5)P at the D-4 position by the enzyme, previously identified as type II PtdInsP-5-OH kinase, provides another potential pathway for the generation of PtdIns(4,5)P 2 (44). Unless this route of synthesis also relies upon a wortmannin-sensitive enzyme, the current data suggest that this pathway contrib-utes little or not at all to the provision of PtdIns(4,5)P 2 in these cells under basal or agonist-stimulated conditions. The ability of wortmannin to further deplete PtdIns(4,5)P 2 during sustained receptor activation is also paralleled by a loss of [ 3 H]InsP x generation. This suggests that despite the ability of most PLC isoforms to hydrolyze PtdIns, PtdIns(4)P, and PtdIns(4,5)P 2 (45), the large cellular pool of PtdIns does not provide a substrate for PLC during muscarinic-receptor activation in SH-SY5Y cells. The issue of the substrate specificity of receptor-activated PLC is difficult to address in situ, and the ability of agents such as wortmannin to manipulate the phosphoinositide pools represents a novel approach to this problem. Furthermore, the ability of wortmannin to abolish the sustained [Ca 2ϩ ] i elevation during muscarinic receptor stimulation suggests that there is no Ca 2ϩ entry via a receptoroperated Ca 2ϩ channel (rather than a capacitative mechanism) which has been inferred in other systems (46,47). Although both wortmannin and LY-294002 are better known for their ability to block PI3-K, inhibition of this enzyme cannot account for the results of the present study. Thus, the effect of wortmannin on both basal and carbachol-stimulated (poly)phosphoinositide levels is minimal at concentrations that have been reported to abolish totally PI3-K-dependent responses (30 -100 nM (29,43)). Wortmannin (and LY-294002) appears to inhibit one isozyme of the multiple PtdIns 4-kinase activities that are present in many cell types (3), possibly the recently cloned type III PtdIns 4-kinase (48,49). Although we acknowledge that wortmannin is not a specific inhibitor of PI-4 kinase (28,29,43), the data in the present study are totally consistent with an inhibition of this enzyme and indeed are in agreement with studies in which Li ϩ has been used to manipulate cellular levels of the polyphosphoinositides in Chinese hamster ovary cells during stimulation of muscarinic M 1 receptors (24). The current study demonstrates that while Li ϩ may be used to manipulate phosphoinositide pools in an agonist-dependent manner in some cell lines, this is not the case in SH-SY5Y cells. This is probably as a consequence of a large intracellular pool of free inositol, and a chronic inositol depletion strategy is required to render the cells sensitive to Li ϩ (50). Given the similar temporal profiles and susceptibility to desensitization of PLC signaling mediated by muscarinic M 3 receptors expressed in Chinese hamster ovary cells (Li ϩ -sensitive) and SH-SY5Y cells (Li ϩ -insensitive) (16,20,22,32), it is unlikely that the size of the inositol pool has a major impact on these aspects of phosphoinositide metabolism. 
The present data emphasize the dynamic nature of the agonist-sensitive PtdIns(4,5)P 2 pool, and the use of wortmannin demonstrates that the immediate consequence of blocking the synthesis of PtdIns(4)P (and also therefore PtdIns(4,5)P 2 ) is the failure of agonist-induced Ins(1,4,5)P 3 generation and Ca 2ϩ mobilization. These data clearly indicate that the rate of supply of PtdIns(4,5)P 2 must be tightly and adequately matched to demand to prevent a full or partial failure of agonist-mediated phosphoinositide signaling. Indeed, a reduction in the availability of PtdIns(4,5)P 2 is a possible mechanism underlying the rapid, although often partial, desensitization of PLC activity that occurs within seconds of agonist occupation of many types of PLC-linked receptors including muscarinic receptors of SH-SY5Y cells (20,22,32). Whether depletion of PtdIns(4,5)P 2 contributes to or underlies such desensitization has proved difficult to resolve. However, we demonstrate here that the sustained reduction in the level of PtdIns(4,5)P 2 is related to the concentration of carbachol. This suggests that at submaximal agonist concentrations, there is PtdIns(4,5)P 2 available for hydrolysis and implies that limited substrate supply is not responsible for the desensitization phenomenon unless each molecule of PLC has access to a limited amount of PtdIns(4,5)P 2 and that this is replaced at a rate related to the extent of receptor activation. The Ins(1,4,5)P 3 response to activation of muscarinic M 3 receptors in SH-SY5Y cells and indeed other cells consists of a rapid transient peak followed by a lower but sustained plateau phase (19 -22). We have argued previously that these two phases of Ins(1,4,5)P 3 accumulation represent desensitizationsensitive and desensitization-resistant phases, respectively (16). In the current study we also demonstrate desensitization of the peak Ins(1,4,5)P 3 response using a rechallenge protocol. These studies indicate that prestimulation with a maximal concentration of carbachol results in desensitization of the Ins(1,4,5)P 3 peak response upon rechallenge with either a maximal or submaximal concentration of carbachol. Although recovery of the response to a maximal concentration of carbachol paralleled the recovery of PtdIns(4,5)P 2 , recovery of the response to a submaximal concentration was slower. Thus, there are instances when there is sufficient PtdIns(4,5)P 2 to support a maximal Ins(1,4,5)P 3 response, and yet responses to a submaximal agonist concentration remain partially desensitized. This indicates that there are events unrelated to the recovery of PtdIns(4,5)P 2 (and other constituents of the signal transduction pathway) that are involved in the desensitization phenomenon. We have argued previously that this may be related to phosphorylation of the muscarinic M 3 receptor in a manner analogous to the phosphorylation and desensitization of the ␤ 2 -adrenoreceptor (16). The current data indicate a critical dependence upon a wortmannin-and LY 294002-sensitive PtdIns 4-kinase for the synthesis of PtdIns(4,5)P 2 and indicate that the immediate consequence of blocking the synthesis of PtdIns(4,5)P 2 is the failure of receptor-mediated phosphoinositide and Ca 2ϩ signaling. Mechanisms clearly exist that enable the supply of PtdIns-(4,5)P 2 to be matched to demand, and these mechanisms are effective even under conditions of maximal muscarinic receptor activation in SH-SY5Y cells. 
Thus, despite the critical dependence upon PtdIns(4,5)P 2 synthesis, our data indicate that agonist-mediated depletion of this lipid substrate is unlikely to account for acute receptor desensitization in SH-SY5Y cells. The pattern of PLC activation and extent of PtdIns(4,5)P 2 depletion directed by muscarinic receptor activation in this cell line is consistent with that mediated by many other receptor types in a variety of cellular systems (5,16,17,51) suggesting that the current findings are likely to have general applicability to PLC signaling systems.
9,770.6
1998-02-27T00:00:00.000
[ "Biology" ]
Universal Metric for Assessing the Magnitude of the Uncertainty in the Measurement of Fundamental Physical Constants In this paper, we aim to verify the numerical magnitude of the relative uncertainty in the measurement of fundamental physical constants. For this purpose, we use a metric called comparative uncertainty, with which an a priori mismatch between the selected model and the observed physical object is checked. The comparative uncertainty is caused by the finite number of dimensional variables of the applied model. We show a comparison of the achieved values of the relative and comparative uncertainties of the gravitational constant, Planck's constant, Boltzmann's constant, and the fine structure constant, according to data published in the scientific literature over the last 7-15 years. The results generally agree well with CODATA recommendations. We show that the comparative uncertainty, as a universal metric, can be used for the identification of recommended target values of the relative uncertainty in field experiments.

Introduction

Physical laws express the quantitative relationship between different physical quantities in mathematical form. They are established on the basis of generalization of the obtained experimental data, and they reflect the objective laws existing in nature. It is fundamentally important that all physical laws are an approximation to reality, since the construction of the theories is formulated by means of certain models of phenomena and processes. Outside these models, the laws do not work or work poorly. Therefore, they have certain limits of applicability. In other words, physical laws give good predictions in a specific area of experimental conditions, and the corresponding theory explains them. A more accurate or more correct theory has a wider range of applications. Scientists believe that physical laws, at least, enable us to predict results to an arbitrary accuracy. For example, classical mechanics, based on Newton's three laws and the law of universal gravitation, is valid only for the motion of bodies with velocities much smaller than the speed of light. If the velocities of the bodies are comparable to the speed of light, the predictions of classical mechanics are wrong. Special relativity has successfully coped with these problems. In fact, all physical theories are limited. The correspondence principle requires that a new theory with a broader area of applicability reduce to the old theory within the limits of the latter's applicability. Turning a theory toward new concepts creates important preconditions for further development.
Among the various explanations for the admissibility of possible limits of applicability of physical laws, the following reasons are the most used. The first is the assumption that there is a limited level of detail of phenomena, for which the Heisenberg inequality gives a quantitative expression. Secondly, the restrictions are determined by the real nature of the macroscopic instrument or measuring system. Ultimately, most devices can be represented as a solid body. In principle, it could be argued that any device exerts an effect only within the field of reality where it operates. Thus, research results should be expressed in macroscopic terms. In other words, concepts and images can be identified and associated only with ordinary macroscopic views. The last argument is the point of view of the principle of the electromagnetic nature of all modern means of measurement and their role in determining the boundaries of experimental and measurement capabilities, and in harmonizing the data with the theoretical postulate. Thus, possible explanations exist, but no quantitative approach to estimating numerically the difference between a model (i.e., a formulated physical law) and existing reality has been proposed to date.

Regarding the fundamental physical constants, it should be noted that their values reflect the accuracy of our knowledge of the fundamental properties of matter. On the one hand, very often the verification of physical theories is determined by the accuracy of the measured physical constant. On the other hand, firmly established experimental data are put into the foundation of new physical theories. In the study of physical constants, it is noteworthy that they are measured with very high and steadily increasing accuracy, which in itself is a testament to the development and perfection of the techniques of physical experiment. Precision research on the measurement and specification of the values of physical constants, and meticulous work on harmonization of data obtained by different methods and different groups of researchers, are currently being carried out. However, there is an urgent need to further improve the accuracy of measurement of fundamental physical constants. This is explained by the desire to improve the axiomatic basis of the International System of Units (SI). To assess the accuracy achieved in the measurements of fundamental physical constants, the concept of relative uncertainty is used. It should be mentioned that this method for identifying the measurement accuracy does not indicate the direction in which one can find the true value of a particular fundamental physical constant. In addition, it involves an element of subjective judgment [1]. For this reason, we offer another method of assessing the credibility of the obtained measurement results.
In [2], specific novel foundations for modeling physical phenomena were established. For this purpose, the author discussed a representation of information theory for the optimal design of a model. A metric called comparative uncertainty was introduced, by which the a priori discrepancy between the chosen model and the observed material object is checked. Moreover, the information quantity inherent in the model can be calculated, and it prescribes the required number of variables that should be taken into account. It was thus concluded that in most physically relevant cases (micro- and macro-physics), the comparative uncertainty can be realized by field tests or computer simulations within the prearranged variation of the main recorded variable. This article is our attempt to show how the fundamentally new concept of comparative uncertainty can be applied to identify recommended target values of the estimated and required experimental relative uncertainty in field measurements of fundamental physical constants.

Required Techniques

In a background paper [2], an approach called the SI ℵ-hypothesis, for calculating the lowest uncertainty of the researched variable based on principles of information and similarity theories, is formulated. Following it, a certain uncertainty exists before starting an experiment due only to the known recorded number of variables. In turn, the dimensionless (DS) comparative uncertainty ε of the DS variable u, which varies in a predetermined DS interval S, for a given number z″ of selected physical dimensional (DL) variables and a number β″ of recorded primary physical variables, can be determined from the relation

ε = Δu/S = (z′ − β′)/μ_SI + (z″ − β″)/(z′ − β′),   (1)

where Δu is the DS uncertainty of the physical-mathematical model describing the experiment of measurement of the DS variable u with the a priori chosen number of variables, and ξ is the number of primary physical variables with independent dimensions; SI includes the following seven (ξ = 7) basic primary variables: L (length), M (mass), T (time), I (electric current), Θ (thermodynamic temperature), J (luminous intensity), and F (amount of substance). The dimension of any secondary variable q can only express a unique combination of dimensions of the main primary variables in different degrees [3]:

q ∝ L^l · M^m · T^t · I^i · Θ^θ · J^j · F^f,   (2)

where l, m, t, i, θ, j, f are the exponents of the primary variables, each of which has a minimum and a maximum value; according to [4], the exponents take only integer values, l ∈ [−3, +3], m ∈ [−1, +1], t ∈ [−4, +4], i ∈ [−2, +2], θ ∈ [−4, +4], j ∈ [−1, +1], f ∈ [−1, +1], so the number of choices of dimensions is

K* = Π e_k − 1 = 7 · 3 · 9 · 5 · 9 · 3 · 3 − 1 = 76,544,   (3)

where "−1" corresponds to the case when all exponents of the primary variables in Equation (2) are equal to zero (the zero dimension), Π denotes the product of the numbers e_k of admissible values of each exponent, and the value K* includes both the required and the inverse variables (for example, L — length, L⁻¹ — running length). An object can be judged knowing only one of its symmetrical parts, while the parts structurally duplicating it may be regarded as information-empty [5]. Therefore, the number of options of dimensions may be reduced by ω = 2 times. This means that the total number of dimensional physical variables without inverse variables for SI equals μ_SI = K*/2 = 38,272. Here z′ is the total number of DL physical variables in the chosen class of phenomena (COP); in the SI frame, every researcher selects a particular COP to study a material object. A COP is a set of physical phenomena and processes described by a finite number of primary and secondary variables that characterize certain features of a material object from qualitative and quantitative standpoints [6].
In studying mechanics, for example, which is widely applied for Newtonian gravitational constant measurements with a torsion balance, the base units of SI that are typically used are L, M, T (LMT). In publications that study the Boltzmann constant, COP_SI ≡ LMTΘ is usually realized; β′ is the number of primary physical variables in the chosen COP.

Equation (1) quantifies the uncertainty Δu/S caused by the limited number of variables taken into account in the theoretical or experimental analysis of a fundamental physical constant value. It also sets a limit on the expedient increase of the measurement accuracy in conducting experimental studies. In turn, Δu/S is not a purely mathematical abstraction. It has a physical meaning, consisting in the fact that in nature there is a fundamental limit to the accuracy of displaying any observed material object, which cannot be surpassed by any improvement of instruments and methods of measurement. The reality of the environment is the obvious a priori condition for modeling the investigated material object. By isolating the process or phenomenon of interest, the unknown relationships between the content of the object and the environment are "broken". In this context, it is obvious that the overall uncertainty of the model, including inaccurate input data, physical assumptions, the approximate solution of the integral-differential equations, etc., will be larger than Δu. Thus, Δu is only the lowest component of a possible mismatch between a real object and its modeling results.

Equating the derivative of Δu/S in (1) with respect to z′ − β′ to zero, we obtain the condition for achieving the minimum comparative uncertainty for a particular COP:

(z′ − β′)² = μ_SI · (z″ − β″),  i.e.,  z″ − β″ = (z′ − β′)²/μ_SI  and  ε_min = 2(z′ − β′)/μ_SI.   (4)

Several remarks should be noted:

1) For mechanics processes (COP_SI ≡ LMT), taking into account the aforementioned explanations and (4), the lowest comparative uncertainty ε_LMT can be reached under the following conditions:

z′_LMT − β′_LMT = (7 · 3 · 9 − 1)/2 − 3 = 91,  (z″ − β″)_LMT = 91²/38,272 ≈ 0.2 < 1,   (9)

where "−1" corresponds to the case when all the primary variable exponents are zero in formula (2); dividing by 2 indicates that there are direct and inverse variables, e.g., L¹ — length, L⁻¹ — run length; and 3 corresponds to the three primary variables L, M, T. In other words, according to (9), even one DS main variable does not allow one to reach the lowest comparative uncertainty. Therefore, in the frame of the suggested approach, nobody can realize the original first-born comparative uncertainty using any mechanistic model (COP_SI ≡ LMT). Moreover, the greater the number of mechanical parameters, the greater the first-born embedded uncertainty. In other words, for example, the Cavendish method is, in the frame of the suggested approach, not recommended for measurements of the Newtonian gravitational constant. Such statements appear to be highly controversial, and one might even say not credible and far from current reality. However, as we shall see below, the proposed approach leads to conclusions that, although not obvious, are consistent with practice.

2) For electromagnetism processes (COP_SI ≡ LMTI), taking into account (4), the lowest comparative uncertainty can be reached under the following conditions:

z′_LMTI − β′_LMTI = (7 · 3 · 9 · 5 − 1)/2 − 4 = 468,  ε_min^LMTI = 2 · 468/38,272 ≈ 0.0244,   (13)

where "−1" corresponds to the case when all the primary variable exponents are zero in formula (2); dividing by 2 indicates that there are direct and inverse variables, e.g., L¹ — length, L⁻¹ — run length; and 4 corresponds to the four primary variables L, M, T, I.

3) For combined heat and electromagnetism processes (COP_SI ≡ LMTΘI), taking into account (4), the lowest comparative uncertainty ε_LMTΘI can be reached under the following conditions:

z′_LMTΘI − β′_LMTΘI = (7 · 3 · 9 · 9 · 5 − 1)/2 − 5 = 4247,  (z″ − β″)_LMTΘI = 4247²/38,272 ≈ 471,   (15)

where "−1" corresponds to the case when all the primary variable exponents are zero in formula (2); dividing by 2 indicates that there are direct and inverse variables, e.g., L¹ — length, L⁻¹ — run length; and 5 corresponds to the five primary variables L, M, T, Θ, I.

4) For heat processes (COP_SI ≡ LMTΘ), the corresponding conditions are

z′_LMTΘ − β′_LMTΘ = (7 · 3 · 9 · 9 − 1)/2 − 4 = 846,  (z″ − β″)_LMTΘ = 846²/38,272 ≈ 19,   (18)

where "−1" corresponds to the case when all the primary variable exponents are zero in formula (2); dividing by 2 indicates that there are direct and inverse variables, e.g., L¹ — length, L⁻¹ — run length; and 4 corresponds to the four primary variables L, M, T, Θ (these counts are illustrated numerically below).

Let us now try to apply the aforementioned method to the analysis of the accuracy of some physical laws, and determine the minimum possible relative measurement uncertainty of several fundamental physical constants.
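Before turning to the individual constants, the arithmetic behind conditions (4), (9), (13), (15) and (18) can be checked numerically. The following sketch is only an illustration of that counting under the exponent ranges assumed above (7, 3, 9, 5, 9, 3 and 3 admissible integer values for L, M, T, I, Θ, J and F); the function and variable names are ours, not the author's.

```python
from math import prod

# Number of admissible integer exponent values assumed above for the seven
# SI base quantities (L, M, T, I, Theta, J, F).
EXPONENT_OPTIONS = {"L": 7, "M": 3, "T": 9, "I": 5, "Theta": 9, "J": 3, "F": 3}

def dimension_options(base_quantities):
    """Count the dimensional variables that can be built from the given base
    quantities, dropping the all-zero case and inverse duplicates."""
    return (prod(EXPONENT_OPTIONS[q] for q in base_quantities) - 1) // 2

MU_SI = dimension_options(EXPONENT_OPTIONS)   # 38,272 for the full SI set

def cop_summary(base_quantities):
    """For one class of phenomena (COP): z' - beta', the optimal number of
    model variables z'' - beta'' from Equation (4), and the minimum
    comparative uncertainty 2 * (z' - beta') / mu_SI."""
    z_minus_beta = dimension_options(base_quantities) - len(base_quantities)
    optimal_model_vars = z_minus_beta ** 2 / MU_SI
    eps_min = 2 * z_minus_beta / MU_SI
    return z_minus_beta, optimal_model_vars, eps_min

for cop in (("L", "M", "T"), ("L", "M", "T", "I"),
            ("L", "M", "T", "Theta"), ("L", "M", "T", "Theta", "I")):
    z, n_opt, eps = cop_summary(cop)
    print(f"{''.join(cop):10s} z'-b'={z:5d}  optimal z''-b''={n_opt:6.1f}  eps_min={eps:.4f}")
```

Running it reproduces the figures quoted in the text: 91 and fewer than one optimal model variable for LMT, 468 for LMTI, 846 and about 19 variables for LMTΘ, and 4247 and about 471 variables for LMTΘI.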
Law of Gravitation

According to the law of universal gravitation, which is the greatest generalization achieved by the human mind, two bodies act on each other with a force that is inversely proportional to the square of the distance between them, and directly proportional to the product of their masses. The coefficient of proportionality is the gravitational constant G. Then, in accordance with the theory of measurement, the total absolute measurement uncertainty of the force will be greater than the uncertainty of each variable included in this formula. In this case, we could draw the following conclusion: if we can find out the value of the smallest uncertainty in the measurement of the gravitational constant, then we could declare the appropriate experimental accuracy in the determination of the forces of attraction in the law of gravitation.

In [7], scientific publications and CODATA (Committee on Data for Science and Technology) recommendations over the past 15 years (2000-2014) were discussed from the standpoint of the attained relative and comparative uncertainty values. In none of the current experiments calculating the Newtonian gravitational constant was a prospective interval declared in which its true value could be placed. In other words, the exact trace of the placement of G is lost somewhere. Therefore, in order to apply the stated approach, as a possible measurement interval S_G of the Newtonian gravitational constant we chose the difference between its values attained in the experimental results of two projects. The obtained data are summarized in Table 1 (Table 1. Summary of the partial history of Newtonian gravitational constant measurements in terms of the value reached, and the absolute, relative and comparative uncertainties). Judging the data by the relative and comparative uncertainties, one can see that the measurement accuracy has not significantly changed during the last 15 years. It should be noted that the relative uncertainty r_G and the absolute uncertainty ΔG are related through the measured value of the constant, r_G = ΔG/G. According to these data, the development of a larger number of designs and the improvement of the various experimental facilities for the measurement of the gravitational constant, using schemes combining a torsion balance and electromagnetic equipment, is an absolute must in order to achieve results closer to the minimum comparative uncertainty. Thus, the use of the SI ℵ-hypothesis and the concept of comparative uncertainty allows us to give recommendations on the direction in which to carry out experimental investigations and to identify the achievable minimum relative uncertainty in the calculations of the gravitational law in classical mechanics. At the same time, an estimated value of the comparative uncertainty allows only for better calculations, but does not change our understanding of gravity.

Heisenberg's Uncertainty Relation

We have applied the SI ℵ-hypothesis for a novel theoretical evaluation of Heisenberg's uncertainty relation in one-dimensional space based on a mathematical formulation of the comparative uncertainty. The theoretical limit of accuracy of any measurement of the DL standard deviation of the coordinate, Δx (uncertainty of position), and the DL standard deviation of the momentum, Δp (uncertainty of momentum), is Δx · Δp ≥ ℏ/2, where ℏ = h/(2π) and h denotes Planck's constant. We can then obtain Equations (22) and (23), where ΔX denotes the DS standard deviation of the coordinate X, ΔP denotes the DS uncertainty of the DS momentum P, S_x denotes the DS considered range of changes of the measured DS variable X, and S_p denotes the DS considered range of changes of the DS momentum P. Equations (22) and (23) are realized owing to the relations connecting the DL and DS ranges, where S*_x is the DL considered range of changes of the measured DL variable x, S_x denotes the DS considered range of changes of the measured DS variable X, r* denotes the DL scale parameter with the same dimension as x, and S*_p is the DL considered range of changes of the measured DL variable p. Then the modified Heisenberg uncertainty relation for one-dimensional space can be introduced in terms of a relative uncertainty, Equation (31), where h again denotes Planck's constant. Equation (31) is objective and independent of the presence of a conscious observer conducting measurements. Thus, according to Equation (36), the interval of the particle location and the interval of the particle momentum cannot be known with absolute precision simultaneously. The more precisely one specifies the location of the particle, the larger the degree of uncertainty of the particle's momentum, and vice versa. This theoretical limit of accuracy connects the DL considered range of changes of the measured DL variable x and the DL considered range of changes of the measured DL variable p. According to the aforementioned investigation, the expression S*_x · S*_p in (31) can be regarded, as a first approximation, as a real constant in space and time, because its value depends essentially on α, γ, β, e and h. Equation (31) is in fact a possible interpretation, in another form, of the general principle of W. Heisenberg (21), and it holds for any investigated object.
At the same time, it is understood that without sufficient comparison with previous results, readers cannot evaluate whether the introduced results are good or bad. That is why the further examples may convince researchers of the appropriateness of the ℵ-hypothesis for experiments in engineering and physics.

Fine Structure Constant

The desire to measure the fine-structure constant α with great accuracy has gained paramount importance, because α is directly connected to the problem of understanding the physical nature of elementary particles; it appears not to be separable from them and is their deep property. Analyses of the fine structure constant measurements during 2006-2014 were carried out. The principles of research and treatment of results are similar to the scheme set out in Section 3.1. The data are summarized in Table 2 and Figure 1. The fact is that the measurement accuracy of α has not significantly changed during the last nine years in terms of the attained relative uncertainty r_α and comparative uncertainty Δα/S_α. We also calculated the recommended value of the relative uncertainty.

Planck's Constant

The great importance of Planck's constant in modern physics has led to a situation where many studies have been dedicated to its measurement [32] [33]. However, the experimental results show discrepancies. We analyzed several scientific publications of 2007-2014 from the standpoint of the attained relative and comparative uncertainty values, following the same scheme introduced in Section 3.1. The data are summarized in Table 3. It can be argued that, according to the results presented, as in the case of the gravitational constant and the fine structure constant measurements, the accuracy of measurements of Planck's constant has not improved significantly over the past seven years. All studies were conducted as part of COP_SI ≡ LMTI. The minimum comparative uncertainty equal to 0.0244 (13) was not achieved. According to the data of Table 3, an attainable value of the relative uncertainty r_h^min is calculated. Some questions remain unanswered; nevertheless, it is hoped that r_h^min could be satisfactory for the existing mass standards community.

Boltzmann Constant

The analysis of the Boltzmann constant k_b plays an increasingly important role [39] in physics today to ensure the correct contribution to the next CODATA value and to the new definition of the kelvin. This task is more difficult and crucial when the true target value is not known. This is the case for any methodology intended to look at the problem from another possible point of view, which may have different constraints and need special discussion. Analysis of the Boltzmann constant measurements made during 2007-2015 shows that none of the current experimental measurements that calculate k_b have declared an uncertainty interval in which the true value can be placed. Therefore, in order to apply the stated approach, as the estimated interval of k_b changes we choose the difference between its values reached in the experimental results of two projects. Following the same line of reasoning as introduced in Sections 3.3 and 3.4, and taking into account (22), we analyzed several scientific publications and CODATA recommendations over six years from the perspective of the achieved relative and comparative uncertainty values. The data are summarized in Table 4 and Figures 2-4. By analyzing the theoretical methods and experimental schemes, one can state that the results were obtained using COP_SI ≡ LMTΘ or COP_SI ≡ LMTΘI.
It can be seen from the data given in Table 4 and Figures 2-4 that a dramatic improvement in the accuracy of measurement of the Boltzmann constant has not been achieved during the last decade, judging the data by both the relative and comparative uncertainties and the two different COP_SI: LMTΘ and LMTΘI. Despite the fact that the authors of the mentioned studies accounted for all possible sources of error, the values of the absolute and relative uncertainties can differ by more than twenty times. A similar situation exists for the spread of the value of the comparative uncertainty. Without going into an analysis of the nature of the uncertainties, a part of which the researchers have already identified, we can say with great confidence that, under the proposed approach, one of the reasons for this unsatisfactory situation is the number of variables taken into account in the measurement or in the chosen model for calculation of the Boltzmann constant. So, for COP_SI ≡ LMTΘ, in order to achieve the minimum comparative uncertainty 19 variables must be taken into account (18), and for COP_SI ≡ LMTΘI already 471 variables (15). In all these works the number of variables taken into account is much smaller. Thus, to improve the accuracy of measurement of the Boltzmann constant, the experimental setups need to be made more complex. To realize this goal, scientists must be prepared to spend sufficient resources. However, the key data that provide the 2010 recommended value of k_b would appear to be close to meeting the CODATA requirements [19]. At the same time, the development of a larger number of designs and improvements of various experimental facilities for the measurement of the Boltzmann constant is required in order to bring the results closer to the minimum comparative uncertainty. We can also discuss the order of the desired value of the relative uncertainty for COP_SI ≡ LMTΘ, which is usually used for measurements of the Boltzmann constant; for this purpose, we take the data summarized above into account.

Discussion

Under the proposed approach, for each mathematical model of a physical law there is an uncertainty which initially, before full-scale experimental studies or computer simulations, describes its proximity to the examined physical phenomenon or process. This value is called the comparative uncertainty. It depends only on the number of selected variables and the observation interval of the selected primary variable. One of the interesting features of the proposed hypothesis is that the minimum achievable comparative uncertainty is not constant, and varies depending on the choice of the class of phenomena. Moreover, the theory can predict its value. In particular, this means that when switching from a mechanistic model (LMT) to a COP with a larger number of primary variables, this uncertainty grows. This change is due to the potential effects of the interaction between the increased number of variables that can or cannot be taken into account by the researcher.
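For readers who want to retrace the "recommended" relative uncertainties quoted in the preceding sections, the link between the two metrics follows directly from the definitions attached to Equation (1). The restatement below uses our own notation and is only one plausible reading of the recipe, not a quotation of the paper's equations.

```latex
\varepsilon=\frac{\Delta u}{S},\qquad r=\frac{\Delta u}{\bar{u}}
\quad\Longrightarrow\quad
r_{\min}=\varepsilon_{\min}\,\frac{S}{\bar{u}},
\qquad \varepsilon_{\min}=\frac{2\,(z'-\beta')}{\mu_{SI}} .
```

In words: once a class of phenomena fixes ε_min and an observation interval S is adopted for a constant with mean measured value ū, a target relative uncertainty follows by scaling ε_min with S/ū.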
On the one hand, well-known physical laws are valid in a certain area and serve as a reliable tool in everyday life. At the same time, taking into account the experience of the creation of special relativity theory, we know that the achieved accuracy of the description of the world is not satisfactory. On the other hand, fundamental physical constants are currently measured with high accuracy. However, this is not sufficient to be able to modify the International System of Units (SI). The proposed approach allows us to estimate the limits of our knowledge and to reveal an insurmountable barrier for identifying the compliance of a model with the object studied. Clear evidence of this is the possibility of estimating the minimum attainable value of the relative uncertainty for the gravitational constant, Planck's constant, the fine structure constant, and Boltzmann's constant. In addition, it was possible, to a first approximation, to introduce the Heisenberg inequality for one-dimensional space as a function of the fundamental physical constants, which does not depend on the observer's presence, space, or time.

The proposed approach cannot yet overcome certain inherent shortcomings, such as:
• This approach is based on the use of the interval of changes of the observed or studied variables. In practice, this parameter is not defined in any serious experimental research in the field of physics. Sometimes the interval of changes of, for example, the gravitational constant, the Boltzmann constant, or another fundamental physical constant is referred to in review articles only to prove the convergence of the experimental data to a certain value, or to reduce the spread of the results;
• This approach assumes the equally probable appearance of the variables selected by a conscious observer. It ignores factors such as the knowledge, intuition, and experience inherent to the researcher, which seems physically unrealistic;
• The method does not give any recommendations about the selection of specific physical variables, but only imposes a limitation on their number.

Nevertheless, since the proposed method is not associated with the specific structure of the model, which may change, it is more general and simple, and it leads to the solution of specific problems without requiring detailed information about the set of variables. However, the internal mechanism of the phenomena is not disclosed. From this study, we can conclude that the fundamentally new element of the analysis is the attempt to use comparative uncertainty instead of relative uncertainty in order to verify the accuracy of physical laws and to compare the results of various measurements of fundamental physical constants. It can be implemented assuming that their values are within the selected range. However, this idea cannot be proven, because the choice of the interval depends on our previous knowledge, which can be flawed. At the same time, the approach determines the most simple and reliable way to select a model with the optimal number of selected variables. This will greatly diminish the duration of the studies as well as the design stage, thereby reducing the cost of the project.
Of course, the choice of the range of variations is controversial because of its apparent subjectivity. However, our ability to predict the values of the fundamental physical constants by use of comparative uncertainty allows us to improve the scientific understanding of complex phenomena, and to apply this understanding to solve specific problems. In addition, the prospect of an additional solution to the problem will help researchers to understand the current situation and to identify specific ways to solve it.

Conclusions

In addition to the relative uncertainty analysis, the introduced approach could enable a new methodology that will help additionally monitor the accuracy of physical laws and fundamental physical constants. The use of the ℵ-hypothesis only limits the domain of applicability of measurement theory to uncertainties that are much larger than the uncertainty of the physical-mathematical model due to its finiteness. By introducing the comparative uncertainty concept along with the known physical laws, we can verify the required relative uncertainty values of fundamental physical constants that must be recommended for identifying concrete ways to perfect the SI. The suggested approach is a mathematical tool that allows one to describe a physical system with the lowest uncertainty, which is given by a surprisingly simple relation. The ℵ-hypothesis might be applicable to experimental verification; in general, it is applicable when the researcher has all the information about the interval of changes of the main variable. Moreover, the ℵ-hypothesis provides new functionalities useful for micro- and macro-physics, including engineering, astronomy, and quantum electrodynamics. The comparative uncertainty is a peculiar metric for assessing the measurement accuracy of physical laws and fundamental physical constants.
In this case, the lowest possible relative uncertainty r_LMTΘ^min for COP_SI ≡ LMTΘ can be used for the new definition of the kelvin and a significant revision of the International System of Units (SI). Qualitative and quantitative conclusions from the relations obtained are consistent with practice. They are as follows: we conducted a theoretical evaluation of the Heisenberg uncertainty relation based on the mathematical formulation of the comparative uncertainty; the modified Heisenberg uncertainty relation is introduced, as a first approximation, in a form that depends only on fundamental physical constants, and its value does not change in time and space and is independent of the presence of a conscious observer conducting measurements. Then one can calculate the minimum achievable comparative uncertainty (19), which equals 0.0246. This is explained by the fact (Section 2) that nobody can realize the original first-born comparative uncertainty using any mechanistic model (COP_SI ≡ LMT). In addition, we calculated the lowest relative uncertainty.

Figure 1. Graph summarizing the partial history of fine structure constant measurement, displaying the changes of comparative uncertainty.
Figure 2. Graph summarizing the partial history of Boltzmann constant measurement.
Figure 3. Graph summarizing the partial history of Boltzmann constant measurement, displaying changes in the relative uncertainty.
Figure 4. Graph summarizing the partial history of Boltzmann constant measurement, displaying changes in the comparative uncertainty.
Table 2. Results of the fine structure constant measurements during 2006-2014 in terms of the value attained, and the absolute, relative and comparative uncertainties.
Table 3. Results of Planck's constant measurements during 2007-2014 in terms of the value attained, and the absolute, relative and comparative uncertainties.
Table 4. Results of Boltzmann's constant measurements during 2007-2012 in terms of the value attained, and the absolute, relative and comparative uncertainties.
7,015.4
2017-02-15T00:00:00.000
[ "Physics" ]
Therapeutic Targeting of Proteostasis in Amyotrophic Lateral Sclerosis—a Systematic Review and Meta-Analysis of Preclinical Research Background: Amyotrophic lateral sclerosis (ALS) is a rapidly progressive fatal neurodegenerative condition. There are no effective treatments. The only globally licensed medication, that prolongs life by 2–3 months, was approved by the FDA in 1995. One reason for the absence of effective treatments is disease heterogeneity noting that ALS is clinically heterogeneous and can be considered to exist on a neuropathological spectrum with frontotemporal dementia. Despite this significant clinical heterogeneity, protein misfolding has been identified as a unifying pathological feature in these cases. Based on this shared pathophysiology, we carried out a systematic review and meta-analysis to assess the therapeutic efficacy of compounds that specifically target protein misfolding in preclinical studies of both ALS and FTD. Methods: Three databases: (i) PubMed, (ii) MEDLINE, and (iii) EMBASE were searched. All studies comparing the effect of treatments targeting protein misfolding in pre-clinical ALS or FTD models to a control group were retrieved. Results: Systematic review identified 70 pre-clinical studies investigating the effects of therapies targeting protein misfolding on survival. Meta-analysis revealed that targeting protein misfolding did significantly improve survival compared to untreated controls (p < 0.001, df = 68, α = 0.05, CI 1.05–1.16), with no evidence of heterogeneity between studies (I2 = 0%). Further subgroup analyses, evaluating the effect of timing of these interventions, showed that, only treating prior to symptom onset (n = 33), significantly improved survival (p < 0.001, df = 31, α = 0.05, CI 1.08–1.29), although this likely reflects the inadequate sample size of later time points. Furthermore, arimoclomol was found to significantly reduce secondary outcome measures including: (i) histological outcomes, (ii) behavioral outcomes, and (iii) biochemical outcomes (p < 0.005). Conclusions: This analysis supports the hypothesis that protein misfolding plays an important role in the pathogenesis of ALS and FTD and that targeting protein misfolding, at least in pre-clinical models, can significantly improve survival, especially if such an intervention is administered prior to symptom onset. INTRODUCTION Amyotrophic lateral sclerosis (ALS) is a progressive multi-system neurodegenerative disease caused by the selective degeneration of brain and spinal cord motor neurons. The condition is markedly heterogeneous with respect to both the genetic and pathologic basis and the phenotypic manifestations. Survival ranges from only a few months to decades. The motor manifestations of the condition also vary widely according to both the site of onset and the rate of disease progression (Pupillo et al., 2014;Martin et al., 2017). It is now known that 50% of ALS patients are also affected by cognitive impairment and that the criteria for co-morbid dementia in ALS-Frontotemporal dementia (ALS-FTD) is met in 10-15% of cases (Goldstein and Abrahams, 2013;Crockford et al., 2018). The identification of a misfolded protein (TDP−43), common to both ALS and FTD has redefined these diseases as existing on a clinico-pathological spectrum (Arai et al., 2006;Neumann et al., 2006). TDP-43 proteinopathy is identified at post mortem in almost all cases of ALS and up to 50% of FTD cases (Neumann et al., 2006;Cairns et al., 2018). 
While ALS disease causing mutations have now been identified in over 25 genes involved in biological processes ranging from transcriptional regulation, energy metabolism to proteostasis (Nguyen et al., 2018), mutations in only four genes account for over 50% of all familial ALS cases (Ling et al., 2013). The corresponding protein products or associated proteins for these four genes; TDP-43, SOD1, FUS and C9orf72 have been identified in pathological ubiquitinated aggregates across mutant and sporadic cases alike (Ramesh and Pandey, 2017). Transactivation response DNA binding protein 43 kDa, TAR DNA-binding protein 43 (TDP-43), the neuropathological hallmark of ALS (Arai et al., 2006;Neumann et al., 2006) is a ubiquitously expressed DNA and RNA binding protein from the heterogeneous nuclear ribonucleoprotein (HnRNP) family, which regulates transcription and splicing. The full length 414 amino acid protein contains an N-terminal domain, two RNA-recognition motifs (RRM) and a glycine rich C-terminal sequence (Ayala et al., 2005). In cases of ALS and FTD, TDP-43 is truncated in to c-terminal fragments (25 and 35 kDa in size), hyperphosphorylated, ubiquitinated and mislocalised from the nucleus forming cytoplasmic aggregates (Neumann et al., 2006;Van Deerlin et al., 2008). Fused in sarcoma (FUS), is another ubiquitously expressed HnRNP protein and consists of an N-terminal low complexity domain, RGG-rich domains, a RRM, a zinc finger domain and a nuclear localization signal [(Monahan et al., 2017); NLS]. FUS pathology was identified within a subgroup of FTD cases (Neumann et al., 2009) and subsequently FUS cytoplasmic inclusions have been identified in human neuronal and glial cells from spinal cord tissue of ALS cases with known FUS mutations, sporadic ALS, ALS/FTD and non-SOD1 familial ALS cases (Deng et al., 2010). FUS mislocalisation affects approximately 5% of familial cases of ALS and fewer than 1% of sporadic cases. This is in stark contrast to TDP-43 inclusions, which are present in the majority of cases of ALS. Whilst FUS protein inclusions do not co-occur in the presence of TDP-43 inclusions, both TDP-43 and FUS proteins contain a highly aggregation prone domain with a high density of exposed hydrophobic residues. Mutations within this aggregation-prone region further increase the hydrophobicity and thus its tendency to aggregate (Patel et al., 2015). Copper-zinc superoxide dismutase protein (SOD1) is a dimeric anti-oxidant enzyme, the pathological neuronal and glial aggregates of which have also been identified in post mortem sporadic and familial ALS cortical and spinal cord tissue (Shibata et al., 1996;Banci et al., 1998;Valentine et al., 2005;Forsberg et al., 2010Forsberg et al., , 2019. SOD1 mutations have been found to destabilize the protein structure increasing hydrophobic surface exposure thereby promoting aggregation (Münch and Bertolotti, 2010;Tompa and Kadhirvel, 2019). Other factors influencing the propensity of wild type protein to aggregate have been studied including oxidative damage or metalation status which may account for sporadic cases (Watanabe et al., 2001;Rakhit et al., 2004;Tompa and Kadhirvel, 2019). The commonest cause of ALS/FTD, the C9orf72 hexanucleotide repeat expansion has also been identified as a prominent proteinopathy (Al-Sarraj et al., 2011). This hexanucleotide repeat has been found to lead to non-ATG translation producing highly aggregation-prone dipeptide repeat (DPR) proteins (Ash et al., 2013). 
Concomitant TDP-43 aggregates are also a prominent part of C9orf72 pathology found at post mortem. Additionally, TDP-43 negative inclusions comprised of DPRs as well as markers of the ubiquitin proteasome system (UPS) have been identified in FTD and ALS cases with a C9orf72 mutation (Mackenzie et al., 2013). Accumulation of p62 otherwise known as Sequestosome 1 (SQSTM1), a ubiquitin binding protein and adapter molecule for autophagy is also a pathological feature of neuronal and glial inclusions across a broad range of neurodegenerative diseases (Kuusisto et al., 2001), however extra motor p62 neuronal and glial inclusions (in particular the granule cell layers of the cerebellum and hippocampus) have been found to be over-represented in C9orf72 cases (Al-Sarraj et al., 2011;Cooper-Knock et al., 2012). Proteostasis consists of parallel complementary mechanisms operating to continuously reduce the burden of intra-and extracellular protein misfolding. The main intracellular systems are the ubiquitin proteasome system (UPS) and autophagy where the UPS encompasses the initial tagging of proteins with ubiquitin molecules followed by degradation by the 26S proteasome complex (Glickman and Ciechanover, 2002). Autophagy is a lysosomal degradation system and includes the subtypes; macroautophagy to degrade large protein aggregates and damaged organelles, microautophagy as the direct uptake of cytosolic components in to the lysosome and chaperonemediated autophagy (Guo et al., 2018). Chaperone-mediated autophagy, is the major pathway for maintaining both intraand extracellular proteostasis and involves the formation of target protein-chaperone complexes which are then transported in to the lysosome (Glick et al., 2010). The heat shock proteins (HSPs) as chaperone proteins are also involved in protein maintenance more broadly; facilitating normal protein folding as well as monitoring protein quality (Kalmar and Greensmith, 2017). Over-arching regulatory signaling pathways are activated in response to an increased burden of misfolded proteins (ER stress) to increase the ER proteinfolding capacity, termed the unfolded protein response (UPR). However, prolonged UPR activation has a paradoxical effect and correlates with cell death (Walter and Ron, 2011). Studies have demonstrated that the main ALS causative genes TDP-43, SOD1, FUS and C9orf72 are each mechanistically linked to proteostatic systems (Kalmar and Greensmith, 2017). Increasingly other mutations associated with dysfunction of proteostatic mechanisms in ALS/FTD include either the proteasome and/or autophagy (UBQLN2, TBK1, OPTN, VCP) (Taylor et al., 2016). Despite research efforts to trial ALS treatments for over two decades, an effective therapy for ALS is yet to be identified. Only one drug, the anti-glutamate treatment Riluzole, has been shown to confer a modest survival benefit and the search for novel therapies continues (Bensimon et al., 1994;Amyotrophic Lateral Sclerosis/Riluzole Study Group et al., 1996). One of the major barriers to the trialing of potential ALS therapies lies within the genetic and phenotypic heterogeneity of the disease, limiting the ability to conduct stratified and therefore statistically robust studies. The observation of protein misfolding as a unifying pathological feature across almost all cases of ALS and FTD despite the heterogeneity of these conditions presents a major therapeutic opportunity. 
A large body of research assessing the experimental effects of therapies which target and upregulate proteostatic mechanisms now exists. The primary aim of this systematic review and metaanalysis is to formally evaluate this literature to date and to assess the therapeutic potential of such interventions. A secondary aim of this work is to perform a structured quality assessment of the literature and to assess for publication bias to identify areas of improvement for future preclinical studies in this field. Aims and Hypotheses The aim of this piece of work was to assess the therapeutic potential of compounds tested in preclinical studies that target protein misfolding in ALS and FTD. Primary Aim 1. Does targeting protein misfolding improve survival in ALS/FTD preclinical models? Secondary Aims 1. Does the timing of intervention affect the efficacy of treatments targeting protein misfolding in ALS/FTD preclinical models? 2. Does targeting protein misfolding affect outcomes other than survival in ALS/FTD preclinical models? 3. Structured quality assessment of the included studies and assessment for publication bias PICOS Framework Population: preclinical ALS/FTD animal models. Intervention: therapeutics targeting protein misfolding. Comparison: treatment vs. control group. Outcome measure: primary outcome: mortality in animal models. Secondary outcome measures: mean biochemical, histological and behavioral measurements. Study design: all preclinical ALS or FTD animal or yeast model studies assessing the efficacy of therapies targeting proteostasis on survival compared to controls. Search Methods Pre-clinical data were obtained, with no restrictions on publication date or language, from three databases; PubMed, MEDLINE and EMBASE using the following search terms (search date: 16/2/18). The references obtained from these searches were collated and imported onto Endnote, where duplicate studies were removed and full text articles retrieved. Language restrictions were applied and only articles written in plain English were included. Eligibility Criteria Studies were included if they compared the effect of treatments targeting protein misfolding to a control group, in the specified animal or yeast model, in ALS or FTD. Studies were excluded if co-treatments were administered, if studies weren't investigating a therapeutic stated to target protein misfolding or didn't compare the treatment to a control group and didn't investigate therapeutics in ALS/FTD models or in the specified animal or yeast models. Human trials, reviews and case reports were also excluded. Screening Studies were exported to SyRF, an online platform (http:// syrf.org.uk) for performing systematic reviews where titles and abstracts were screened by two independent reviewers to determine their relevance. For studies that were disputed between the two reviewers a third reviewer assessed its relevance to reduce bias. The SyRF software is designed to limit learners bias by showing studies later in the screening process to one reviewer that had been screened early by the other reviewer. Data Extraction The primary outcome measure was defined as mortality in animal models. 
The following data were extracted from included papers: paper identification (surname of first author and year of publication; if more than one paper was present for that author, papers were numbered in brackets following the year of the study), model used, intervention, the number of models used, the survival measurement, the mean biochemical, histological and behavioral measurements and whether the intervention was efficacious for each of these measures was also recorded. Biochemical outcomes included any molecular tests to identify biochemical differences, such as Western blot performed to detect levels of proteins, or markers of autophagy. Histological outcomes included data derived from tissue sections (frozen or formalin fixed, paraffin embedded), for example in situ hybridization or immunohistochemistry. Behavioral outcomes included any behavior in animals, e.g., locomotion or maze activity. The timing of the intervention, if stated, was categorized into one of four categories: (i) prior to symptom onset, (ii) at symptom onset, (iii) after symptom onset, or (iv) end-stage of disease. Finally, the quality of each study was categorized based on the number of checklist items scored from a modified version of the CAMARADES quality checklist including: peer review publication; statement of potential conflict of interests; sample size calculation; random allocation to treatment; allocation concealment; blinded assessment of outcome; appropriate control group identified; compliance with animal welfare regulations; statement of temperature control. Data Analysis Survival/mortality data extracted from the included papers were included on a forest plot using the freely available Review Manager (RevMan 5.0) software. Given the variety of model organisms included in the analysis, we expressed effect sizes for the primary outcome data (survival summary data) as odds ratios (OR; Vesterinen et al., 2014). For non-survival outcomes standardized mean difference (SMD) was used. This allowed us to compare the magnitude of the effect size rather than absolute effect sizes, meaning that all data were on the same scale irrespective of the animal model used. The OR/SMD were included on a forest plot using Review Manager 5 (RevMan5.0) software. We calculated summary estimates using a random effects model using the RevMan 5.0 software, weighted by study size, using Hedges g statistic to account for bias from small sample sizes. We reported heterogeneity using I 2 values and used a funnel plot to assess for the presence of publication bias. Predetermined subgroup analyses were performed on compounds tested in three or more separate studies (rapamycin and arimoclomol and lithium were the only three compounds to fit this a priori criterion). Other predefined subgroup analyses included an assessment of timing of intervention and quality of studies, where studies were grouped by the (i) timing of the intervention tested (pre-symptom onset, at symptom onset, after symptom onset, end-stage disease) or (ii) quality score and the number of studies showing efficacy were compared between groups using a 2-way ANOVA. RESULTS Database searching identified 2,709 papers meeting the search criteria, which after duplicate removal, resulted in 1,609 articles that underwent screening (Figure 1). Abstracts and titles of articles were screened by three independent screeners using the online SyRF facility resulting in 141 articles that met the predefined inclusion criteria. 
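The pooling described in the Data Analysis paragraph above (a random-effects model on odds ratios, weighted by study size, with I² reported for heterogeneity) was run in RevMan 5.0. The sketch below is a minimal stand-alone illustration of the same kind of calculation using the DerSimonian–Laird estimator; it is not a reimplementation of RevMan, and the per-study counts are hypothetical.

```python
import numpy as np

def random_effects_or(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of per-study odds ratios,
    with an I^2 heterogeneity estimate. Counts are hypothetical examples."""
    # Per-study log odds ratios and within-study variances
    # (a 0.5 continuity correction guards against zero cells).
    a = np.asarray(events_t, float) + 0.5
    b = np.asarray(n_t, float) - np.asarray(events_t, float) + 0.5
    c = np.asarray(events_c, float) + 0.5
    d = np.asarray(n_c, float) - np.asarray(events_c, float) + 0.5
    log_or = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d

    # Cochran's Q, I^2 and the between-study variance tau^2.
    w = 1 / var
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)
    df = len(log_or) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

    # Random-effects pooled odds ratio with a 95% confidence interval.
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return np.exp(pooled), ci, i2

# Example with made-up counts of animals surviving to a fixed endpoint.
or_pooled, ci, i2 = random_effects_or(
    events_t=[18, 22, 15], n_t=[24, 30, 20],
    events_c=[12, 16, 10], n_c=[24, 30, 20])
print(or_pooled, ci, i2)
```

Whether an odds ratio above or below 1 favors treatment depends on whether survival or death is taken as the event, so the coding should be stated alongside any forest plot built this way.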
Following full-text retrieval only 81 of these articles were included in the quantitative analysis (Figure 1A). The majority of animal models investigated were mouse models, with particular emphasis on the SOD1 G93A mutant mouse model. Other models used included rat, Drosophila, C. elegans and yeast (Figure 1B). Targeting Protein Misfolding Confers a Statistically Significant Improvement of Survival Out of the 81 articles which met the eligibility criteria, 70 studies assessed survival/mortality as an outcome measure and were included in the meta-analysis to assess the hypothesis that targeting protein misfolding improves survival in ALS/FTD preclinical models. Targeting protein misfolding did significantly extend survival compared to untreated controls (p < 0.001, degrees of freedom (df) = 68, α = 0.05 and 95% confidence intervals (CI) 1.05-1.16; Figure 2). The heterogeneity of these included studies was estimated by calculating an I 2 value. The I 2 value in this analysis was 0%, indicating a high level of agreement between studies favoring treatment over control (Figure 2). Timing of Intervention Favors Early Treatment To assess the hypothesis that targeting protein misfolding at different stages of the disease process will affect the efficacy of treatments on survival, studies were included in a forest plot grouped by timing of intervention and repeat sub-group meta-analyses were performed (Figures 3A-D). These analyses found that only treating prior to symptom onset (n = 33) significantly improved survival (p < 0.001, df = 31, 95% CI 1.08-1.29). FIGURE 2 | Meta-analysis of preclinical studies shows therapeutic potential for targeting proteostasis in ALS. Forest plot showing the odds ratio and confidence intervals calculated from survival summary data from each study, weighted by study size. Overall effect estimate is demonstrated (with 95% confidence intervals) as a black diamond at the bottom of the graph. Heterogeneity is displayed as an I 2 value. Results demonstrate an overall statistically significant effect favoring the targeting of proteostasis in ALS. Overall Quality of Studies Did Not Affect Likelihood of Efficacy The number of checklist items scored was evaluated for all studies and then plotted on a frequency distribution to evaluate the likelihood of those studies demonstrating efficacy. The hypothesis was that poorer quality studies would score fewer checklist items and result in inappropriate demonstrations of efficacy (i.e., false positive/false negative). The two-way ANOVA comparing frequency of efficacy in the studies of different quality and the post-hoc Bonferroni test accounting for multiple testing found there was no significant difference in the frequency of efficacy across differences in study quality (p = 0.118; Figure 7A). Finally, to assess whether there was publication bias in studies included in this review a funnel plot was created showing one outlier study, but no evidence of publication bias (Figure 7B). Targeting Protein Misfolding Confers Significant Survival Improvement The results of this study support the hypothesis that protein misfolding plays a major role in the pathogenesis of ALS and FTD. Meta-analysis of 70 studies (assessing survival outcomes) identified that targeting protein misfolding significantly extended survival compared to untreated controls with minimal heterogeneity detected between studies (I 2 = 0). A list of the interventions that were tested in these studies, including the putative mechanism of action of those interventions, has been provided in Table 1.
Further analysis of interventions investigated in at least three independent studies identified that rapamycin did not confer an improvement in either survival or secondary outcome measures (histological or behavioral). The mammalian target of rapamycin (mTOR) is a ubiquitously expressed kinase with a key role in the mediation of major processes such as metabolism, immunoregulation and the inhibition of autophagy. Rapamycin belongs to a class of drugs which inhibit mTOR activity; the pro-autophagic effect of these drugs has been explored in animal models of degenerative proteinopathic disease such as Alzheimer's disease (Dolcetta and Dominici, 2019). The neuroprotective role of mTOR has also been identified in SOD1 mouse models of ALS as a key mechanism to prevent motor neuron degeneration and disease progression (Saxena et al., 2013). One possible explanation for our study findings may be the development of tolerance, which has been identified previously after continuous administration of an mTOR inhibitor resulted in pathway hyper-stimulation and an initial short-term stimulation of autophagy followed by a longer-term decrease in autophagy markers (Kurdi et al., 2016). Another potential mechanism to explain the broader inconsistency in mTOR inhibition efficacy in ALS models (Zhang et al., 2011) is the observation in AD models of impaired autophagosome clearance, as the pro-autophagic effects would be counteracted by the accumulation of these autophagosomes (Bove et al., 2011). The small sample size included in the secondary meta-analysis of rapamycin may also limit the ability to reach definitive conclusions. FIGURE 4 | Rapamycin does not demonstrate efficacy in targeting survival, behavioral or histological outcomes. Forest plots showing the odds ratio (for survival) and SMD (for histological, behavioral and biochemical outcomes) and confidence intervals calculated from each study, weighted by study size. Overall effect estimate is demonstrated (with 95% confidence intervals) as a black diamond at the bottom of the graph. Heterogeneity is displayed as an I 2 value. Arimoclomol, a hydroxylamine derivative, is a heat shock protein (HSP) co-inducer and promoter of native protein folding; the finding that arimoclomol treatment demonstrated an improvement in behavioral outcomes may indicate that it could be implemented as a promising therapeutic aimed at alleviating symptoms and improving quality of life in ALS (Dairin et al., 2004). However, it is worth noting that arimoclomol had no effect on survival outcomes. Importantly, arimoclomol, as a co-inducer rather than an activator, can stimulate HSP network upregulation within cells that are already stressed. This selectivity is important mechanistically to prevent imbalances caused by single HSP upregulation (for example selective Hsp90 or Hsp70 induction) and also to target the HSPs only where required (Kalmar et al., 2008). Indeed the potential therapeutic benefit of arimoclomol has also been suggested by a recent phase II clinical trial including a cohort of patients with rapidly progressive SOD1 ALS (Benatar et al., 2018). The diverse neuro-protective effects of lithium have been explored in preclinical in vivo models of dementia and are attributed to a range of mechanisms including oxidative stress, neuro-inflammation, cell survival, neurogenesis, synaptic plasticity and proteostasis.
The main regulatory pathways are via inhibition of inositol monophosphatase (IMPA) and glycogen synthase kinase-3 (GSK-3), however multiple direct interactions with downstream key mediators have also been identified (Kerr et al., 2017). While inhibition of IMPA results in the mTOR pathway-independent induction of autophagy, the inhibition of GSK-3 results in modulation of multiple neuroprotective mediators including Nrf2 and STAT3 (Kerr et al., 2018). GSK-3β inhibition also has the therapeutically undesirable effect of activating the mTOR pathway and attenuating autophagy (Sarkar et al., 2008). The resultant opposing mTOR independent/dependent effects on autophagy may reflect, in part, why attempts to assess efficacy in SOD1G93A mice treated with lithium mono-therapy have produced conflicting results (Fornai et al., 2008;Gill et al., 2009;Pizzasegola et al., 2009). To address the therapeutic limitation of lithium mediated GSK-3β inhibition and resultant m-TOR activation, a targeted strategy using combination therapy with lithium and rapamycin has been assessed in a Huntington's disease in vivo model demonstrating an additive neuroprotective effect (Sarkar et al., 2008). The combined approach of selectively manipulating separate pathways simultaneously has been extended to the use of lithium coupled with valproate and also a novel antioxidant, in both studies concurrent administration resulted in improved motor function and survival in SOD1G93A mice (Shin et al., 2007;Feng et al., 2008). Furthermore, daily lithium treatment coadministered with riluzole has been identified to reduce disease progression in a study of ALS patients followed up over 15 months (Fornai et al., 2008). Timing of Intervention No significant improvement in survival was found when treating at or after symptom onset, further analysis identified that only treatment with therapies targeting protein misfolding prior to symptom onset significantly improved survival. While this is in keeping with the neurodegenerative model; to treat before the accumulation of higher stability neurotoxic protein aggregates (Glickman and Ciechanover, 2002), the practical application of this to rapidly progressive diseases such as ALS presents a major challenge. This raises the issue of genetic screening within a genetically heterogeneous, variably penetrant disease. Interestingly, the delivery of arimoclomol after symptom onset and even at late-symptomatic disease stage has been found to improve muscle function in a SOD1 mouse model, it has been suggested that this reflects the stress-dependent mechanism of action of arimoclomol. In this study, an increase in lifespan was however only observed in treated mice from symptom onset (Kalmar et al., 2008). Therapies designed to target different proteostatic subsystems are likely to have differing windows of optimal efficacy, for this reason the timing of therapy initiation is a critical factor for further therapeutic research. Furthermore, the small sample size of studies conducted at later time points is a major limitation of the field of neurodegenerative diseases and is likely impacting upon our ability to assess these time points with confidence in our study. Future work should aim to address this in the field with studies conducted at later time points (especially after symptom onset) being more relevant to our ability to effectively translate these interventions to the clinic. 
Limitations

A key limitation of studies of this nature is that study identification is based on a statement by the authors that the intervention was tested on protein misfolding pathways. We specifically employed this method to minimize bias and to identify a comprehensive dataset, aware that drugs rarely have a single mode of action, to enable us to study drugs whose commonality is an effect on proteostasis, rather than their sole effect. We note that no approach to identify such interventions is perfect, and if a study did not mention an action on proteostasis, we were not able to include it in our analysis. Furthermore, the majority of studies identified in this review were carried out in the mutant SOD1 rodent model. Whilst this is a well-established animal model used widely in the field, it does not necessarily model the most common pathology seen in ALS patients, which is the misfolding and mislocalisation of TDP-43. Whilst many pathological consequences of proteostasis may be shared between these misfolded proteins, wider use of other animal models assessing the effects of modulating proteostasis on other misfolded proteins is clearly warranted.

As anticipated, the heterogeneity of the dichotomous primary outcome measure, survival, was consistently low. In contrast, biochemical and histological outcomes (continuous rather than dichotomous outcome measures) demonstrate varying levels of heterogeneity, ranging from 0 to 100% across the three therapies assessed in our subgroup analyses. This finding of higher levels of heterogeneity with continuous outcome measures is consistent with the literature, and for this reason it has been suggested that different standards should be applied to the interpretation of I² for each (Alba et al., 2016). Where large variability in effect estimates across continuous measures is observed, the consistency in the direction of the effect is also a relevant factor to consider.

FIGURE 7 | Structured quality assessment shows no demonstrable effect on study efficacy and no evidence of publication bias. (A) Frequency distribution demonstrating that the structured quality score is not significant in determining intervention efficacy (2-way ANOVA p > 0.05). (B) Funnel plot; each point is a study, plotted against the effect size of that study (x-axis) and the precision of that study (SE(log[OR]); y-axis). Publication bias has not been detected in this analysis.

We did not detect any publication bias in the studies included in this review. Publication bias can result in effect size estimates being skewed toward a greater effect than is true, because positive results are favored for publication over negative results. Indeed, the Amyotrophic Lateral Sclerosis Therapy Development Institute found that, on average, studies report significantly greater effects of therapeutics on extending survival than is observed under rigorous testing in controlled laboratory conditions with conscious efforts to reduce the risk of unconscious bias. This is thought to have contributed, in part, to disappointing translation when these therapeutics have been trialed in humans. These studies, taken together with our data, indicate that we should exercise caution in interpreting preclinical studies and be aware that they could reflect an overestimate of the true effect size. This highlights the importance of quality assessments and risk of bias scoring done as part of systematic reviews such as ours.
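Because the secondary (histological, behavioral and biochemical) outcomes are continuous, they are pooled as standardized mean differences rather than odds ratios, which partly explains the wider range of I² values. A minimal sketch of one common SMD estimator (Hedges' g) and its standard error is given below; the group summary statistics are placeholders, not data from any included study.

```python
# Minimal sketch: Hedges' g (bias-corrected standardized mean difference)
# and its approximate standard error from group summary statistics.
import numpy as np

def hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
    # pooled standard deviation of the two groups
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (m_t - m_c) / sp                      # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0) # small-sample correction factor
    g = j * d
    se = np.sqrt((n_t + n_c) / (n_t * n_c) + g**2 / (2.0 * (n_t + n_c)))
    return g, se

# hypothetical treated vs control summary data for a behavioral outcome
g, se = hedges_g(m_t=14.2, sd_t=2.1, n_t=10, m_c=12.0, sd_c=2.4, n_c=10)
print(f"Hedges' g = {g:.2f}, 95% CI +/- {1.96 * se:.2f}")
```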
However, measures of quality, like the CAMARADES checklist, do not necessarily reflect the importance of each quality control measure. For example, lack of experimental blinding may affect the efficacy and/or reproducibility of results more than peer review. This means that even papers that scored the highest number of checklist items in our analysis are not achieving all of the necessary quality control steps to improve the reliability of results. The limited number of high-quality studies may also explain why results are often overstated in the field: if fundamental quality control is not implemented, results can be unreliable, and this is likely to contribute to poor clinical translation of potential therapeutics.

TABLE 1 | A table listing the therapeutic interventions analyzed as part of this systematic review, including the putative pathway implicated in the mechanism of action. It is important to note, however, that drugs can modulate multiple potential targets, and that the dogma of 'single drug, single target' is overly simplistic. This table is provided to give an overview of the interventions tested with information on a putative target with respect to its role in proteostasis, noting that many of these interventions will have other target effects not listed here.

Implications and Recommendations for Future Research

Protein misfolding, as a pathological entity, links many diverse neurodegenerative diseases which have previously been considered to be distinct clinical entities and may even contribute to normal aging. The risk of pathological protein misfolding can be increased by environmental stressors and disease-causing mutations, and the innate proteostatic mechanisms to counter this are known to be compromised by aging (Kampinga and Bergink, 2016; Kalmar and Greensmith, 2017). Therefore, it is plausible that targeting this common pathophysiology, irrespective of the underlying genetic basis of the condition, may be therapeutically beneficial for many other currently untreatable neurodegenerative diseases.

CONCLUSIONS

This is the first study to formally assess the therapeutic potential of targeting protein misfolding in ALS and FTD and to perform a structured quality assessment of the ALS and FTD preclinical literature. Our data support the hypothesis that protein misfolding contributes to pathology in ALS/FTD. We show that targeting protein misfolding in ALS/FTD significantly improves survival; however, treatments only conclusively demonstrate efficacy when administered prior to symptom onset. This limits the clinical potential of these therapies targeting protein misfolding, as genetic screening and pre-treatment of patients has many ethical implications. Notwithstanding the potential risk of bias owing to overestimation of effect sizes and risk of bias between studies, we identified promising therapeutics, such as arimoclomol and lithium, which demonstrated a statistically significant improvement in histological, behavioral and biochemical outcome measures. Further investigations should be aimed at identifying specific therapeutics that target protein misfolding and improve symptoms or survival, so they can be developed for clinical trials to treat ALS/FTD; future studies should employ multiple therapeutic intervention timepoints to improve clinical translation.

DATA AVAILABILITY STATEMENT

All data have been made available within the publication material.

AUTHOR CONTRIBUTIONS

OB, FW and JG contributed to manuscript screening and data extraction.
All authors contributed to conceptualisation and experimental design, data interpretation, and manuscript authorship.
7,072.8
2020-05-25T00:00:00.000
[ "Biology", "Medicine" ]
Knock-in of Oncogenic Kras Does Not Transform Mouse Somatic Cells But Triggers a Transcriptional Response that Classifies Human Cancers

KRAS mutations are present at a high frequency in human cancers. The development of therapies targeting mutated KRAS requires cellular and animal preclinical models. We exploited adeno-associated virus-mediated homologous recombination to insert the Kras G12D allele in the genome of mouse somatic cells. Heterozygous mutant cells displayed a constitutively active Kras protein, marked morphologic changes, increased proliferation and motility but were not transformed. On the contrary, mouse cells in which we overexpressed the corresponding Kras cDNA were readily transformed. The levels of Kras activation in knock-in cells were comparable with those present in human cancer cells carrying the corresponding mutation. Kras-mutated cells were compared with their wild-type counterparts by gene expression profiling, leading to the definition of a "mutated Kras-KI signature" of 345 genes. This signature was capable of classifying mouse and human cancers according to their KRAS mutational status, with an accuracy similar to or better than published Ras signatures. The isogenic cells that we have developed recapitulate the oncogenic activation of KRAS occurring in cancer and represent new models for studying Kras-mediated transformation. Our results have implications for the identification of human tumors in which the oncogenic KRAS transcriptional response is activated and suggest new strategies to build mouse models of tumor progression.

Introduction

The protein encoded by the KRAS gene is a prototypic small GTPase that acts as a molecular switch, transducing signals initiated at the cell surface from receptor and non-receptor tyrosine kinases to the nucleus. KRAS works by cycling between active GTP-bound and inactive GDP-bound conformations (Ras-GTP and Ras-GDP). The competing activities of guanosine nucleotide exchange factors and GTPase-activating proteins regulate RAS-GTP levels (1). According to its GTP status, KRAS activates multiple downstream effectors including the mitogenic Raf/MEK/ERK pathway, and the survival-invasive phosphatidylinositol-3-kinase pathway (reviewed in ref. 2). Mutations of the KRAS gene are among the most common genetic alterations in human cancer, occurring in ~50% of colorectal cancers, 75% to 100% of pancreatic cancers, 50% of cholangiocarcinomas, and almost 50% of lung adenocarcinomas (2,3). Interestingly, activation of the Ras/Raf/Mek/Erk pathway also seems to be present in cancers lacking KRAS mutations (4). This suggests that mutational profiling of the KRAS gene might not identify the entire spectrum of tumors in which the Ras pathway is active. Oncogenic activation of KRAS is often associated with poor prognosis and chemoresistance of the respective malignancies (5,6). Multiple studies indicate that oncogenic KRAS is required for the proliferation, adhesion, motility, and invasion of cancer cells (reviewed in ref. 2). RAS mutations are also associated with a number of human syndromes, such as Costello, Noonan, and cranio-facial-cutaneous disorders (reviewed in ref. 7). In some cases (especially in the Noonan and Costello syndromes), the affected individuals have a higher risk of cancer, suggesting that germ line RAS mutations may increase the susceptibility to tumors (7).
Given their importance in human tumorigenesis, a number of cellular and animal models have been developed to study KRAS oncogenic alleles (8,9).These included plasmid-and viralmediated ectopic expression of mutated KRAS cDNAs in a variety of human and mouse cells of stromal and epithelial origin.Although the latter have been undoubtedly effective in dissecting the oncogenic properties of mutated KRAS alleles, they often resulted in the overexpression of the corresponding mutated proteins producing misleading results.For example, it has been clearly shown that ''dominant transformation by mutated human RAS genes requires more than 100 times higher expression than is observed in cancers'' (10). Animal models carrying mutated Kras alleles have also been developed.These included elegant genetic experiments in which ''latent'' Kras mutations were inserted in the genome of mouse embryonic stem cells and ''activated'' in somatic tissues (11,12).Mouse models of Kras mutations have been instrumental in understanding the molecular mechanisms of Kras-mediated tumorigenesis.In addition, they have been used to derive mutated KRAS-specific transcriptional signatures (13).Because Kras oncogenic mutations are incompatible with embryonic development, the oncogenic alleles had to be engineered to be activated or expressed only in adult mouse tissues (11,12). To overcome the limitations of the abovementioned models, we have exploited the use of somatic homologous recombination to build mouse tumor progression models closely recapitulating the genetic alterations found in human tumors.Our approach is based on the distinct properties of adeno-associated viruses (AAV) promoting homologous recombination in mammalian somatic cells.This strategy allows the introduction of point mutations or deletions in a given genomic locus, thus allowing its expression under physiologic conditions similar to the situation found in human disease. As a test case, we used AAV-mediated homologous recombination to modify the genome of mouse somatic cells by introducing Kras G12D point mutation.We present evidence that this strategy successfully generates mutant Kras mouse cell models that can be exploited to evaluate the biological and the oncogenic properties of the corresponding mutations.Furthermore, we show that the isogenic wild-type and mutant cells can be used to identify a mutated Kras signature that classifies human tumors on the basis of their KRAS genetic status. Materials and Methods Cell culture and reagents.MLP29 have been previously described (14) and cultured following standard procedures.Geneticin (G418) was purchased from Life Technologies.Antibodies used for immunoblotting included anti-AKT, anti-phospho-AKT S473, anti-mitogen-activated protein kinase (MAPK), and anti-phospho-MAPK (Cell Signaling Technology); the anti-actin antibody was purchased from Sigma. Construction of the targeting vector and identification of recombinant clones.Details on the experimental strategies used to generate the Kras knock-in cells are available in the Supplementary Methods.Construction of the lentiviral vector expressing mKRAS WT and G12D.Full-length cDNAs coding for mKras WT and mKras G12D were amplified from cDNA extracted from MLP29 KI G12D and then cloned in pRRLsin.PPT.CMV.MCS MM.WPRE.The lentivirus production, cell infection, and transduction procedures have been described elsewhere (15). 
RNA extraction and reverse transcription-PCR analysis. To verify the expression of the mutated alleles in the targeted MLP29 clones, total RNA was extracted using the RNeasy Mini/Midi Kit (Qiagen). Retrotranscription was carried out with Moloney murine leukemia virus reverse-transcriptase RNase H minus (Promega). Expression analysis of the mutated mKras allele was done by PCR with the following primers: FW, 5′-gcctgctgaaaatgactgag-3′; RV, 5′-tgctgaggtctcaatgaacg-3′. For expression profiling, the RNA extraction was done using the Trizol plus RNA purification kit (Invitrogen Life Technology). RNA qualitative and quantitative assessment was done using the Bioanalyzer 2100 (Agilent Technologies).

RAS activation assay. Details on the experimental strategies used to assess the amount of RAS-GTP in cellular lysates are available in the Supplementary Methods.

Morphologic analysis. Representative pictures of MLP29 clones were taken with ImageReady software (Adobe) using a microscope (DMIL; Leica) and a 20× 0.30 objective (Leica) equipped with a digital camera (DFC320; Leica).

Cell viability, wound healing, motility, and anchorage independence assays. Details on the experimental strategies used to assess the biological properties of the targeted cells are available in the Supplementary Methods.

Results

Construction of the AAV Kras KI vector. AAV is a human, replication-defective parvovirus. The wild-type genome possesses two open reading frames, termed rep and cap, flanked by two inverted terminal repeats. We built a recombinant AAV in which both of the open reading frames were deleted and replaced with exogenous mouse Kras sequences. The inverted terminal repeats, necessary for the packaging of the vector, were the only elements maintained from the wild-type virus. The homologous recombination cassette cloned within the inverted terminal repeats consisted of two 1 kb sequences ("homology arms"), one of which contained the mutated Kras exon. The selectable marker neomycin transferase gene was introduced between the homology arms. The Neo cassette was flanked by two LoxP sites to allow Cre recombinase-mediated excision of the Neo cassette from the targeted cells' genome (Fig. 1A). The regions of homology of the Kras locus were amplified from genomic DNA obtained from the target mouse somatic cells (MLP29 cells; see below). The Kras G12D mutation was introduced in the exon-containing homology arm by standard site-directed mutagenesis (Fig. 1A). Details on the construction of infective AAV carrying the Kras homology arms are reported in Materials and Methods.

Knock-in of the Kras G12D mutation in mouse liver progenitor cells. As a recipient cellular model to test our knock-in approach, we selected mouse liver progenitor cells (MLP29). MLP29 are somatic mouse cells previously derived from embryonic liver (14). MLP29 display a number of features making them appealing for genetic and biological manipulation. They can be propagated indefinitely in vitro but are not oncogenic. Furthermore, they can be used to assess cellular phenotypes including growth factor-dependent proliferation, motility, and "invasive growth" (14). Their transcriptional profile has been previously ascertained, highlighting genes involved in cancer progression (17).
KRAS is mutated at a very high frequency in cholangiocarcinomas, which originate from liver biliary ducts (3). MLP29 cells are bipotent progenitors expressing markers of both hepatocytic and bile duct lineage, which makes them an appealing model to study KRAS-mediated carcinogenesis (14). We also verified that the Kras sequence is wild-type in MLP29 cells, confirming that they are a suitable model for genetic manipulation of the corresponding genomic locus.

MLP29 cells were infected with the recombinant AAV-Kras-G12D vector described above and selected for 21 days in the presence of neomycin until single resistant colonies appeared. Clones in which homologous recombination had occurred were identified by PCR analyses on the corresponding genomic DNA as described in Fig. 1B. Targeting frequencies were 1:300. Correct targeting of the Kras point mutation into the corresponding genomic locus was verified by sequencing at the DNA level (data not shown) and at the mRNA level (Fig. 1C). PCR-positive clones displayed a heterozygous G > A nucleotide change corresponding to a glycine to aspartic acid substitution at codon 12 of the Kras coding sequence (Fig. 1C).

Two independently identified clones (denominated KI-1 and KI-2) carrying the Kras G12D mutation were amplified and subjected to further analysis. Interestingly, one of the clones (referred to as NEO+) that scored positively in the PCR screening did not display the G > A nucleotide change. This indicates that the homologous recombination had involved the NEO cassette but only a portion of the 5′ homology arm (Supplementary Fig. S1). We reasoned that clone NEO+ differs from clones KI-1 and KI-2 only in the Kras mutation, and would therefore represent the ideal isogenic control for the mutated clones. The NEO+ clone was therefore amplified and used throughout the study.

To exclude the possibility that the Neo resistance cassette might affect the expression of the mutated allele, purified recombinant TAT-CRE protein was used to excise the Neo cassette using the flanking loxP sites (Supplementary Fig. S2A). Allele-specific quantitative PCR was then used to assess the expression of the mutant G12D allele (Supplementary Fig. S2B). These experiments showed that the presence of the Neo cassette did not negatively affect the expression of the mutated allele. The "cre-out" cells were included as controls in some of the biochemical and biological experiments (see below).

Kras knock-in cells display distinct biochemical and biological features. The Kras protein cycles between the active GTP-bound and the inactive GDP-bound conformations. The G12D oncogenic mutation is thought to lock Kras into the GTP-bound conformation by reducing its constitutive GTPase activity, thus constitutively activating Ras signaling. To assess whether the knock-in of the Kras G12D mutation would activate the Kras protein in MLP29 cells, we did a pull-down assay using the recombinant CRIB domain of BRAF, which is known to bind Ras only in its GTP-bound conformation. Figure 2A shows that both mutant clones (KI-1 and KI-2) display constitutively active Kras, whereas in the matched wild-type cells and in the perfectly isogenic NEO+ clone, Kras is inactive.

Next, we compared the levels of Kras activation in the KI clones with those present in naturally occurring colorectal cancer cells carrying the corresponding mutations. The levels of Kras activation in mouse KI cells were comparable with those present in human cancer cells (Fig. 2A).
Interestingly, when we ectopically expressed (by lentiviral-mediated transduction) a mutated Kras G12D cDNA, the targeted cells displayed levels of GTP-RAS higher than those present in the knock-in or in the cancer cells (Fig. 2A). Overall, these data show that the knock-in system directly phenocopies the oncogenic activation of KRAS present in human tumors.

We then did biochemical analysis to assess activation of the Kras pathway in WT and knock-in cells. To this end, we measured activation of MAPK and AKT using anti-phospho-specific antibodies (Supplementary Fig. S3). The results indicate that knock-in of the G12D allele does not increase, and may even reduce, the activation of the RAS downstream signaling pathways. These data are in accordance with previous work by Guerra et al. and Tuveson et al., which also reported that knock-in of mutated Kras in mouse cells does not increase (but can actually reduce) the levels of MAPK activation (12,18).

Next, we assessed whether MLP29 cells expressing Kras mutant alleles might display distinct biological properties. Oncogenic KRAS mutations are thought to affect a number of cellular features including cell morphology, proliferation, and motility. MLP29 Kras knock-in clones growing in vitro could be readily distinguished from the wild-type or the NEO+ isogenic counterparts based on their morphology (Fig. 2B; Supplementary Fig. S4). Upon reaching confluence, the knock-in clones grew in a disorganized manner, whereas the wild-type cells formed a tight and organized monolayer (Fig. 2B; Supplementary Fig. S4). The proliferative capability of Kras mutant cells was slightly but significantly increased (Fig. 2C). Mutant cells were more motile than their wild-type counterparts as measured in a transwell motility assay (Fig. 3A). Additionally, the knock-in cells displayed an increased wound-healing potential (Fig. 3B).

These results showed that the presence of a mutant Kras allele confers distinct biological properties to MLP29 cells, including morphologic changes, increased motility, and proliferation. Importantly, in all the biochemical and biological assays that we did, the two mutated clones and the wild-type and NEO+ cells showed homogeneous behavior. This indicates that clonal variability does not significantly affect the properties of the different cell lines.

Knock-in of the Kras G12D mutation does not transform mouse liver progenitor cells. A number of studies suggest that KRAS mutations can lead to the transformation of mammalian cells. Most of the experiments supporting this notion have been done by transfection-mediated ectopic expression of mutated KRAS cDNAs under the control of viral promoters. A previous report indicated that transformation by mutated human RAS genes requires more than "100 times higher expression than is observed in cancers" (10). To directly compare the knock-in and the transfection methodologies, we engineered MLP29 cells that expressed a G12D Kras-mutated cDNA under the control of a cytomegalovirus promoter. The results were unequivocal: whereas MLP29 cells carrying the G12D heterozygous mutation in the Kras genomic locus were not capable of growing in soft agar, the corresponding cells expressing the G12D cDNA readily formed colonies under the same conditions (Fig. 4).
In these experiments, the ectopic expression of wild-type Kras was sufficient to trigger transformation in MLP29 cells. This is probably due to the high transduction efficiency of lentiviral vectors. Our results indicate that the expression of a heterozygous oncogenic Kras allele under its own promoter is not sufficient to transform mouse liver epithelial (MLP29) cells. The somatic knock-in approach closely recapitulates the genetic status of human tumors (such as cholangiocarcinomas) carrying mutated KRAS.

Definition of a gene expression signature for Kras KI. To identify genes whose expression is affected by mutated Kras, total RNA was extracted from the KI-1 and KI-2 clones, and from two controls, the NEO+ clone and wild-type cells. Transcriptome profiling was carried out in parallel on two different DNA microarray platforms, Affymetrix and Illumina, for systematic cross-validation of the results. The data have been deposited in the NCBI's Gene Expression Omnibus and are accessible through the GEO Series accession number GSE8711. Preliminary clustering analysis allowed clear segregation of the two independently derived KI clones from the two control samples, confirming that clonal variation does not significantly affect our cellular model systems (data not shown). After filtering for detection, genes with significant differential expression between KI clones and controls were identified using significance analysis for microarrays (19). Comparison of the results obtained on the two microarray platforms yielded a panel of 345 genes whose expression was concordantly up-regulated or down-regulated by mutated Kras (Supplementary Table S1; Supplementary Fig. S5), hereafter referred to as the "Kras-KI Signature."

We then assessed how many genes known to be transcriptionally regulated by oncogenic Ras were regulated in our mutated Kras isogenic model, and used the hypergeometric distribution test to assess statistical significance. As a source of RAS-responsive genes, we used two gene lists: one obtained by Bild et al. from human primary cells transiently transduced with adenoviral RAS expression vectors (the "HMEC-HRAS signature"; ref. 20), and one by Sweet-Cordero et al. from a model of somatic Ras mutation-driven mouse lung carcinogenesis (the "mouse lung signature"; ref. 13). For the analysis, the two lists were mapped onto our mouse Affymetrix platform, obtaining a total of 1,091 probe sets, 921 for the mouse lung signature and 201 for the HMEC-HRAS signature. We found that a significant fraction of the published Ras-responsive genes were also regulated in MLP29 knock-in cells (P < 1 × 10⁻⁵ for both published lists when a fold change threshold of 2 is chosen).
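The hypergeometric overlap test used above can be sketched as follows. The size of the probe-set universe is a placeholder (it depends on the array and on filtering), so the numbers printed are illustrative rather than a reproduction of the reported P values.

```python
# Minimal sketch: significance of the overlap between two gene lists drawn
# from a common probe-set universe, plus an observed/expected ratio.
from scipy.stats import hypergeom

universe = 20000      # assumed probe-set universe size (illustrative)
n_signature = 345     # Kras-KI signature genes
n_published = 921     # e.g. a published Ras-responsive list mapped to the array
k_overlap = 52        # genes shared by the two lists

expected = n_signature * n_published / universe
oe_ratio = k_overlap / expected
# P(overlap >= k) under random sampling without replacement
p_value = hypergeom.sf(k_overlap - 1, universe, n_published, n_signature)
print(f"O/E = {oe_ratio:.2f}, hypergeometric P = {p_value:.2e}")
```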
As a second approach, we directly compared the three lists of signature genes, all mapped on our mouse Affymetrix platform. We found that 52 of the Kras-KI signature genes (15.1%) were also present in the mouse lung signature, with an observed/expected ratio of 2.13 and a hypergeometric P < 1 × 10⁻⁷. The overlap between the Kras-KI and the HMEC-HRAS signatures was slightly worse in terms of coverage percentage, but significant (15 genes, O/E ratio = 2.98, P < 10⁻⁵). However, comparison between the two published signatures revealed an even worse overlap (18 genes, O/E ratio = 2.31, P < 10⁻³). Overall, 67 genes of the Kras-KI signature are represented in published RAS signatures, showing that knock-in of Kras in somatic cells leads to the modulation of Kras-initiated pathways ultimately driving a specific transcriptional response.

The Kras-KI signature classifies mouse and human lung cancers. KRAS mutations are present at a high frequency in non-small cell lung cancers, in which they are thought to play a central role in tumor progression and have been associated with poor response to therapy and reduced survival (21). As discussed in the Introduction, sequencing of the KRAS gene might not identify the entire spectrum of tumors in which the KRAS pathway is oncogenically deregulated. To verify whether the Kras-KI signature might be able to reflect and predict the mutational status of KRAS and/or the oncogenic activation of the corresponding pathway, we analyzed published microarray data sets of experimentally induced mouse and human lung cancers.

We initially focused on microarray data generated in the previously mentioned model of Kras-driven mouse lung carcinogenesis (13). The data set is composed of 50 samples, of which 31 were from lung cancers with mutated Kras and 19 were from normal lungs with wild-type Kras. We mapped 162 genes from the KRAS-KI signature onto 173 probe sets of this data set (Supplementary Table S2), and assessed their ability to discriminate WT and mutated Kras samples using the signal-to-noise ratio (SNR) metric (22). The average SNR for the signature was 0.81. Such values could not be reached by random gene lists of the same size as the signature, as highlighted in a Monte Carlo simulation (P < 0.00005; Supplementary Fig. S6). Strikingly, hierarchical clustering based on the expression of signature genes divided the 50 samples into two clearly distinct clusters containing either only WT or only mutated Kras samples. A marked concordance was observed in the behavior of the signature genes between MLP29 cells and the lung samples, with the Kras-KI clones coclustering with the mouse lung cancer samples, and the WT and NEO+ clones coclustering with the normal lung samples (Fig. 5). These results show that the Kras-KI signature is capable of discriminating mouse lung cancers based on their Kras mutational status.

To assess the class prediction ability of the signature in human cancer, we mapped the Kras-KI signature on microarray data generated by Bhattacharjee and colleagues (23), including 94 human lung adenocarcinoma samples for which they annotated the mutational status of codons 12 and 13 of KRAS (see Supplementary Table S2). Also in this case, we observed a significant enrichment for genes with high SNR (P < 0.0005; Supplementary Fig. S6).
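A sketch of the signal-to-noise ratio (SNR) metric and the Monte Carlo comparison against random gene sets is shown below. The expression matrix is random placeholder data, so with this input the signature will not, of course, score better than chance; the point is only to illustrate the procedure.

```python
# Minimal sketch: per-gene SNR between two sample classes and a Monte Carlo
# null distribution for the mean SNR of a gene set of fixed size.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_mut, n_wt = 5000, 31, 19                  # e.g. mutated vs WT samples
expr = rng.normal(size=(n_genes, n_mut + n_wt))      # placeholder expression matrix
labels = np.array([1] * n_mut + [0] * n_wt)

def snr(expr, labels):
    a, b = expr[:, labels == 1], expr[:, labels == 0]
    return np.abs(a.mean(axis=1) - b.mean(axis=1)) / (a.std(axis=1) + b.std(axis=1))

gene_snr = snr(expr, labels)
signature = rng.choice(n_genes, size=173, replace=False)   # stand-in signature
observed = gene_snr[signature].mean()

# Monte Carlo: how often does a random 173-gene set reach the observed mean SNR?
null = np.array([gene_snr[rng.choice(n_genes, 173, replace=False)].mean()
                 for _ in range(20000)])
p_value = (null >= observed).mean()
print(f"mean SNR = {observed:.2f}, Monte Carlo P = {p_value:.4f}")
```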
Notably, the Kras-KI signature was found to have a less significant SNR when used to discriminate normal lung tissue from lung cancers of various types (P = 0.0056; Supplementary Fig. S6), indicating that it specifically discriminates KRAS mutational status rather than neoplastic progression. We then used the nearest-mean classifier approach (24) to classify each lung adenocarcinoma based on the expression of the signature genes in all the other samples. Briefly, average expression was calculated for each gene in the KRAS-mutated and WT groups, excluding the sample undergoing classification (leave-one-out; see Supplementary Methods). For comparison, we adopted the same classification procedure using genes from the two previously mentioned Ras signatures obtained in the mouse lung model (mouse lung signature; ref. 13) and in HMECs (HMEC-HRAS signature; ref. 20), respectively. The results of this classification are presented in Table 1 and show that all three signatures have very similar performance. These data suggest that, albeit nontransforming, knock-in of mutated Kras in mouse cells generates a transcriptional signature whose capability of predicting the presence of KRAS mutations in human lung cancer is comparable to those previously published.

Discussion

Omics technologies and the availability of the human genome sequence have allowed unprecedented progress in the identification of genes mutated in human cancers, leading to the detection of hundreds of cancer-associated alleles (16,25,26). Compared with the genomic discovery stage, the functional validation phase of the corresponding alleles is now lagging behind. For example, hundreds of point mutations affecting kinase genes have recently been described (16,25,26), but their biochemical and biological properties are largely unknown.

Mammalian cell lines and mouse strains have been widely used as model systems to functionally characterize cancer alleles carrying point mutations or deletions. In the first case, the cDNAs corresponding to the mutated alleles are ectopically expressed by means of plasmid transfection or retroviral infection in human or mouse cell lines upon selection with markers. The derivative cells are then used to assess the biochemical and biological properties of the mutated cDNA with a variety of standardized assays. Classic examples of this approach are the focus-forming assay, which measures loss of contact inhibition, and the soft agar growth assay, which evaluates the anchorage-independent potential of neoplastic cells.

These studies have yielded remarkable results but are typically hampered by at least three issues. First, the expression is achieved by transient or stable transfection of cDNAs, often resulting in overexpression of the target allele at levels that do not accurately recapitulate what happens in human cancers. Second, the expression of the mutated cDNA is mainly achieved under the control of non-endogenous, constitutively active viral promoters such as SV40, cytomegalovirus, and long terminal repeats (27). As a result, the mutated alleles cannot be appropriately modulated in the target cells. Finally, functional experiments have often been done using cells of stromal origin such as embryonal or 3T3 mouse fibroblasts. Stromal cells are not the primary targets of oncogenic transformation in the large majority of solid cancers.
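The leave-one-out nearest-mean classification described in the Results above can be sketched as follows; the expression matrix and labels here are random placeholders standing in for the signature genes and the annotated KRAS status of the tumour samples.

```python
# Minimal sketch: leave-one-out nearest-mean classification of samples from
# signature-gene expression (genes x samples matrix, binary class labels).
import numpy as np

def nearest_mean_loo(expr, labels):
    n_samples = expr.shape[1]
    calls = np.zeros(n_samples, dtype=int)
    for i in range(n_samples):
        keep = np.arange(n_samples) != i                 # leave sample i out
        mu0 = expr[:, keep & (labels == 0)].mean(axis=1) # mean WT profile
        mu1 = expr[:, keep & (labels == 1)].mean(axis=1) # mean mutated profile
        x = expr[:, i]
        # assign to the class whose mean profile is nearest (Euclidean distance)
        calls[i] = int(np.linalg.norm(x - mu1) < np.linalg.norm(x - mu0))
    return calls

rng = np.random.default_rng(1)
expr = rng.normal(size=(173, 94))        # signature probe sets x adenocarcinomas
labels = rng.integers(0, 2, size=94)     # placeholder KRAS status (0 = WT, 1 = mutated)
accuracy = (nearest_mean_loo(expr, labels) == labels).mean()
print(f"leave-one-out accuracy = {accuracy:.2f}")
```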
Another approach has been to use homologous recombination to introduce the point mutations in a locus-specific fashion into the genome of mouse embryonic stem cells.This strategy works very efficiently and allows the generation of mouse strains carrying specific cancer alleles.Very often, however, mutated dominant oncogenes (such as Kras) cannot be transmitted through the mouse germ line (28) because the resulting embryos are not viable.Complex genetic strategies (such as the use of tissue-specific CRE-LOX systems, or latent alleles) have been devised to allow the expression of the mutated alleles only in adult mouse somatic tissues (12,18).These have been used for a variety of experimental approaches including the demonstration that oncogenic KRAS alleles can immortalize primary fibroblasts (12,18). To overcome these limitations, we exploited a novel approach to introduce cancer alleles in the genome of mouse epithelial somatic cells.Similar to most mammalian somatic cells, most adult mouse cells display a low frequency of homologous recombination that has thus far limited their use for genetic and functional analysis.To overcome this problem, we used a shuttle vector based on the backbone of AAVs.The latter has been found to increase the rate of targeted homologous recombination in human and mouse cells (29,30). As a model system to test our strategy, we chose the Kras G12D mutation, which is commonly found in human tumors such as colorectal, lung and pancreatic cancers. 5Mutated KRAS proteins are, at least in theory, good therapeutic targets because they act as central switches in cellular processes leading to oncogenesis (2).Despite many efforts, the development of inhibitors directly targeting KRAS has thus far failed, apparently due to difficulties in targeting the intrinsic RAS GTPase activity or the lack of specificity of farnesyltransferase inhibitors (31).The development of cellular and animal models that more closely recapitulate human diseases is a prerequisite to identifying molecules inhibiting the Kras signaling pathway rather than Kras itself. AAV-mediated knock-in of the Kras G12D mutation was attempted in mouse liver progenitor cells (MLP29), a somatic immortal epithelial cell line previously established in our laboratory that is not transformed and does not form tumors in immunocompromised mice. Clones displaying correct targeting of the Kras genomic locus were identified at a frequency of 1:300.Knock-in cells displayed constitutively active Kras protein, marked morphologic changes and increased proliferation and motility.Interestingly, these cells were not transformed as measured by the soft agar assay.Importantly, targeted cells did not show oncogene-induced senescence and were not transformed in contrast with results previously reported for mouse cells overexpressing a mutated HRAS cDNA (32).We confirmed that in our cell models, overexpression of mutated Kras cDNA led to full transformation, indicating that the levels of Kras activation are critical for this phenotype.Amplification and/or overexpression of mutated KRAS is very uncommon in human cancers, suggesting that the levels of KRAS activation relevant for tumor progression are those achieved by the presence of point mutations in a heterozygous state.In line with this, the levels of Kras activation (as measured by a GTP load assay) in the knock-in cells were highly comparable with those found in naturally occurring tumor cells carrying the corresponding heterozygous mutations. 
Our results are the first to show the direct introduction of an oncogenic mutation (such as the Kras G12D allele) in the genome of mouse epithelial somatic cells.Notably, the biochemical and transforming properties of our knock-in cell models are remarkably similar to those obtained from cells derived from mice carrying analogous Kras alleles (12,18).We conclude that the isogenic cells that we have developed recapitulate the oncogenic activation of KRAS occurring in human tumors.To further characterize and validate our cellular models, we did transcriptional profiling of the wild-type and knock-in cells using two independent microarray platforms that resulted in a distinct Kras KI signature of 345 genes.The signature was then tested for its ability to classify mouse and human cancers in which the mutational status of KRAS was also known.The signature unequivocally distinguished mouse lung samples carrying the G12D mutation from the corresponding normal tissues.When applied to a large cohort of human nonsmall cell lung tumors for which the transcriptional profile was available, the signature was able to correctly predict the wild-type status of the KRAS locus in most cases.Interestingly, a large fraction of cases displaying a wild-type KRAS were instead defined as ''mutant'' by the signature.We therefore used two additional Ras signatures, obtained in mouse and human models alongside with that obtained in the mouse KI cells to compare their performance, and the results were very similar. There are two possible interpretations for these results, (a) the signatures might have misclassified the wild-type samples due to poor specificity or (b) the signatures detected the functional status of KRAS hyperactivation, possibly deriving from mutations at KRAS codons other than 12 and 13 or involving activation of signaling molecules acting in the same pathway.The latter hypothesis is consistent with the notion that molecules acting upstream or downstream of Ras could also be deregulated in cancer (2).However, further investigations are required to address this issue. In conclusion, we have developed a new cellular model closely recapitulating the KRAS genetic lesions occurring in human cancers.Our findings support the concept that KRAS mutations, in the absence of overexpression, promote distinct biological effects and drive a specific transcriptional response, but are not sufficient to trigger full transformation in epithelial cells. Given the knock-in efficiency that we observed, our work points to new experimental strategies to build tumor progression models by ''in vivo delivering'' of oncogenic mutations using AAV-mediated homologous recombination in mouse somatic cells.Considering the recent successes and the unique phenotypes seen with mosaic conditional knockout mouse models of cancer (33,34), we also envision that orthotopic transplantation of genetically manipulated somatic murine cells would provide a physiologically relevant model of human carcinogenesis. Figure 1 . 
Figure 1.Targeted knock-in of the Kras G12D allele in mouse epithelial cells.A, outline of the strategy used for the targeted knock-in of the G12D mutation in the genome of mouse cells.Schematic structure of the Kras genomic locus and the targeting vector.B, electrophoresis analysis of PCR products generated using primers designed to detect successful recombination events.C, nucleotide sequence of the Kras cDNA obtained from mRNA extracted from clone KI-1 confirming that the 35G > G/A (G12D) change is present in the Kras transcript. Figure 2 . Figure 2. Biochemical and biological analysis of cells carrying wild-type or mutated Kras.A, Kras GTP activation assay.The MLP29 clones were serumstarved for 48 h and lysed.Levels of GTP Ras were assessed by pull-down assays with the recombinant RAF-CRIB domain.293 cells transfected with mutated Kras and the colorectal cancer cell line DLD-1 carrying a mutated Kras allele were used as controls.B, morphologic analysis.Cell lines grown in standard culture for 20 d.C, proliferation assay.The indicated cells were seeded at the same concentration on day 0 and cell density was measured for 5 consecutive days.Columns, mean of triplicates; bars, SD.Statistical significance was determined by two tailed Student's t test and wild-type (WT ) cells were used as reference.rlu, relative light units. Figure 3 . Figure 3. Motile and wound-healing potential of the Kras-KI cells.A, the indicated cells were seeded in the upper part of a Boyden chamber and the migrated cells were fixed with 11% glutaraldehyde after 12 h, stained with crystal violet, analyzed with BD Pathway HT bioimager (BD Biosciences) and counted with BD AttoVision 1.5 software (BD Biosciences).Columns, mean areas of four fields; bars, SD.B, cells were grown until they formed a monolayer and then serum-starved for 48 h.An artificial wound was then made using a plastic pipette tip.The proliferative-motile potential of the indicated cells was assessed after 3 d.Bright light microscopy pictures were obtained after staining the cells with crystal violet. Figure 4 . Figure 4. Anchorage-independent growth of cells carrying the Kras G12D mutation (knock-in) or ectopically expressing the same allele.The indicated cells were seeded in soft agar.Colonies appearing after 15 d of incubation were stained with 1 mg/mL of 2,3,5-triphenyl-tetrazolium chloride and photographed (A) or counted (B ).The experiment was repeated twice in triplicate.Columns, mean of triplicates; bars, SD. Figure 5 . Figure 5. Expression of genes of the KRAS -KI signature in normal and neoplastic mouse lung and in Kras knock-in and control clones.The 173 probe sets corresponding to the RAS-KI signature (rows ) were used to cluster samples from mouse lung together with MLP29 clones.The color codes for the samples (bottom ). Table 1 . Classification of human lung cancer samples by the KRAS-KI, mouse lung, and HMEC signatures NOTE: Nearest-mean classification with leave-one-out was used to classify 94 adenocarcinoma samples according to the Kras-KI, mouse lung, and HMEC signatures expression.The results of the classification are compared with the results of KRAS sequencing at positions 12 and 13.
7,334.6
2007-09-15T00:00:00.000
[ "Biology" ]
Single top quark production in $t$-channel at the LHC in Noncommutative Space-Time

We study the production cross section of the $t$-channel single top quark at the LHC in noncommutative space-time. It is shown that the deviation of the $t$-channel single top cross section from the Standard Model value due to noncommutativity is significant when $|\vec{\theta}| \gtrsim 10^{-4}$ GeV$^{-2}$. Using the present experimental precision in the measurement of the $t$-channel cross section, we set an upper limit on the noncommutative parameter. When a single top quark decays, there is a significant angular correlation, in the top quark rest frame, between the top spin direction and the direction of the charged lepton momentum from its decay. We study the effect of noncommutativity on this spin correlation and find that, depending on the noncommutative scale, the angular correlation can be enhanced considerably. We then provide limits on the noncommutative scale for various possible relative uncertainties on the spin correlation measurement.

Introduction

The study of single top quark processes at hadron colliders provides the opportunity to investigate the electroweak properties of the top quark and to measure the $V_{tb}$ CKM matrix element directly, and, more importantly, it provides the possibility to search for new physics. The Standard Model (SM) has been found to be in good agreement with the present experimental measurements in many of its aspects. In the framework of the SM, the top quark is the heaviest particle, with a mass of the order of the electroweak symmetry breaking scale, $v \sim 246$ GeV. This large mass might be a hint that the top quark plays an essential role in electroweak symmetry breaking. On the other hand, the reported experimental data from the Tevatron and the LHC on the top quark properties are still limited and no significant deviations from the Standard Model predictions have been observed yet.

Top quarks are mainly produced through two independent mechanisms at hadron colliders. The main production mechanism is via the strong interaction, where top quarks are produced in pairs ($gg \to t\bar{t}$, $q\bar{q} \to t\bar{t}$) [1]. The production cross section of $t\bar{t}$ at 7 TeV center-of-mass energy at the LHC is 157 pb at next-to-leading order [3]. The top quark can also be produced singly via the electroweak interaction. This occurs through three different processes: $t$-channel (the involved $W$ boson is space-like, $ub \to dt$), $s$-channel (the involved $W$ boson is time-like, $u\bar{d} \to t\bar{b}$) and $tW$-channel (the involved $W$ boson is real, $gb \to W^{-}t$). The $t$-channel, with a cross section of 60 pb, is the largest source of single top at the LHC [1]. The cross sections of $t\bar{t}$ and single top production, the top quark mass, the helicity of the $W$ boson in top decay, the search for flavor changing neutral currents, and many other properties of the top quark have already been studied [1], [2]. However, top quark properties such as the single top quark cross section are expected to be measured with high precision at the LHC due to the very large statistics [1].

Space-time noncommutativity is a generalization of the usual quantum mechanics and quantum field theory which may describe physics at short distances of the order of the Planck length, since the nature of space-time could change at these distances. There are motivations coming from string theory, quantum gravity and Lorentz breaking [4], [5], [6], [7] to construct models on noncommutative space-time.
The noncommutativity of space-time can be described by a set of constant c-number parameters $\theta_{\mu\nu}$ or, equivalently, by an energy scale $\Lambda_{NC}$ and dimensionless parameters $C_{\mu\nu}$, where $\theta_{\mu\nu}$ is a real antisymmetric tensor with dimension $[M]^{-2}$. Dimensionless electric and magnetic parameters ($\vec{E}$, $\vec{B}$) are defined for convenience. It is notable that space-time noncommutativity with $\theta_{0i} \neq 0$ might cause some problems with unitarity and causality [8], [9]. It has been shown that unitarity can be satisfied even for $\theta_{0i} \neq 0$ provided that $\theta_{\mu\nu}\theta^{\mu\nu} > 0$ [10]. However, for simplicity, in this article we take $\theta_{0i} = 0$, or equivalently $\vec{E} = 0$. One can obtain a noncommutative version of an ordinary field theory by replacing all ordinary products among fields with the Moyal product [11]. The approach to noncommutative field theory based on the Moyal product and Seiberg-Witten maps allows the generalization of the Standard Model to the case of noncommutative space-time, keeping the original gauge group and particle content [12], [13], [14], [15], [16], [17]. Seiberg-Witten maps relate the noncommutative gauge fields to the ordinary fields of the commutative theory via a power series expansion in θ. Indeed, the noncommutative version of the Standard Model is a Lorentz-violating theory, but the Seiberg-Witten map shows that the zeroth order of the theory is the Lorentz-invariant Standard Model. The effects of noncommutative space-time on some rare decays, collider processes, leptonic decays of the W and Z bosons, and additional phenomenological observables have been presented in [18], [19], [20], [21], [22], [23], [24], [25], [26], [28], [29], [30], and limits have been set on the noncommutative scale.

In this article, we calculate the contributions that the t-channel single top quark cross section receives from noncommutativity in space-time at the LHC at a center-of-mass energy of 7 TeV. We then estimate a bound on the noncommutative parameter θ by comparing the recent CMS measurement of the t-channel cross section with the theoretical calculations. In Section 2 of this article, a short introduction to the noncommutative standard model (NCSM) is given. Section 3 presents the calculations of the noncommutative effects on the single top quark cross section and the limit on θ from the currently measured single top production rate. Finally, Section 4 concludes the paper.

The Noncommutative Standard Model (NCSM)

The NCSM action is obtained by replacing the ordinary products in the action of the classical Standard Model by Moyal products; matter and gauge fields are then replaced by the appropriate Seiberg-Witten expansions. The resulting action has the same structure group $SU(3)_C \times SU(2)_L \times U(1)_Y$ and the same fields and number of coupling parameters as the ordinary SM. The approach used in [13], [14], [15], [16] to build the NCSM is the only known approach that allows one to build models of the electroweak sector directly based on the structure group $SU(2)_L \times U(1)_Y$ in a noncommutative background. The NCSM is an effective, anomaly-free, noncommutative field theory [31], [32]. Here we consider only the fermions. The fermionic part of the action can be written in a very compact way in terms of the multiplets $\Psi^{i}_{L,R}$, where $i$ is the generation index and $L^{i}_{L}$ and $Q^{i}_{L}$ are the well-known lepton and quark doublets, respectively.
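The two defining expressions referred to in this paragraph (the relation between $\theta_{\mu\nu}$, $\Lambda_{NC}$ and $C_{\mu\nu}$, and the Moyal product) appear to have been lost in extraction. In the convention usual for the NCSM literature, and consistent with the symbols defined in the text, they read as follows; the exact normalization should be checked against the cited references:

$$\theta_{\mu\nu} \;=\; \frac{C_{\mu\nu}}{\Lambda_{NC}^{2}}, \qquad \theta_{0i} \;=\; \frac{E_{i}}{\Lambda_{NC}^{2}}, \qquad \theta_{ij} \;=\; \epsilon_{ijk}\,\frac{B_{k}}{\Lambda_{NC}^{2}},$$

$$(f \star g)(x) \;=\; f(x)\,\exp\!\left(\frac{i}{2}\,\theta^{\mu\nu}\,\overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\right) g(x) \;=\; f(x)\,g(x) \;+\; \frac{i}{2}\,\theta^{\mu\nu}\,\partial_{\mu}f(x)\,\partial_{\nu}g(x) \;+\; \mathcal{O}(\theta^{2}).$$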
The Seiberg-Witten maps express the noncommutative fermion and vector fields (denoted by a hat) in terms of the ordinary fermion and gauge fields $\psi$ and $V_{\mu}$. For a full description and review of the NCSM, see [13], [14], [15], [16].

The Noncommutative Corrections to the t-Channel Cross Section

The $q_1(p) \to W(q) + q_2(k)$ vertex in the NCSM up to order $\theta^{2}$ can be written as in [22], where $P_L = (1-\gamma^{5})/2$ and $q\theta p \equiv q^{\mu}\theta_{\mu\nu}p^{\nu}$. This vertex is similar to the vertex for W decay into a lepton and anti-neutrino [26]. However, one should note that, due to the ambiguities in the SW maps, there are additional terms in the above vertex. Since they do not affect the results, we have ignored them [27]. After some algebra, the noncommutative corrections to the squared matrix element for the t-channel process $u(p_1) + b(p_3) \to d(p_2) + t(p_4)$ follow, where $m_t$ is the top quark mass, $m_b$ is the b-quark mass and $m_W$ is the mass of the W boson; $V_{tb}$ and $V_{ud}$ are the CKM matrix elements. In Eq. 8, the contributions coming from $O(\theta^{2})$ and higher have been neglected, as have the masses of the light quarks. $s$, $t$, $u$ are the Mandelstam variables.

The total cross section for t-channel single top production in proton-proton collisions at the LHC is obtained by convolving the partonic cross section $\hat{\sigma}_{ab}$ for the process $u + b \to t + d$ with the parton distribution functions $f_a(x, Q^{2})$. The calculation is performed at $Q = m_W$. CTEQ6 [35] is used for the proton parton distribution functions. Fig. 1 shows the dependence of the single top cross section on the noncommutative parameter θ as well as the experimental precision. Using an integrated luminosity of 36 pb$^{-1}$ collected with the CMS detector at the LHC, the value of the single top cross section was found to be 83.6 ± 29.8 (stat. + syst.) ± 3.3 (lumi.) pb [36]. This measurement is consistent with the Standard Model expectation. Comparing the CMS experimental result on the single top cross section with the single top cross section in noncommutative space-time, we obtain an upper bound on the noncommutative parameter θ of O(0.001) GeV$^{-2}$. If we assume $|\vec{B}| = 1$, this limit can be translated into Λ and leads to $\Lambda \gtrsim 100$ GeV. However, for smaller values of $|\vec{\theta}|$, the noncommutative corrections become smaller and smaller. Since the LHC experiments will be able to measure the t-channel cross section with high precision with a larger amount of data [37], this limit can be improved. For example, a measurement with 10% uncertainty leads to a bound of $\Lambda \gtrsim 500$ GeV.

The Noncommutative Effect on the Spin Correlation

One of the important features of t-channel single top quark production is the large polarization for a suitable choice of spin quantization axis [38]. Because of its large mass, the top quark decays via the weak interaction before hadronization and no hadronic bound state can be formed. In the SM, because of the V−A (vector minus axial-vector) structure of the top quark coupling to the W boson, the decay products of a polarized top quark show a particular structure of angular correlations. The angular distributions of the decay products of the top quark have the form given below [40], where $\theta_i$ is defined as the angle between the momentum of the $i$th decay product and the top quark spin quantization axis in the top quark rest frame. The $\alpha_i$ coefficients are called spin correlation coefficients and show the degree of correlation with the spin direction of the top quark.
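The explicit expressions referred to above (the hadronic cross section as a convolution with parton distribution functions, and the polarized-top angular distribution) did not survive extraction. Their standard forms, written with the same symbols as the text, are

$$\sigma(pp \to t\,d + X) \;=\; \sum_{a,b} \int_{0}^{1} dx_{1} \int_{0}^{1} dx_{2}\; f_{a}(x_{1},Q^{2})\, f_{b}(x_{2},Q^{2})\; \hat{\sigma}_{ab}\!\left(\hat{s} = x_{1}x_{2}S,\, Q^{2}\right),$$

$$\frac{1}{\Gamma}\,\frac{d\Gamma}{d\cos\theta_{i}} \;=\; \frac{1}{2}\left(1 + \alpha_{i}\cos\theta_{i}\right),$$

where $S$ is the squared proton-proton center-of-mass energy. These are the generic textbook forms rather than a transcription of the paper's own equations.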
In the leptonic decay of the top quark (t → l + + ν l + b), with α l = 1, the charged lepton is maximally correlated with the top quark spin direction, while the neutrino is correlated more weakly, with α ν = −0.32. In [22], the effect of noncommutative space-time on the charged lepton spin correlation coefficient α l was calculated. We showed that, depending on the value of the noncommutative characteristic scale Λ, α l can deviate significantly from its SM value. In t-channel single top quark production (u + b → d + t), the direction of the spectator jet (d-type quark) is optimal for the spin quantization axis [38]. A precise bin-by-bin comparison of the angular distribution with real data could provide a much better limit on Λ than the limit obtained from the total cross section measurement. In Fig. 3, the end point of the thick red curve corresponds to the lower limit on the Λ axis, assuming a relative uncertainty of 5% on α l . Table 1 shows the lower limit on Λ in GeV assuming various relative uncertainties on the measurement of α l . As can be seen, the best lower limit that we can achieve via spin correlation from single top events is Λ ≳ 980 GeV. Conclusions In conclusion, the noncommutative effects on the single top quark production cross section at the LHC are very small over most of the parameter space (θ below O(0.001) GeV −2 ). Therefore, it seems that there is not much hope of determining possible noncommutative effects directly from the single top quark cross section. However, the distribution of the cosine of the angle between the charged lepton momentum from the top decay and the momentum of the spectator quark in the top quark rest frame is highly sensitive to noncommutative space-time when Λ ∼ 1 TeV. A precise comparison of this angular distribution with real data could provide a much better limit on Λ. We find the lower limit on Λ for different relative uncertainties on the measurement of α; if the experiments can measure α with a precision of 5%, the lower limit on Λ will be 980 GeV.
3,028.2
2012-02-12T00:00:00.000
[ "Physics" ]
Winter Is Coming: Seasonal Variation in Resting Metabolic Rate of the European Badger (Meles meles) Resting metabolic rate (RMR) is a measure of the minimum energy requirements of an animal at rest, and can give an indication of the costs of somatic maintenance. We measured RMR of free-ranging European badgers (Meles meles) to determine whether differences were related to sex, age and season. Badgers were captured in live-traps and placed individually within a metabolic chamber maintained at 20 ± 1°C. Resting metabolic rate was determined using an open-circuit respirometry system. Season was significantly correlated with RMR, but no effects of age or sex were detected. Summer RMR values were significantly higher than winter values (mass-adjusted mean ± standard error: 2366 ± 70 kJ⋅d−1; 1845 ± 109 kJ⋅d−1, respectively), with the percentage difference being 24.7%. While under the influence of anaesthesia, RMR was estimated to be 25.5% lower than the combined average value before administration, and after recovery from anaesthesia. Resting metabolic rate during the autumn and winter was not significantly different to allometric predictions of basal metabolic rate for mustelid species weighing 1 kg or greater, but badgers measured in the summer had values that were higher than predicted. Results suggest that a seasonal reduction in RMR coincides with apparent reductions in physical activity and body temperature as part of the overwintering strategy (‘winter lethargy’) in badgers. This study contributes to an expanding dataset on the ecophysiology of medium-sized carnivores, and emphasises the importance of considering season when making predictions of metabolic rate. Introduction Measurements of energy expenditure can provide insights into the form, function and macroecology of organisms [1][2][3]. Energy expenditure is dependent on a variety of extrinsic and intrinsic factors [4]. In mammals, fundamental measurements of energy metabolism are basal or resting metabolic rate (BMR or RMR). These parameters describe the minimum energy expenditure of an animal at rest and are a function of the cost of somatic maintenance, a key component of the allocation of energy in an organism's life history [5]. In terms of intrinsic factors, a large proportion of the variation in RMR can be explained by body mass [6], but other important factors include age and sex. Basal metabolic rate or RMR are often higher during periods of elevated growth as experienced by juvenile animals [7], and slow with age [8]. Reproduction in mammals entails a range of direct (e.g. gestation and lactation) and indirect (e.g. remodelling of organ morphology) physiological costs, which consequently pose a range of trade-offs [9]. Hence, different levels of investment by either sex may be reflected as variation in RMR. In terms of extrinsic factors, spatial and temporal differences in RMR are associated with variation in environmental temperature. In temperate climates, most mammals maintain a core body temperature that is warmer than their surroundings and heat loss to the environment must be balanced with heat production [10]. Consequently, seasonal variation in ambient temperature can generate corresponding changes in heat production and also adaptations that manifest as differences in RMR [11,12]. In the present study we measure RMR of free-living European badgers (Meles meles) of differing body mass, age, and sex, across different seasons. 
Measuring the energy requirements of badgers is relevant ecologically, as they are one of the largest free-living carnivores in the UK and hence affect a number of other species [13,14]. There have also been relatively few studies on the metabolic rate of wild carnivores and the sample sizes for larger species tend to be small, possibly owing to logistical constraints [15,16]. We investigate whether age affects RMR by testing if badger cubs had a higher RMR than adults owing to high growth rates during the first year of life [17]. We also examine whether there are sex-related differences in RMR, potentially originating from reproductive investment. In addition, we measure the effects of season, postulating that RMR would be lowest during the winter, as badgers are known to exhibit reduced physical activity [18] and body temperature [19][20][21] during 'winter lethargy' when food intake is reduced [17]. Anaesthesia is often necessary when handling wild animals; however, it is known to affect a wide range of physiological functions including oxygen consumption [22,23]. Therefore, we also consider its effect on RMR. Finally, we draw comparisons with allometric predictions for similarly sized mammals to consider badger RMR values in relation to other species. Small mustelid species are thought to have higher metabolic rates for any given body mass (often more than 100% greater) compared with other mammals due to the energetic costs of maintaining a relatively long, thin body. Elevated metabolic rates (approximately 20%) have also been found in large mustelid species [24,25]. However, BMR has only previously been measured in a single badger and this yielded a relatively low value for a mammal, and a mustelid in particular [25,26]. This low value could be a consequence of their semi-fossorial lifestyle, which has been linked to low metabolic rates in other mammals [27], or indeed may simply reflect morphological differences. While season is known to influence metabolic rate in a range of species [28,29], it is rarely controlled for when making allometric predictions of BMR or RMR [30]. Here we examine the phenotypic plasticity of badger RMR in relation to various life-history parameters and season. Study sites and animals The study was conducted primarily at Woodchester Park, Gloucestershire, South-West England, where a free-living badger population has been the subject of long-term research. This was supplemented with badgers captured from nearby short-term study areas at Cirencester (approximately 25 km to the east) and Bath (approximately 40 km to the south). During routine capture-mark-recapture studies (see [31]) data were collected from a total of 28 badger social groups. Badgers were caught in steel mesh live-traps which had been pre-baited with peanuts for 4-8 days prior to setting [32]. Captured badgers were then transferred to holding cages for transport to an examination facility with quiet, darkened surroundings in order to minimise stress. Each animal was anaesthetised with two parts butorphanol tartrate (Torbugesic, Wyeth, Ontario, Canada), two parts ketamine hydrochloride (Ketaset, Wyeth, Ontario, Canada) and one part medetomidine (Domitor, Orion Corporation, Espoo, Finland) administered intramuscularly [33]. Individuals were marked with a unique tattoo at first capture, and information on location, sex, age class, and body mass were recorded [32]. 
Badgers were assigned to one of two age classes: 'cub' (in the first year of life up to January the following year); and 'adult' (>one year old). Individuals that were trapped between mid-June and mid-September were classed as 'summer' samples; those captured between the second half of September and mid-December were classed as 'autumn' samples; those caught between the second half of December and the end of January were classed as 'winter' samples. Trapping was suspended from February to April inclusive owing to the presence of dependent cubs. As the numbers of captured badgers varied with each trapping event, individuals were initially randomly selected for measurement, although during the later stages of the study underrepresented groups were then targeted. Following examination, recovery from anaesthesia, and RMR measurement, all animals were released at their point of capture. Data were also collected from a captive adult female badger resident at a wildlife rehabilitation centre in Somerset, UK. The study was conducted between July 2009 and January 2013 incorporating data from a total of 57 badgers. The same individuals were not frequently re-captured hence repeated measurements were excluded. Resting metabolic rate (RMR) An open-circuit respirometry system [34,35] was used to measure RMR. A metabolic chamber (internal dimensions: 49 cm (H) × 47 cm (W) × 102 cm (L)) made of clear Perspex with an internal volume of 235 L was maintained at 20 ± 1°C by a temperature controller (PELT-5, Sable Systems International, Las Vegas, NV, USA). Fresh air from outside was pumped into the chamber by a mass flow meter (FlowKit 100M, Sable Systems International, Las Vegas, NV, USA) at a rate of 46.12 ± 8.57 LÁmin −1 (mean ± SD, range: 35-70 LÁmin −1 ) corrected to standard temperature and pressure. The flow rate was adjusted according to the mass of the animal within the chamber in an attempt to maintain oxygen depletion between 0.2-0.8% [36]. Within the chamber, a fan (Proline Mini Fan MF10) ensured thorough mixing and rapid equilibration of the gases inside. Rate of oxygen consumption ( _ VO 2 ) and carbon dioxide production ( _ VCO 2 ) were determined using an oxygen and carbon dioxide analyser (FoxBox Field Gas Analysis System, Sable Systems International, Las Vegas, NV, USA) which sub-sampled dried air (Drierite Laboratory Gas Drying Unit, W.A. Hammond Drierite Co. Ltd., OH, USA) from the metabolism chamber at a rate of 500 mlÁmin −1 . The analyser was calibrated to 20.95% O 2 (outside air) prior to the measurement of each animal. Percent O 2 and CO 2 were recorded every five minutes after a badger was placed inside the chamber within a holding cage, allowing it to become accustomed to the experimental conditions (most animals lay down immediately). Subsequently, readings were taken every one to three minutes until a stable value was obtained (Fig 1a, time until stable reading: 86.86 ± 44.35 min (mean ± SD)). Badgers were then removed from the chamber, which was flushed with fresh air at a flow rate of 85 LÁmin −1 for 15 min which was sufficient time to allow the levels of O 2 and CO 2 to return to atmospheric readings. The chamber was cleaned with a disinfectant before the next animal was measured. 
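The seasonal classification described at the start of this section can be expressed as a small helper function; this is a sketch only, and the exact boundary days (taken here as the 15th for "mid-month") are an assumption, since the paper specifies the classes only as mid-June to mid-September (summer), second half of September to mid-December (autumn) and second half of December to the end of January (winter).

```python
from datetime import date

def assign_season(capture: date) -> str:
    """Assign a capture date to the seasonal classes used in the study.
    Boundary days are assumed (15th = 'mid-month'); trapping was suspended
    February-April, and other dates fall outside the defined classes."""
    m, d = capture.month, capture.day
    if (m == 6 and d >= 15) or m in (7, 8) or (m == 9 and d <= 15):
        return "summer"
    if (m == 9 and d > 15) or m in (10, 11) or (m == 12 and d <= 15):
        return "autumn"
    if (m == 12 and d > 15) or m == 1:
        return "winter"
    return "unclassified"

print(assign_season(date(2010, 7, 3)))    # summer
print(assign_season(date(2011, 12, 28)))  # winter
```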
Both _ VO 2 ðml Á min À1 Þ and _ VCO 2 ðml Á min À1 Þ were calculated accounting for dilution and concentration effects: where FR is incurrent flow rate (mlÁmin −1 ), F e O 2 is excurrent O 2 fraction, F i O 2 is incurrent O 2 fraction, F e CO 2 is excurrent CO 2 fraction and F i CO 2 is incurrent CO 2 fraction [37]. Any drift Effect of anaesthesia on RMR To assess the effect of anaesthesia on RMR, measurements were taken during three distinct phases: (1) 'before' anaesthesia ( _ VO 2 prior to injection of anaesthetic agents); (2) 'during' anaesthesia (whilst the animal was recumbent); and (3) 'after' apparent recovery from anaesthesia. The latter measurement was taken after at least two to three hours after the badger had initially woken up and was able to move/stand (Fig 1b). For each animal, these three measurements were taken on the same day. Effect of ambient temperature on RMR We measured the metabolic rate of a captive badger at a range of ambient temperatures: 15, 19, 20, and 23°C. Measurements were carried out in January between approximately 12:00 and 17:00, during the period of minimal activity [39]. The badger was placed in the respirometry chamber where it was allowed to become accustomed to its surroundings. The flow rate was 40 LÁmin −1 . Beginning at the lowest temperature (15°C), percent O 2 and CO 2 were recorded every one to three minutes until a stable value was obtained (as above, time until stable reading: 39.75 ± 18.75 min (mean ± SD)). The temperature of the chamber was then increased to the next increment (i.e. from 15 to 19°C) and the animal was then allowed to settle. We then took measurements every one to three minutes until a stable value was obtained. These procedures were repeated for each of the above temperatures to estimate the range of the thermoneutral zone (TNZ). Comparison of RMR with allometric predictions Resting metabolic rate measurements were compared with the allometric prediction for large mustelid species (weighing 1 kg or greater) from [25] (Eq 3) and for mammals following Kleiber (1961) [40] (Eq 4): In Eqs 3 and 4, M is basal metabolic rate (BMR) in kcalÁd −1 and W is body mass (kg). Measurements were then compared with the prediction from White and Seymour (2003) [41] (Eq 5) which accounts for the variation associated with body temperature, digestive state, and phylogeny. Comparisons were also made with the prediction from McNab (2008) [6] (Eq 6) for carnivores. In Eqs 5 and 6, M is BMR (ml O 2 Áh −1 ) and W is body mass (g). When deriving energetic equivalents from other studies, an oxyjoule conversion factor of 19.9 kJÁL O 2 corresponding to an RQ of 0.76 (the average RQ from the present study) was used if possible. Statistical analyses All analyses were performed using R version 3.2.1 [42]. The lower critical temperature (LCT) was estimated as the temperature at which RMR values ceased to decrease with increasing temperature. The effects of anaesthesia on RMR were examined using a linear mixed effects model (RMR (kJÁd −1 ) * body mass + anaesthesia state + badger ID {random}) with the 'nlme' package [43], followed by a Tukey contrasts post hoc test using the 'multcomp' package [44]. A stepwise Akaike's Information Criterion (AIC) approach was adopted for model selection to investigate the effects of age, sex, and season on body mass. The effect of body mass on RMR (kJÁd −1 ) was examined by linear regression. 
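The dilution-corrected rates referred to above follow the expressions of ref [37], which are not reproduced in this extract. As a hedged reconstruction, the sketch below computes V̇O2 and V̇CO2 from the same measured quantities (incurrent flow and gas fractions) via a dry-gas mass balance with nitrogen conserved, which rests on the same assumptions, and converts V̇O2 to kJ·d−1 with the paper's oxyjoule factor of 19.9 kJ per litre O2; the input values are illustrative only.

```python
def metabolic_rates(fr_in_ml_min, fio2, feo2, fico2, feco2):
    """Flow-through respirometry via a dry-gas mass balance (N2 conserved).
    fr_in_ml_min: incurrent flow (ml/min, STP); the other arguments are
    fractional gas concentrations. A generic reconstruction; the paper uses
    the dilution-corrected expressions of ref [37]."""
    # Excurrent flow from conservation of the non-reactive gas fraction
    fr_out = fr_in_ml_min * (1 - fio2 - fico2) / (1 - feo2 - feco2)
    vo2 = fr_in_ml_min * fio2 - fr_out * feo2      # ml O2 consumed per min
    vco2 = fr_out * feco2 - fr_in_ml_min * fico2   # ml CO2 produced per min
    return vo2, vco2

def rmr_kj_per_day(vo2_ml_min, oxyjoule_kj_per_l=19.9):
    """Energy equivalent using the paper's oxyjoule factor (19.9 kJ/L O2, RQ ~ 0.76)."""
    return vo2_ml_min / 1000.0 * oxyjoule_kj_per_l * 60 * 24

# Illustrative numbers only: ~46 L/min flow and ~0.2% O2 depletion
vo2, vco2 = metabolic_rates(46_000, 0.2095, 0.2075, 0.0004, 0.0020)
print(f"VO2 = {vo2:.0f} ml/min, RQ = {vco2 / vo2:.2f}, RMR = {rmr_kj_per_day(vo2):.0f} kJ/d")
```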
To investigate the effects of age, sex, and season on RMR (kJ·d−1), multiple analysis of covariance (ANCOVA) models were fitted with body mass as a covariate. Resting metabolic rate values were compared with the various allometric predictions using a Welch-corrected one-way analysis of variance (ANOVA), followed by a Dunnett's many-to-one contrasts post hoc test [45]. Cubs were excluded from comparisons with allometric predictions, as juveniles of other mustelid species have previously been found to have elevated metabolic rates due to growth [7]. The percentage difference of RMR between seasons was calculated by dividing the absolute difference between mass-adjusted means by their arithmetic mean. The percentage error of allometric predictions was calculated by dividing the difference between measured and predicted values by the predicted value. Normality and variance assumptions were checked using Shapiro-Wilk and Levene's tests, respectively. Statistical significance was accepted at p ≤ 0.05. Body mass is reported as the mean ± standard deviation. Resting metabolic rate values are reported as the mass-adjusted mean ± standard error (using the 'effects' package [46]) of the energetic equivalents unless otherwise stated. Effects of ambient temperature and anaesthesia on RMR Metabolic rate was found to decrease with increasing temperature until 20°C, which was deemed to be within thermoneutrality (Fig 2). There was a significant effect of anaesthesia on RMR (F 2,10 = 11.9, p = 0.0023; before: 2522 ± 108 kJ·d−1, n = 6; during: 1841 ± 109 kJ·d−1, n = 6; after: 2431 ± 94 kJ·d−1, n = 8; adult badgers n = 8). There was no interaction between body mass and anaesthesia state (p = 0.74). Post hoc tests indicated that there were significant reductions in RMR during anaesthesia compared with both before (p < 0.001) and after (p < 0.001) anaesthesia. In contrast, RMR measurements taken before and after anaesthesia did not differ from one another (p = 0.8; Figs 1 and 3). Thus, for the purposes of analysis, measurements recorded before and after anaesthesia were combined. Comparison of RMR with allometric predictions The measured RMR values resulted in season-specific regression equations for summer, autumn, and winter (equations not reproduced here; see Fig 5). [Fig 3 caption: Comparison of RMR (kJ·d−1) from 8 adult badgers 'before', 'during', and 'after' anaesthesia (intramuscular administration of two parts butorphanol tartrate, two parts ketamine hydrochloride and one part medetomidine). There was a significant depression in RMR during anaesthesia compared with measurements taken before (p < 0.001) and after (p < 0.001); between two to three hours after waking up ('after') there was no longer a significant effect (p = 0.8). doi:10.1371/journal.pone.0135920.g003] The mass-adjusted mean ± standard error summer RMR was 2366 ± 70 kJ·d−1 (n = 18). This was higher than, but not significantly different to, the mean autumn RMR (2129 ± 91 kJ·d−1, n = 11, p = 0.11). Resting metabolic rate in winter (1845 ± 109 kJ·d−1, n = 8) was significantly lower than in summer (p < 0.001), but not significantly different to autumn (p = 0.13). The regression lines are the values predicted by the ANCOVA model (RMR (kJ·d−1) ~ body mass + season) coefficients. The badger BMR value from Iversen (1970) [25,26], 1389 kJ·d−1, is also shown for comparison (▴). Variation in RMR Season. Resting metabolic rate was significantly higher during summer than winter.
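As a brief worked illustration of the percentage calculations defined above, the sketch below reproduces the 24.7% summer-winter difference quoted in the abstract and the 17.5% shortfall of Iversen's single-badger BMR relative to the Kleiber prediction; it assumes the familiar Kleiber form 70·W^0.75 kcal·d−1 for Eq 4 and a conversion factor of 4.184 kJ per kcal.

```python
def kleiber_bmr_kj_per_day(mass_kg):
    """Kleiber (1961) mammalian prediction (assumed form of the paper's Eq 4):
    BMR = 70 * W^0.75 kcal/d, converted to kJ/d."""
    return 70.0 * mass_kg ** 0.75 * 4.184

def percent_difference(a, b):
    """Paper's definition: absolute difference divided by the arithmetic mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def percent_error(measured, predicted):
    """Paper's definition: (measured - predicted) / predicted."""
    return (measured - predicted) / predicted * 100.0

# Seasonal contrast of the mass-adjusted means reported in the study
print(f"summer vs winter: {percent_difference(2366, 1845):.1f}%")                        # ~24.7%
# Iversen's single-badger BMR (10.30 kg, 1389 kJ/d) against the Kleiber prediction
print(f"Iversen vs Kleiber: {percent_error(1389, kleiber_bmr_kj_per_day(10.30)):.1f}%")  # ~ -17.5%
```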
This concurs with previous studies that indicate a reduction in body temperature in badgers during cold winter months in order to conserve energy and rely on fat reserves [19][20][21]. The observed change in metabolic rate in this study might be explained by annual variation of the thyroid hormone thyroxine. Thyroxine is known to increase BMR in humans [47,48], and is associated with hibernation in other species such as the European hedgehog (Erinaceus europaeus) and black bear (Ursus americanus) [49,50]. High thyroxine utilisation rates have also been suggested to explain the unusually high BMR of the shrew (Sorex vagrans) [51]. In badgers, blood plasma concentrations of thyroxine have previously been shown to fall rapidly to low levels by December until a rise occurs during March [52]. Thus, a winter depression in thyroxine would correlate with the reduction in metabolic rate seen in the current study. In addition, relatively high levels of thyroxine have been shown to coincide with the onset of moulting and maximal hair growth in adult badgers [53]. Moulting is an energetically costly process in other species [54,55], which may also contribute to the seasonal RMR differences in this study. [41] (Eq 5) for mammals accounting for body temperature, digestive state and phylogeny are included for comparison. The badger BMR value from Iversen (1970 [25,26] is also shown (▴). doi:10.1371/journal.pone.0135920.g005 Resting Metabolic Rate in Badgers It is possible that seasonal differences in metabolic rate vary across the wide geographic distribution of badgers, as does the extent of winter lethargy [18]. Badgers are known to undergo seasonal changes in body mass largely due to the accumulation of subcutaneous fat reserves [56], which may also be of significance given that both fat mass and fat-free mass contribute separately to BMR [57,58]. Higher summer BMR has been reported in other mustelid species, the American mink (Neovison vison), and Siberian polecat (Mustela eversmannii) [59,60]. Age and sex. We did not observe age related differences in RMR, which was unexpected as juvenile animals generally experience elevated levels of growth [7,61], and badger cubs are known to gain body mass and moult continuously [17,62]. Further work with a larger number of cubs may reveal differences. We also found no difference in RMR between males and females. This may be because we did not trap badgers from February to May (for welfare reasons), which is when most cubs are born and suckled [63] and, therefore, when high energetic costs in females might be expected. Although pregnancy commonly occurs from December to February [17], which was within our measurement period, it has previously been found that gestation in badgers is not associated with a loss in body condition, in contrast to lactation [64]. In the American badger (Taxidea taxus), the energy requirements of gestation are thought to be relatively small, and up to 16 times less than those of lactation [65]. Hence, sex differences in RMR might not have been detectable under the conditions of this study. Previous measurements and allometry In the current study, we compared badger RMR values with a range of predictions. Metabolic rate of the European badger has only previously been measured in a single animal (during summer months, mean body mass: 10.30 kg) which recorded a lower value than that predicted for mammals with a BMR of 1389 kJÁd −1 (17.5% lower than the Kleiber (1961) [40] prediction (Eq 4)) [25,26]. 
This is equivalent to a food intake of approximately 110 earthworms (Lumbricus terrestris) per day (with an energetic value of 12.59 kJ per average 4.27 g worm) [66]. In the current study, we found that the majority of individual badger RMR readings were higher than that reported by Iversen (1970 [25,26]. Our values were also significantly greater than predicted for large mustelids (Eq 3) during summer, but not in autumn or winter. Morphologically, badgers are long and wedge-shaped, with a low-slung and elongated skeleton that conforms to general mustelid proportions [63]. Although less pronounced than in smaller mustelid species, it is possible that this leads to an energetically disadvantageous surface-to-volume ratio and may contribute to the relatively high RMR values in the present study. Future work It is important that experimental procedures do not bias results by influencing the metabolic rate of animals as they are being measured. Lower than expected BMR (compared with [25]) has been reported in the black-footed ferret (M. nigripes) and Siberian polecat (M. eversmannii), which was attributed to handling procedures that reduced stress. In the present study, minimization of stress was an important consideration and consequently captured badgers remained inactive for long periods, although a passive or reactive [67] stress response cannot be discounted. Trapping has previously been shown to influence levels of fecal cortisol metabolites in badgers [68]. Variation in measurements can also arise if the animals were not in a truly post-absorptive state. We were not able to determine the time badgers had spent in traps before collection from the field, which could have ranged between two and 18 hours before RMR was measured. Hence, although some food could have remained in the digestive tract these potential sources of error are likely to have been random with respect to age, sex and season and so should not have influenced our results. The current study measured RMR whilst each of the previously published allometric predictions are of BMR and thus not directly comparable. Nevertheless, our measured values were closest to those predicted for large mustelids, and did not differ significantly during autumn and winter. Finally, the full shape of the _ VO 2 -temperature response curve should be clarified. In Iversen (1970) [26], the measurements used to calculate BMR were recorded at 22°C. When the temperature decreased to 18-16°C there was an increase in metabolic rate indicating that this range was below the lower critical point. In the present study, we selected 20 ± 1°C based on our measurements of a captive badger at a range of ambient temperatures. Clearly there is a need for data from more individuals at different times of year to account for possible age and season related influences on the TNZ [69,70], such as those resulting from changes in body composition and levels of subcutaneous fat. Conclusions In cold regions, badgers are known to become less active during winter when they spend longer periods within their setts, often huddling [71], in what is a temperature buffered microclimate [72]. This reduced activity, termed winter lethargy, is associated with several physiological changes including depressed thyroxine plasma concentrations, increased utilisation of adipose reserves, and reductions in body temperature [19,52,73]. Possible environmental cues for these behavioural and physiological adaptations include photoperiod, winter temperature, and precipitation [18,19,74,75]. 
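The earthworm equivalence quoted at the start of this section can be checked with one line of arithmetic; the figures used are those given in the text (1389 kJ·d−1 for Iversen's single-badger BMR and 12.59 kJ per average worm).

```python
bmr_kj_per_day = 1389.0   # Iversen's single-badger BMR quoted above
kj_per_worm = 12.59       # energetic value of an average 4.27 g Lumbricus terrestris
print(f"{bmr_kj_per_day / kj_per_worm:.0f} earthworms per day")   # ~110, as stated
```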
In the present study, we provide evidence that a reduction in RMR occurs in winter, suggesting that badgers adopt an energy conservation strategy at times when thermoregulatory challenges are greatest and food is least available. Such adaptations may also be of importance when making allometric predictions of metabolic rate. Supporting Information S1 Dataset. Compressed.zip containing data used in anaesthesia, body mass, RMR, and allometric prediction analyses. (ZIP)
5,476.8
2015-09-09T00:00:00.000
[ "Biology" ]
Functional Charge Transfer Plasmon Metadevices Reducing the capacitive opening between subwavelength metallic objects down to atomic scales or bridging the gap by a conductive path reveals new plasmonic spectral features, known as charge transfer plasmon (CTP). We review the origin, properties, and trending applications of this modes and show how they can be well-understood by classical electrodynamics and quantum mechanics principles. Particularly important is the excitation mechanisms and practical approaches of such a unique resonance in tailoring high-response and efficient extreme-subwavelength hybrid nanophotonic devices. While the quantum tunneling-induced CTP mode possesses the ability to turn on and off the charge transition by varying the intensity of an external light source, the excited CTP in conductively bridged plasmonic systems suffers from the lack of tunability. To address this, the integration of bulk plasmonic nanostructures with optothermally and optoelectronically controllable components has been introduced as promising techniques for developing multifunctional and high-performance CTP-resonant tools. Ultimate tunable plasmonic devices such as metamodulators and metafilters are thus in prospect. There have been ongoing theoretical and experimental advancements to understand the possibility of direct dynamic charge transport between plasmonic objects in the absence of capacitive coupling between plasmonic particles. Relatively, researchers have shown the possibility of plasmon-driven charge transfer process in DNA-mediated metallic NP dimers [31,34]. In this context, one can fill the gap between metallic NPs with DNA (as a scaffold for the growth of the NPs within the gap), and by increasing the metallic NP concentration, the transition from capacitive to conductive coupling can be formed. Besides, for the conductively bridged NPs, it is confirmed that due to the shuttling of photoexcited electrons across the junction, the CTP spectral feature appears in the lower energies far from the superradiant dipole moment [41]. On the other hand, as an alternative route to this method, it is well-accepted that particles with atomic gaps in between (below~0.5 nm) are able to sustain CTPs through molecular quantum tunneling principle [40,42]. Theoretically, the electron tunneling across an atomicscale gap at optical frequencies cannot be explained using classical electrodynamic theory and requires quantum mechanical description. In spite of possessing interesting properties, CTPs are inherently suffering from lack of spectral tunability. Recent demonstrations that the functionality of CTP feature can be optimized by integrating bulk metallic systems with thermally and electronically controllable materials show why tunable CTP spectral features are important for implementing novel and advanced plasmonic tools [36,[43][44][45][46]. More precisely, this shortcoming has successfully been addressed by combining plasmonic structures with, for example, phase-change materials (PCMs) [43,44,47], graphene [45,46], and thermally tunable substances (i.e., InSb). In this review, we summarize recently introduced mechanisms that underpin the electron transition between plasmonic particles and trending applications of functional CTPs. One of the goals of this article is to present a comprehensive depiction of various modalities developed for the excitation of tunable CTPs. 
Besides, we explain the state-ofthe-art approaches that have been proposed for enhancing the tunability of CTPs and the use of this spectral phenomenon in designing practical photonic tools such as modulators, switches, and nonlinear harmonic signal generators. Next, we illustrate how a careful integration of subwavelength plasmonic resonators with optically, thermally, and electronically functional materials leads to the emergence of multifunctional nanooptoelectronic and terahertz (THz) devices. Charge Transfer Plasmons in Conductively Bridged Particles and Resonators Direct manipulation of charges via redistribution and transfer has been introduced as a simple approach to excite CTP spectral features and studied both theoretically and experimentally in nanoscale plasmonic systems [31][32][33][34][35][36]. Possessing a different principle compared to capacitive resonance interference, the direct charge transfer between plasmonic resonators and particles enables tailoring advanced plasmonic tools. Additionally, this direct transition of charges between the metallic structures delivers exceptional abilities to actively control the charge shuttling by varying the intensity of the incoming beam, which could be used for the development of near-infrared and THz plasmonic devices. A remarkable example for the excitation of CTP mode is observed in a two-member dimer consisting of a pair of proximal metallic NPs that are connected with a conductive junction (insets in Figure 1(a)) [32]. In the method developed by Wen et al. [32], a dimer antenna with a conductive junction has been considered as the intermediate case between a conventional metallic dimer with capacitive opening and a nanorod. This allows to understand the formation of new spectral features due to the presence of a metallic bridge between NPs. Figure 1(a) demonstrates the numerically obtained scattering efficiencies of three different structures with judiciously defined geometries that are stated in the figure caption. Under p-polarized beam illumination, in the dimer limit, the capacitive coupling between the two nanodisks leads to strong hybridization of plasmons and excitation of a superradiant mode around ∼1.95 eV. On the other hand, bridging the dimer with a conductive nanowire gives rise to the formation of two peaks in the spectrum correlating with the dipolar mode (at ∼2.1 eV) and a narrow CTP mode at lower energies far from the classical dipole (around ∼0.96 eV). Lastly, for the plasmonic nanorod, compared to the dimer structure, the dipolar resonance is red-shifted to ∼1.3 eV. As can be seen in the charge plots in Figure 1(b), the fundamental difference between the CTP extreme and dipolar mode is the distinguished oscillation of the electric current density in the junction of the conductive nanowire-bridged plasmonic dimer, while there is no such a feature in both dimer and nanorod structures. Moreover, the charge distribution at the energy of CTP explicitly represents the splitting of charges across the structure (see Figure 1(b), (2)). To further study the role of the conductive junction on the excitation of CTP, Wen and teammates have shown that the junction geometry and conductance strictly determine the position and intensity of the CTP feature. To this end, the researchers employed CTP-resonant nanostructures based on aluminum and gold substances. 
Considering a nanowire with the frequency-dependent AC conductivity of σðωÞ, and specific length (l), width (w), and thickness (t), the corresponding conductance can be written as As a critical parameter, in Figure 1(c), the junction conductance as a function of nanowire width is plotted, numerically and experimentally. Obviously, by increasing the width of the junction, the relative conductance increases slightly for gold structure, while this value sharply increases for the aluminum nanowire. Here, for structures with the equivalent geometries of the bridging nanowire, aluminum structures hold a higher junction conductance than gold structures. Further analysis for the junction properties and plotting the CTP position as a function of junction conductance reveal that the CTP mode exhibits a dramatic sensitivity to the junction conductance in the small conductance limit (Figure 1(d)). However, this sensitivity becomes strikingly less as the conductance increases beyond a specific value (~50) for all the studied cases. Recent studies have also shown that CTP mode can be viewed as a spectral feature at much lower energies (e.g., far infrared, THz) when the coupling between neighbor resonators is in the weak regime. As a leading work in the THz regime, Ahmadivand et al. [33] have represented the transition from capacitive coupling to direct charge transfer using a symmetric cluster of V-shaped metallic microblocks. The insets in Figures 1(e)-1(g) illustrate the scanning electron microscopy (SEM) images of the unit cell, where the diameter of the central disk gradually increases. As obvious in the normalized transmission amplitude (NTA) profile (Figure 1(e)), the capacitive coupling plays a fundamental role in the emergence of two pronounced minima in the spectral response. The deeper minimum around 1.22 THz correlates with the dipolar resonant mode, while the quadrupole mode appears as a weaker dip around 1.85 THz. Here, the middle disk enhances the strength of the induced multipolar modes via the dipole-dipole interaction [48]. To continue, increasing the diameter of the disk monotonically and providing a touching regime between the disk and V-shaped assemblies lead to the elimination of the quadrupolar moment and emerging of the CTP dip around~0.5 THz (Figure 1(f)). Due to the formation of insignificant touching spots between the blocks and the central disk, one can expect the direct flow of charges across the unit cell similar to the optical systems and quantum transitions. Further increases in the diameter of the central disk to realize the overlapping regime give rise to a damping in the energy of both dipolar and CTP modes projected from the spectral characteristics of the entire unit cell (Figure 1(g)). The cross-sectional electric-field (E-field) intensity along the unit cell axis helps to perceive the role of the conductive disk in the excitation of CTP feature. As plotted in Figures 1(h)-1(j), by inserting and increasing the size of the disk, the Efield confinement reduces dramatically owing to losing the capacitive coupling between the V-shaped blocks. Additionally, the overlapping areas between the V-shape and central resonators allow to induce sharper and boosted plasmonic Figure 1: (a) Numerically computed scattering efficiencies of a single dimer (blue), a conductive nanowire-linked dimer (red), and a nanorod (black). The insets show the schematics of the studied nanostructures. 
The diameter of the disk is 95 nm, the width and the length of junction wire in the junction are 15 and 30 nm, and the thickness of all structures is 35 nm. (b) Charge plots at the position of scattering peaks for the structures in (a): a dipolar plasmon for the nanorod, capacitively coupled superradiant dipolar resonance (1) and CTP resonance (2) for the nanowire-bridged dimer, and a CTP resonance for the dimer. (c) Experimentally and numerically obtained junction conductances of nanowire-bridged dimers at CTP resonances as a function of nanowire width for aluminum and gold nanostructures. (d) CTP resonance as a function of the junction conductance for nanowire-bridged dimers with varying junction substances [32]. Copyright 2016, American Chemical Society. Characterized and simulated normalized transmission spectra for the metallic assembly for the presence of (e) a nanodisk between gaps, (f) a touching disk to the V-shaped resonators, and (g) an overlapping disk. Insets are the corresponding SEM graphs for disk diameter variations. (h-j) Cross-sectional E-field concentration (|E|) diagrams for the presence of a nontouching disk, presence of a touching disk, and presence of an overlapping disk in the middle of the unit cell, respectively. (k) Junction resistance variations as a function of the intermediate disk diameter. Inset is the CTP position as a function of conductive disk diameter [33]. Copyright 2016, Optical Society of America. dipolar and CTP modes in the THz limit. The geometry of the merging regions enables the shuttling of charges between the linked V-shaped pixels of a standalone assembly. This can be better understood by defining the resistance (R) of the disk as a function of the junction geometry [49]: where σðωÞ is the frequency-dependent AC conductivity, d is the length between junctions, and δ is the contact width at the junctions due to overlapping. Moreover, according to Ohm's law, the conductance is defined by [50] where n m is the electron density of metal, e is the elementary charge, τ is the mean-time between collisions, and m e is the mass of electron. Enlarging the diameter of the overlapping disk lengthens the distance between junctions, and the corresponding resistance increases accordingly (Figure 1(m)). Although the dynamic charges would be able to transit across the trail, the dissipative losses can be significant because of the inherent disk resistance. When the incident THz beam is resonant with the spectral line shape, the fairly strong confinement of charges across the lossy metallic resonators leads to a drastic electron decay. Thus, by extending the size of the middle disk, the induced dipolar and CTP modes shift towards the lower energies due to the enhanced dissipative absorption losses [51,52]. Charge Transfer Plasmons in Nonlocal and Quantum Regimes As discussed in Introduction, the general idea to model the light-matter interactions and the resulting electromagnetic field distribution is mainly based on solving coherent charge density oscillations using classical electromagnetic theory by applying Maxwell's equations [4]. By shrinking the interparticle distance down to nanoscales, localization and intensity of the incident electric field can be enhanced [53][54][55]. 
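Before turning to the quantum regime, the bridge-conductance expressions discussed above (a rectangular junction of conductivity σ, width w, thickness t and length l, with the Drude/Ohm's-law form σ = n e²τ/m_e) can be sketched numerically. The free-electron parameters for gold below are textbook values, and the DC limit is used purely for illustration; the papers reviewed here evaluate the frequency-dependent AC conductivity at the CTP resonance, so the numbers should not be read as the reported junction conductances.

```python
E_CHARGE = 1.602e-19    # C
M_ELECTRON = 9.109e-31  # kg

def drude_dc_conductivity(n_electron_m3, tau_s):
    """Drude/Ohm's-law conductivity sigma = n e^2 tau / m_e (DC limit)."""
    return n_electron_m3 * E_CHARGE ** 2 * tau_s / M_ELECTRON

def junction_conductance(sigma, width_m, thickness_m, length_m):
    """Conductance of a rectangular bridge, G = sigma * w * t / l."""
    return sigma * width_m * thickness_m / length_m

# Textbook free-electron parameters for gold (illustrative assumption)
sigma_au = drude_dc_conductivity(5.9e28, 2.5e-14)
# Wire geometry of the bridged dimer in Fig. 1: w = 15 nm, l = 30 nm, t = 35 nm
g = junction_conductance(sigma_au, 15e-9, 35e-9, 30e-9)
print(f"sigma(Au, DC) ~ {sigma_au:.2e} S/m, junction G ~ {g:.2f} S")
```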
Nevertheless, when the system requires to operate beyond the nanometer gap regime (i.e., subnanometer openings) or contains a quantum topology (e.g., quantum dots, molecules, or atoms), nonlocal screening effects (due to quantum nature of electron) modify the plasmonic response of structure [56]. In this limit, quantum mechanical effects (e.g., collective quantum electron tunneling, nonlocal transport) occur and the whole system must be described under nonlocal conditions [57][58][59], by implementing advanced theoretical approaches [60][61][62]. This enables the calculation of the confinement of the induced surface charges appropriately [63]. Such computations in quantum systems have successfully been done using a quantum-corrected model (QCM) [42]. Basically, this concept is an updated version of the time-dependent density-functional theory (TDDFT), which combines the classical electromagnetic framework with the collective quan-tum electron tunneling [38]. As declared in Savage et al. [64], when the interparticle distance is d ≥ 0:4 nm, one can treat the system using classical Maxwell's descriptions to understand the plasmonic interactions. For d < 0:4 nm, classical formulation starts to diverge as a result of the increased rate of critical electron tunneling between proximally located NP surfaces (~0.3 nm), where the plasmon interactions start to get into the quantum regime. For d < 0:3 nm, the quantum tunneling dominates the spectral response of the entire system. By considering a rectangular barrier, this phenomenon can be predicted as the following:d QR = ln ð3qαλ/2πÞ/2q, whered QR is the critical gap size, q is the semiclassical electron tunneling wavenumber, λ is the optical plasmon wavelength, and α is the structural constant. More realistically, the addition of "coherent quantum transport" parameter into the formula given above boosts the tunneling rate, which gives rise to the quantum-tunneling CTPs (more detailed information about the calculations can be found in Ref. [64]). In the upcoming part, we briefly review the recent advances in the quantum-mechanical charge shuttling process in nearly touching metallic NPs. As a pioneer work, Wu et al. [39] demonstrated the first theoretical explanation of plasmon resonances between extremely close NPs due to FN tunneling. In spite of possessing certain conditions to satisfy the direct tunneling (e.g., subnanometer gap), this approach provides a novel outlook to induce CTP modes for a different set of gap sizes by applying high-intensity illumination. From the active nanophotonic device perspective, the proposed concept is significant owing to its ability to control the charge shuttling process by changing the intensity of the incident light. As discussed previously, approaching two NPs to each other (nearly touching regime) and the presence of a metallic path between them lead to transition of oscillating electric charges across the nanoplatforms, in addition to the dipolar mode of the dimer system. As the conductive path becomes narrower, the excited CTP resonance shifts to the longer wavelength and several higher-order modes (e.g., hybridized dipolar and quadrupolar modes) appear [65][66][67]. Nevertheless, when the conductive bridge turns out to a~0.4-0.5 nm gap, the CTP mode disappears, meaning that a quantum mechanical framework should be taking into account to model possible electron tunneling effects. 
To this end, in Figure 2(a), the authors demonstrated possible tunneling mechanisms as direct tunneling (effective for small interparticle distance) and FN tunneling (dominant when a strong electric field is applied) in a simple schematic. For the direct tunneling, electrons tunnel through the square barrier (from A to C); while for the FN tunneling, electrons first tunnel through the triangular barrier (from A to B), then transport to C. In general, the electrons exist in B are called "space charges." Hence, this process can also be termed as space charge-based charge transfer. Moreover, the researchers here utilized the microscopic version of Ohm's law (see Equation (2)) to specify the tunneling electrons in the gap [68]. It is important to note that for the calculation of tunneling electron density in the gap, Wu and colleagues applied the Kohn-Sham density functional theory, to model exchange correlation and Coulomb interactions between electrons [69,70]. Further, they successfully explained the resulting CTP mode in light of the obtained tunneling electron density (n gap ) and gap conductivity (σ gap ), which are determined based on a numerical methodology developed by the authors [39]. To briefly discuss, εðωÞ = 1 + iðσ gap /ωε 0 Þ is utilized as a part of QCM to derive the optical response of the system. As demonstrated in Figure 2(b), when σ gap is larger than 1:145 × 10 5 S/m, a CTP mode is emerged. For the "tunneling" part indicated in Figure 2(b), a large plasmon-enhanced electric field (10 10 V/m) (or "gap field") is required to bring enough electron in the space charge area to induce the CTP resonant mode. In the following, in Figure 2(c), to better understand the physical mechanism behind the process, σ gap and n gap are compared for the gap sizes of 0.4 nm, 0.6 nm, and 0.8 nm in three different gap field regimes. In the direct tunneling limit, the tunneling barrier can be controlled through the charge potential when the gap field is small, which means σ gap primarily rely on the gap length. As expected, when the gap is 0.4 nm, electrons can straightly tunnel along the narrower barrier (see Figure 2(d), (i)), with the increased tunneling probability of electrons, but for larger gaps (e.g., 0.8 nm), the charge flow process could not be sustained because of the wider tunneling barrier. To address this issue, the employed electric field intensity can be amplified (~10 10 V/m) to start the FN tunneling procedure, in which the tunneling barrier is reshaped as plotted in Figure 2(d), (ii). In this regime, σ gap and n gap are very responsive to the intensity of the external illumination, meaning that these parameters can be easily modified depending on the applied electric field intensity. For instance, when the field intensity 5.5×10 6 (3.6×10 10 ) 1.2×10 6 (1.8×10 10 ) 7.3×10 5 (1.5×10 10 ) 3.2×10 5 (1.2×10 10 ) 1.1×10 5 (1.0×10 10 ) 4.6×10 4 (9.0×10 9 ) 1.3×10 4 (8.0×10 9 ) 9.0×10 2 (6. is altered from 10 9 V/m to 10 11 V/m, σ gap is increased from 1 to 10 7 S/m. Any further increase in the field intensity would lower the level of barrier where all tunneling electrons can stay over the barrier (see Figure 2(d), (iii)). In this regime, σ gap is not dependent on the applied field intensity and gap size, as illustrated in Figure 2(c). This results in perfect transmission of electrons through the gap and extremely large σ gap to retain the CTP mode for the studied gap dimensions. 
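A short numerical sketch of the quantum-corrected-model ingredient used above, the effective gap permittivity ε(ω) = 1 + iσ_gap/(ωε₀), is given below; the threshold conductivity 1.145 × 10^5 S/m is taken from the text, while the 900 nm evaluation wavelength is an assumption made only for illustration.

```python
import math

EPS0 = 8.854e-12   # F/m
C_LIGHT = 2.998e8  # m/s

def qcm_gap_permittivity(sigma_gap_s_per_m, wavelength_m):
    """Effective gap permittivity of the quantum-corrected model:
    eps(omega) = 1 + i * sigma_gap / (omega * eps0)."""
    omega = 2 * math.pi * C_LIGHT / wavelength_m
    return complex(1.0, sigma_gap_s_per_m / (omega * EPS0))

# Sweep the gap conductivity through the quoted CTP threshold at an assumed 900 nm wavelength
for sigma in (1e3, 1.145e5, 1e7):
    eps = qcm_gap_permittivity(sigma, 900e-9)
    print(f"sigma_gap = {sigma:.3e} S/m -> eps_gap = {eps.real:.1f} + {eps.imag:.2f}i")
```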
Lastly, in Figures 2(e)-2(g), simulated extinction spectra for different gap size (d), σ gap , and the gap field are presented to prove the tunability of the CTP peak. With the gap size of 0.4 nm, the CTP mode always arise either via direct tunneling (Figure 2(g)) or via FN tunneling (Figure 2(f)). For larger gap sizes (e.g., 0.6 nm and 0.8 nm), high intensity external irradiation (~10 10 or 10 11 V/m) is necessary to excite the CTP resonant mode (see Figures 2(f) and 2(g)). In another recent study, Kulkarni and Manjavacas [71] investigated the quantum effects related to the charge transfer process by examining the spectral response of gold dimer with a two-level system (TLS) (e.g., an atom or a molecule). In this understanding, fully quantum-based computations showed that CTPs are only recognizable if one of the energy levels of the studied two-level system is in resonance with the Fermi level of the dimer system, by enabling the electron transition along the junction. At resonance, the absorption spectrum of the system and the conductance of the junction are calculated, and the outcomes indicated that the conductance of the junction is equivalent to one quantum of conductance, which is G 0 = 2e 2 /h. In Figure 3(a), a schematic for the studied subwavelength system is illustrated. This system consists of two identical gold spheres of 32 au in diameter and a small sphere with a diameter of 6 au to model the TLS. In all calculations, the whole platform is assumed to be in a vacuum. To explore the capabilities of the system, TDDFT in the adiabatic local density approximation [72,73] is utilized. To this end, the authors only take into account the conduction electrons of the metal and the gold NPs are modelled based on jellium approximation, where the ionic background charge has a uniform charge density (n 0 ). Here, n 0 is judiciously selected to correlate with the density of gold, which is equal to a Wigner-Seitz radius of 3 au. In a similar fashion, TLS is formed by having a Wigner-Seitz radius to make sure that TLS only adds a single electron to the system. Without applying any external field, the one-electron potential of the platform is defined as V eff ðrÞ = V 0 ðrÞ + V H ½nðrÞ − n 0 ðrÞ + V xc ½nðrÞ, where V H is the Hartree potential, n is the electronic density, V xc is the exchange-correlation potential (also known as the Perdew-Zunger functional as a part of local density approximation [74]), n 0 is the background ionic charge density, and V 0 is the uniform background potential which incarcerate the electrons within the NPs and the TLs. For the gold NPs, this potential is fixed at -4.6 eV (5.2 eV lower than the vacuum level) to be able to have a proper electron spill-out. As plotted in Figure 3(b), the energy levels of the TLS are tuned according to the Fermi level of the NPs. Additionally, the background and the resulting equilibrium one-electron potentials are represented as gray and red colors, respectively, for V TLS = −4:6 eV. It is evident from Figure 3(b) that the potential barrier of the junction (indicated as dashed black line) is reduced by virtue of the TLS. In the following, the associated equilibrium electronic density is demonstrated in Figure 3(c), which clearly indicates Friedel oscillations due to the discretization of the electronic levels [74]. Besides, the electronic density in the junction is a proof of the single energy level of the TLS. 
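For reference, the quantum of conductance mentioned above evaluates to roughly 7.7 × 10^-5 S; the two-line computation below is included only to make the magnitude of that single-channel junction conductance concrete.

```python
E_CHARGE = 1.602e-19   # C
H_PLANCK = 6.626e-34   # J*s

g0 = 2 * E_CHARGE ** 2 / H_PLANCK   # quantum of conductance, G0 = 2e^2/h
print(f"G0 = {g0:.3e} S (equivalent junction resistance ~ {1 / g0 / 1e3:.1f} kOhm)")
```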
Although the TLS decreases the potential barrier, the electronic density in the gaps between the NPs does not reach to zero. In Figure 3(d), the authors plotted the absorption spectrum for two different cases (by considering, the incident field is polarized along the z-axis): (a) the bare metallic dimer (gray line) and (b) the dimer with the TLS (red line). For the latter case, the background potential of the TLS is selected as equal to the NPs. In both cases, a bonding dipolar mode (BDP) [75,76] appears around 5 eV. For the low-energy part of the spectrum, three new peaks are formed, due to the addition of TLS into the dimer system. To comprehensively evaluate the origin of these modes, the corresponding charge distributions are studied on the surface of the NPs (see Figure 3(e)). The results clearly show that the mode around 5 eV has a dipolar nature, and the other three modes indicate a monopolar distribution pattern, which is the solid evidence of the charge transfer through the junction. What is more, the modes at 1.1 eV and 1.55 eV display an oscillation of charges inside the NPs owing to the finite size effects. Based on these results, one can state that the only way to induce a CTP in this system is contingent upon the use of the levels of the TLS as the conductive paths (see the schematic in Figure 3(f)). Thus, the authors analyze the effect of background potential of the TLS on the response of the system. As demonstrated in Figure 3(g), V TLS is varied to manipulate the position of the levels of the TLS. In this plot, each dot represents an energy level of the whole platform. Depending on the localization rate of the TLS levels, the size of the dot can be bigger, which is beneficial to differentiate localized levels in the TLS. A group of states are localized close to the Fermi level of the system, shown in black dashed line, for small numbers of V TLS . These states become more localized as the background potential is getting deeper and the states move to lower parts in the energy diagram. However, because of the strong localization, the interplay between the NPs is diminished. Particularly, when V TLS is lower than -16.2 eV, an extra level starts to localize on the TLS. Lastly, in Figures 3(h) and 3(i), the absorption spectrum of the system (at a low-energy part) is presented for the V TLS values considered in Figure 3(g). The results explicitly verify the requirement of having a localized state in the TLS, whose energy is near the Fermi level of the system, to be able to generate a CTP mode. Functional Charge Transfer Plasmons As discussed in the previous section, in the quantum tunneling of energetic optically driven electrons, possessing an active control over the CTP spectral feature is limited to modifying the incident field intensity and/or morphological variations [42,45]. On the other hand, the induced CTPs via the direct transition of optically driven electrons across the bulk metallic paths between NPs suffer from the inherent lack of tunability. Recently, these challenges have effectively been addressed by using optoelectronically and optothermally tunable components in the purpose of the excitation of functional CTPs [43][44][45][46][47]77]. Such an active tunability allows for the exploration of several integrated plasmonic instruments and applications owing to its great potential for the next-generation multifunctional technology. 
Beyond the fundamental studies, now research in CTP devices has been focused on the experimental attempts to efficiently transform its capabilities into the real-world applications, where two fundamental issues must be addressed: functionality and scalable fabrication. While the former has been successfully realized by employing optoelectronically and thermally tunable compounds, in considering the later concern for industrial applications, one of the major challenges is scalable cost-effective fabrication of the functional metastructures. This can be done, for instance, by developing robust techniques based on nanolithography-free techniques. In this section and following subsections, we briefly review the recent techniques that have been utilized to optimize the tunability of CTPs. We will demonstrate how the integration of the plasmonic nanostructures with optoelectronically and optothermally controllable components improves the tunability of CTP spectral features. Graphene-Enhanced Functional CTP Devices. A oneatom thick layer of sp 2 -hybridized carbon atoms, known as graphene, has received copious interest due to its significantly high electron mobility, mechanical flexibility, and exquisite optical properties [78][79][80][81][82]. Graphene-enhanced fundamental applications include but not limited to the light harvesting [82][83][84], ultrafast optics [85][86][87], nonlinear photonics [88][89][90], and quantum effects [91,92]. One of the most interesting properties of 2D carbon sheet is the possibility of controlling the photoconductivity of this monolayer via modifying the generated carrier density [93,94]. This exquisite advantage has effectively been accomplished by modeling the electronic properties of graphene in terms of massless Dirac fermions [93][94][95]. This feature enables graphene to display strong infrared plasmons and made it as a promising component in implementing advanced nanophotonic devices [96][97][98][99][100]. The tunable AC photoconductivity of graphene allows to provide semimetallic behavior with an optical conductivity as a function of quantum conductance as [93,94] where h is Planck's constant. Analogous to the plasmonic components, the spectral properties and plasmonic response of graphene sheet can be estimated by the Drude absorption model for a wide range of carrier densities [93,94,101]. To describe the free carrier photoconductivity with parabolic dispersion in a 2D sheet, one can demonstrate the temperature-independent model as [68,102] σ in which m is the electron mass and Γ is the transport scattering rate. The unprecedented levels of beam confinement and EM field enhancement by graphene allow for having electrostatic control over the plasmonic response in the absorption spectra [96]. The successful example of such quantum effect was provided to explain the quantum effects in the plasmonic response of graphene nanostructures linked by a thin molecular junction. Thongrattanasiri and colleagues [77] demonstrated that the intrinsic characteristics of graphene enable to tune the absorption spectra of a dimer structure via adding small number of atoms. Using first-principle analyses, the plasmonic response of the entirely graphene-based dimer was studied for the intermediate junction with varying atomic row widths (4)(5)(6)(7)(8). The designed triangular structures forming the bowties are oriented with respect to the graphene lattice by assuming having armchair edges. 
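To make the graphene conductivity statements above concrete, the sketch below evaluates the standard local intraband (Drude-like) sheet conductivity for massless Dirac carriers, σ(ω) = (e²E_F/πħ²)·i/(ω + iΓ/ħ), and compares it with the universal interband value e²/4ħ. This Dirac-fermion form is a widely used counterpart of the parabolic-band Drude expression quoted in the text; the Fermi level and damping values are assumptions chosen for illustration.

```python
import numpy as np

E_CHARGE = 1.602e-19   # C
HBAR_JS = 1.055e-34    # J*s
HBAR_EVS = 6.582e-16   # eV*s

def graphene_drude_sigma(omega_rad_s, fermi_ev, gamma_ev):
    """Intraband sheet conductivity of graphene (massless Dirac carriers, local response):
    sigma(omega) = (e^2 E_F / (pi hbar^2)) * i / (omega + i Gamma/hbar), in siemens."""
    prefactor = E_CHARGE ** 2 * (fermi_ev * E_CHARGE) / (np.pi * HBAR_JS ** 2)
    return prefactor * 1j / (omega_rad_s + 1j * gamma_ev / HBAR_EVS)

sigma0 = E_CHARGE ** 2 / (4 * HBAR_JS)          # universal interband conductivity e^2/(4 hbar)
freqs_hz = np.array([1e12, 1e13, 5e13, 1e14])   # THz to mid-infrared (illustrative)
sigma = graphene_drude_sigma(2 * np.pi * freqs_hz, fermi_ev=0.4, gamma_ev=0.0016)
for f, s in zip(freqs_hz, sigma):
    print(f"f = {f:.1e} Hz  |sigma|/sigma0 = {abs(s) / sigma0:.1f}")
```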
A successful example of such quantum effects was provided for graphene nanostructures linked by a thin molecular junction. Thongrattanasiri and colleagues [77] demonstrated that the intrinsic characteristics of graphene make it possible to tune the absorption spectra of a dimer structure by adding a small number of atoms. Using first-principles analyses, the plasmonic response of the entirely graphene-based dimer was studied for intermediate junctions of varying width (4-8 atomic rows). The designed triangular structures forming the bowties are oriented with respect to the graphene lattice so that they have armchair edges. This prevents the losses that naturally arise in zigzag-edge nanostructures due to the presence of zero-energy electronic edge states [103]. Thongrattanasiri et al. [77] used the random-phase approximation (RPA) and the finite-element method (FEM) to analyze the plasmonic response of the graphene-based bowties. Fixing the side length of the triangles to 8 nm in all cases (roughly 10^3 atoms per triangle), they set the Fermi energy of the structures to E_F = 0.4 eV and the intrinsic damping to ħτ^-1 = 1.6 meV, corresponding to a DC mobility of 10^4 cm^2/V·s [104,105]. In Figure 4(a), n denotes the number of carbon hexagons required to join the graphene triangles for m = 2. As the extinction cross-section shows explicitly, increasing the junction width shifts the spectral features from higher to lower energies. Figure 4(e) details this effect: the plasmonic features are arranged as a function of energy and junction width for different values of the junction length n, with the circle size proportional to the plasmon-mode intensity, defined as the area under the corresponding plasmon peak in the extinction profile. In the narrow-junction limit, the spectra show pronounced plasmonic features around ~0.47 eV. When the junction becomes wider, the spectra are dominated by lower-energy plasmonic features around ~0.22 eV. For intermediate junction widths, there is a complex transition between these two regimes, with intermediate-energy features around ~0.35 eV. As depicted in Figure 4(f), the junction length n does not significantly influence these spectral changes: the three distinct behaviors of the plasmon energies described for narrow, intermediate, and wide bridges persist for all junction lengths. Further analyses clarify the nature and properties of the plasmons in the graphene-based structures. In Figures 4(g)-4(l), the induced charge distribution profiles are plotted from narrow to wide junctions at fixed junction length n. Figure 4(g) illustrates the polarization profile for the high-energy plasmons (~0.47 eV) in the capacitive coupling regime (nontouching condition, m = 0), validating a dipole-dipole interaction that is not altered when a narrow junction (m = 2) is inserted between the two triangular resonators of the structure (see Figure 4(h)). Conversely, when the width of the molecular bridge increases towards the intermediate regime (m = 6, with plasmon energies around ~0.35 eV), a distinct dipolar polarization pattern appears across the junction (see Figure 4(i)). In this limit, the junction plasmons become the dominant feature, arising from the local electronic properties of the junction. Lastly, for the wider bridge (m = 8), similar to the fully metallic structures studied previously, the CTP becomes the dominant spectral response (see Figure 4(j)). The signs on the charge distribution maps qualitatively correspond to the distribution of the plasmon-induced charge as a function of distance from the dimer center, which is exhibited in Figure 4(k).
Interestingly, increasing the length (n) of the junction between neighboring graphene nanotriangles leads to effects consistent with the conclusions for the width of the bridge. Strictly speaking, Thongrattanasiri and co-workers verified that the classical description of the investigated graphene nanostructures fails to reproduce the plasmonic behavior derived from first principles. This is obvious, for instance, for intermediate junction widths, where the conventional computations miss the intermediate-energy junction plasmons observed in the quantum calculations. In addition, another disagreement between the classical and quantum modes has been observed: the traditional approach predicts a smooth fade-out of the dipole-dipole mode exhibited by nonoverlapping triangles upon insertion of the bridge, while the CTP shows a singular behavior, migrating towards zero energy in the limit of vanishing junction width. Figure 4(l) illustrates that the electronic states contributing to the plasmon of separated graphene nanotriangles are almost identical to those of the plasmons of individual nanotriangles, involving only a minor amount of Coulomb interaction and plasmon hybridization. Upon insertion of a junction with intermediate width, the strength of the hybridized modes increases significantly, leading to the emergence of novel electronic junction states (see Figure 4(m)). A noteworthy point here is the observation of two new junction plasmons around zero energy (the Dirac point); this was anticipated by the researchers because of the presence of carbon zigzag edges in the molecular bridge [106-108]. Therefore, in the junction-plasmon regime, the excitation of plasmons involves electrons or holes in the corresponding electronic junction states. Figure 4(n) demonstrates the strength of these electron-hole pair dipole transitions in the graphene-mediated structure as a function of initial and final energies (the area of the circles indicates the strength of the electron-hole pairs). As can be seen in this panel, the plasmon energies (shown by solid curves) do not overlap with the dominant electron-hole transitions. This energy mismatch reveals that the optical excitations are not single-particle transitions, strongly supporting the collective plasmonic nature of the bowtie optical excitations. The advantage of this technique is the possibility of tuning the doping level of the molecular graphene nanobridge, which enables active control over the charge transfer and results in the excitation of tunable spectral features such as functional CTPs. To continue, we briefly consider the recent advances in enhancing the functionality of CTP resonances based on graphene-mediated metallic metastructures. Recently, Ahmadivand and colleagues developed an approach to induce CTPs in gate-controlled, graphene-monolayer-integrated particle clusters in the THz spectrum towards tailoring multifunctional metamodulators (see the schematic in Figure 4(o)) [46].
Using the exquisite AC photoconductivity of the graphene islands [94], the researchers demonstrated that the dynamic frequency-dependent conductivity of graphene is directly related to the extinction of the transmitted wave, defined by [109,110] 1 - T/T0 = 1 - 1/|1 + Z0 σ_AC(ω)/(1 + n_s)|^2, where Z0 is the vacuum impedance, n_s is the refractive index of the thin substrate, and the optical conductivity of graphene, σ_AC(ω), can be obtained using the RPA principle. At the charge neutrality point of the graphene islands (the high-resistance regime), this leads to strong capacitive coupling of the plasmons and the excitation of a typical electric dipole. On the other hand, when the back-gate voltage is applied, the low resistance of the graphene islands gives rise to direct charge transfer between the metallic resonators via the conductive atomic bridge and to the excitation of THz CTPs. Figure 4(p) exhibits the SEM image of the fabricated assemblies in periodic arrays, with the aligned graphene islands at the middle spot between the aluminum V-shaped blocks, implemented on a multilayer SiO2/ITO substrate. Here, the ITO sublayer acts as a conductive surface for the voltages applied via the gate to control the AC photoconductivity of the graphene islands. The resistance variations of the graphene area in the presence of the metallic objects were measured as a function of back-gate bias (V_bg), as shown in Figure 4(q). The charge neutrality point of the graphene islands was determined by neglecting the inherent contact resistance of the electrodes (V_bg^cnp = 9.5 V). Moreover, the maximum resistance was measured to be around ~3.52 kΩ, corresponding to the dielectric regime, while the lowest resistance was monitored for the graphene islands in the n-type phase, correlating with the conductive regime. The geometrical parameters of the proposed unit cell are superimposed in the inset of this panel. To demonstrate the spectral properties of the device, the researchers first computed and measured the spectral response under longitudinally polarized THz beam exposure at room temperature (300 K), as depicted in Figure 4(r). In these analyses, the researchers judiciously changed the applied gate bias in order to control the transfer of charges across the graphene-mediated unit cell. As plotted in both panels, a distinct dipolar mode is excited around 3.5 THz due to the capacitive coupling between the neighboring metallic resonators. The dipolar mode is unchanged in all regimes because it is intrinsically independent of the charge transfer between the blocks. Additionally, for graphene islands in the n-type doping limit, the atomically thin junction between the metallic V-shaped resonators becomes conductive and enables photoinduced electrons to cross between the blocks, resulting in the excitation of a pronounced CTP spectral feature around ~1.95 THz. In the high-bias regime, the central graphene islands exhibit high resistance and capacitive coupling becomes dominant, eliminating the CTP mode because the charge transfer is blocked. Figure 4(s) illustrates the charge distribution maps for both the dipolar and CTP modes, obtained by the FEM method. Theoretically, this unique functionality was understood and achieved by exploiting the intraband AC conductivity of the carbon monolayer [102,111].
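To make the role of the sheet conductivity concrete, the following Python sketch evaluates the extinction relation above for a Drude-type σ_AC(ω); the substrate index, DC conductivity, and scattering rate are illustrative assumptions, not the measured device values.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' code): extinction of a thin
# conducting film on a substrate,
#   1 - T/T0 = 1 - 1/|1 + Z0*sigma_AC/(1 + n_s)|^2,
# evaluated with a Drude-type sheet conductivity.

Z0 = 376.730313668          # vacuum impedance, Ohm
n_s = 1.95                  # assumed substrate refractive index

def sigma_drude(omega, sigma_dc, gamma):
    """Drude sheet conductivity sigma_dc / (1 - i*omega/gamma), in S."""
    return sigma_dc / (1.0 - 1j * omega / gamma)

def extinction(omega, sigma_dc, gamma):
    s = sigma_drude(omega, sigma_dc, gamma)
    return 1.0 - 1.0 / np.abs(1.0 + Z0 * s / (1.0 + n_s))**2

freqs = np.linspace(0.5e12, 5e12, 5)            # 0.5-5 THz
for f in freqs:
    ext = extinction(2 * np.pi * f, sigma_dc=1e-3, gamma=2 * np.pi * 3e12)
    print(f"{f/1e12:4.1f} THz -> extinction {ext:.3f}")
```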
The intraband AC conductivity of graphene contains both a Drude-like term and a nonzero conductivity at the charge neutrality point. Hence, careful tuning of this parameter is possible by adjusting the Fermi energy level E_F through the applied back-gate bias, given by [28] E_F = ħ v_F √(π C_A |V_bg - V_bg^cnp|), where ħ is the reduced Planck's constant, v_F is the Fermi velocity (10^6 m/s), and C_A is the capacitance per unit area per charge of the multilayer substrate under the graphene islands. This enables active tuning of the frequency-dependent intraband photoconductivity of graphene by modifying the applied back-gate bias. Consequently, one can directly tune the conductance, resistance, and reactance of the graphene-enhanced bridge.
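As a rough numerical illustration of this gate-tuning relation, a minimal sketch follows; the capacitance value is an assumption chosen for illustration, while the charge neutrality point follows the quoted experiment.

```python
import numpy as np

# Minimal sketch (assumed parameters): gate tuning of the graphene Fermi
# level via E_F = hbar*v_F*sqrt(pi*C_A*|V_bg - V_cnp|), where the gate
# capacitance converts bias into carrier density, n = C_A*|V_bg - V_cnp|.

hbar = 1.054571817e-34      # J*s
e = 1.602176634e-19         # C
v_F = 1e6                   # Fermi velocity, m/s

C_A = 7.2e14                # assumed capacitance per unit area per charge, 1/(m^2 V)
V_cnp = 9.5                 # charge neutrality point from the experiment, V

def fermi_energy_eV(V_bg):
    n = C_A * abs(V_bg - V_cnp)            # carrier density, m^-2
    return hbar * v_F * np.sqrt(np.pi * n) / e

for V_bg in (9.5, 5.0, 0.0, -10.0):
    print(f"V_bg = {V_bg:6.1f} V  ->  E_F = {fermi_energy_eV(V_bg):.3f} eV")
```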
One other area that deserves special attention is optical modulation. Optical and optoelectronic metamodulators have previously been designed around conventional resonant structures such as Fano and EIT resonant metamolecules [112-117]. The dramatic dissipative and inherent losses of these classical resonant nanostructures have driven researchers to replace such metasurfaces with new ones capable of much faster and more efficient modulation, such as toroidal resonances [24,115,118,119] and CTP spectral features [43-46]. As the latest research by Ahmadivand et al. [46] notes, THz plasmonic metamodulators are strategic optical components that have faced fundamental restrictions such as low efficiency, slow operating speed, and a lack of tunability for manipulating THz waves. Various methods have been used to address these shortcomings in the design of highly responsive, efficient, and fast plasmonic modulators; although some of these techniques were effective, the resulting devices do not provide ultrafast switching together with a high modulation depth. In the work of Ahmadivand and colleagues, the gated graphene-mediated plasmonic device exploits the remarkable electrical and optical features of both graphene and the metallic unit cell to enhance the intensity and tunability of the induced resonant spectral feature. In Figure 4(t), the recorded THz signal amplitude from the photodetector under applied bias variations is shown. Sweeping the signal as a function of modulation frequency up to 10^5 Hz yielded a remarkable modulation depth of up to 72% and fast operation, with rise and fall times of about 19 μs and 21 μs, respectively. From the modulation-bandwidth principle, Figure 4(u) exhibits the normalized modulation magnitude, confirming a 3 dB operation bandwidth of ~19.5 kHz and ~22 kHz for the numerical simulations and experimental measurements, respectively. Phase-Change Material-Enhanced Functional CTP Devices. As elaborated in Sections 2 and 3, inserting a conductive layer underneath a plasmonic nanosystem or reducing the interparticle distance between metallic NPs down to the subnanometer scale leads to the excitation of CTPs at lower energies in the spectrum. Recent efforts have also exploited metallic nanowires between adjacent NPs to make charge flow feasible [32,120]. In the latter two cases, the conductance and geometry of the conductive junction between the plasmonic elements are critically important for tuning the spectral response and the local field distribution of the system. Although the mentioned concepts have provided remarkable outcomes for the future of CTP-based real-world applications, they suffer from a lack of active tunability, which would require solid changes in the optical and electrical characteristics of a given design. One possible way to overcome this issue is to use chalcogenide phase-change materials (PCMs) as optoelectronically controllable structures. Over the past years, these semiconductor alloys, especially Ge2Sb2Te5 (GST), have attracted copious interest for tailoring plasmonic and all-dielectric platforms from the visible to the infrared for purposes such as switching/modulation [115,121,122], sensing [123,124], and beam steering [125,126], owing to their quick phase-change capability (with a crystallization temperature below 477 °C), high cyclability, thermal stability, and versatile nonvolatile optoelectronic contrast between the opposite states [127-130]. In the following, we concisely present the use of GST, a phase-change glass, to develop dynamically tunable, all-optical near-infrared devices that switch between dipolar and CTP resonances depending on the phase of GST. In Figure 5(a), Ahmadivand et al. [43] demonstrated, for the first time, a numerical and theoretical study of a metallo-dielectric dimer platform as an NIR all-optical switch using a PCM. By introducing a GST nanowire into the center of the conductive bridge that links the gold dimer, the researchers obtained remarkable changes in the position and origin of the induced plasmonic modes. To implement this active switching mechanism, they exploited the substantial changes in both the resistivity and permittivity of GST upon the optically stimulated phase transition. The proposed device is illustrated in Figures 5(a) and 5(b), with the geometrical parameters superimposed. As mentioned in earlier parts of this review, the conductivity of the bridging path between NPs is critical for generating CTPs. Successful control of the conductivity of the metallo-dielectric link was therefore achieved by delivering the energy required to initiate phase toggling with an additional light source. Here, the wavelength-dependent conductivity of the GST section was defined as [131] σ_GST(λ) = (c/2λ)(1 - ε_eff(λ)), where ε_eff(λ) is the effective permittivity of the GST portion and c is the velocity of light in vacuum. Depending on the crystallization level of the GST, the Lorentz-Lorenz effective-medium description can be applied [132-134]: (ε_eff(λ) - 1)/(ε_eff(λ) + 2) = Σ_j n_j (ε_j(λ) - 1)/(ε_j(λ) + 2), where n_j is the density (volume fraction) of the jth phase. To model the photothermal heating process, the authors utilized the multicapacitive cascading method [135,136], in which the absorbed photothermal heat energy (E_H) in the platform is defined as [137] E_H = A Q_abs F(r), where Q_abs is the numerical absorption coefficient, F(r) is the optical fluence of the impinging pulse, and A is the area of the dimer antenna (more comprehensive information can be found in the Supplementary Information of the article).
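The following sketch illustrates these two relations, mixing assumed (not tabulated) a-GST and c-GST permittivities with the Lorentz-Lorenz rule and converting the result to σ_GST(λ).

```python
import numpy as np

# Minimal sketch (illustrative permittivities, not tabulated GST data): mixing
# amorphous and crystalline GST with the Lorentz-Lorenz relation,
#   (eps_eff-1)/(eps_eff+2) = sum_j f_j*(eps_j-1)/(eps_j+2),
# then converting to the wavelength-dependent conductivity
#   sigma_GST(lam) = (c/(2*lam))*(1 - eps_eff(lam)).

c = 2.99792458e8                       # speed of light, m/s

def lorentz_lorenz(eps_a, eps_c, m_cryst):
    """Effective permittivity for crystalline fraction m_cryst in [0, 1]."""
    rhs = (m_cryst * (eps_c - 1) / (eps_c + 2)
           + (1 - m_cryst) * (eps_a - 1) / (eps_a + 2))
    return (1 + 2 * rhs) / (1 - rhs)   # invert (eps-1)/(eps+2) = rhs

def sigma_gst(lam, eps_eff):
    return (c / (2 * lam)) * (1 - eps_eff)

lam = 2.3e-6                           # wavelength near the CTP peak, m
eps_a = 16.0 + 0.1j                    # assumed a-GST permittivity at lam
eps_c = 36.0 + 20.0j                   # assumed c-GST permittivity at lam

for m_cryst in (0.0, 0.5, 1.0):
    eps = lorentz_lorenz(eps_a, eps_c, m_cryst)
    print(f"m = {m_cryst:.1f}: eps_eff = {eps:.2f}, sigma = {sigma_gst(lam, eps):.3e}")
```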
Furthermore, the normalized extinction spectra of the dimer system are presented for the following four configurations of the bridge: full gold, air, amorphous GST (a-GST), and crystalline GST (c-GST) (see Figure 5(c)). For the completely gold-bridged dimer system, a bright dipolar mode (a shoulder resonance) is excited around 0.73 μm because of the capacitive coupling between the satellite gold nanodisks. Additionally, charge shuttling across the conductive junction gives rise to a CTP peak at 2.4 μm. When a small air gap (10 nm in length) is introduced into the monolithic gold nanowire, charge transfer between the opposite sides of the system is hindered. Similar to the full-gold case, a dipolar mode appears at higher energies, while a pronounced dipolar peak is observed around 1.7 μm, owing to the strong capacitive coupling between the gold nanorods. Moreover, when the gap is filled with GST (initially a-GST), the dipolar peak around 1.7 μm red-shifts to 2.2 μm with slightly increased intensity, due to the negligible extinction coefficient (k ~ 0) of a-GST in this regime [138]. Once the GST becomes fully crystallized (c-GST), its resistivity decreases by six orders of magnitude and charge flow starts to dominate the spectral response of the system. As a result, a CTP peak forms at 2.3 μm, considerably red-shifted (δ ~ 100 nm) relative to the dipolar mode of a-GST because of its absorptive behavior at low energies. It should be noted that, for all the studied conditions, the position and amplitude of the induced dipolar shoulder are not affected by the changes made to the gold nanobridge; to verify this, the extinction profile of a bare gold dimer is plotted in the inset of Figure 5(c). Next, in Figures 5(d), (i, ii) and 5(e), (i, ii), the differences in the E-field distributions clearly demonstrate the presence of the CTP mode for the full-gold structure. In the a-GST case, opposite charges accumulate around the dielectric region as well as in the surrounding gold nanodisks (see Figure 5(f), (i, ii)). On the contrary, in the c-GST limit, the intensity of the E-field is reduced and the excited charges can still shuttle across the bridge, owing to the minimized capacitive coupling around the GST region of the bridge (see Figure 5(g)). Lastly, the suitability of the proposed metallo-dielectric device for all-optical NIR switching has been analyzed. The corresponding analysis in Figure 5(h) shows that the nanodevice with a 100 nm long GST portion is the best choice for quick and effective switching, with the following features: (a) switching from the amorphous to the fully crystalline phase in a few ns and toggling back to the amorphous state in hundreds of femtoseconds (fs), (b) a δ ~ 300 nm shift of the resonance position, and (c) an 88% modulation depth at 1.55 μm, the telecommunication band. In another recent study, by Nooshnab and Ahmadivand [44], a novel CTP-resonant, optothermally controllable metamodulator is demonstrated. As indicated in Figure 5(i), a six-member gold hexamer nanocluster is placed on top of a ring-shaped GST layer to generate actively tunable plasmonic modes. With this approach, manipulation of the charge transfer process is realized by changing the phase of the GST sublayer. To operate at the telecommunication band, the geometrical parameters given in Figure 5(j) are judiciously selected. In the c-GST limit, a prominent CTP peak emerges in the vicinity of 1550 nm, owing to the metallic nanocluster and the conductive sublayer underneath; additionally, a dipolar shoulder forms at higher energies.
When the phase of the GST subsurface is reverted to a-GST through optical heating, it acts as a dielectric material and the bonding dipolar mode becomes dominant in the spectral response of the system (see Figure 5(k)). In principle, under cylindrical and vortex beam excitations, a plasmonic hexamer can be tailored to sustain pronounced Fano resonances [139,140]; nevertheless, in this particular case, the gold hexamer does not support a dark mode under plane-wave excitation. Furthermore, similar to the previously mentioned work, the authors utilized the Lorentz-Lorenz effective-medium theory [132-134] to model ε_eff(λ) of the GST sublayer in these calculations. Moreover, the optically generated thermal response was quantified using T(r,t) = (A Q_a φ/1.77ντ) exp(-(t - t0)^2/τ^2) [137], where τ is the time constant of the incident light, φ is the beam fluence, r is the distance from the source, and t0 is the time delay. To verify the origin of the induced modes, the corresponding charge distribution profiles for the proposed device are investigated in Figure 5(l). In the a-GST limit, the excited charges oscillate in the same direction, a characteristic signature of the dipolar mode; in the CTP regime, the charges are almost equally separated along the whole structure, in line with the polarization of the impinging light. Next, the modulation performance of the platform is examined. To this end, the reflection and transmission modulation characteristics of the hexamer on the c-GST sublayer are plotted in Figure 5(m). Based on the obtained results, one can state that the transition from the dielectric to the conductive phase gives rise to the formation of a new spectral feature around the telecommunication band. Besides, the proposed metallo-dielectric platform provides a prominent modulation depth (up to ~98%) in the NIR band (see Figure 5(n)). As a final consideration, the authors pointed out the lossy behavior of the proposed plasmonic system; in light of the calculated insertion-loss values, they minimized the possible dissipative losses with the help of the direct charge transfer enabled by the c-GST sublayer. Outlook. In recent years, the field of plasmonics has experienced rapid progress in understanding the dynamics of conductively bridged and nearly touching metallic NPs. As described in this focused review, many intriguing physical effects have been predicted in systems supporting distinct CTP excitations. So far, various theoretical and experimental investigations have been performed to realize an oscillating electric current along the conductive junction, the key principle of CTP excitation, depending on the type of nanosystem. However, related investigations have been limited to excitation approaches and to verifying this principle by standard measurements. Now that the CTP spectral feature has been realized in a broad range of platforms in modern nanophotonics, we believe it is time to explore, numerically and experimentally, the modalities for enhancing the functionality of this spectral phenomenon and, eventually, to find its useful practical applications. We also speculate that CTP states possess a strong potential to make a profound impact on the development of the coming generation of multifunctional nanophotonic instruments.
More precisely, the transition between hybridized dipolar and distinct CTP modes has been explicitly demonstrated under the capacitive and conductive coupling regimes, respectively. The highly attractive features of CTP resonances have opened the door to novel active devices and applications, including but not limited to surface-enhanced IR absorption (SEIRA), surface-enhanced Raman scattering (SERS), optical switching, modulation, and waveguiding; however, active manipulation of these resonances using functional materials has largely been overlooked. Conclusions. In this review, we briefly presented the recent accomplishments in the use of versatile materials for actively tunable CTP-based nanoscale tools. As indicated in the considered studies, active control of the CTP mode is possible using functional materials such as graphene and GST, rather than by applying morphological variations. The exceptional optical and electrical characteristics of these materials have allowed researchers to design active CTP-resonant devices. We envisage that this review will provide a detailed understanding of the evolution from passive to active CTP-based platforms and pave the road for developing next-generation nanophotonic devices that reach new functionalities.
Synergistic Catalysis of Water-Soluble Exogenous Catalysts and Reservoir Minerals during the Aquathermolysis of Heavy Oil: Oil serves as the essential fuel and economic foundation of contemporary industry. However, demand for traditional light crude oil has outstripped its supply, making it challenging to meet the energy needs of humanity. Consequently, the extraction of heavy oil has become crucial to addressing this demand. This research focuses on the synthesis of several water-soluble catalysts that can work together with reservoir minerals to catalyze the hydrothermal cracking of heavy oil. The goal is to effectively reduce the viscosity of heavy oil and lower the cost of its extraction. The experiments showed that when oil sample 1 underwent hydrothermal cracking at 180 °C for 4 h, with water and catalyst dosages of 30% and 0.2% of the oil sample, respectively, the synthesized Mn(II)C reduced the viscosity of oil sample 1 by 50.38%. The investigation further revealed that the combination Mn(II)C + K exhibited a significant synergistic catalytic effect on viscosity reduction: the viscosity reduction rate climbed from 50.38% to 61.02%, and with the hydrogen donor isopropanol it rose further to 91.22%. Several methods, including freezing point analysis, thermogravimetric analysis, DSC analysis, component analysis, gas chromatography, wax crystal morphology analysis, and GC-MS analysis of the aqueous organic matter derived from the heavy oil, were applied after the different reaction systems. These analyses confirmed that the viscosity of the heavy oil was decreased. By studying the reaction mechanisms of model compounds and analyzing the aqueous phase, it was found that the reaction largely involves depolymerization of macromolecules, scission of heteroatom-containing chains, hydrogenation, ring opening, and related effects. These actions diminish the van der Waals forces and hydrogen bonding among the heavy components, impede the formation of a network structure in the heavy oil, and efficiently decrease its viscosity.
Introduction. Due to the ongoing growth of the worldwide population and the continuing advancement of the social economy, there has been a substantial rise in the annual global demand for oil resources. Previously developed oil fields worldwide have been excessively exploited, resulting in a significant decline in crude oil production. Many oil fields have reached a stage of high water content, and the usual light crude oil has been excessively consumed. As a result, it has become challenging to meet the energy demands of humanity. According to the US Geological Survey, global oil demand is projected to rise by over 40% by 2025 due to economic and social progress. The world possesses a substantial amount of heavy oil resources, with an estimated geological reserve of roughly 815 billion tons, making up about 70% of global oil reserves; these reserves are significantly larger than those of traditional crude oil [1]. The main countries with substantial heavy oil resources are Canada, Venezuela, Russia, the United States, and China. Hence, the recovery of viscous crude oil holds great significance [2]. China's energy security will encounter unprecedented problems as a result of the restricted availability of conventional oil and the surging demand for oil. Given its status as the wealthiest developing nation globally, the Chinese government has prioritized the establishment of a reliable oil supply as a crucial national strategy to fulfill the growing demand for oil. China, with its booming economy, has become the largest importer of crude oil globally. As depicted in Figure 1, China's oil imports have grown consistently, reaching 502 million tons by 2022. China's reliance on imported crude oil is also steadily growing, reaching 71.2% by 2022, as depicted in Figure 2. Hence, the exploration and extraction of non-traditional crude oil has been acknowledged as a significant and practical alternative for China to counterbalance its diminishing conventional oil output and enhance its energy security [3,4]. China's geological petroleum resources include a significant amount of heavy oil, making up approximately 25% of the total. This amounts to around 20 billion tons, with over 4 billion tons being recoverable [5,6]. China's heavy oil is primarily concentrated in the Shengli, Liaohe, Tahe, Tuha, and other oil fields [7].
Extracting heavy oil is highly challenging due to its intricate composition and the complicated structure of the deposit, which further complicates the extraction process [8,9]. Heavy and extra-heavy oils possess a significant amount of gum and asphaltene substances, which make the transportation and refining of oil more complex, resulting in higher costs for refining heavy oil products. Hydrothermal cracking technology is commonly employed to break chemical bonds in heavy oil molecules by using water as a catalyst at high temperatures. This process converts the heavy components into lighter ones, reducing their content and the viscosity of the heavy oil [10-13]. Water-soluble catalysts are among the chemicals commonly used in the petroleum industry. Hot water is also a cheap and safe solvent for heavy oil; therefore, water-soluble catalysts were the earliest catalysts used for crude oil cracking. Maity et al. [9] used water-soluble transition metal ruthenium (Ru) and iron (Fe) catalysts in their asphalt upgrading experiments, with desulfurization efficiencies of 21% and 18%, respectively. The first-row transition metals and Al3+ ions have significant catalytic activity towards thiophene and tetrahydrothiophene; these metal ions can convert larger molecules into smaller ones by cleaving the C-S bond. Zhong [11] investigated the changes in viscosity and molecular weight of Liaohe heavy oil under the catalytic effects of eight metal ions. The findings indicated that all the metal ions employed exhibited a distinct catalytic effect on reducing the viscosity of Liaohe heavy oil. Specifically, Fe, Co, and Mo can reduce the viscosity of the heavy oil by 60%, while the inclusion of the hydrogen donor naphthalene can achieve a viscosity reduction of over 90%. In addition, Chen et al. [14] studied the catalytic effect of a water-soluble transition metal complex on heavy oil from the Yumen Oilfield, which reduced its viscosity by more than 70% at 180 °C. Research has shown that transition metal compounds have a significant catalytic effect on C-S bond cleavage. The complexation of metal atoms with organic sulfur is strengthened under the action of ligands, which breaks the C-S bond and initiates acid polymerization and a water-gas shift reaction, thus reducing the viscosity of heavy oil and improving its quality [15-17]. During the process of crude oil generation, the various mineral components of the formation are in close contact with the oil and play an important role in catalytic viscosity reduction. The reservoir minerals are mainly composed of clay and non-clay minerals. High-carbon hydrocarbon molecules donate electrons to Lewis acid sites on mineral surfaces, creating free radicals; the rearrangement of these free radicals promotes the scission of C-C bonds and the generation of short-chain alkanes. Therefore, in this paper, sodium citrate was used as a ligand to prepare a water-soluble catalyst, and the viscosity reduction effects on heavy oil aquathermolysis of the water-soluble catalyst, of reservoir mineral catalysis, and of their synergistic combination were investigated. The objective was to combine the two methods in order to obtain a significant decrease in the viscosity of heavy oil at low temperatures.
Infrared Analysis. Infrared spectroscopy was used to characterize the water-soluble catalyst Mn(II)C, as depicted in Figure 3. The absorption peak of ligand C at 3456 cm-1 is the stretching vibration of O-H, the strong absorption peak at 1592 cm-1 is the stretching vibration of C=O in carboxylic acid, the absorption peak at 1405 cm-1 is the in-plane bending vibration of O-H, and the absorption peak at 1280 cm-1 is the stretching vibration of C-O in carboxylic acid. Compared with ligand C, the peak positions of Mn(II)C are blue-shifted, mainly because the ligand forms coordination bonds with manganese ions. Thermogravimetric Analysis. Figure 4 shows the thermogravimetric-differential thermal analysis (TG-DTA) curves of the catalyst Mn(II)C and ligand C.
Thermal decomposition can be divided into the following stages. First, below 200 °C is the water-loss stage, mainly involving the precipitation of organic matter and its decomposition into water vapor and CO2; the mass loss of C in this stage is about 14.24%, and that of Mn(II)C is about 9.70%. During the second stage, at 200-350 °C, an exothermic peak is seen, mostly caused by the breakdown and oxidation of certain compounds; the mass loss of C in this stage is 14.92%, and that of Mn(II)C is 15.20%. Finally, at 350-450 °C, a strong exothermic peak appears for Mn(II)C at 353 °C, the exothermic peak at 420 °C corresponds to exothermic carbon combustion, and the mass loss in this stage is about 15.71% [18-20]. Above 450 °C, the decomposition of both substances is essentially stable. Based on these data, it is evident that Mn(II)C was effectively produced and remains stable within the temperature range of hydrothermal cracking. Changes in Viscosity. Figure 5 demonstrates that the viscosity reduction rate after simple hydrothermal cracking of the heavy oil was 43.18%, highlighting the significant contribution of water to both the thermal cracking and the viscosity reduction of heavy oil. The addition of reservoir minerals resulted in a 9.85% increase in the viscosity reduction rate compared to the blank sample, suggesting that the reservoir minerals promoted the decrease in viscosity of the heavy oil. The water-soluble catalyst Mn(II)C gave a 7.20% increase in the viscosity reduction rate compared to the cracking blank. In the synergistic reaction of Mn(II)C and K, the reservoir minerals catalyzed by exchanging ions with the exogenous catalyst, which resulted in a 17.84% increase in the viscosity reduction rate compared to the cracking blank; the viscosity reduction was also enhanced compared to catalysis by Mn(II)C or the reservoir minerals separately. Figure 6 demonstrates that the catalysis of Mn(II)C + K + isopropanol gave a 41.12% increase in the viscosity reduction rate compared to the cracking blank. The auxiliary isopropanol is added to the reaction system because isopropanol itself has a diluting effect and can provide active hydrogen, so that the free radicals formed after bond scission are hydrogenated into small molecules, ensuring a stable reduction in the viscosity of the heavy oil.
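For clarity, the viscosity reduction rate quoted throughout is simply the relative drop in viscosity. A minimal sketch follows; the absolute viscosities are assumed for illustration, and only the resulting percentages follow the text.

```python
# Minimal sketch (illustrative numbers): viscosity reduction rate
#   delta = (mu_0 - mu) / mu_0 * 100%,
# where mu_0 and mu are the viscosities before and after the reaction.

def viscosity_reduction_rate(mu_before, mu_after):
    """Percentage drop in viscosity relative to the unreacted oil."""
    return (mu_before - mu_after) / mu_before * 100.0

mu_0 = 20_000.0                                  # assumed initial viscosity, mPa*s
systems = {
    "oil + water (blank cracking)": 11_364.0,    # ~43.18% reduction
    "oil + water + Mn(II)C + K":     7_796.0,    # ~61.02% reduction
    "... + isopropanol":             1_756.0,    # ~91.22% reduction
}
for name, mu in systems.items():
    print(f"{name:32s} -> {viscosity_reduction_rate(mu_0, mu):5.2f}%")
```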
Assessment of the Stability and Universality of Catalysts in Reducing Viscosity. Under reaction conditions of 180 °C for 4 h with 30% water addition, the water-soluble catalyst and reservoir mineral dosages were each 0.2%. The viscosity-temperature properties of the Mn(II)C + K system were investigated 1-32 days after the reaction of oil sample 1, and the results are shown in Figure 7. It should be noted that the reaction conditions were optimized: the viscosity remains essentially constant after 4 h, so a reaction time of 4 h was determined to work best. As for the reaction temperature, when it is below 180 °C, the hydrothermal cracking is insufficient and the viscosity drop is not maximal; when it is above 180 °C, the structure of the catalyst may be damaged, which affects its stability and catalytic effect. Therefore, the reaction conditions were finally set to 180 °C for 4 h.
Figure 7 demonstrates that, following the synergistic catalytic hydrothermal cracking of the heavy oil, the viscosity initially rebounds rapidly; subsequently, the fluctuation range narrows and the viscosity reduction rate stabilizes at approximately 51%. Based on this analysis, it is evident that the synergistic catalytic stability of Mn(II)C + K is good. Figure 8 illustrates the comprehensive assessment of Mn(II)C + K for various oil samples. The figure clearly shows that the viscosity of the different oil samples rebounds notably within a brief timeframe, after which the viscosity reduction rate stabilizes at a consistent level. This phenomenon primarily arises from the heteroatom-containing groups in the heavy oil, which prompt the re-formation of hydrogen bonds and other intermolecular forces over short times; consequently, some of the smaller molecules initially formed recombine into larger molecules, and the interconnection of alkyl chains raises the viscosity. After the intermolecular interactions reach a stable stage, the viscosity is reasonably consistent. The figure also illustrates that the Mn(II)C + K catalyst achieves a viscosity reduction rate of over 40% in all three oil samples, suggesting that its combined catalytic effect is generally constant and reliable. Therefore, the results of this study can be generalized to the development of heavy oils with the same composition and structure. However, because the thermal cracking of heavy crude oil is strongly oil-specific, the results may differ for heavy oils with different compositions or geological backgrounds, which remains a worldwide problem in this field.
Variations in Heavy Oil's Pour Point before and after the Reaction. Figure 9 clearly demonstrates that the initial oil sample has a notably high pour point, which is substantially reduced after hydrothermal cracking: the pour point drops from 36.5 °C to 30.5 °C, a decrease of 6 °C. The pour point of the oil sample was further reduced to 22 °C and 8.5 °C through the catalysis of Mn(II)C + K and Mn(II)C + K + isopropanol, respectively. Thermogravimetric Analysis of Heavy Oil before and after Reaction. Figure 10 illustrates the thermogravimetric analysis of oil sample 1 before and after the reaction in different reaction systems. The pyrolysis process of heavy oil can be classified into three distinct stages. In the first stage, at temperatures from 25 to 150 °C, low-carbon hydrocarbons and water volatilize; the weight losses for blank, oil + water, oil + water + K, oil + water + Mn(II)C + K, and oil + water + Mn(II)C + K + isopropanol are 2.58%, 3.19%, 3.33%, 4.77%, and 6.35%, respectively. During the second stage, at 150-350 °C, the evaporation of light components and the disruption of weak chemical bonds result in partial decomposition of certain asphaltenes; the oil samples show weight losses of 16.27%, 16.66%, 19.94%, 19.72%, and 20.06%, respectively. This indicates that a portion of the asphaltenes underwent hydrothermal cracking into low-carbon hydrocarbons, which then evaporated during the thermogravimetric examination and accelerated the weight loss. The third stage spans 350 to 550 °C, where the oil samples show weight losses of 68.50%, 67.55%, 64.50%, 64.10%, and 64.90%, respectively, mostly caused by the extensive cracking of the heavy components in the oil. The weight-loss curves of the catalytic reaction systems clearly shift to the left compared with the blank oil sample, providing additional evidence that the heavy components are broken into lighter constituents, hence decreasing the viscosity of the heavy oil.
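The stage-wise mass losses quoted above can be read off a (temperature, mass) record in a straightforward way. A minimal sketch follows on a synthetic TG trace; the curve itself is illustrative, not the measured data.

```python
import numpy as np

# Minimal sketch (synthetic TG curve, not the measured data): percentage mass
# loss inside each pyrolysis stage, computed from a (temperature, mass) record.

def stage_losses(T, mass, stages=((25, 150), (150, 350), (350, 550))):
    """Percent of the initial mass lost within each temperature window."""
    m0 = mass[0]
    losses = []
    for lo, hi in stages:
        m_lo = np.interp(lo, T, mass)       # mass when entering the stage
        m_hi = np.interp(hi, T, mass)       # mass when leaving the stage
        losses.append((m_lo - m_hi) / m0 * 100.0)
    return losses

# Synthetic, monotonically decreasing TG trace for demonstration only.
T = np.linspace(25, 600, 300)
mass = 100.0 * np.exp(-((T - 25.0) / 400.0) ** 2)

for (lo, hi), loss in zip(((25, 150), (150, 350), (350, 550)),
                          stage_losses(T, mass)):
    print(f"{lo:3d}-{hi:3d} C: {loss:5.2f}% mass loss")
```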
DSC Analysis. Figure 11 demonstrates that the wax precipitation point of oil sample 1 was 51.17 °C. After hydrothermal cracking, the wax precipitation point decreased to 49.77 °C, and the peak wax-evolution temperature decreased from 44.20 °C to 42.78 °C compared with the initial sample. When oil, water, and K were combined in the hydrothermal cracking reaction, the wax precipitation point further decreased to 48.76 °C. After the oil + water + Mn(II)C + K synergistic reaction, the wax precipitation point was reduced to 46.82 °C, 4.35 °C lower than that of the blank oil sample. Upon the introduction of isopropanol into the reaction system, the wax precipitation point decreased to 45.88 °C and the peak wax-evolution temperature shifted notably, to 38.88 °C. This indicates that the hydrogen supplied by isopropanol promoted the formation of light components during heavy oil cracking, while the diluting and dissolving action of isopropanol itself inhibited wax deposition and reduced the wax-evolution peak.
Elements and Four Components. Figure 12 displays the results of the four-component analysis of the oil samples after reaction in the various reaction systems. In the absence of a catalyst, the levels of saturated and aromatic hydrocarbons in the oil sample increased after the reaction, while the amounts of gum and asphaltene dropped. Mn(II)C + K synergistic catalysis exhibited excellent efficiency: after the reaction the oil samples contained 31.31% saturated hydrocarbons, 32.11% aromatic hydrocarbons, 23.24% gums, and 13.34% asphaltenes. With isopropanol-assisted hydrogen donation, the saturated and aromatic hydrocarbon contents increased further, to 33.93% and 35.75%, respectively, while the gum and asphaltene contents dropped markedly, to 19.95% and 10.37%, respectively. These data show that the combined catalytic action of exogenous catalysts and reservoir minerals significantly increases the concentration of light components in heavy oil. The elemental analysis results of the oil samples from the different reaction systems are displayed in Table 1. The carbon-to-hydrogen ratios of the blank oil sample, the Mn(II)C + K catalyzed sample, and the Mn(II)C + K + isopropanol catalyzed sample were 8.54, 8.35, and 8.33, respectively. The C/H ratio decreased noticeably, indicating that the catalyst facilitated the hydrogenation of unsaturated bonds [4]. The concentrations of carbon (C), nitrogen (N), and sulfur (S) in the oil samples declined to different extents after the catalytic reactions of Mn(II)C + K and Mn(II)C + K + isopropanol, while the concentration of hydrogen (H) increased. The reduction in the viscosity of heavy oil can be ascribed to the scission of C-X (X: N, S, O) bonds and the participation of water in the catalytic hydrothermal cracking procedure [7,21]. GC Analysis of Saturated Hydrocarbon Components. The study examined the alterations in the carbon-number distribution of the saturated hydrocarbons in oil sample 1 following reactions with the various catalytic systems; gas chromatography (GC) was employed to assess the saturated hydrocarbons both before and after the reactions, and the chromatographic results are presented in Figure 13. Relative to the blank oil sample, which is weighted towards higher carbon numbers, the C11 and C12 fractions of the oil + water system rose significantly. The carbon numbers of the oil + water + K system are mostly found in the range C15-C22, whereas after the synergistic reaction the carbon-number distribution of the oil + water + Mn(II)C + K system shifted dramatically towards the lower range C12-C16. After adding isopropanol to the reaction system, the oil sample showed a significant reduction in high-carbon compounds, while the proportion of low-carbon compounds rose notably; in particular, there was a considerable rise in the content of C12 and C13 compounds. The varying carbon-number distributions directly reflect the changes in the saturated hydrocarbon content of the heavy oil. These changes have a direct impact on the freezing point of the heavy oil, which explains the alteration in freezing point observed in the oil samples and the leftward-shift phenomenon observed in the DSC thermal analysis results.
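One compact way to quantify the distribution shift described above (e.g., from C15-C22 towards C12-C16) is an area-weighted mean carbon number. A minimal sketch follows; the peak areas are made-up numbers, not the measured chromatograms.

```python
import numpy as np

# Minimal sketch (made-up peak areas): number-average carbon number from
# normalized GC peak areas of the saturated fraction.

def mean_carbon_number(carbon_numbers, peak_areas):
    """Area-weighted average carbon number of the saturated fraction."""
    areas = np.asarray(peak_areas, dtype=float)
    weights = areas / areas.sum()
    return float(np.dot(carbon_numbers, weights))

carbons = np.arange(11, 23)                       # C11 ... C22
before = [1, 2, 3, 5, 8, 9, 10, 10, 9, 8, 6, 4]   # assumed blank-sample areas
after  = [4, 9, 10, 9, 7, 5, 3, 2, 1, 1, 1, 1]    # assumed post-catalysis areas

print(f"mean carbon number before: {mean_carbon_number(carbons, before):.2f}")
print(f"mean carbon number after:  {mean_carbon_number(carbons, after):.2f}")
```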
Wax Crystal Morphology. Polarizing microscopy was employed to examine the structure of the wax crystals in oil sample 1 before and after the reaction. Figure 14 demonstrates that, prior to hydrothermal cracking, the wax crystals in the oil samples exhibit a layered, snowflake-like morphology. Following hydrothermal cracking, the wax-crystal accumulation structure is greatly diminished, with most crystals forming needle-like aggregates. After the reaction of oil, water, and K, the formation of wax crystals was much reduced and their dispersion enhanced. Following the synergistic catalytic reaction of oil, water, Mn(II)C, and K, the dispersion of the wax crystals became evident and their shapes appeared more scattered. After the reaction of oil, water, Mn(II)C, K, and isopropanol, the dispersion of the oil sample was enhanced further, with isopropanol acting as a solvent and diluent. The main mechanism behind the improved dispersion of the wax crystals is the fragmentation of colloidal asphaltene macromolecules in the heavy oil into low-carbon hydrocarbons; these hydrocarbons act as solvents, lowering the freezing point and the wax precipitation point after cracking. As a result, the arrangement of the wax crystals becomes more dispersed, which increases the flowability of the heavy oil [22,23].
GC-MS of Heavy Oil Aqueous Phase

Analysis of Figure 15 reveals that hydrothermal cracking of the oil sample and the catalytic reaction treatment result in the dissolution of polar organic compounds containing heteroatoms in the water. There is a notable shift in the composition of the polar and heteroatom-containing compounds in the water, characterized by a rise in the diversity of heteroatomic substances. This indicates that hydrothermal cracking ruptures chemical bonds in the gum, asphaltenes, and other constituents of heavy oil, leading to a substantial decrease in viscosity. In addition, a small fraction of the heteroatomic organic matter generated when these bonds break enters the aqueous phase.

Catalytic Aquathermolysis of Model Compounds

The catalytic hydrothermal cracking of heavy oil primarily involves reactions of resin and asphaltene, which are large molecules composed of five-membered and six-membered rings with heteroatoms and alkyl branches. A set of model compounds with distinct chemical structures was chosen for catalytic hydrothermal cracking experiments in order to investigate the viscosity-reduction mechanism. The seven model compounds were α-octene, phenol, thiophene, pyridine, quinoline, nonylphenol, and benzothiophene, and the liquid phase was isolated for further GC-MS analysis.
Table 2 demonstrates that n-hexene was produced 2.761 min after the reaction of α-octene. Octene underwent cracking during the reaction, producing low-carbon olefins such as n-hexene. The shorter-chain alkanes that were released underwent hydrothermal cracking and were converted into water gas; this further reaction generated CO2 and H2. At a retention time of 3.094 min, n-hexane was detected; it was generated by hydrogenation of the n-hexene formed in the preceding reaction. The reaction path of α-octene cleavage was determined by combining the GC-MS results with information from the literature [24] and is illustrated in Figure 16.

Table 3 demonstrates that benzene was produced 5.860 min after the reaction of phenol, under conditions of elevated temperature and abundant hydrogen supply. The reduction of the phenol hydroxyl group formed water, which diminished the hydrogen bonding between molecules; consequently, the viscosity of the heavy oil decreased. Based on the GC-MS results combined with the literature [25,26], the cleavage path was derived, as shown in Figure 17.
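The identifications above follow the usual pattern of matching GC-MS peaks to reference compounds by retention time. The sketch below is a hypothetical illustration of that bookkeeping, not the authors' actual workflow; the reference table, the 0.05 min tolerance, and the function name are all assumptions made for illustration.

```python
# Hypothetical sketch: assign GC-MS peaks to products by retention time.
# The reference times below are taken from the text (Tables 2 and 3).

REFERENCE = {           # retention time (min) -> identified product
    2.761: "n-hexene",  # alpha-octene cracking product
    3.094: "n-hexane",  # hydrogenation of n-hexene
    5.860: "benzene",   # reduction of phenol
}

def assign_peak(rt_min: float, tolerance: float = 0.05) -> str | None:
    """Return the reference compound whose retention time is closest
    to rt_min, if it lies within the tolerance window; else None."""
    best_rt = min(REFERENCE, key=lambda ref: abs(ref - rt_min))
    return REFERENCE[best_rt] if abs(best_rt - rt_min) <= tolerance else None

# Example: a peak observed at 3.10 min matches n-hexane.
print(assign_peak(3.10))   # -> "n-hexane"
print(assign_peak(4.20))   # -> None (no reference within tolerance)
```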
As shown in Table 4, cyclohexane was formed at 5.465 min after the thiophene reaction; it may have arisen from hydrogenation of the solvent benzene and H+ in the reaction system. Phenol was formed at 13.088 min by a redox reaction between the solvent benzene and the CO and H2 in the reaction system. At 6.451 min, m-dimethylcyclohexane was formed by a rearrangement reaction of low-carbon hydrocarbons. The chemical mechanism for the decomposition of thiophene was established by analyzing the gas chromatography-mass spectrometry (GC-MS) results together with a comprehensive examination of the existing literature [27][28][29][30]; this pathway is illustrated in Figure 18.
Table 4. Compounds and their structural formulas after the thiophene reaction (thiophene + water + K; thiophene + water + Mn(II)C + K; thiophene + water + Mn(II)C + K + isopropanol).
As shown in Table 5, methylcyclohexane was formed in the reaction product at 4.725 min: the benzene ring in quinoline was hydrogenated, and the transition metal cation coordinated with the nitrogen atom such that the C-N bond was broken; methylcyclohexane was then generated after ring opening, recombination, and hydrogenation in the system. Toluene was formed at 5.900 min: after the pyridine ring in quinoline was hydrogenated, toluene was generated by intramolecular ring opening and hydrogenation under the action of high temperature, the catalyst, and the additives. At 5.190 min, pyridine was generated, formed by cyclized deamination of ethylamine and propylamine during cleavage. The hydrothermal cracking reaction mechanism of quinoline was determined by analyzing the GC-MS findings and reviewing the relevant literature [31,32], as depicted in Figure 19.
Table 5. Compounds and their structural formulas after the quinoline reaction (quinoline + water; quinoline + water + K; quinoline + water + Mn(II)C + K; quinoline + water + Mn(II)C + K + isopropanol).
Table 6 demonstrates a higher number of peaks at the 26 min mark, which are the distinctive peaks associated with nonylphenol. Nonylphenol primarily undergoes the breaking and reforming of carbon-carbon bonds under the various reaction conditions, resulting in the formation of various chemicals. During hydrothermal processing, the shorter alkyl chains separate and combine with OH− to produce alcohols; at elevated temperatures, these alcohols oxidize to produce CO2. The hydrothermal cracking pathway of nonylphenol was developed from the GC-MS findings in combination with the existing literature [18,33], as depicted in Figure 20.
Table 7 demonstrates that cyclohexane was produced 2.761 min after the reaction of benzothiophene; it was formed through catalytic thermal cracking, hydrodesulfurization, and catalytic hydrogenation. Toluene was produced at a reaction time of 5.600 min; it has previously been shown in the literature that o-methylphenol undergoes hydrogenation and deoxygenation to produce toluene [34][35][36]. The mechanism of the hydrothermal cracking reaction of benzothiophene was determined by integrating the GC-MS findings with relevant information from the literature. It should be noted that this mechanism is based on a limited number of model compounds and was constructed to cover as many compound classes as possible; however, the properties of heavy oil differ greatly between oil fields and wells, so the model has its limitations.

Catalytic Mechanism

The mechanism can be separated into the following stages.
(1) The abundance of colloidal asphaltene in heavy oil leads to a pronounced van der Waals force between the layers, causing the units to stick together; this is observed macroscopically as high viscosity and limited fluidity. The introduction of external catalysts acts on the active sites, leading to both partial and permanent depolymerization as well as partial and loose binding. Consequently, certain unstable units undergo depolymerization and separation, leading to a substantial decrease in the viscosity of heavy oil. (2) C-S, C-O, and C-N bonds are cleaved as a result of interactions between the external catalyst and the heteroatoms in the heavy (resin and asphaltene) components, which breaks the hydrogen bonds between some high-carbon hydrocarbon molecules. (3) Reservoir minerals have a negatively charged surface as a result of lattice substitution, which allows them to adsorb cations. This characteristic allows reservoir minerals to function as efficient catalysts and carriers. The transition metals supplied by the external catalyst can readily substitute the sodium/calcium ions in the clay, thereby becoming the active sites in the process. Transition metals possess several vacant orbitals, allowing them to readily engage with the electron-rich compounds found in heavy oil; this interaction significantly enhances the catalytic efficiency of hydrothermal cracking [37].
(4) At elevated temperatures, clay minerals exhibit strongly acidic properties. The catalytic mechanism by which the mineral matrix produces oil and gas involves the formation of carbonium ions: the acid centers on the surface of the mineral matrix facilitate the conversion of kerogen into carbonium ions, and the catalytic action is accomplished by decomposing and transferring these carbonium ions [38,39]. Non-clay minerals such as quartz and calcite can adsorb free cations and create Lewis acid (L-acid) sites, which promote the transformation of kerogen [40,41].

Water Thermal Cracking

Take a specific quantity of oil sample and place it in a water bath at 65 °C.
After maintaining a constant temperature for 1 h, transfer 30 g of the oil to the reactor. Next, introduce the prepared catalyst into the reactor at 0.2% of the mass of the oil and add water at a water-oil mass ratio of 30%. Evacuate the reactor and fill it with nitrogen, then allow the reaction to proceed at 180 °C for 4 h. After the reaction has finished, cool the mixture to room temperature and separate it with a centrifuge. Then transfer the separated oil into a measuring cup and measure its temperature, viscosity, and other physical characteristics. Figure 22 illustrates the reaction process.

Characterization

The viscosity of the heavy oils was measured according to [47]. The rate of decrease in the viscosity of the oil, Δη%, was calculated using the formula Δη% = ((η0 − η)/η0) × 100, where η0 and η (mPa·s) represent the initial and final viscosities of the oil, respectively [14,15]. In addition, the heavy oil components were analyzed following the guidelines of the China Petroleum Industry Standard [48]. The elemental compositions (carbon, hydrogen, nitrogen, and sulfur) of the initial and upgraded oils were determined using an Elementar Vario EL Cube. The four components of the petroleum asphaltene were examined following the guidelines of the standard [49]. Thermogravimetric analysis was used to evaluate the distribution of carbon numbers in the crude oil across different temperature ranges; the oil samples were heated in a nitrogen atmosphere from 30 °C to 550 °C at a rate of 10 °C per minute. The wax precipitation point of the heavy oil was tested according to the standard [50]. Differential scanning calorimetry (DSC) of the heavy oil was conducted using a Mettler-Toledo DSC822e apparatus (Mettler-Toledo Instruments Co., Ltd., Greifensee, Switzerland); the studies were conducted in a nitrogen atmosphere with a flow rate of 50 mL/min over a temperature range of −25 to 50 °C. The wax crystals in the crude oil were analyzed by examining the microstructure of the saturated hydrocarbons at 15 °C (±0.2 °C) using a polarizing microscope (BX41, Olympus, Tokyo, Japan). The composition analysis of the model compounds was performed by gas chromatography-mass spectrometry (GC-MS) on a 7890A-5975C apparatus (Santa Clara, CA, USA), with hydrogen as the carrier gas at a constant flow rate of 25 mL/min; the composition of the samples was identified using the DRS chemical database.
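The viscosity-reduction rate defined above is simple to compute; the short sketch below applies the formula to an illustrative pair of viscosities. The numbers are hypothetical, chosen only to show the calculation, and the function name is our own.

```python
def viscosity_reduction_rate(eta0_mpas: float, eta_mpas: float) -> float:
    """Return the viscosity reduction rate in percent,
    Δη% = ((η0 − η) / η0) × 100,
    where η0 and η are the initial and final viscosities in mPa·s."""
    if eta0_mpas <= 0:
        raise ValueError("initial viscosity must be positive")
    return (eta0_mpas - eta_mpas) / eta0_mpas * 100.0

# Hypothetical example: heavy oil at 12,000 mPa·s upgraded to 1,800 mPa·s.
print(f"{viscosity_reduction_rate(12_000, 1_800):.1f}%")  # -> 85.0%
```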
Variations in Heavy Oil's Pour Point before and after the Reaction

Figure 9 clearly demonstrates that the initial oil sample has a notable pour point, which is substantially reduced after hydrothermal cracking: the pour point drops from 36.5 °C to 30.5 °C, a remarkable decrease of 6 °C. The oil sample's pour point was reduced to 22 °C and 8.5 °C through the synergistic catalysis of Mn(II)C + K, and the condensation point was further decreased to 19.5 °C with the assistance of the hydrogen supplied during the catalytic process of isopropanol.

Figure 1. Changes in China's oil production, consumption, and imports.
Figure 2. Changes in China's dependence on foreign crude oil.
Figure 3. Infrared spectra of ligand C and catalyst Mn(II)C.
Figure 7. Assessment of the synergistic effect of Zn(II)O + K on the viscosity decrease of oil sample 1.
Figure 8. Evaluation of the synergistic effect of Zn(II)O + K universally on several oil samples.
Figure 9. Comparison of solidification points of heavy oil.
Figure 12. Component analysis of oil sample 1.
Figure 13. Carbon number distribution of oil samples.
Figure 15. GC-MS analysis of polar substances dissolved in water after the reaction of oil sample 1.
Figure 20. Reaction mechanism of nonylphenol.
Figure 22. Catalyst-catalyzed thermal cracking process of heavy oil and water.
Table 1. Element analysis results of oil samples.
Table 6. Compounds after the nonylphenol reaction.
THE APPLICATION OF PRACTICE REHEARSAL PAIRS LEARNING MODEL TOWARD BASIC PROGRAMMING LEARNING OUTCOMES

A learning model is a general pattern of learning behavior used to achieve conducive and effective learning goals. The PRP (Practice Rehearsal Pairs) model is one of the effective learning models used in this study. The purpose of this study was to determine the effect of the PRP learning model on student learning outcomes in basic programming subjects. This is quantitative research using a quasi-experimental design, carried out at Hamzanwadi University. The study uses a posttest-only control group design with a total population of 54 students; the sample, taken using the saturated sampling method, comprises all 54 students. Data were collected using learning outcome tests and analyzed using descriptive analysis and a paired sample t-test. The results showed that the average learning outcome under the PRP model (75.07) was higher than the average learning outcome under the contextual model (70.37). The hypothesis test shows that t-test (8.619) > t-table (2.060) (significant with ρ < 0.05). Thus, the conclusion of this study is that there is a significant effect on student learning outcomes after applying the PRP learning model.

Keywords: Practice Rehearsal Pairs Learning Model, Contextual Learning Model, Learning Outcomes.

INTRODUCTION

A teacher should be able to choose a learning model suitable for their students, so that the learning objectives that have been set can be achieved. A learning model is an approach with a systematic frame of reference used as a guide in implementing learning activities to achieve learning objectives. In the Kamus Besar Bahasa Indonesia (2005: 751), a model is defined as a pattern (e.g., a reference, a variety, etc.) of something to be created or produced. Rusman (2014: 133) states that a learning model is a general pattern of learning behavior for achieving the expected learning objectives. A learning model can also be interpreted as a systematically arranged learning plan that serves as a guide for achieving a goal (Komalasari, 2010; Trianto, 2010; Wahab, 2007).

The success of the learning models used by lecturers and teachers in achieving effective learning rests on students learning actively by themselves. Sanjaya (2007) states that active learning invites students to learn in a way that makes them actively use their brains, whether to find the key ideas of the subject matter, solve problems, or apply what they have just learned to a problem that exists in real life. D'Silva (2010: 77) describes the notion of active learning by stating that "active learning refers to the models of instruction that focus on the learning of students by promoting higher-order thinking. Strategically designed active learning is critical for the overall development of graduate students towards life-long learning". Meanwhile, Hamalik (2000) reveals that effective teaching is teaching that provides opportunities for self-study.

The contextual learning model is the model that has been used in the Basic Programming course at Hamzanwadi University. This learning model connects the subject matter with real activities, or adjusts it to real-world situations, by emphasizing practice in the learning activities.
In lectures using the contextual model, the lecturer explains the material while directly training the students to write simple programs based on the theory that has been given. Based on preliminary observations, the contextual learning model has had a positive impact in helping students achieve competence in the skills domain, as evidenced by an average score of 70. However, this contextual learning model has not been able to help students achieve competence in the knowledge domain. This means that students' conceptual understanding of the material is still low: students are only able to understand the material through direct practice, while their conceptual or theoretical mastery remains less than optimal. This is because students' ability to absorb the material given by the lecturer is still low, they pay little attention to the lecturer's explanations, and some students are reluctant to think through the problems the lecturer has given them.

Based on the description above, one way to overcome this problem is to use the practice rehearsal pairs learning model. This is one of the active learning models that can improve students' understanding of the material they learn. Zaini et al. (2008: 81) state that Practice Rehearsal Pairs (PRP) is a simple learning model that can be used to practice a skill or procedure with a classmate, aimed at ensuring that each partner can perform the skill correctly. Silberman (2009: 228) notes that, literally, practice rehearsal pairs means practicing exercises in pairs; as a term, practice rehearsal pairs refers to a simple method used to practice a skill or procedure with a learning partner. This means that students are grouped into pairs and are required to actively practice a particular skill, so that each group works together. In accordance with these definitions of the PRP model, it can be concluded that PRP is a simple learning model that can be used to practice a skill or procedure with a classmate. By learning with a partner, students are more motivated to learn, free to share knowledge with or ask questions of their study partner, and better able to recall the material presented by the lecturer. Therefore, applying the PRP learning model is expected to influence student learning outcomes in the Basic Programming course.

The aim of this study is to determine whether the practice rehearsal pairs (PRP) learning model influences student learning outcomes in the basic programming course. This research is expected to provide the following benefits: (1) for students, it can grow activeness, thinking skills, and communication, and students can work together to improve their learning outcomes in the basic programming course; (2) for lecturers, this research can be used as input and information about an instructional model that can be applied in the learning process in an effort to improve student learning results; (3) for the institution, this research can be used as reference material for improving the quality of education through the practice rehearsal pairs model of student learning outcomes.

METHOD

The type of this research is quantitative research using the experimental method.
The design is a quasi-experimental design, because not all the symptoms that arise in this study can be used as the sample; the sample in this research is class II A and class II B.

Table 1. Posttest-Only Control Group Design
Group            Treatment    Posttest
Experiment (E)   X1           O2
This research uses two classes (groups): an experimental group and a control group. The experimental group was chosen on the basis of specific considerations, namely that its scores in the knowledge domain were lower than its scores in the skills domain, and homogeneity testing was carried out to ensure that the experimental and control groups have the same learning ability. The control group is the class taught with the previously applied contextual learning model, while the experimental group is the class taught with the PRP learning model. The data in this study were analysed using descriptive analysis and inferential analysis. For the descriptive analysis, the researcher describes the basic programming learning outcomes, each presented as a frequency distribution table and a histogram; the descriptive analysis is based on the ideal mean score (MI) and the standard deviation (SD). The data collection instrument is a test used to measure student learning outcomes in the Basic Programming course.
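As a concrete illustration of the analysis plan described here (descriptive statistics, the prerequisite normality and homogeneity tests, and the t-test reported in the next section), a minimal sketch in Python is given below. The score arrays, variable names, and the use of scipy.stats are illustrative assumptions and are not taken from the paper; the authors' actual data and software are not specified.

```python
# Hypothetical sketch of the descriptive + inferential analysis described in the METHOD section.
# The score arrays are made-up placeholders, not the study's data.
import numpy as np
from scipy import stats

experiment = np.array([75, 80, 70, 85, 60, 78, 72, 77, 74, 79])  # PRP class (placeholder)
control = np.array([70, 72, 65, 80, 55, 68, 71, 73, 69, 74])     # contextual class (placeholder)

# Descriptive analysis: mean, standard deviation, minimum and maximum of each group.
for name, scores in [("experiment", experiment), ("control", control)]:
    print(name, scores.mean(), scores.std(ddof=1), scores.min(), scores.max())

# Prerequisite tests: normality (Shapiro-Wilk) and homogeneity of variance (Levene).
print(stats.shapiro(experiment), stats.shapiro(control))
print(stats.levene(experiment, control))

# Hypothesis test. The paper reports a paired-sample t-test; scipy's ttest_rel requires
# equally sized, paired observations. The computed t statistic is then compared with
# the critical t value from the table.
t_stat, p_value = stats.ttest_rel(experiment, control)
print(t_stat, p_value)
```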
For the inferential analysis, a paired-sample t-test is used to determine whether the PRP learning model influences student learning outcomes in the Basic Programming course. Before testing the hypothesis, prerequisite tests consisting of a normality test and a homogeneity test are carried out. RESULT AND DISCUSSION The normality test shows that all data are normally distributed, and the homogeneity test gives a significant value, meaning that the data have homogeneous variance; the analysis with the paired-sample t-test can therefore proceed. The descriptive results of this study are shown in Table 2. Based on Table 2, the average learning outcome of the experimental group taught with the PRP model is 75.07, with a highest score of 85 and a lowest score of 60, while the average learning outcome of the control group taught with the contextual learning model is 70.73, with a highest score of 80 and a lowest score of 55. These data show that the understanding of students in the experimental class is better than that of students in the control class. The difference in learning outcomes between the experimental class and the control class is also confirmed by the hypothesis test using the paired-sample t-test listed in Table 3. This happens because students in the experimental class, who were taught with the PRP learning model, show increased participation, interact more easily, have more opportunities to construct knowledge with their partner, and retain the material better, so that the concepts presented by the lecturer are easier to understand. In addition, the results of this study are supported by the research of Widianto & Kustini (2016), whose results indicate that the PRP-type cooperative model influences student learning outcomes at SMK 3 Jombang, and by Zakaria (2017), who showed that the PRP model can increase motivation and student learning outcomes in lathe subjects. CONCLUSION Based on the results and discussion above, it can be concluded that the PRP learning model has a significant effect on student learning outcomes in the Basic Programming course. The average learning outcome of students taught with the PRP model is 75.07, which is higher than the average of 70.73 for students taught with the contextual model. In addition, the hypothesis test gives t (8.619) > t table (2.060), so there is a significant effect on student learning outcomes in the Basic Programming course after the PRP learning model is applied.
3,731
2017-12-11T00:00:00.000
[ "Computer Science", "Education" ]
Fluctuation–dissipation relation from anomalous stress tensor and Hawking effect We show a direct connection between Kubo's fluctuation–dissipation relation and the Hawking effect that is valid in any number of dimensions for any stationary or static black hole. The relevant correlators corresponding to the fluctuating part of the force, computed from the known expressions for the anomalous stress tensor related to gravitational anomalies, are shown to satisfy the Kubo relation, from which the temperature of a black hole as seen by an observer at an arbitrary distance is abstracted. This reproduces the Tolman temperature and hence the Hawking temperature as that measured by an observer at infinity. Introduction Quantisation of fields on a classical background containing a horizon leads to the fact that particles can radiate from the horizon [1][2][3]. Moreover, it is now well known that such a phenomenon depends on a particular observer and is connected to the non-unique definition of the vacuum in a noninertial frame or in curved spacetime. One of the classic examples in this context is the Hawking effect [1]: the thermal radiation observed by a static observer at infinity in a black hole spacetime in the Kruskal or Unruh vacuum. While there are several approaches to analysing this phenomenon, each has its own merits and/or demerits, but none is truly clinching. This is the primary reason that new avenues are still being explored. The fluctuation-dissipation theorem is a general result of statistical thermodynamics that yields a concrete relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations (see [4] for a review). It has been effectively used to study various processes like Brownian motion in fluids, Johnson noise in electrical conductors and, as shall become clear very soon, relevantly, Kirchhoff's law of thermal radiation. Since black holes satisfy the condition of detailed balance and the emitted radiation is thermal, it is likely that new insights into the phenomenon of Hawking radiation could be obtained by using the fluctuation-dissipation relation. Our motivation in this paper is to exploit the fluctuation-dissipation relation to obtain the Hawking temperature. In fact we are able to obtain the more general Tolman temperature [5,6], which is valid for an observer at an arbitrary distance. There are different versions of the theorem, but the one suited for our analysis was given by Kubo [4,7,8]. In simple terms this relation is able to provide the temperature of the heat bath from a study of the correlators of the fluctuations of the force of the emitted particles, as measured by the detector. We shall apply this relation to black holes. Treating the black hole as a heat bath, it is possible to compute the fluctuations of the force of the emitted particles as seen by an observer at an arbitrary distance. Using Kubo's relation we derive the temperature, which turns out to be the Tolman temperature. Putting the detector at infinity immediately yields the Hawking temperature [1]. An essential ingredient in our calculation is the structure of the energy-momentum tensor. The force is computed by taking the time variation of the space component of the four-momentum, which in turn is defined from the energy-momentum tensor.
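To fix notation for the quantities just described, a schematic summary is given below. These expressions are our paraphrase of the verbal definitions given later in the text (the force as the proper-time derivative of the detector-frame momentum, its fluctuating part, and the force-force correlator); the notation is assumed for illustration and need not match the paper's own equation numbering.

$$
F_i(\tau) = \frac{\mathrm{d}p_i(\tau)}{\mathrm{d}\tau}, \qquad
\Delta F_i(\tau) = F_i(\tau) - \langle F_i(\tau) \rangle, \qquad
R_{ij}(\tau_2;\tau_1) = \langle \Delta F_i(\tau_2)\, \Delta F_j(\tau_1) \rangle ,
$$

where $p_i(\tau)$ is built from the $T^0{}_i$ components of the (anomalous) stress tensor evaluated at the detector's position, and $\langle\cdots\rangle$ denotes the expectation value in the chosen vacuum (Kruskal or Unruh).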
Since our analysis is very near the horizon, where the spacetime (arbitrary dimensional static as well as stationary black holes) is effectively (1 + 1) dimensional [9][10][11], we shall concentrate on the two dimensional stress tensor in a curved background. Classically this is not well defined and recourse has to be taken to some regularisation to include quantum effects. Incidentally the method discussed here was applied earlier by one of the authors, in a collaborative work [12,13], in a classical treatment. It is useful to recall that the fluctuation-dissipation relation remains valid both for classical and quantum systems. Other relevant applica-tions were based on the path integral fluctuation-dissipation formalism developed in [14]. The Minkowski vacuum was modelled as a thermal bath with respect to an accelerated observer, so that any particle in it is executing a Brown-like motion [15][16][17][18][19]. For the quantum case, which is relevant here, there are two possibilities. Either the theory is nonchiral or it is chiral. In the first case the stress tensor satisfies diffeomorphism invariance but lacks conformal invariance leading to a non-vanishing trace of the stress tensor [20][21][22][23]. For the chiral theory both conformal and diffeomorphism symmetries are broken [24][25][26][27]. We have done the analysis for either situation and find the correct Tolman/Hawking temperatures. There are certain differences and important aspects of the present analysis compared to the earlier related investigations. As the analysis is done very near the horizon where the effective metric is (1 + 1) dimensional, the appearance of the gravitational anomalies (trace and/or diffeomorphism), at the quantum level, is unavoidable. Since our calculation is at the quantum regime, dealing with the corresponding anomalous stress tensor is quite natural; this was lacking in the literature. This is much more consistent with the quantum nature of black holes. Here we shall do exactly the same to find the fluctuating part of the force as experienced by a relevant observer. We shall show that this satisfies the equilibrium fluctuation-dissipation relation. Furthermore, the near horizon effective (1 + 1) geometry is very general for not only static black holes, but also for stationary ones in any arbitrary number of dimensions. Therefore our results are valid beyond the two dimensional static black hole spacetimes as far as near horizon physics is concerned. All these points are new in showing, in the very general terms outlined above, the connection between the fluctuation-dissipation theorem and particle production in a background black hole metric. It shows that, as far as the near horizon case is concerned, the standard form of the fluctuation-dissipation relation is very universal. Moreover, in our analysis the concept of temperature is emerging by comparing this with Kubo's form of the fluctuationdissipation relation, which is given by Tolman's expression, instead of looking at the emission spectrum of particles. This can be viewed as an alternative way of understanding the thermality of the horizon. So it presents a much more general aspect of particle production in black hole spacetimes. The organisation of the paper is as follows. In the next section, we shall briefly discuss our general formalism of computing fluctuations of force and their use in Kubo's relation. In Sects. 
3 and 4, respectively, we derive the Hawking effect from the non-chiral and chiral theories using the corresponding anomalous stress tensor. The last section contains our conclusions and a look into future possibilities. Fluctuations of force and Kubo's relation Here we outline our general strategy for obtaining the Hawking temperature from Kubo's relation. The first thing is to define the force that will lead to the force correlators. The space components of the four momentum as measured by the observer in its own frame which perceives the particles is defined as where h is the determinant of the induced metric of the t = constant hypersurface for the detector's metric and x D is the position of the detector in its own frame. Here √ h has been introduced both with volume element and the Dirac-delta function to define the most general invariant quantities which remain same under a general coordinate transformation for a general space. Then Eq. (1) yields Note that x D is not changing. Therefore, the above quantity must be a function of its proper time τ only. Then the force, as measured by this detector, turns out to be The fluctuating part of this force is Next we define the fluctuating force-force two point correlation function as In the above < · · · > refers to the expectation value of the quantity of interest with respect to the relevant state. It is now possible to proceed with the calculations in a general way. More precisely, for the accelerated frame case, the stress tensor components are defined in the Rindler frame, while the states are vacuum states corresponding to the Minkowski observer. Specialising to black holes, the operators are defined in static Schwarzschild coordinates and the states will be considered as Kruskal and Unruh vacua. Moreover, near the horizon, the arbitrary dimensional static and stationary black holes are effectively two dimensional and hence conformally flat [9][10][11]. The idea is the following. Consider for simplicity, say a massive scalar field, in the original stationary background. The explicit expansion of the field action for this black hole metric in the near horizon limit, upon integrating out the angular coordinates, reduces to an infinite set of two dimensional theories enumerated by angular and magnetic quantum numbers. One can check that the dimensionally reduced form of the action is similar to that of a collection of infinite free massless scalar field modes, each of which is residing on an effective (1 + 1) dimensional conformally flat metric. This has been checked case by case. For example, see [9] in the case of static spherically symmetric black hole (also see section 9.3 of [28] and for higher dimensional black holes look at section 9.10 of the same). The (1 + 3) dimensional Kerr-Newman and other higher dimensional rotating cases (e.g. the Myers-Perry black hole) have been investigated as well [10,29,30]. In all instances, the general form of the effective metric, in Schwarzschild coordinates, is given by The explicit value of f (r ) is different for different black holes. For instance, in the case of the Kerr-Newman metric this is given by [29] f where M, Q, and a are mass, charge and angular momentum per unit mass of the black hole, respectively. Since we investigate the thermal behaviour of horizons, which is dominated by the near horizon physics, the above form of the metric is sufficient as far as the near horizon phenomenon is concerned. 
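For orientation, the standard near-horizon quantities referred to in this section can be written out explicitly. The following are textbook expressions (with ħ = c = k_B = 1), supplied here because the paper's displayed equations were lost in extraction; in particular, the Kerr-Newman form of f(r) is the standard result and should be checked against Ref. [29].

$$
\mathrm{d}s^2 \simeq -f(r)\,\mathrm{d}t^2 + \frac{\mathrm{d}r^2}{f(r)}, \qquad
\kappa = \frac{f'(r_H)}{2}, \qquad
T_H = \frac{\kappa}{2\pi}, \qquad
T_{\rm Tolman}(r_s) = \frac{T_H}{\sqrt{f(r_s)}} ,
$$

and, for the Kerr-Newman black hole,

$$
f(r) = \frac{r^2 - 2Mr + a^2 + Q^2}{r^2 + a^2} .
$$

The Tolman and Hawking temperatures quoted here are the expressions that the analysis below is meant to reproduce from the Kubo relation.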
Moreover, we shall see that the explicit form of the metric coefficient is not necessary for our purpose, and in this sense the results, obtained using (6), as far as the near horizon is concerned, will be valid for any stationary black hole. The above effective metric in Kruskal null-null coordinates and in Eddington null-null coordinates is given by where f is the metric coefficient and κ = f (r H )/2 is the surface gravity and r = r H is the horizon. The explicit form of f (r ) determines which higher dimensional metric is effectively given by the above near horizon form. For our present analysis, this explicit expression is not needed and thereby helps us to discuss all black holes (static as well as stationary) in a unified manner. The relations among these sets of coordinates are as follows: Thus in Eq. (5) we have to take only the R 11 component. Also, since the correlators are translationally invariant, it is a function of only the difference of the proper times, (τ 2 − τ 1 ). Let us now represent the Fourier transform of the correlator R 11 (τ 2 − τ 1 ) by K (ω). Now as usual [4,7,8], the symmetric and anti-symmetric combinations are taken as Then the Kubo fluctuation-dissipation relation states that these two are related by where T is the temperature of the heat bath which, in our case, is the black hole. In the next sections we will explicitly compute the correlators, both for non-chiral and chiral cases, and we obtain the temperature from Eq. (12). Non-chiral theory At the quantum level both the trace and the covariant divergence of the stress tensor cannot be made to vanish. Since the diffeomorphism symmetry is more fundamental in gravitational theories, a regularisation is done such that the trace of the energy-momentum tensor is non-vanishing and given by [20,21] T a a = c w R. However, it is covariantly conserved, i.e. ∇ a T ab = 0. The value of the proportionality constant is c w = 1/(24π). In the (trace) anomaly based approach of discussing the Hawking effect [23], which is valid only for two dimensions, use is made of this result. However, we are interested in the explicit form of the stress tensor. This is derived from the anomalous effective action [22], where φ = R. By taking appropriate functional derivatives [22], Now the equation for the scalar field (14) under the background (8) yields Since the metric is static, we choose the ansatz for the solution as φ(t, r * ) = e −iωt F(r * ), where ω is the energy of the scalar mode and F(r * ) is an unknown function to be determined. Substituting this in (16) we obtain Since our analysis is very near the horizon, where R is finite, the right hand side of the above can be neglected compared to the terms on the left hand side and then the solutions for F are F = e ±iωr * . In this limit, the modes are identical to the usual ones and hence the mode expansion of φ is the same as that for a free massless scalar field. Therefore the positive frequency Wightman functions, corresponding to respective vacuum states, will be same as those for the free massless scalar field. So the expressions for the positive frequency Wightman functions corresponding to Kruskal and Unruh vacuums are as follows: [31]: Kruskal vacuum For a Kruskal vacuum the expectation value of T tr , as measured by the Schwarzschild static observer, vanishes (see Appendix C of [13] for details). So the fluctuating part of the force is given by the first term of (4). 
Therefore, the fluctuating force-force correlator, as measured by the Schwarzschild static observer, is given by where g uv = −(2/ f (r )) has been used while the observer is static at r = r s . In going from the first to the second equality, only the T uu (i.e. T UU ) component was considered. This is because this component, which corresponds to outgoing modes, leads to the flux of emitted particles from the horizon, while T vv , related to ingoing modes, does not contribute to this flux. Now using (15) one finds where A = f (U, V )/(U V ). With this, one finds by using Wick's theorem In the above we used the following notations: The expectation value is taken here with respect to the Kruskal vacuum. Other terms vanish as < φ >= 0. In the above we denoted < φ 2 φ 1 >= G(x 2 ; x 1 ) which is Green's function corresponding to the differential equation (14) for field φ. Here for the Kruskal vacuum, this is given by (18). Substituting this we find the terms of Eq. (22) as Hence the correlator (20) turns out to be where we have used the transformation U = −(1/κ)e −κu , , while a prime denotes differentiation with respect to the r coordinate and Unruh vacuum The expression for R 11 U (τ 2 ; τ 1 ) is again given by (20) where the vacuum expectation has to be calculated with respect to the Unruh vacuum. This is because < T tr > for a Schwarzschild static observer is constant (see Appendix C of [13] for a detailed analysis) and hence the second part of (4) vanishes. The two point correlation function for the stress tensor component in this expression is again expressed in terms of the positive frequency Wightman function, which is given by (19). Since the derivatives will be with respect to U , only the U part of G + contributes and hence the final expression for R 11 U (τ 2 ; τ 1 ) turns out to be the same as that of the Kruskal vacuum; i.e. Eq. (24). Chiral theory Contrary to the previous non-chiral case, here both trace and diffeomorphism anomalies exist: ac ∇ c R. This is the covariant form of the anomaly which, as was shown by [32][33][34][35], is more effective than the consistent form of the anomaly, in analysing the Hawking effect. Here the effective action and the corresponding energymomentum tensor are evaluated in [27]. The form of the stress tensor is given by where G satisfies G = R. The chiral derivative is defined as D a = ∇ a ±¯ ab ∇ b . Here +(−) corresponds to ingoing (outgoing) mode and¯ ab = √ −g ab (or¯ ab = − ab √ −g ) is an anti-symmetric tensor, while ab is the usual Levi-Civita symbol in (1 + 1) dimensions. Since we are interested in outgoing modes, the negative sign of the D a operator will be considered. In this case then we have D U = 2∇ U and D V = 0. This can be checked using the expression for¯ ab = √ −g ab with U V = 1. Then T UU , as obtained from (25), becomes identical to the non-chiral case. Moreover, G satisfies an equation which is identical to that satisfied by φ. So the correlator for the fluctuation of the force, in both vacua, will be identical to the form (24). Only the over-all multiplicative constant factor is different. Note that in all cases, the forms of the correlators for the fluctuation of the force are identical: where C and C 0 are unimportant constants (which can be different for different cases) and B is given by Moreover, as the correlator depends only on the difference of the detector's proper time, it is time translational invariant. 
This is a signature of the thermal equilibrium between the detector and the thermal bath seen by this detector. It also helps us to express the quantity in its Fourier space which we shall do later to find the equilibrium fluctuation-dissipation relation. The Fourier transformation of (26) is given by The above equation involves four integrations. All of them can be evaluated by the standard formula [36] +∞ Then one finds Now use of (11) yields which is identical to Kubo's fluctuation-dissipation relation (12) with the temperature identified as Using (27) one can check that this corresponds to which is the correct value of the Tolman expression [5,6]. For the detector located at infinity, r s → ∞, f (r s ) → 1, the above result simplifies to which is the familiar Hawking expression [1]. Conclusions A new approach for analysing the Hawking effect has been given in this paper which is based on the fluctuationdissipation relation as formulated by Kubo. It is general enough to include any stationary metric, any dimensions and also to yield the Tolman temperature, which is the result of a measurement by an observer at an arbitrary distance from the black hole horizon. As expected, the result for an observer at infinity is easily derived, thereby giving the Hawking temperature. It is universal in the sense that it does not depend on how the effective action yielding the stress tensor is regularised. Thus it was applicable both for non-chiral and chiral couplings. In the literature [9,10,23,[32][33][34][35], stress tensor based approaches have used either one or the other, but a holistic treatment was lacking. As mentioned in the introduction, the present analysis is much more improved and general as it dealt with the full quantum theory in the choice of the stress tensor and extends to any arbitrary dimensional stationary black hole. Therefore the present analysis provides a much more robust development in the paradigm of thermality of the black hole horizon. Contrary to several other approaches, this is a physically motivated derivation of the Hawking effect by directly computing the force of the emitted spectrum on the detector and brings it in line with other phenomena of statistical thermodynamics. For instance, the present approach shows that the thermal heat bath characterising a black hole is Brownian in nature. Naturally such an approach is expected to yield further insights in the interpretation of black holes as thermodynamic objects. It is possible to extend this analysis in several ways. As an example, the back reaction effect might be taken into account. This would change the force (and its fluctuations) as perceived by the detector. Application of the Kubo relation would then yield a correction to the Hawking temperature that determines the greyness of the otherwise black body radiation. Finally, we mention that our present analysis is general in the sense that all the static as well as stationary solutions of Einstein's gravity (with or without cosmological constant) have been addressed. Moreover, the calculation has been done close to an event horizon. In this regard, it may be mentioned that there are spacetimes (e.g. see [37]), which possess other types of horizon; e.g. the Killing horizon, the conformal Killing horizon and the apparent horizon. It has been observed that they can also emit at the quantum level (some of the cases are discussed in [37]). Therefore it would be interesting to investigate such examples under the present scenario. 
In that case one should first check whether the near-horizon geometry is effectively (1 + 1) dimensional. Only if this is so can the two dimensional gravitational anomalies be used. In this case, it would be interesting to check the fate of the Tolman expression. All these issues need a thorough and detailed investigation which is beyond the scope of the present paper. Therefore we leave these cases for our future studies. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: There is no data related to this publication that could be deposited. A preprint version of this article was published at arXiv:1909.03760 [gr-qc].]
5,223.4
2019-09-09T00:00:00.000
[ "Physics" ]
FKRR-MVSF: A Fuzzy Kernel Ridge Regression Model for Identifying DNA-Binding Proteins by Multi-View Sequence Features via Chou's Five-Step Rule DNA-binding proteins play an important role in cell metabolism. In biological laboratories, the detection methods for DNA-binding proteins include yeast one-hybrid methods, bacterial singles, X-ray crystallography methods and others, but these methods require a lot of labor, material and time. In recent years, many computation-based approaches have been proposed to detect DNA-binding proteins. In this paper, a machine learning-based method, called the Fuzzy Kernel Ridge Regression model based on Multi-View Sequence Features (FKRR-MVSF), is proposed for identifying DNA-binding proteins. First, multi-view sequence features are extracted from protein sequences. Next, a Multiple Kernel Learning (MKL) algorithm is employed to combine the multiple features. Finally, a Fuzzy Kernel Ridge Regression (FKRR) model is built to detect DNA-binding proteins. Compared with other methods, our model achieves good results: it obtains accuracies of 83.26% and 81.72% on the two benchmark datasets (PDB1075 and PDB186), respectively. Introduction The interaction between DNA and proteins exists in various tissues of living organisms. For example, DNA-protein interactions occur during many activities such as DNA replication, DNA repair, DNA packaging, DNA modification, and viral infection. The study of DNA-binding residues in DNA-protein interactions facilitates a comprehensive understanding of the mechanisms of chromatin recombination and gene-regulated expression. The methods for detecting DNA-binding proteins are mainly based on biochemistry and physical chemistry. However, wet experiment-based methods are both time and money consuming. Data Sets In our study, two benchmark datasets (PDB1075 and PDB186) are used to test our predictive model of DNA-binding proteins. PDB1075 and PDB186 were collected from the Protein Data Bank (PDB) [41]. Liu et al. [26] randomly extracted non-DNA-binding and DNA-binding proteins from the PDB database; the similarity between any two sequences does not exceed 25%. A total of 525 DNA-binding proteins and 550 non-DNA-binding proteins form the PDB1075 dataset. The PDB186 dataset [42] contains 93 DNA-binding and 93 non-DNA-binding proteins. Table 1 lists the information on the two benchmark data sets. Measurements Accuracy (ACC), Sensitivity (SN), Specificity (SP) and the Matthews Correlation Coefficient (MCC) are used to evaluate the performance of the predictive model. These coefficients are calculated as follows: where N+ and N− are the total numbers of positive and negative samples, respectively, and N−+ and N+− are the numbers of false positives and false negatives, respectively. The Area Under the ROC Curve (AUC) is also an effective evaluation measure for binary classification. Performance Analysis of Different Features on the PDB1075 Data Set A single type of feature cannot fully describe the properties of a protein, so we build the predictive model with multi-view sequence features to represent the protein. We test (jackknife test evaluation) these features (kernels) on the PDB1075 dataset, as shown in Table 2. The PSSM-based features (PSSM-AB and PsePSSM) achieve better performance than the non-PSSM single features (MCD and NMBAC). The performance (MCC) of the MCD, NMBAC, PSSM-AB and PsePSSM features is 0.4139, 0.4564, 0.5113 and 0.5886, respectively.
In addition, the mean-weighted kernel (KRR) combines the above 4 kernels (features) with equal weights and obtains better performance (MCC: 0.6398) than any single feature. Compared with mean weights (KRR), MKL (KRR) achieves a higher MCC (0.6439). FKRR weighs training samples by fuzzy membership, which can filter outliers. Consequently, mean weights (FKRR) (MCC: 0.6554) and MKL (FKRR) (MCC: 0.6664) are both better than KRR, because they use multiple kernel information together with fuzzy membership. Moreover, MKL (FKRR) achieves the best MCC of 0.6664. Performance on an Independent DataSet of PDB186 In order to evaluate the generalization performance of the predictive models, FKRR-MVSF and other methods are also tested on the independent dataset (the training set is PDB1075). The results are shown in Table 4. Discussion To improve the performance of predicting DNA-binding proteins, we employ an MKL algorithm to integrate different features and a fuzzy-based model to handle outliers. There are many ways in machine learning to avoid overfitting and skewed models caused by outliers, e.g., adjusting the cost value in an SVM. The cost parameter should differ between training samples, since different samples contribute differently to the model. In Table 2, the performance of the fuzzy-based model (FKRR with MKL, MCC: 0.6664) is better than that of the non-fuzzy model (KRR with MKL, MCC: 0.6439). Compared to the other single kernels, the PsePSSM-based kernel achieves the highest weight and the highest MCC (0.5886). MKL can integrate multiple kinds of sequence information, and our method (KRR with MKL) also achieves a better MCC (0.6439) than any single-kernel model on the PDB1075 dataset. In addition, the performance of KRR with MKL (MCC: 0.6439) is better than KRR with mean weights (MCC: 0.6398) on the PDB1075 dataset. On the independent test dataset, our method (FKRR with MKL) also achieves a better MCC (0.676). MSFBinder (SVM) [48] is a two-layer model based on SVMs that also employs several features to build a predictive model. The generalization performance of FKRR (with MKL) is better than that of MSFBinder (MCC: 0.640) on the independent test set (PDB186). The two models are similar; the main reason for the different results is that the parameter C of FKRR is, in effect, different for each training sample, and fuzzy membership may reduce the effect of noisy samples in the model. Materials and Methods The prediction of DNA-binding proteins can be regarded as a binary classification task. A protein can be represented by feature vectors. DNA-binding proteins and non-DNA-binding proteins are labeled as +1 (positive samples) and −1 (negative samples), respectively. We construct a Fuzzy Kernel Ridge Regression model based on Multi-View Sequence Features (FKRR-MVSF) to determine whether a protein binds to DNA. We employ the Normalized Moreau-Broto Auto Correlation (NMBAC) [49,50], PSSM-based Average Blocks (PSSM-AB) [51], Multiple-scale Continuous and Discontinuous descriptor (MCD) [52] and PsePSSM algorithms to extract four types of sequence features. The Radial Basis Function (RBF) is used to build four types of kernels from these four kinds of features. In our study, the MKL algorithm is employed to calculate the weights of the kernels and to combine the four kernels. Then, a membership score is estimated for each training sample. Finally, a fuzzy kernel ridge regression model for identifying DNA-binding proteins is constructed from the membership scores and the combined kernel.
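The workflow just outlined (per-view RBF kernels, kernel weighting, fuzzy membership scores, and a weighted kernel ridge regression solve) can be summarized with a small sketch. This is an illustrative reimplementation under our own assumptions: the random feature matrices, the uniform weights used in place of the paper's optimized MKL weights, the membership rule, and the particular closed form for the weighted least-squares solution are placeholders rather than the authors' exact formulation.

```python
# Hypothetical sketch of an FKRR-style pipeline; not the authors' code.
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Weighted sum of base kernels (MKL would optimize these weights)."""
    return sum(w * K for w, K in zip(weights, kernels))

def fuzzy_membership(K_train, y_train):
    """Assumed membership rule: score each sample by its kernel affinity to its own
    class, then rescale the scores to lie between 0 and 1."""
    same_class = (y_train[:, None] == y_train[None, :]).astype(float)
    score = (K_train * same_class).sum(1)
    return score / score.max()

def fkrr_fit(K_train, y_train, D, C=1.0):
    """One common closed form for a membership-weighted ridge objective (an assumption):
    alpha = (K + D^{-1} / C)^{-1} y."""
    return np.linalg.solve(K_train + np.diag(1.0 / (C * D)), y_train)

def fkrr_predict(K_test, alpha):
    """Decision values for test samples; the sign gives the predicted class."""
    return np.sign(K_test @ alpha)

# Toy usage with random "multi-view" features standing in for MCD/NMBAC/PSSM-AB/PsePSSM.
rng = np.random.default_rng(0)
dims = (882, 120, 400, 220)  # only 882 (MCD) is taken from the paper; the rest are placeholders
views_train = [rng.normal(size=(40, d)) for d in dims]
views_test = [rng.normal(size=(10, d)) for d in dims]
y_train = np.where(rng.random(40) > 0.5, 1.0, -1.0)

weights = [0.25] * 4  # uniform placeholder for the MKL-optimized weights
K_train = combine_kernels([rbf_kernel(X, X, 1.0 / X.shape[1]) for X in views_train], weights)
K_test = combine_kernels(
    [rbf_kernel(Xt, X, 1.0 / X.shape[1]) for Xt, X in zip(views_test, views_train)], weights)

D = fuzzy_membership(K_train, y_train)
alpha = fkrr_fit(K_train, y_train, D, C=1.0)
print(fkrr_predict(K_test, alpha))
```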
The framework of the proposed method is shown in Figure 3. In the literature [13,33], researchers have made good use of flowcharts to describe the main framework of their methods; in our work, Figure 4 describes the flow of our model. First, we extract four types of feature from a sequence. Then, the Radial Basis Function (RBF) is used to build four kernels. These kernels are combined by MKL. Finally, the combined kernel and the training labels are employed to construct the FKRR model and to predict new samples. Feature Extraction Extracting features from proteins is a challenge for identifying DNA-binding proteins. A suitable feature extraction algorithm can adequately represent the properties of the protein. We use four types of feature to describe a protein. MCD Feature In the Multiple-scale Continuous and Discontinuous (MCD) descriptor [52], the sequence is split into 10 local regions, which describe multiple overlapping continuous and discontinuous interaction patterns. Composition (C), Transition (T) and Distribution (D) are calculated in each local region. A detailed description of the MCD algorithm can be found in You's work [52]. The MCD feature is an 882-dimensional vector. NMBAC Feature The Normalized Moreau-Broto Auto Correlation (NMBAC) [49,50] was proposed for extracting sequence features of membrane proteins. A protein sequence (string) can be represented as a discrete numerical sequence via six physicochemical properties of Amino Acids (AA): Hydrophobicity (H), Net Charge Index of Side Chains (NCISC), Solvent-Accessible Surface Area (SASA), Volumes of Side Chains of amino acids (VSC), Polarity (P1) and Polarizability (P2). The six physicochemical properties of amino acids are listed in Table 5. To extract the feature of a protein X of length L, the NMBAC feature is calculated by the following equation: where i denotes the position in the sequence, i = 1, 2, ..., L − lag; j is the type of physicochemical property, j = 1, 2, ..., 6; lag ∈ [1, lg] is the gap between amino acids; and lg is the maximum-distance parameter. Multiple Kernel Learning The RBF is employed to construct 4 types of kernels from the above features (MCD, NMBAC, PSSM-AB and PsePSSM): where γ is the Gaussian kernel bandwidth, N is the number of samples, and x_i and x_j are the feature vectors of samples i and j. The 4 types of feature can thus be represented as a set of kernels. The MKL algorithm combines multi-view features from different sources. Some kernels may be biased in the learning process, and MKL can reduce the bias of such kernels by assigning them low weights. The optimal kernel K*_train is obtained as follows: where H denotes the number of basic kernels. The MKL algorithm [54] estimates the optimal weights of the kernels by minimizing the distance between the ideal kernel K_ideal and the optimal kernel K*_train. Here K_ideal = y_train y_train^T ∈ R^{N×N} encodes the information of the label space, and y_train ∈ R^{N×1} contains the labels of the training set. We require the optimal kernel K*_train to be close to the kernel K_ideal: where ||X||_F^2 = Trace(XX^T), λ is a regularization parameter, and ω = [ω_1, ω_2, ..., ω_H]^T contains the weights of the kernels. Fuzzy Kernel Ridge Regression Kernel ridge regression is a method from statistics that implements a form of Regularized Least Squares (RLS). Given training samples (x_i, y_i), i = 1, 2, ..., N, where N, x_i and y_i are the number of samples, the feature vector and the label, respectively, RLS aims to minimize the following function: where K_train ∈ R^{N×N} is the training kernel and C is a non-negative regularization parameter.
The solution of KRR is: In this paper, we present a Fuzzy Kernel Ridge Regression (FKRR) model for classification. We need to minimize the sum of errors, ||K_train α − y_train||^2, where the contribution of sample x_i to the decision boundary should be proportional to its fuzzy membership value. The objective function is as follows: where D ∈ R^{N×N} is a diagonal matrix whose element D_ii (0 ≤ D_ii ≤ 1) represents the fuzzy membership value of sample x_i. Setting ∂J/∂α = 0, the solution for α can be obtained as follows: where I ∈ R^{N×N} is the identity matrix. The decision function is then: where y_test ∈ R^{M×1} contains the predicted labels, K_test ∈ R^{M×N} denotes the kernel between the testing and training samples, and M is the number of testing samples. To compute the fuzzy membership values of the training samples, we employ the optimal training kernel K*_train as follows: where score_t denotes the score of training point t. If a sample t has a larger score, it may make a greater contribution to the model. We normalize the scores into fuzzy membership values between 0 and 1, for t = 1, 2, ..., N. Conclusions FKRR-MVSF achieves better results on the independent dataset (MCC: 0.676). Eliminating noise points can improve the predictive performance of the model. In the future, we aim to use other fuzzy membership functions to build fuzzy models for filtering noise points. As pointed out in PseAAC-based methods [13,33,39,40,[55][56][57][58][59][60], we will establish a web server for our model. The related code and datasets can be downloaded from: https://figshare.com/s/e80f1a96b7b7bbf8062b. Acknowledgments: The authors would like to thank all the guest editors and anonymous reviewers for their constructive advice. Conflicts of Interest: The authors declare no conflict of interest.
2,739.8
2019-08-26T00:00:00.000
[ "Computer Science", "Biology" ]
Broken R-Parity in the Sky and at the LHC Supersymmetric extensions of the Standard Model with small R-parity and lepton number violating couplings are naturally consistent with primordial nucleosynthesis, thermal leptogenesis and gravitino dark matter. We consider supergravity models with universal boundary conditions at the grand unification scale, and scalar tau-lepton or bino-like neutralino as next-to-lightest superparticle (NLSP). Recent Fermi-LAT data on the isotropic diffuse gamma-ray flux yield a lower bound on the gravitino lifetime. Comparing two-body gravitino and neutralino decays we find a lower bound on a neutralino NLSP decay length, $c \tau_{\chi^0_1} \gsim 30 cm$. Together with gravitino and neutralino masses one obtains a microscopic determination of the Planck mass. For a stau-NLSP there exists no model-independent lower bound on the decay length. Here the strongest bound comes from the requirement that the cosmological baryon asymmetry is not washed out, which yields $c \tau_{\tilde\tau_1} \gsim 4 mm$. However, without fine-tuning of parameters, one finds much larger decay lengths. For typical masses, $m_{3/2} \sim 100 GeV$ and $m_{NLSP} \sim 150 GeV$, the discovery of a photon line with an intensity close to the Fermi-LAT limit would imply a decay length $c\tau_{NLSP}$ of several hundred meters, which can be measured at the LHC. Introduction Locally supersymmetric extensions of the Standard Model predict the existence of the gravitino, the gauge fermion of supergravity [1]. For some patterns of supersymmetry breaking, the gravitino is the lightest superparticle (LSP), and therefore a natural dark matter candidate [2]. Heavy unstable gravitinos may cause the 'gravitino problem' [3][4][5] for large reheating temperatures in the early universe. This is the case for thermal leptogenesis [6], where gravitino dark matter has become an attractive alternative [7] to the standard WIMP scenario [8]. Recently, it has been shown that models with small R-parity and lepton number breaking naturally yield a consistent cosmology incorporating primordial nucleosynthesis, leptogenesis and gravitino dark matter [9]. The gravitino is no longer stable, but its decays into Standard Model (SM) particles are doubly suppressed by the Planck mass and the small R-parity breaking parameter. Hence, its lifetime can exceed the age of the Universe by many orders of magnitude, and the gravitino remains a viable dark matter candidate [10]. Gravitino decays lead to characteristic signatures in high-energy cosmic rays, in particular to a diffuse gamma-ray flux [9][10][11][12][13][14][15][16][17]. The recent search of the Fermi-LAT collaboration for monochromatic photon lines [18] and the measurement of the diffuse gamma-ray flux up to photon energies of 100 GeV [19] severely constrain possible signals from decaying dark matter. In this paper we study the implications of this data for the decays of the next-to-lightest superparticle (NLSP) at the LHC, extending the estimates in [9]. We shall restrict our analysis to the simplest class of supergravity models with universal boundary conditions at the Grand Unification (GUT) scale, which lead to neutralino or τ -NLSP. Electroweak precision tests, thermal leptogenesis and gravitino dark matter together allow gravitino and NLSP masses in the range m 3/2 = 10 . . . 500 GeV and m NLSP = 100 . . . 500 GeV [20]. Following [9], the breaking of R-parity is tied to the breaking of lepton number, which leads to a model with bilinear R-parity breaking [21,22]. 
The soft supersymmetry breaking terms are characteristic for gravity or gaugino mediation. In order to establish the connection between the gamma-ray flux from gravitino decays and NLSP decays, one needs R-parity breaking matrix elements of neutral, charged and supercurrents. For the considered supergravity models we are able to obtain these matrix elements to good approximation analytically. This makes our results for the NLSP decay lengths rather transparent. As we shall see, the lower bound on the neutralino decay length is a direct consequence of the Fermi-LAT constraints on decaying dark matter. On the other hand, the lower bound on the τ -decay length is determined by the cosmological bounds on R-parity breaking couplings, which follow from the requirement that the baryon asymmetry is not washed out [24,25]. This paper is organized as follows. In Section 2 we discuss the general Lagrangian for R-parity breaking in a basis of scalar SU (2) doublets where all bilinear mixing terms vanish. This leads to new Yukawa and gaugino couplings, some of which are proportional to the up-quark Yukawa couplings. Section 3 deals with the various supersymmetry, R-parity and lepton number breaking terms in the Lagrangian and the relations among them due to a U(1) flavour symmetry of the considered model. The needed R-parity breaking matrix elements of neutral, charged and supercurrent are analytically calculated in Section 4, based on the diagonalization of the mass matrices which is discussed in detail in the appendix. The main results of the paper, the bounds on the NLSP decay lengths and the partial decay widths, are described in Section 5, followed by our conclusions in Section 6. Bilinear R-Parity Breaking Supersymmetric extensions of the Standard Model with bilinear R-parity breaking contain mass mixing terms between lepton and Higgs fields in the superpotential 1 , as well as the scalar potential induced by supersymmetry breaking, These mixing terms, together with the R-parity conserving superpotential the scalar mass terms where higher order terms in the R-parity breaking parameters have been neglected. It is convenient to discuss the predictions of the model in a basis of SU(2) doublets where the mass mixings µ i , B i and m 2 id in Eqs. (1) and (2) are traded for R-parity breaking Yukawa coulings. This can easily be achieved by field redefinitions. First one rotates the superfields H d and l i , Then the bilinear term (1) vanishes for the new fields, i.e., µ ′ i = 0, and one obtains instead the cubic R-parity violating terms where The new R-parity breaking mass mixings are given by The corrections for R-parity conserving mass terms are negligable. In a second step one can perform a non-supersymmetric rotation among all scalar SU(2) doublets, where ε is the usual SU(2) matrix, ε = iτ 2 . Choosing the H uli andl † H d mixing terms vanish in the new basis of doublets. According to (6) also the scalar lepton VEVs ν i vanish in this basis. It is straightforward to work out the R-parity violating Yukawa couplings which are induced by the rotation (11). We are particularly interested in the terms containing one light superparticle, i.e, a scalar lepton, bino or wino. The corresponding couplings read, after dropping prime and double-prime superscripts on all fields 3 , where the Yukawa couplings are given by Since the field transformations are non-supersymmetric, the couplings λ ijk andλ ijk are no longer equal as in Eq. (9). 
Furthermore, a new coupling of right-handed up-quarks, λ ′ ijk , has been generated. After electroweak symmetry breaking one obtains new mass mixings between higgsinos, gauginos and leptons, where we have defined Given the Yukawa couplings h u ij , h d ij and h e ij , the Lagrangian (14) predicts 108 Rparity breaking Yukawa couplings in terms of 9 independent parameters which may be chosen as These parameters determine lepton-gaugino mass mixings, lepton-slepton and quarkslepton Yukawa couplings, and therefore the low-energy phenomenology. The values of these parameters depend on the pattern of supersymmetry breaking and the flavour structure of the supersymmetric standard model. 3 Our notation for gauge fields, field strengths and left-handed gauginos reads: B µ , B µν , b etc. 3 Spontaneous R-parity breaking Let us now compute the parameters ǫ i , ǫ ′ i and ǫ ′′ i in a specific example where the spontaneous breaking of R-parity is related to the spontaneous breaking of B-L, the difference of baryon and lepton number [9]. We consider a supersymmetric extension of the standard model with symmetry group In addition to three quark lepton generations and the Higgs fields H u and H d the model contains three right-handed neutrinos ν c i , two non-Abelian singlets N c and N, which transform as ν c and its complex conjugate, respectively, and three gauge singlets X, Φ and Z. The part of the superpotential responsible for neutrino masses has the usual form where M P = 2.4 × 10 18 GeV is the Planck mass. The expectation value of H u generates Dirac neutrino masses, whereas the expectation value of the singlet Higgs field N generates the Majorana mass matrix of the right-handed neutrinos ν c i . The superpotential responsible for B-L breaking is chosen as where unknown Yukawa couplings have been set equal to one. Φ plays the role of a spectator field, which will finally be replaced by its expectation value, Similarly, Z is a spectator field which breaks supersymmetry and U(1) R , Z = F Z θθ. The superpotential in Eqs. (22) and (23) is the most general one consistent with the R-charges listed in Table 1, up to nonrenormalizable terms which are irrelevant for our discussion. The expectation value of Φ leads to the breaking of B − L, where the first equality is a consequence of the U(1) B−L D-term. This generates a Majorana mass matrix M for the right-handed neutrinos with three large eigenvalues Integrating out the heavy Majorana neutrinos one obtains the familiar dimension-5 seesaw operator which yields the light neutrino masses. Since the field Φ carries R-charge −1, the VEV Φ breaks R-parity, which is conserved by the VEV Z . Thus, the breaking of B − L is tied to the breaking of R-parity, which is then transmitted to the low-energy degrees of freedom via higher-dimensional operators in the superpotential and the Kähler potential. Bilinear R-parity breaking, as discussed in the previous section, is obtained from a correction to the Kähler potential, Replacing the spectator fields Z and Φ, as well as N c and N by their expectation values, one obtains the correction to the superpotential where m 3/2 = F Z /( √ 3M P ) is the gravitino mass. Note that Θ can be increased or decreased by including appropriate Yukawa couplings in Eqs. (22) and (23). The corresponding corrections to the scalar potential are given by The corresponding R-parity conserving terms are generated by [26] K which yields Higher dimensional operators yield further R-parity violating couplings between scalars and fermions. 
However, the cubic couplings allowed by the symmetries of our model are suppressed by one power of M P compared to ordinary Yukawa couplings and cubic soft supersymmetry breaking terms. Note that the coefficients of the nonrenormalizable operators are free parameters, which are only fixed in specific models of supersymmetry breaking. In particular, one may have µ 2 , m 2 i > m 2 3/2 and hence a gravitino LSP. All parameters are defined at the GUT scale and have to be evolved to the electroweak scale by the renormalization group equations. The phenomenological viability of the model depends on the size of R-parity breaking mass mixings and therefore on the scale v B−L of R-parity breaking as well as the parameters a i . . . c ′ i in Eq. (25). Any model of flavour physics, which predicts Yukawa couplings, will generically also predict the parameters a i . . . c ′ i . As a typical example, we use a model [27] for quark and lepton mass hierarchies based on a Froggatt-Nielsen U(1) flavour symmetry, which is consistent with thermal leptogenesis and all contraints from flavour changing processes [28]. The mass hierarchy is generated by the expectation value of a singlet field φ with charge Q φ = −1 via nonrenormalizable interactions with a scale Λ = φ /η > Λ GU T , η ≃ 0.06. The η-dependence of Yukawa couplings and bilinear mixing terms for multiplets ψ i with charges Q i is given by The charges Q i for quarks, leptons, Higgs fields and singlets are listed in Table 2. The neutrino mass scale m ν ≃ 0.01 eV implies for the heaviest right-handed neutrinos M 2 ∼ M 3 ∼ 10 12 GeV. The corresponding scales for B − L breaking and R-parity breaking are For the small R-parity breaking considered in this paper the neutrino masses are dominated by the conventional seesaw contribution [9]. The R-parity breaking parameters µ i , B i and m 2 id strongly depend on the mechanism of supersymmetry breaking. In the example considered in this section all mass parameters are O(m 3/2 ), which corresponds to gravity or gaugino mediation. From Eqs. (26), (27) and (31) one reads off ψ i 10 3 10 2 10 1 5 * . Correspondingly, one obtains for ǫ-parameters (cf. (12), (13)) with a, b, c = O(1). Our phenomenological analysis in Section 5.2 will be based on this parametrization of bilinear R-parity breaking. Depending on the mechanism of supersymmetry breaking, the R-parity breaking soft terms may vanish at the GUT scale [22], Non-zero values of these parameters at the electroweak scale are then induced by radiative corrections. The renormalization group equations for the bilinear R-parity breaking mass terms read (cf. [22], t = ln Λ): In bilinear R-parity breaking, the R-parity violating Yukawa couplings vanish at the GUT scale. One-loop radiative corrections then yield for the soft terms at the electroweak scale (cf. Eqs. (36),(37); ǫ i = µ i /µ): This illustrates that the bilinear R-parity breaking terms µ 2 i , B i and m 2 id are not necessarily of the same order of magnitude at the electroweak scale. Neutral, charged and supercurrents In Section 2 we have discussed the R-parity breaking Yukawa couplings in our model. For a phenomenological analysis we also need the couplings of the gauge fields, i.e., photon, W-bosons and gravitino, to charged and neutral matter, 9 The corresponding currents read The gravitino and the gauginos are now Majorana fermions, where the superscript c denotes charge conjugation. In Eqs. 
(41) -(44) we have only listed contributions to the currents which will be relevant in our phenomenological analysis. The R-parity breaking described in the previous section leads to mass mixings between the neutralinos b, w 3 , h 0 u , h 0 d with the neutrinos ν i , and the charginos w + , h + u , w − , h − d with the charged leptons e c i , e i , respectively. The 7 × 7 neutralino mass matrix reads in the gauge eigenbasis where we have neglected neutrino masses. Correspondingly, the 5 × 5 chargino mass matrix which connects the states (w − , h − d , e i ) and (w + , h + u , e c i ) is given by Note that all gaugino and higgsino mixings with neutrinos and charged leptons are parametrized by the three parameters ζ i . In the following section we shall need the couplings of gravitino, W -and Z-bosons to neutralino and chargino mass eigenstates. Since ζ i ≪ 1, diagonalization of the mass matrices to first order in ζ i is obviously sufficient. We shall also consider supergravity models where the supersymmetry breaking parameters satisfy the inequalities (cf. Fig. 1) The gaugino-higgsino mixings are O(m Z /µ), and therefore suppressed, and χ 0 1 , the lightest neutralino, is bino-like. The mass matrices M N and M C are diagonalized by unitary and bi-unitary transformations, respectively, where These unitary transformations relate the neutral and charged gauge eigenstates to the mass eigenstates (χ 0 a , ν ′ i ) (a = 1, . . . , 4) and (χ − α , e ′ i ), (χ + α , e ′c i ) (α = 1, 2), respectively. Inserting these transformations in Eqs. (42) -(44) and dropping prime superscripts, one obtains neutral, charged and supercurrents in the mass eigenstate basis: where we have defined the photino matrix elements In the appendix the unitary transformations between gauge and mass eigenstates and the resulting matrix elements of neutral and charged currents are given to next-to-leading order in m Z /µ. As we shall see, that expansion converges remarkably well. In the next section we shall need the couplings of the lightest neutralino χ 0 1 to charged leptons and neutrinos, and the coupling of the gravitino to photon and neutrino. From the formulae in appendix A one easily obtains 4 (s 2β = 2s β c β ) Note that the charged and neutral current couplings agree up to the isospin factor at leading order in m 2 1i LO /2. The mass of the lightest neutralino is given by We have numerically checked that varying M 1 between 120 and 500 GeV, the relative corrections in Eqs. (54) -(57) are less than 10%. Fermi-LAT and the LHC We are now ready to evaluate the implications of recent Fermi-LAT data [18,19] and cosmological constraints [24,25] for signatures of decaying dark matter at the LHC. We shall first discuss monochromatic gamma-rays produced by gravitino decays and then analyze the implications for a neutralino and a τ -NLSP, respectively. In order to keep our analysis transparent we shall not study the most general parameter space of softly broken supersymmetry, but only consider two typical boundary conditions for the supersymmetry breaking parameters of the MSSM at the grand unification scale, (A) m 0 = m 1/2 , a 0 = 0, tan β = 10 , which yields the right-handed stau as NLSP. In both cases, the trilinear scalar coupling a 0 is put to zero for simplicity. 
Choosing tan β = 10 as a representative value of the Higgs vacuum expectation values, only the gaugino mass parameter m 1/2 remains as independent variable; the mass parameters µ and B are determined by requiring radiative electroweak symmetry breaking with the chosen ratio tan β. For both boundary conditions (58) and (59), the gaugino masses at the electroweak scale satisfy the familiar relations For the chosen supergravity models, consistency with electroweak precision tests, gravitino dark matter (GDM) and thermal leptogenesis leads to the following allowed mass ranges of gravitino and lightest neutralino [20], where we have used m χ 0 1 ≃ M 1 (cf. (57)). Note that the masses M 1 and m 3/2 cannot be chosen independently. The GDM constraint implies that for a given gravitino mass the maximal bino mass is M max 1 ≃ 270 GeV(m 3/2 /100 GeV) 1/2 [20]. Inserting the matrix element (56) one obtains the gravitino lifetime where α is the electromagnetic fine-structure constant, and we have defined The corrections to the leading order expression in (63) are less than 10%. Using Eq. (60) and M P = 2.4 × 10 18 GeV, one obtains (65) Recent Fermi-LAT data yield for dark matter decaying into 2 photons the lower bound on the lifetime τ DM (γγ) > ∼ 1 × 10 29 s, which holds for photon energies in the range 30 GeV < E γ < 200 GeV [18]. For gravitino decays into photon and neutrino this implies Since according to the GDM constraint the largest allowed bino mass scales like M max 1 ∝ m 1/2 3/2 , the largest lifetime (65), and therefore the most conservative bound on ζ, is obtained for the smallest value of m 3/2 . For small gravitino masses, a rough lower bound on the lifetime can be obtained from the isotropic diffuse gamma-ray flux. The recent Fermi-LAT data give E 2 dJ/dE| 5 GeV ≃ 3 × 10 −7 GeV (cm 2 s str) −1 [19]. From the analysis in [12] one then obtains τ 3/2 > ∼ 10 28 s. 6 Together with Eq. (65) one then obtains the approximate upper bound on the R-parity breaking parameter On the other hand, the observation of a photon line corresponding to a gravitino lifetime close to the present bound would determine the parameter ζ as 7 ζ obs = 10 −9 5 × 10 28 s τ 3/2 (γν) Note the strong dependence of ζ obs on the gravitino mass. In (68) we have normalized these masses to central values suggested by thermal leptogenesis, electroweak precision tests and gravitino dark matter [20]. Neutralino NLSP A neutralino NLSP heavier than 100 GeV dominantly decays into charged lepton and W-boson or neutrino and Z-boson [30] (cf. Fig. 3). The partial decay widths are given by Here V (χ,e) 1i LO and V (χ,ν) 1i LO are the charged and neutral current matrix elements at leading order, which are given in Eqs. (55) and (54), respectively, and is a phase space factor which becomes important for neutralino masses close to the lower bound for m χ 0 1 of 100 GeV (cf. Fig. 4). The total neutralino NLSP width is the sum Using the matrix elements (54) and (55), one obtains the branching ratios Furthermore, the flavour structure of our model implies Using the matrix elements (54), (55) and (56) for neutral, charged and supercurrent, respectively, one can express the neutralino lifetime directly in terms of the gravitino lifetime, With the mass relations (57) and (60) one then obtains for the minimal neutralino decay length In Eqs. (75) and (76) the corrections to the leading order expressions are less than 10%. We emphasize again the strong dependence of this lower bound on the neutralino and gravitino masses. 
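How such lifetime limits translate into limits on ζ can be sketched numerically. The snippet below is a rough, hedged estimate: it uses only the quadratic dependence of the decay rate on ζ, the cubic dependence on m_3/2 expected for a gravitino two-body decay, and the reference point quoted above (ζ ≈ 10⁻⁹ corresponding to τ_3/2 ≈ 5 × 10²⁸ s for m_3/2 ≈ 100 GeV); it does not reproduce the full matrix elements of the paper.

```python
# Hedged numerical sketch: translate a lower bound on the gravitino lifetime
# tau_{3/2}(gamma nu) into an upper bound on the R-parity breaking parameter zeta.
# Assumptions (not taken from the paper's full expressions):
#   * Gamma(psi_{3/2} -> gamma nu) scales as zeta^2 * m_{3/2}^3 / M_P^2,
#     so tau ~ 1 / (zeta^2 m_{3/2}^3).
#   * Normalization fixed by the reference values quoted in the text:
#     zeta ~ 1e-9 corresponds to tau ~ 5e28 s for m_{3/2} ~ 100 GeV.

ZETA_REF = 1e-9          # reference R-parity breaking parameter
TAU_REF = 5e28           # s, reference gravitino lifetime at ZETA_REF
M32_REF = 100.0          # GeV, reference gravitino mass

def gravitino_lifetime(zeta: float, m32_gev: float) -> float:
    """Gravitino lifetime in seconds under the scaling assumptions above."""
    return TAU_REF * (ZETA_REF / zeta) ** 2 * (M32_REF / m32_gev) ** 3

def zeta_upper_bound(tau_lower_bound_s: float, m32_gev: float) -> float:
    """Largest zeta compatible with a given lower bound on the lifetime."""
    return ZETA_REF * (TAU_REF / tau_lower_bound_s) ** 0.5 * (M32_REF / m32_gev) ** 1.5

if __name__ == "__main__":
    for m32 in (60.0, 100.0, 400.0):   # GeV, within the mass range quoted in the text
        bound = zeta_upper_bound(5e28, m32)
        print(f"m_3/2 = {m32:5.0f} GeV  ->  zeta < ~{bound:.1e}")
```

For m_3/2 = 100 GeV the sketch reproduces an upper bound of order 10⁻⁹; under these scaling assumptions heavier gravitinos give a stronger bound, consistent with the statement that the most conservative bound on ζ comes from the smallest gravitino mass.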
For instance, for a gravitino mass of 100 GeV and the Fermi-LAT bound τ 3/2 > ∼ 5 × 10 28 s, which applies for gravitino masses in the range 60 GeV < m 3/2 < 400 GeV, one obtains cτ χ 0 1 ≃ 2 km for a neutralino mass of 150 GeV. It is very interesting that such neutralino lifetimes are detectable at the LHC [31]. We conclude that, given the current bounds on the gravitino lifetime, a neutralino NLSP may still decay into gauge boson and lepton inside the detector, yielding a spectacular signature. However, for most of the parameter space a neutralino NLSP decays outside the detector, leading to events indistinguishable from ordinary neutralino dark matter. τ -Lepton NLSP Contrary to the neutralino NLSP decay, the R-parity violating decays of a τ 1 -NLSP strongly depend on the flavour structure and the supersymmetry breaking parameters. The relative strength of the various decay modes becomes most transparent in the field basis where all bilinear R-parity breaking terms vanish, as discussed in Section 2. Since the R-parity breaking Yukawa couplings are proportional to the ordinary Yukawa couplings, decays into fermions of the second and third generation dominate. The leading partial decay widths of left-and right-handed τ -leptons are (cf. (14)) In the flavour model discussed in Section 3, the order of magnitude of the various decay widths is determined by the power of the hierarchy parameter η (η 2 ≃ 1/300), The lightest mass eigenstate τ 1 is a linear combination of τ L and τ R , From the above equations one obtains the τ 1 -decay width The total width is dominated by the contributions τ R → τ L ν, µ L ν and τ L →t R b L , respectively, and it can be directly expressed in terms of the τ -lepton and top-quark masses, where we have assumed This corresponds to the parameter choice a = b = c = 1 in Eq. (34). Note that τ 1decay width and branching ratios have a considerable uncertainty since these parameters depend on the unspecified mechanism of supersymmetry breaking. From Eqs. (18), (26) and η ≃ 0.06, one obtains for the R-parity breaking parameter which is consistent with the present upper bound (67) within the theoretical uncertainties. The dependence of the mixing angle θ τ on m τ 1 is shown in Fig. 5 for the boundary condition (59). For masses below the top-bottom threshold only leptonic τ 1 -decays are possible. When the decay into top-bottom pairs becomes kinematically allowed, sin 2 θ τ is small. However, the suppression by a small mixing angle is compensated by the larger Yukawa coupling compared to the leptonic decay mode. This is a direct consequence of the couplingsλ ′ which were not taken into account in previous analyses. Due to the competition between mixing angle suppression and hierarchical Yukawa couplings, the top-bottom threshold is clearly visible in the τ 1 -decay length as well as Branching Ratio¯t the branching ratios into leptons and heavy quarks. This is illustrated in Figs. 6 and 7, respectively, where these observables are plotted as functions of m τ 1 . Representative values of the τ 1 -decay lengths below and above the top-bottom threshold are (90) Choosing for ǫ the representative value (68) from gravitino decay, ǫ = ζ obs = 10 −9 , one obtains cτ τ 1 = 4 km(1 km) for m τ 1 = 150 GeV(250 GeV). It is remarkable that such lifetimes can be measured at the LHC [31,32]. Is it possible to avoid the severe constraint from gravitino decays on the τ 1 -decay length? 
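The strong sensitivity of the NLSP decay length to the size of R-parity breaking can be made explicit with a small numerical sketch. Since every R-parity violating partial width scales quadratically with the mixing parameters, cτ scales as 1/ε². The snippet below is illustrative only: the normalization is taken from the representative value quoted above (cτ ≈ 4 km at ε = 10⁻⁹ for m_τ1 = 150 GeV) and is not recomputed from the partial widths.

```python
# Hedged sketch of the tau_1 NLSP decay length versus the R-parity breaking
# parameter epsilon.  Assumption: all R-parity violating widths scale like
# epsilon^2, so c*tau ~ 1/epsilon^2.  Normalization from the representative
# value quoted in the text (c*tau ~ 4 km at epsilon = 1e-9, m_tau1 = 150 GeV).

CTAU_REF_KM = 4.0       # km, quoted decay length at the reference point
EPS_REF = 1e-9          # reference epsilon

def ctau_km(epsilon: float) -> float:
    """Decay length in km for a given epsilon, at fixed stau mass (150 GeV)."""
    return CTAU_REF_KM * (EPS_REF / epsilon) ** 2

if __name__ == "__main__":
    for eps in (1e-9, 1e-8, 1e-7, 1e-6):
        print(f"epsilon = {eps:.0e}  ->  c*tau ~ {ctau_km(eps) * 1e6:10.3g} mm")
```

At ε = 10⁻⁶ the same scaling gives the millimetre-scale decay length discussed below for the fine-tuned case.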
In principle, both observables are independent, and the unknown constants in the definition of ǫ, ǫ ′ and ǫ ′′ can be adjusted such that ζ = 0. However, this corresponds to a strong fine-tuning, unrelated to an underlying symmetry. To illustrate this, consider the case where the soft R-parity breaking parameters vanish at the GUT scale, B i = m 2 id = 0, which was discussed in Section 3. In bilinear R-parity breaking, also the R-parity violating Yukawa couplings vanish at the GUT scale. With the one-loop radiative corrections at the electroweak scale (cf. (39); ǫ i = µ i /µ), and M 1,2 ∼ µ, one reads off from Eqs. (10), (12) and (13) Hence, all R-parity breaking parameters are naturally of the same order, unless the finetuning also includes radiative corrections between the GUT scale and the electroweak scale. Even if one accepts the fine-tuning ζ = 0, one still has to satisfy the cosmological bounds on R-parity violating couplings, which yield ǫ i = µ i /µ < ∼ 10 −6 [25]. In the flavour model discussed in Section 3 this corresponds to the choice a = 20 in Eq. (33). For the smaller τ 1 -mass, which is preferred by electroweak precision tests, one then obtains the lower bound on the decay length However, let us emphasize again that current constraints from Fermi-LAT on the diffuse gamma-ray spectrum indicate decay lengths several orders of magnitude larger. 20 Planck Mass Measurement It has been pointed out in [9] that, in principle, one can determine the Planck mass from decay properties of a τ -NLSP together with the observation of a photon line in the diffuse gamma-ray flux, which is produced by gravitino decays. This is similar to the proposed microscopic determination of the Planck mass based on decays of very long lived τ -NLSP's in the case of a stable gravitino [33]. From our analysis of NLSP decays in this section it is clear that neutralino NLSP decays are particularly well suited for a measurement of the Planck mass, which does not require any additional assumptions. Eq. (75) implies (G F = √ 2/(4v 2 )), As expected, for gravitino and neutralino masses of the same order of magnitude, the ratio of the two-body lifetimes is determined by the ratio of the electroweak scale and the Planck mass, Quantitatively, using the relation (60) for the gaugino masses, one finally obtains (v = 174 GeV), M P =3.6 × 10 18 GeV m 3/2 m χ 0 1 3/2 τ 3/2 (γν) It is remarkable that the observation of a photon line in the diffuse gamma-ray flux, together with a measurement of the neutralino lifetime at the LHC, can provide a microscopic determination of the Planck mass. Summary and conclusions We have studied a supersymmetric extension of the Standard Model with small R-parity breaking related to spontaneous B − L breaking, which is consistent with primordial nucleosynthesis, thermal leptogenesis and gravitino dark matter. We have considered supergravity models with universal boundary conditions at the GUT scale, which lead to scalar tau or bino-like neutralino as NLSP. Supersymmetry breaking terms have been introduced by means of higher-dimensional operators. The size of the soft terms corresponds to gravity or gaugino mediation. We have analyzed our model, which represents a special case of bilinear R-parity breaking, in a basis of scalar SU(2) doublets, where all bilinear terms vanish. In this basis one has R-parity violating Yukawa and gaugino couplings. 
They are given in terms of ordinary Yukawa couplings and nine R-parity breaking parameters ǫ_i, ǫ′_i and ǫ″_i, i = 1, ..., 3, which are constrained by the flavour symmetry of the model. The R-parity violating couplings include terms proportional to the up-quark Yukawa couplings, which were not taken into account in previous analyses. The main goal of this paper is the quantitative connection between gravitino decays and NLSP decays, and the corresponding implications of recent Fermi-LAT data on the isotropic diffuse gamma-ray flux for superparticle decays at the LHC. To establish this connection one needs the relevant R-parity breaking matrix elements of neutral, charged and supercurrents. For the considered supergravity models these matrix elements can be obtained analytically to good approximation, since the diagonalization of the neutralino-neutrino and chargino-lepton mass matrices in powers of m_Z/µ converges well, as demonstrated in the appendix. The analytic expressions for the decay rates make the implications of the Fermi-LAT data for NLSP decays very transparent. Our main quantitative results are the branching ratios for NLSP decays and the lower bounds on their decay lengths. For a neutralino NLSP with m_χ01 = 150 GeV, the Fermi-LAT data yield the lower bound cτ_χ01 ≳ 30 cm. This bound does not depend on details of the superparticle mass spectrum or the flavour structure of the model; it follows directly from the comparison of two-body gravitino and neutralino decays. On the contrary, there exists no model-independent lower bound on the τ_1-decay length. The natural relation between gravitino and τ-decay widths can be avoided by fine-tuning. In this case the cosmological constraint that the baryon asymmetry is not washed out leads to the lower bound cτ_τ1 ≳ 4 mm. Without fine-tuning of parameters, the diffuse gamma-ray flux produced by gravitino decays constrains the lifetime of a neutralino as well as a τ-NLSP. For typical masses, m_3/2 ∼ 100 GeV and m_NLSP ∼ 150 GeV, the discovery of a photon line with an intensity close to the present Fermi-LAT limit would imply a decay length cτ_NLSP of several hundred meters. This is a definite prediction of a class of supergravity models. It is very interesting that such lifetimes can be measured at the LHC [31,32]. Finally, it is intriguing that the observation of a photon line in the diffuse gamma-ray flux, together with a measurement of the neutralino lifetime at the LHC, can yield a microscopic determination of the Planck mass, a crucial test of local supersymmetry. A.1 Mass matrix diagonalization The mass matrices M_N and M_C in the gauge eigenbasis were explicitly given in Eqs. (46) and (47), respectively. For non-vanishing R-parity breaking parameters ζ_i, i = 1, ..., 3, they induce a mixing between gauginos, higgsinos and leptons.
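To illustrate why an expansion to first order in ζ_i suffices for this diagonalization, consider the following schematic seesaw-type block structure. This is a generic illustration with real parameters and simplified notation, not the paper's exact matrices, which are those of Eqs. (46) and (47).

```latex
% Schematic illustration only: a generic Majorana mass matrix with a small
% R-parity breaking block m = O(zeta_i) m_Z, all parameters taken real.
\[
  \mathcal{M} \;=\;
  \begin{pmatrix} M_\chi & m \\[2pt] m^{T} & 0 \end{pmatrix},
  \qquad m = \mathcal{O}(\zeta_i)\, m_Z .
\]
% A rotation W with a small off-diagonal block Theta removes the mixing:
\[
  W \;\simeq\;
  \begin{pmatrix} \mathbf{1} & \Theta \\[2pt] -\Theta^{T} & \mathbf{1} \end{pmatrix},
  \qquad
  \Theta \;\simeq\; -\,M_\chi^{-1}\, m \;=\; \mathcal{O}(\zeta_i),
\]
% so that W^T M W is block diagonal up to corrections of order zeta_i^2: all
% neutralino-neutrino (and, analogously, chargino-lepton) mixings are linear
% in zeta_i, which is why a first-order diagonalization is sufficient.
```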
7,302.4
2010-07-28T00:00:00.000
[ "Physics" ]
Securing a Telecom Services Using Quantum Cryptographic Mechanisms The architectural model of Internet telephony is rather different than that of the traditional telephone network. The base assumption is that all signaling and media flow over an IP-based network, either the public Internet or various intranets. IP-based networks, present the appearance at the network level that any machine can communicate directly with any other, unless the network specifically restricts them from doing so, through such means as firewalls. This architectural change necessitates a dramatic transformation in the architectural assumptions of traditional telephone networks. In particular, whereas in a traditional network a large amount of administrative control, such as call-volume limitation, implicitly resides at every switch, and thus additional controls can easily be added there without much architectural change, in an Internet environment an administrative point of control must be explicitly engineered into a network, as in a firewall; otherwise end systems can simply bypass any device which attempts to restrict their behavior. In addition, the Internet model transforms the locations at which many services are performed. In general, end systems are assumed to be much more intelligent than in the traditional telephone model; thus, many services which traditionally had to reside within the network can be moved out to the edges, without requiring any explicit support for them within the networks. Other services can be performed by widely separated specialized servers which result in call setup information traversing paths which might be extremely indirect when compared with the physical network’s actual topology. Introduction The architectural model of Internet telephony is rather different than that of the traditional telephone network.The base assumption is that all signaling and media flow over an IP-based network, either the public Internet or various intranets.IP-based networks, present the appearance at the network level that any machine can communicate directly with any other, unless the network specifically restricts them from doing so, through such means as firewalls.This architectural change necessitates a dramatic transformation in the architectural assumptions of traditional telephone networks.In particular, whereas in a traditional network a large amount of administrative control, such as call-volume limitation, implicitly resides at every switch, and thus additional controls can easily be added there without much architectural change, in an Internet environment an administrative point of control must be explicitly engineered into a network, as in a firewall; otherwise end systems can simply bypass any device which attempts to restrict their behavior.In addition, the Internet model transforms the locations at which many services are performed.In general, end systems are assumed to be much more intelligent than in the traditional telephone model; thus, many services which traditionally had to reside within the network can be moved out to the edges, without requiring any explicit support for them within the networks.Other services can be performed by widely separated specialized servers which result in call setup information traversing paths which might be extremely indirect when compared with the physical network's actual topology. 
Most of the services and service features of ITU-T Q.1211 can be provided by the IETF's draft signaling standards for Internet Telephony, SIP (the Session Initiation Protocol) (Schulzrinne, 2001) and, for some more specialized features, its Call Control extensions (Johnston, 2002). However, The growing globalization and the liberalization of the market telecommunications, necessitates a more global infrastructure of IN that satisfies the needs of different subscribed legal implied, especially for multinational services subscribed.A lot of these services are offered on a current system, but are often realized with specialists of the system.The concepts of IN for giving such service is a coherent and stable basis (Schulzrinne, 2001).Some security functions have been introduced already in current systems, but they define constraints to user groups with the private line means, the proprietor equipment and proprietor algorithms or secrets. There is a major difference between the realization of today service (limited) and the goals of the introduction of IN, however, the IN offer its services to a public world, open, and especially to offer the services that allow user groups to communicate by the public transmission and to change the equipment without unloading their intimacy and affluence of employment.Therefore, the services of security provides for the current network will not be sufficient for IN, the goal is more long.The defying of the aspect of the new security functions has to make then be publicly usable, economically feasible and insured all its at the same time. Interconnecting SIP and telecom service application The IN (Intelligent Network) functional architecture identifies several entities such as the service switching function (SSF), the service control functions (SCF), and the call control function (CCF) that model the behavior of the IN network.The CCF represents the normal activities of call and the connection processing in traditional switch, for two clients.The SSF model's additional functionality required for interacting with a service logic program executed by the SCF.The CCF and the SSF are generally co-located in specific switching centers, known as SSP, while the SCFs are hosted on dedicated computers known as SCP.The communication between the different nodes uses the INAP protocol. Accessing IN services from the IMS network requires that one or more entities in these networks can play the role of an SSP and communicate with existing SCPs via the INAP protocol. To realize services with SIP it is important that the SIP entity be able to provide features normally provided by the traditional switch, including operating as a SSP for IN features.The SIP entity should also maintain call state and trigger queries to IN-based services, just as traditional switches do. The most expeditious manner for providing existing IN services in the IP domain is to use the deployed IN infrastructure as much as possible. The creation of a service by SIP is possible by the three methods INVITE, Bye and options and fields of header SIP Contact, Also, Call-Disposition Replace and Requested-by that are extensions of SIP specified by IETF (Fing, 2001).Some services are already specified by IETF (Gisin, Ribordy, 2001) by using methods and fields of already quoted header: Call, Hold, Transfer of call, Return of call, Third party control. 
The key to programming telecom services with SIP is to add logic that guides behavior at each of the elements in the system (Johnston, 2002).In a SIP proxy server, this logic would dictate where the requests are proxies to, how the results are processed, and how the packet should be formatted. The basic model for providing logic for SIP services is shown in Figure 1.The figure shows a SIP server that has been augmented with service logic, which is a program that is responsible for creating the services.When requests and responses arrive, the server passes information up to the service logic.The service logic makes some decisions based on this information, and other information it gathers from different resources, and passes instructions back to the server.The server then executes these instructions (Johnston, 2002).The separation of the service logic from the server is certainly not new.This idea is inspired by the Intelligent Network (IN) concept (Chapron, 2001).This separation enables rapid development of new services; the idea also exists on the web. The mechanisms used in these environments can provide valuable insight on how a solution for SIP, IN integration.SIP does not operate the model of call IN directly to access to services IN, the trick is then to bypass the machine of states of the entity SIP with the layer IN such that the acceptance of the call and the routing is executed by the native states and service machine are accessed to layers IN with the model of call IN (El Ouahidi, 2000).The model of service programming with SIP consists therefore in add on SIP server an IN layer, that manages the interconnection with the IN called SIN (SIP Intelligent Network) (Gurbani, 2002).This operation necessitates the definition of a correspondence between the model of call IN and the model of call SIP, it is to tell a correspondence between the state machine of the SIP protocol (SIP defines the header Record_Route that allows to order SIP server to function in mode with the states until the liberation of the call) and the state machine of IN.A call will be processed by the two machines, the state machine SIP processes the initiation of a call and the final reply deliverance, and the IN layer communicate with the intelligent point SCP to provide services during processing of the call (Gurbani, 2002).The figure 2 In the basic system of SIP, the SIP proxy with this control of call " intelligent " is defined to interconnect with intelligent network, this intelligence is realized by the use of control calls, that can synchronize with the model of call IN (BCSM).According to RFC 2543 (SIP) one can define calls of the state machine of the SIP client and SIP server [7,8] The SIP "Calling" protocol state has enough functionality to absorb the seven PICs.From server of SIP proxy point of view, its initial state could be corresponded to the O_NULL PIC of the O_BCSM.Its processing state could be corresponded to the Auth_Ori_att, Collect_Info, Analyse_info, Select_ Route, Auth_Call_Setup, Senf_Call and O_Alerting PICs.Its success, confirmed and complete states could be corresponded to the O_Active PIC. Figure 4 below provides a visual mapping from the SIP protocol state machine to the terminating half of the IN Call model. 
When the terminating SIP server receives the INVITE message, it creates the T_BCSM object and initializes it to the T_Null PIC; this operation is realized by the "Proceeding" state, which covers five PICs of the T_BCSM, among them T_Null, Auth_Ter_Att and Select_Facility. Its calling state could correspond to the Present_Call PIC, its call-processing state to T_Alerting, and its complete state to the T_Active PIC. The service-level call flows apply to Voice over IP communication, where interconnection between the PSTN and IP-based networks is necessary to complete a call.
Threats in the intelligent network (telecom service) Since all elements of the network can be geographically distributed and the IP-based system is completely open, several security threats can arise and attack these different elements. The list considered here is not complete. In practice one may nevertheless be confronted with other security problems that are regarded as outside the scope of this application (for example, problems linked to the security policy, the security of the management system, the security of the implementation, operational security, or the handling of security incidents). As software technology keeps evolving at a pace that is hard to estimate, the tools for connecting to the system and the passive and active attacks become equally formidable. If the system and its components are not sufficiently secured, many threats can occur. However, it is always necessary to consider: What is the probability of a threat (occurrence; likelihood)? What are the potential damages (impact)? What are the costs to prevent a threat? Depending on these assumptions, security mechanisms of appropriate cost and efficiency have to be implemented. The potential of a threat depends on the implementation of the IN and on the specific IN service. It also depends on the security mechanisms that are implemented (e.g., PIN, strong authentication, placement of authentication, key management, etc.). Manufacturers as well as standardization groups carry out risk analyses in order to improve the security of IN systems. Many improvements have already been realized concerning the security of access to the SCP and SMP, but new services and new architectural concepts necessitate further improvements. The next figure, figure 5, presents the locations of the different threats. Intelligent networks are distributed by nature. This distribution is realized not only at the upper level, where services are described as a collection of service features (SF), but also at lower levels, where the functional entities (FE) can be spread over different physical entities (PE). As the communications between the various elements of the network take place over an open and untrusted environment, security mechanisms have to be applied. A system administrator can prevent these security threats by using the various mechanisms. Authentication has to be applied in order to prevent unauthorized users from gaining access to the service distribution.
Secure the telecom service with SIP SIP Communication is susceptible to confront several types of attack.The simplest attacks in the SIP permit to a hacker of earning the information on users' identities, services, media and topology of the distribution.This information can be employed to excute others types of attacks.The modification of attack occurs when an assailant intercept the effort and covers the signal and modify the SIP message in the order to change a certain characteristic of a service.For example, this sort of attack can be employed for diverting the flow of signaling by forcing a particular itinerary or for changing a recording of the user or modifying a profile of service (Profos, 2002).The sort of attack depends on the type of security (or insecurity) employed (the type of authentication, etc.).These attacks can also be employed for the denial of service. The main two mechanisms of security employed with SIP: authentication and the encryption data.The authentication of data is employed to authenticate the sender message and insure that certain sensitive information of the message was not modified in the transit. It has to prevent an assailant to modify and/or listen in SIP requests and reply.SIP employs Proxy -Authentication, Proxy -Authorization, authorization, and WWW -Authentication of areas of the letterhead, similar to these of HTTP, for the authentication of the terminal system by the means of a numerical signature.Rather, proxy-by-proxy authentication can be executed by using the authentication protocols over Internet such that, the transport layer TLS or SSL and the network layer IPSEC. The cryptography of data is employed to, ensure the confidentiality in SIP communication, allowing only the legal receiver client to decrypt and read data.It is usual of using algorithms of cryptography such that the DES (DES: Data Encryption Standard) and Advanced AES (AES: Advanced Encryption Standard).SIP endorses two forms of cryptography: end-to-end and hop-by-hop.The end-to-end encryption provides confidentiality for all information (some letterhead and the body of SIP message) that needs to be read by intermediate servers or proxy.The end-to-end encryption is executed by the mechanism S/MIME.On the contrary, the hop-by-hop encryption of whole SIP message can be employed in the order to protect the information that would have to be accessed by intermediate entities, such that letterhead From, To and Via.The security of such information prevents malevolent users to determine that calls, or to access to the information of the itinerary.The hop-by-hop encryption can be executed by external security mechanisms to SIP (IPSEC or TLS). 
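To make the header-based challenge–response scheme concrete, the sketch below computes a digest response of the kind SIP reuses from HTTP, shown in its simplest form without the qop extension; the user name, realm, password and nonce are invented example values, not taken from the chapter.

```python
# Minimal sketch of a SIP digest-authentication response (HTTP Digest as reused
# by SIP, simplest form without "qop").  All credential values below are
# invented example data.
import hashlib

def md5_hex(data: str) -> str:
    return hashlib.md5(data.encode("utf-8")).hexdigest()

def digest_response(username: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """response = MD5( MD5(user:realm:pass) : nonce : MD5(method:uri) )"""
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

if __name__ == "__main__":
    # Example: a UAC answering a 401/407 challenge for an INVITE.
    resp = digest_response(
        username="alice", realm="example.com", password="secret",
        method="INVITE", uri="sip:bob@example.com",
        nonce="84a4cc6f3082121f32b42a2187831a9e",
    )
    print(f'Authorization: Digest username="alice", realm="example.com", '
          f'nonce="84a4cc6f3082121f32b42a2187831a9e", '
          f'uri="sip:bob@example.com", response="{resp}"')
```

The UAC would place this response in the Authorization (or Proxy-Authorization) header field when it retries the request after a 401 or 407 challenge, as described in the authentication section below.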
Before closing this poll of security mechanisms in SIP, it is necessary to consider the efforts of standardization processing to improve the security mechanisms for SIP.The most important matter here is the problem of the agreement on the chosen security mechanism between two SIP entities (user agents and/or proxy) that want to communicate by applying a level of «sufficient» security.For this reason, it is very important to define how a SIP entity can select an appropriate mechanism for communicating with a next proxy entity.One of the proposals for an agreement security mechanism that allows two agents of exchanging their clean security aptitudes of preferences in the order to select and apply a common mechanism is, when a client initiated the procedure, the SIP agent include in the first request sent to the neighbor proxy entity the list of its mechanisms of security sustained.The other element replies with a list of its clean security mechanisms and its parameters.The client selects then the common security mechanism preferred and use this chosen mechanism (ex, TLS), and contact the proxy by using the new mechanism of security (Jennings, 2002). With this technique, another problem should be handled.It is the problem with the assertion and the validation of the identity of the user by SIP server.The SIP protocol allows a user to assert its identity by several manners (ex, in the letterhead); but the information of identity requested by the user is not verified in the fundamental SIP operation.On the other hand, an IP client of telephony could have required and insure the identity of a user in order to provide a specific service and/or to condition the type of service to the identity of the user himself.The model of SIP authentication could be a way of obtaining such identity; however, the user agents have not always the necessary information on key to authenticate with all the other agents.A model is proposed in (Profods, 2002) for «confirmed identity» which is based on the concept of a «confirmed area».The idea is, that when a user authenticates its clean identity with a proxy, the proxy can share this authenticated identity (the confirmed identity) with all the other proxies in the «confirmed area».A confirmed area is a totality of proxies that have a mutual configuration of a security association.Such association of security represents a confidence between proxies.When a proxy in a «confirmed area» authenticates the identity of the author of a message, it adds a new letterhead to the message containing the confirmed identity of the user.Such identity can be employed by all other proxies belonging to the «confirmed area». Using this mechanism the client UAC, is capable to identify himself to a proxy UAS, to an intermediate proxy or to a registration proxy.Therefore, the SIP authentication is applied only to the communications end-to-end or end-to-proxy; the authentication proxy-by-proxy would have to count on others mechanisms as IPsec or TLS. Fig. 6. 
Authentication SIP The procedure of authentication is executed when the UAS, the proxy intermediate, or the necessary recording proxy for the call of the UAC has to be authenticated before accepting the call, or the recording.In the beginning the UAS sends a request of SIP message «text» (ex, INVITE).In the reception of this message, the UAS, proxy, or recording proxy decides that the authentication is necessary and sent to the client a specific SIP error message of the request of authentication.This error message represents a challenge.In the particular case, where the message of error is 401 (Unauthorized) is sent by UAS and recording, while when the message of error is 407 (Proxy Authentication Required) is sent by proxy server.The UAC receives the message of error, calculates the reply, and includes it in a new message of the SIP request.The next figure 6 shows the sequence of message for the case of request of authentication by the proxy server. The UAC sends a message ACK immediately after that the message of error is received. Elements necessitates security in telecom service Manufacturers as well as the groups of standardization make the work of the analysis of risks in the order to improve the security of systems IN.Although a lot of improvements have already been realized, concerning the security of access to SCP and SMP.New services and new architectural concepts necessitate supplementary improvements.The next figure, figure 7, presents places of the different threats. Intelligent networks are distributed by nature.This distribution is realized not only to the superior level where the services are described as a collection of service feature (SF), but to too low levels, where the functional entities (FE) can be propagated on different physical entities (PE).Like the communications of various elements of the network are realized on the open and uncertain environment, the security mechanisms have to be applied. A responsible of the system can prevent these threats of security by using the various mechanisms.The authentication has to be applied in the order to prevent users not authorized to earn the access to the distribution of services. The cryptography technique that allows detecting a spy and realized the security is the technique of quantum cryptography There are mainly two types of quantum key distribution (QKD) schemes.One is the prepareandmea sure scheme, such as BB84, in which Alice sends each qubit in one of four states of two complementary bases; B92, in which Alice sends each qubit in one of two non-orthogonal states; six-state, in which Alice sends each qubit in one of six states of three complementary bases.The other is the entanglement based QKD, such as E91, in which entangled pairs of qubits are distributed to Alice and Bob, who then extract key bits by measuring their qubits; BBM92, where each party measures half of the EPR pair in one of two complementary bases. BB84 algorithm BB84 is a quantum key distribution scheme developed by C. Bennett and G. Brassard in 1984.It is the first quantum cryptography protocol.The protocol is provably secure, relying on the quantum property that information gain is only possible at the expense of disturbing the signal if the two states we are trying to distinguish are not orthogonal.It is usually explained as a method of securely communicating a private key from one party to another for use in one-time pad encryption.It has the following the phases. Step 1. 
Alice sends a random stream of bits consisting of 1's and 0's using a random selection of rectilinear and diagonal scheme over the quantum channel.On the other side Bob has to measure these photons.He randomly uses one of the two schemes (+ or X) to read the qubits.If he has used the right scheme he notes down the correct bits or he ends up noting down the wrong bits.This is because one cannot measure the polarization of a photon in 2 different 'basis' (rectilinear and diagonal in our case) simultaneously.For example, suppose Alice sends | using '+' scheme to represent 1 and Bob uses '+' scheme and notes down the bit as 1.For the second bit Alice sends \ using the 'X' scheme to represent 1 but Bob uses an incorrect scheme + and miss interprets it as either | or -noting it down correctly as 1 or 0 incorrectly. Step 2. Alice talks to Bob over a regular phone line and tells the polarization schemes (and not the actual polarization) she had used for each qubit.Bob then tells Alice which of the schemes he had guessed it right.This gives the correct bits noted down by Bob.Based on this they discard all the bits Bob had noted down guessing the wrong scheme. Step 3. The only way for Alice and Bob to check errors would be to read out the whole sequence of bits on an ordinary telephone line.This wouldn't be a wise idea.So Alice randomly picks some (say 100) binary digits out of the total number of bits that were measured using the correct scheme and checks just these.The probability of Eve being online and not affecting Bob's measurements of 100 bits is infinitesimally small.So if they find any discrepancy among the 100 digits they will detect Eve's presence and will discard the whole sequence and start over again.If not then they will just discard only those 100 digits discussed over phone and use the remaining 1000 digits to be used as one time pad (Johnston, 2002) One advantage of quantum cryptography is the ability to detect an eavesdropper since Eve can never read the values with out altering them.For example, if Eve reads \ polarization using + scheme then he will be altering the polarization of the photon to either -or , resulting in binary values 0 or 1 respectively.Eve might be able to get the actual (binary) Here the bits 1, 4, 6, 7, 8 are selected by Alice and Bob since both of them use the same detection scheme.But when they randomly check bits 1, 7 and 8 they find that the values are different.Through this they can detect the presence of the eavesdropper. BB92 algorithm In 1992, Charles Bennett proposed what is essentially a simplified version of BB84 in his paper, "Quantum cryptography using any two non-orthogonal states".The key difference in B92 is that only two states are necessary rather than the possible 4 polarization states in BB84.As 0 can be encoded as 0 degrees in the rectilinear basis and 1 can be encoded by 45 degrees in the diagonal basis.Like the BB84, Alice transmits to Bob a string of photons encoded with randomly chosen bits but this time the bits Alice chooses dictates which bases she must use.Bob still randomly chooses a basis by which to measure but if he chooses the wrong basis, he will not measure anything; a condition in quantum mechanics which is known as an erasure.Bob can simply tell Alice after each bit she sends whether or not he measured it correctly. 
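The sifting and eavesdropper-detection logic described in the steps above can be illustrated with a short toy simulation. It assumes an idealized single-photon source and no channel noise; an intercept-resend Eve measures each photon in a random basis and resends it, so roughly a quarter of the compared sifted bits disagree when she is present.

```python
# Toy BB84 simulation: random bits and bases for Alice, random measurement bases
# for Bob, optional intercept-resend eavesdropper.  Idealized: single photons,
# no channel noise, so any disagreement on compared sifted bits signals Eve.
import random

def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
    """Return the measured bit: exact if bases match, random otherwise."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84(n_photons: int, eve_present: bool, n_check: int = 100) -> None:
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.randint(0, 1) for _ in range(n_photons)]  # 0: '+', 1: 'x'
    bob_bases   = [random.randint(0, 1) for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve_present:                       # intercept-resend attack
            e_basis = random.randint(0, 1)
            bit = measure(bit, a_basis, e_basis)
            a_basis = e_basis                 # photon is re-sent in Eve's basis
        bob_bits.append(measure(bit, a_basis, b_basis))

    # Sifting: keep only positions where Alice's and Bob's bases coincide.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]

    # Publicly compare a random sample of the sifted bits.
    sample = random.sample(sifted, min(n_check, len(sifted)))
    errors = sum(1 for a, b in sample if a != b)
    print(f"Eve present: {eve_present}, sifted bits: {len(sifted)}, "
          f"errors in {len(sample)} checked bits: {errors}")

if __name__ == "__main__":
    random.seed(1)
    bb84(4000, eve_present=False)   # expect 0 errors
    bb84(4000, eve_present=True)    # expect ~25% errors -> Eve detected
```

Without Eve the sampled bits agree; with Eve the error rate of about 25% in the publicly compared sample is exactly the discrepancy Alice and Bob look for in Step 3.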
This security feature results from the use of a single or two photons.We know from quantum properties of photons that the measurement of spin of one gives the spin of the other.So if a given pulse sent from Alice is formed by entangled photons then Eve can split these photons using a beam splitter in such a way that he can direct one photon to his polarization detector and another to Bob's.But if only single photons are generated then in presence of a beam splitter the photons have to choose between Eve's and Bob's polarization detectors.This increases the error rate which the BB84 protocol will detect. Thus, it is the singleness of the photon that guarantees security (Mohamed, 2007).The figure 9 represents a probability to spy detected vs photon number: The BB84 protocol and its variants are the only known provably secure QKD protocols. Other QKD protocols (BB92), although promising, have yet to be proven secure.The benefits of QKD are that it can generate and distribute provably secure keys over unsecured channels and that potential eavesdropping can be detected (Mohamed, 2007).QKD can defeat the current computationally complex key exchange methods.Because QKD generates random strings for shared secrets, attaining a QKD system and reverse engineering its theory of operation would yield no mechanism to defeat QKD. Implementation a quantum cryptography in SIP protocol over IMS network The IMS has been originally conceived for mobile systems, but with the addition of TISPAN works in the 7 version 7, the PSTN are equally supported. The IMS is a structured of the architecture of Next Generation Network (NGN) that allows the progressive introduction of voice application and multimedia in mobile and permanent network. The IMS is used with all types of systems (permanent, mobile or wireless), including a switching functions, as GPRS, UMTS, CDMA 2000, WLAN, WiMAX, DSL.An open interface between control and service layers allows mixing calls/session in a different access networks.The IMS used a cellular technology to provide an access in every position, and Internet technology to provide services. The principle of the IMS consists on the one hand in clearly separate the transport layer of the service layer and on the other hand to use the transport layer for control and signaling functions so as to insure the quality of service wished for the desired application.IMS aims, to make the network a sort of middleware layer between applications and the access.Applications are been SIP, or not SIP, they pass by a Gateway before the connection to the controller of sessions. Despite considerable advantages of the IMS, several limit render the IMS all alone is incapable to be beneficial for the creation and the exploitation of services by operators to the near users, it is in order that, one uses IN service into SIP protocol. In my Application, I'm proposing a several IMS networks nodes for mobile application (figure10).The signaling into the different elements is assured with SIP over MAP (Mobile Application Protocol). 
Conclusion Mobile networks have adopted IN technology to provide users with new services that could previously be obtained only in the fixed system, and to improve mobility management. With the SIP protocol over IP, connecting the mobile world to IP and IN becomes possible, allowing new services to be defined and advanced security techniques developed for IP networks to be exploited, especially for the IMS network. SIP authentication is the only technique proposed by SIP itself for the security of the terminal and the subscriber. With a quantum cryptography mechanism integrated into SIP authentication, the security between the IMS client and the P-CSCF or I-CSCF is assured.
Fig. 1. Model for Programming SIP Services. The figure illustrating the SIP-IN integration shows that, similarly to the IN state machine, one defines calling-side and called-side entities (O-SIP and T-SIP), which correspond, respectively, to the O-BCSM and T-BCSM of the IN call model.
Fig. 10. IMS network nodes. After this, a quantum cryptographic key distribution based on the BB84 algorithm (Bernhard, 2003), named qc, is carried in the SIP body.
IN interconnection is the process of executing a request for an IN service on behalf of a service user across at least two different autonomous domains. Typically, each domain represents a separate IN system with different legal entities and IN resource entities. The IN service can only be executed if the two different domains cooperate by exchanging management, control and service data on the basis of a legal contract (co-operation contract). In this cooperation, each domain applies its own mechanisms to provide the integrity, availability and privacy of service users. If the two cooperating operators distribute a service, they have to apply the same set of mechanisms to exchange data between their SCPs and SDPs.
The main security problems that arise in the area of multimedia communication and telephony over IP, as well as in an IN system over IP, are the following:
- Impersonation attack: unauthorized users can try to access IN services. For example, in the case of an IMR service, an unregistered user can try to view video services.
- Masquerading attack: a registered user can try to bypass the security policy and illegitimately obtain access to sensitive services. For example, a user with common access privileges can try to act as an administrator of the IN service.
- Denial of Service attack: an adversary can try to prevent users from accessing IN services, for example by sending a great number of requests to the system simultaneously.
- Eavesdropping and tampering: an adversary can try to spy on and/or modify the communication between a legitimate user and IN service elements.
- Lack of responsibility: if the IN is not capable of verifying the communication between users and its service elements, it will not be possible to hold these users responsible for their actions.
In the BB84 example above, Eve might obtain the actual binary value of 1 but end up altering the polarization seen by Bob; if Bob then uses the X scheme to measure the value, he might get either \, which is what Alice sent, or /, which is an incorrect measurement. The corresponding detection equation is true if a spy is detected.
7,126
2012-03-14T00:00:00.000
[ "Computer Science", "Physics" ]
Design and Implementation of Highly Secure Residents Management System Using Blockchain This article presents the design and implementation of highly secure and reliable database system for resident records management system using blockchain technology. Blockchain provides highly secure and reliable data access environment. In blockchain, several data fragments are packed into one block and all blocks are connected to form the chain of blocks. In our prototype, each event of resident such as birth, moving, employment and so on, is assigned to data fragment and certain amount of data fragment, says 20 fragments are packed into block. We also developed the web application interface to avoid installing any applications in users’ PC or smartphone. Prototype development proved the possibility to use the blockchain technology to large amount of data management system with highly secure and reliable features. Introduction This article presents the design and implementation of highly secure and reliable residents' data management system (RDMS) using blockchain technology. Data management can be defined as the organizing and maintaining data to meet ongoing information lifecycle needs [1]. The importance on data management began with the electronics era of data processing. Data management methods have roots in accounting, statistics, logistical planning and other disciplines before the electronics era. Currently, many countries use electronic data management systems to make people's services more effectively and efficiently. One of the most important points of RDMS is its reliability and security. Resident data usually include highly confidential personal information such as birth date, postal address, occupation, income and phone number. Therefore, RDMS must be secure, reliable. In order to implement highly secure and reliable RDMS, several equipment and software such as firewall, encryption must be installed. Furthermore, data transmission lines must be protected from illegal access. However, these equipment and software require large cost and operation of these equipment and software also require many highly skilled engineers. Some countries in the world, especially under-developing countries such as African countries, are still facing the difficulty of data management problem. Such countries do not have enough communication infrastructures such as highspeed data communication lines. Poor communication infrastructure disturbs high-level data exchange system. Furthermore, data management systems must be highly secure, reliable and easy to use. There are already some technologies to implement such highly secure and reliable system. However, many of these technologies require rich IT resources and infrastructure to implement them. Some novel technologies to implement effective data management system under poor communication infrastructure require new idea for realization. To meet such requirements, the use of blockchain [2] and web application [3] can be one possible solution. Behind blockchain there are several technologies such as cryptography technique and hashing which are used to protect the information. They enable us to implement our final research goal by combining other technology. Blockchain is one of the most promising technologies for our purpose. Web application technology is also essential for under-developing countries. Such countries usually do not have sufficient wired communication networks such as telephone line or high-speed data communication line. 
This is partly because such infrastructures require huge amount of budget. However recent advancement of wireless communication network dramatically improves the communication infrastructure of such under-developing countries. By using such wireless communication infrastructure such as cell-phone network, people can access websites and get information. Based on these considerations, we started implementing the highly secure and reliable data management system by combining the blockchain and web application technologies. Many people still misunderstand what kind of technology the blockchain is and how does it work. It enables to implement the secure and reliable control and protection of data on the network systems. If there is intrusions to data, the hacker can access all data and do whatever he wants with it. It is important to have and save secure and reliable data. By coupling these two technologies: blockchain technology and java web framework, we can get a reliable and secure data for a resident data management system. Our system will help to avoid the loss of traceability and falsification of any data. This technology can record the history of all the transaction at any moment by using the chains of data. One of the most famous applications of blockchain is the application to cryptocurrency. Bitcoin is the most famous example of cryptocurrency which uses the blockchain technology. And there are already some applications of blockchain technology other than cryptocurrency. For example, applications in the banking ledger [4] [5]. Trading is another promising application field, [6] Identity management is also very impressive application field [7]. Energy and Commodity sector provides new application field of blockchain [8]. Other interesting application of blockchain technology is port logistics or international trading management [9]. These activities include a lot of documents exchanges between several stakeholders. In order to guarantee the correctness and trust, blockchain technology is used. However, there is no work about the management of residents records by blockchain. Therefore, we start designing and implementing the prototype of residents records management system. We will have the following structure of this paper: In Chapter 2, we will describe the overview of blockchain technology. Chapter 3 explains the system design. Chapter 4 is used to describe the implementation of blockchain framework named Multichain. In Chapter 5, we will mention about the implementation of sample user interface on web browser. Chapter 6 includes the conclusion and future works. Overview of Blockchain Technology In our system, blockchain technology plays an essential role. Therefore, in this section, we will briefly explain the basic concept of blockchain. Definition of Blockchain The term blockchain is still far from clear to define now. Blockchain itself most likely traces back to Satoshi Nakamoto's original Bitcoin white paper in 2008 [10]. In this paper, Nakamoto described some technology components underlying the cryptocurrency as series of data blocks that are cryptographically chained together. Blockchains are now widely known in cryptocurrency particularly such as Bitcoin. However, there is a big difference between Bitcoin and blockchain. Blockchain is not a Bitcoin. Blockchain is an innovative data management method. Blockchain is the technology used to implement the Bitcoin and other cryptocurrencies. 
Blockchain was used firstly to design for solving a double spending problem, in other words to establish consensus in a decentralized network. But cryptocurrency is only one of blockchain applications. Blockchain is an authentic distributed database ledger, all of which records the history of the activities along the network among the connected devices or users, which are usually called as nodes. Each node stores the all information on the network with simple secure private keys. The fundamental concept of blockchain is relatively simple; it is a linked list-based database. Figure 1 illustrates the basic concept of blockchain. There are several blocks in one blockchain. Each block contains some information such as hash value of previous block, data to be stored in the current block and specific value named nonce. Among them, only nonce is controllable, which holds the specific condition of the hash value of each block. Other two are unchangeable. Hash value of each block is computed using all data in the block. As hash value of each block is computed using all stored data in the block, once some part of data is modified, hash function will generate completely difference hash value and therefore it is very easy to find the falsification of data. In order to generate each block, the user must select appropriate value of nonce in each block and the adjustment of nonce requires huge amount of computing power. Therefore, virtually falsification is impossible. This is the main reason why blockchain provides highly secure and reliable data management system. And blockchain has an attractive function which are very different from the traditional databases. Every node in the network store all blockchain. Therefore, even some accidents such as power failure or cracker's attack destroy the database of one node in the network, all data will be recovered by using the data stored in the other nodes. This is why blockchain has high robust nature. Figure 1 is a conceptual illustration of blockchain database. How Blockchain Works When a user wants to store data in the blockchain, user firstly packs several data fragments into one data package. For example, in case of Bitcoin, each transaction such as transferring money from one user to another is a data fragment. Certain amount of basic data fragments, say 20 fragments, will be packed into one package. After the finish of constructing one package, block construction will start. Each block contains several data such as data package, nonce and previous block hash value. Each block must be connected to existing blockchain. To connect new block to exiting blockchain, hash value of each block must satisfy the specific condition. To satisfy the condition, user must compute specific nonce value. For example, if hash value is represented by 256 bits (32 bytes), hash value of first 64 bits (8 bytes) must be 0. To get such hash value, users will set appropriate nonce value. However, as hash function generates virtually random value, user must repeat computing hash value by assigning specific value to nonce for many times. It requires a huge amount of computing time and power. Once a specific value of nonce which satisfies the condition of hash value is found, newly created block will be connected to existing blockchain. The reason why blockchain has highly secure and reliability is the amount of computing time. As mentioned above, to connect one block to blockchain, user must compute appropriate nonce value. 
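This nonce search, and the tamper evidence it produces, can be illustrated with a minimal sketch. It is illustrative only: this is not Multichain's internal block format, and the difficulty condition here requires a fixed number of leading zero hexadecimal digits rather than the 64-bit example mentioned above.

```python
# Minimal hash-chain sketch: each block stores the previous block's hash, some
# data and a nonce chosen so the block hash starts with DIFFICULTY zero hex
# digits.  Changing data in any block breaks the chain from that point on.
# Illustrative only - this is not the Multichain block format.
import hashlib
import json

DIFFICULTY = 4  # number of leading zero hex digits required in a block hash

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash: str, data: list) -> dict:
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "data": data, "nonce": nonce}
        if block_hash(block).startswith("0" * DIFFICULTY):
            return block
        nonce += 1  # repeat until the hash condition is met

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        h = block_hash(block)
        if not h.startswith("0" * DIFFICULTY):
            return False
        if i + 1 < len(chain) and chain[i + 1]["prev_hash"] != h:
            return False
    return True

if __name__ == "__main__":
    chain = [mine_block("0" * 64, ["genesis"])]
    for fragments in (["birth: resident 001"], ["address change: resident 001"]):
        chain.append(mine_block(block_hash(chain[-1]), fragments))
    print("chain valid:", verify(chain))                   # True
    chain[1]["data"] = ["forged record"]                   # falsify block k = 1
    print("chain valid after tampering:", verify(chain))   # False
```

Changing the data of any block invalidates its own hash condition and the prev_hash link of every later block, which is exactly the re-mining cost discussed next.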
For example, let n be the length of blockchain (the number of blocks in the blockchain), and the attacker modified the data in k-th block. When data in k-th block is falsified by someone, hash value of k-th block will change drastically. Therefore, the connection of blocks will be broken. If malicious attacker tries to connect modified block to blockchain, he/she has to find new nonce value which satisfies the condition of hash value. The change of hash value of k-th block affects following all ( ) 1 n k − + blocks; attacker must compute ( ) 1 n k − + nonce value again to connect all blocks in the blockchain and this re-computation require quite huge amount of computation. Therefore, falsification of blockchain is virtually impossible. This is why the blockchain system provides highly secure and reliable data. Types of Blockchain There are mainly three types of blockchains: public blockchains, private blockchains and consortium blockchains: 1) Public Blockchain: This type of blockchain is opened to public and anyone can participate as a node in the network. The users may or may not be rewarded for their participation. They are not owned by anyone and are publicly open for anyone to participate in. The best example of public blockchain is Bitcoin. Whenever a user transfers a transaction, it is reflected on everyone's copy of the block. Bitcoin uses a public blockchain. 2) Private Blockchain: As the name implies, private blockchain is private and is opened only to a consortium or group of individuals or organizations that has decided to share the ledger among themselves. Only the owner makes any changes to it because he has a right. Federated blockchains are faster (high scalability) and provide more transaction privacy. 3) Consortium Blockchain: Consortium blockchain is basically a hybrid of public and private blockchains. There are federated blockchains which are operated under the leadership of a group. In a group, there are two or more administrative nodes. Opposite to public blockchains, no one without the approval by administrative nodes is allowed to participate in the process of verifying transactions. For example, in the banking sector, this kind of blockchains are mostly used. Because of the nature of our target system (resident record management system), public blockchain is not suitable. We will adopt private or consortium blockchain. Overall Structure In this section we will describe the basic structure of our protype system. Our prototype system uses a blockchain framework named MultiChain [11], Java web application framework PrimeFace [12] and relational database management system MySQL [13]. MySQL is used to store all data of blockchain. Figure 2 shows the conceptual structure of our prototype. As shown in Figure 2, there are several layers. Bottom-most layer is hard disk or other external storage device. This device stores all data. User will access the stored data by using database management system (DBMS). In our prototype, we use relational DBMS (RDBMS). Blockchain functions are implemented by using blockchain framework. Our prototype system uses MultiChain, which is classified as consortium blockchain. Top-most layer is a framework for web application named PrimeFaces. User interface will be implemented by PrimeFaces framework as a web application. Therefore, no specific application is installed on user's PC/smartphone. Mapping Residents Record into Blockchain Our prototype deals with each resident record as a data fragment of blockchain mentioned in section 2.2. 
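A small sketch of this mapping is given below. The field names and record layout are hypothetical, since the prototype's actual schema is not shown; only the ideas of one fragment per life event, packages of 20 fragments per block, and append-only records (a death adds a flag rather than deleting anything, as described in the next paragraph) are taken from the text.

```python
# Hedged sketch of mapping resident events to blockchain data fragments.
# Each life event (birth, moving, employment, death flag, ...) becomes one
# fragment; fragments are packed into packages of FRAGMENTS_PER_BLOCK entries
# before a block is built.  Field names are hypothetical - the prototype's
# actual record schema is not specified in the text.
from dataclasses import dataclass, field
from typing import Dict, List

FRAGMENTS_PER_BLOCK = 20   # number of fragments packed into one block package

@dataclass
class ResidentEvent:
    resident_id: str
    event_type: str          # e.g. "birth", "moving", "employment", "death_flag"
    payload: Dict[str, str]

@dataclass
class Ledger:
    pending: List[ResidentEvent] = field(default_factory=list)
    packages: List[List[ResidentEvent]] = field(default_factory=list)

    def record(self, event: ResidentEvent) -> None:
        # Records are only ever appended; a death is an added flag, not a deletion.
        self.pending.append(event)
        if len(self.pending) == FRAGMENTS_PER_BLOCK:
            self.packages.append(self.pending)   # ready to be mined into a block
            self.pending = []

if __name__ == "__main__":
    ledger = Ledger()
    ledger.record(ResidentEvent("R-0001", "birth", {"date": "1990-01-01"}))
    ledger.record(ResidentEvent("R-0001", "moving", {"new_address": "..."}))
    ledger.record(ResidentEvent("R-0001", "death_flag", {"date": "2070-05-05"}))
    print(len(ledger.pending), "fragments waiting for the next block")
```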
Normally, one resident record is generated when a resident is born. Several updates are then applied to the existing record as events occur, such as a change of address or a marriage. When a resident dies, a death flag is added to the record: in a blockchain no record can be removed, so instead of deletion the death flag is appended. As mentioned in Section 2.2, it is virtually impossible to modify data in the blockchain, so the correctness of each record in the blockchain is guaranteed.
Development of the User Interface
We developed our own web application using the Java PrimeFaces framework. It is an open-source framework for JavaServer Faces (JSF) [14] featuring over 100 components. Using this framework, we can develop web applications easily, and because it is open source, we can adapt it to other components such as the MultiChain blockchain framework. Since we have implemented our prototype system as a web application, no user needs to install a specific application on their own smartphone or PC.
Introduction to the MultiChain Framework
Installing MultiChain is not difficult; the implementor only needs to follow several installation instructions. MultiChain has the following hardware and software requirements:
• OS: Linux 64-bit (Ubuntu 12.04+, CentOS 6.2+, Debian 7+, Fedora 15+, RHEL 6.2+) or Windows 64-bit (Windows 7 or later)
• RAM: minimum 512 MB
• Disk space: minimum 1 GB
Our prototype system runs on CentOS 7.3 with 2 GB of RAM. First, the user downloads the MultiChain package from the website (https://www.multichain.com/download-install); at the time of writing, version 2.0.7 is available. After downloading the archive, the user only needs to expand it into a specific directory. In our prototype system, MultiChain is installed on Linux by executing a few shell commands (a representative sequence is sketched after this section).
Creating and Connecting a Blockchain with MultiChain
After installing the MultiChain platform, the user creates his/her own blockchain with a single create command, and can also create a clone of an existing blockchain if necessary. The user can likewise set any blockchain parameter on the command line using the parameter's name. After these settings are finalized, the user starts the blockchain in the background (as a daemon). The parameters file params.dat is then locked, the blockchain is initialized, and the first (genesis) block is created. To connect to an existing blockchain for the first time, the user needs the node address, which is shown whenever multichaind starts up and can also be obtained from the getinfo API call; the user then connects using that node address. In the case of a private blockchain, the user is not able to connect immediately, because permission has not yet been granted by an administrator. After permission is approved, the user can immediately reconnect to name-of-blockchain using the short form. MultiChain provides full control over permissions at the network level. There are several global permissions that can be granted to addresses: connect, send, receive, issue, create, mine, activate and admin. All permissions are assigned on a per-address basis, where addresses can be either public-key hashes or script hashes. Permissions can be made temporary by limiting them to a specific range of block numbers.
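The concrete commands did not survive in the source. The following sketch, driven from Python for uniformity with the other examples, shows the standard sequence from the MultiChain documentation; the chain name, node address and target address are placeholders, not values from the paper.

import subprocess

# Create a new chain, then start it in the background (creates the genesis block).
subprocess.run(["multichain-util", "create", "residentchain"], check=True)
subprocess.run(["multichaind", "residentchain", "-daemon"], check=True)

# First-time connection from another node, using the node address printed
# at startup (or returned by the getinfo API call):
# subprocess.run(["multichaind", "residentchain@192.168.0.10:7447"], check=True)

# Granting global permissions to an address:
# subprocess.run(["multichain-cli", "residentchain", "grant",
#                 "1ABC...xyz", "connect,send,receive"], check=True)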
All permissions are granted and revoked using network transactions containing special metadata, which are easy to send using the grant and revoke commands of multichain-cli or the JSON-RPC API. The creator of the chain's genesis block initially holds all permissions. Since the purpose of blockchain is to create decentralized databases that are not controlled by any third party, this philosophy is extended to the administration of permissions.
Implementation
Web-Service Framework PrimeFaces
We implemented our prototype system as a web application. To implement the resident-management system as a web application, we combine the MultiChain blockchain, the MySQL database system and the Java web framework PrimeFaces. Data are sent to and received from the user interface through MySQL and the blockchain; the system is thus composed of the web application, the MySQL database and the blockchain. The Java web framework is used to insert, update and delete data in the blockchain, and the MultiChain framework provides a module for sending data to the blockchain and the MySQL database. To protect data from illegal access from outside the network, the data are hashed and encrypted, so they can be sent over the Internet without concern; only another entity that has been approved in the blockchain network can recover the exact information inside the data. The fundamental module of the system is the web application developed with the Java web framework, which we test first. A user needs to be authenticated; after authentication, and according to his or her role, the user can insert, update or delete information. The MultiChain blockchain module is embedded to control the flow of information. The next test covers the communication between the web application and the MySQL database on the one hand, and the MultiChain blockchain on the other. At the start of the web application, authentication is required; we must keep in mind that our purpose is to register births in the national record. Testing block creation, a consensual governance model and the use of multiple keys for extra security in the MultiChain blockchain is an important part of the test work.
General Description of a Java Web Application
A web server is software that processes client requests and sends responses back to the client; Apache is one of the most widely used web servers. A web server is installed on a physical machine and listens for client requests on a specific port. A web client is software that communicates with the server; browsers such as Firefox, Google Chrome and Safari can be considered web clients. When something is requested from the server (through a URL), the web client creates the request, sends it to the server, then parses the server's response and presents it to the user. Web server and client use a common communication protocol, HTTP (HyperText Transfer Protocol), which runs on top of TCP/IP. Java web applications are used to create dynamic websites. A website built only from static HTML pages always shows the user the same content; with a web application, the displayed information becomes dynamic. A web application thus provides a dynamic extension of a web or application server. Web applications are of two types, presentation-oriented and service-oriented:
• Presentation-oriented applications generate interactive web pages containing various types of markup (HTML, XML, etc.) and dynamic content in response to requests.
• Service-oriented applications implement the endpoint of a web service. Presentation-oriented applications are often clients of service-oriented web applications.
Java servlet technology is the foundation of all Java web-application technologies, among them JavaServer Pages, JavaServer Faces and the JavaServer Pages Standard Tag Library. Java web applications are packaged as web archives (WAR files) with a defined structure; one can export a dynamic web project as a WAR file and unzip it to check this hierarchy.
Sample User Interface
When the user first accesses the web application, authentication is required, as shown in Figure 3. After being authenticated, the user can use the application according to his or her access rights. Functions are reached through the main page, which contains the function menu; the selection window is shown in Figure 4. For example, an administrative user can create, update or delete data in the database. Figure 5 and Figure 6 show sample interfaces for updating data in the database. After data registration is finished, end users can obtain their official records by entering their information through the web, using the interface shown in Figure 7. The user can then obtain, for example, a birth certificate for official use; Figure 8 is a sample official birth-certificate document generated by the system.
Conclusions
In this paper, we described the design and implementation of a highly secure and reliable data-management system using blockchain. Blockchain technology makes it easy to implement such a system, and our prototype demonstrated its effectiveness for developing a highly secure and reliable data-management system. Furthermore, we adopted a web application for the user interface. The web-application approach avoids installing any application on users' PCs or smartphones, which contributes to the security of the system. Several tasks remain before deployment. The first is the development of a dedicated smartphone application (app): the web application is preferable from a security point of view, but smartphone displays are too small for entering large amounts of data, so a dedicated app would give many users a better interface. Another task is the improvement of processing speed. Currently, connecting one block to the existing blockchain takes on the order of seconds because of the huge amount of computation needed to find an appropriate nonce. To process more requests, execution must become faster; large-scale parallel processing on GPUs, which provide highly parallel computing, could achieve this.
5,180
2020-09-03T00:00:00.000
[ "Computer Science", "Mathematics" ]
Drilling Operation and Formation Damage Transport of particle suspensions in oil reservoirs is an essential phenomenon in many oil-industry processes. Solid and liquid particles dispersed in the drilling fluid (mud) are trapped by the rock (a porous medium), and permeability decline takes place during drilling-fluid invasion into the reservoir, resulting in formation damage. The formation damage due to mud filtration is explained by erosion of the external filter cake. Nevertheless, stabilization is observed in core floods, which demonstrates internal erosion. A new mathematical model for the detachment of particles is based on the mechanical equilibrium of a particle positioned on the internal cake or matrix surface in the pore space. In the current work, an analytical solution is obtained for mud filtration with one particle-capture mechanism and damage stabilization. The particle torque equilibrium is determined by the dimensionless ratio between the drag and normal forces acting on the particle. The maximum retention function of this dimensionless ratio closes the system of governing equations for colloid transport through porous media.
Introduction
Formation damage is an undesirable operational and economic problem that can occur during the various phases of oil and gas recovery from subsurface reservoirs, including production, drilling, hydraulic fracturing and work-over operations. Formation-damage assessment, control and remediation are among the most important issues to be resolved for efficient exploitation of hydrocarbon reservoirs [1]. Formation-damage indicators include permeability impairment, skin damage and decreased well performance. Flow of suspensions in rocks with particle capture and consequent permeability impairment is an essential phenomenon in many oil-industry processes. Particle capture by the rock and permeability decline take place during drilling-fluid invasion into the reservoir, resulting in formation damage. It also occurs during fines migration, mostly in reservoirs with poorly consolidated sands and heavy oil. Diagnosis of formation damage allows choosing the right damage-removal technology during seawater injection to enhance the recovery factor of hydrocarbon reservoirs. Particle erosion was described by introducing a new particle storage-capacity function, equal to the maximum retained concentration as a function of a dimensionless flow velocity. The particle storage-capacity function is a rheological characteristic that closes the system of governing equations. Deep-bed filtration of fines with capture and permeability damage takes place near production wells and in drilling operations. The particles in the drilling fluid are captured by size exclusion (straining) or by different attachment mechanisms (electric forces, gravity segregation and diffusion). The analytical solution for deep-bed filtration shows that, for the case of a vanishing filtration function, the injectivity stabilizes as time tends to infinity. The classical mathematical model for suspension flow in rocks consists of particle-balance and capture-kinetics equations [2]. It is assumed that the mean particle speed is equal to the carrier-water velocity.
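For reference, a common form of this classical model reads as follows (our notation, not a verbatim quotation of [2]): $c$ and $\sigma$ are the suspended and retained particle concentrations, $\phi$ the porosity, $U$ the Darcy velocity, and $\lambda$ the filtration coefficient.

\frac{\partial}{\partial t}\left(\phi c + \sigma\right) + U\,\frac{\partial c}{\partial x} = 0,
\qquad
\frac{\partial \sigma}{\partial t} = \lambda\, c\, U .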
Internal filtration is the phenomenon describing the capture of suspended particles and droplets of a suspension flowing through a porous medium. Internal filtration of suspended and precipitating solids and the associated formation damage constitute fundamental components of the productivity-decline problem. The phenomenon is complex, and multiple parameters play a role, among them the characteristics of the porous medium: for example, the size distribution of the pore throats, the connectivity of the pore bodies and the surface chemistry of the grains comprising the porous medium. External filter cake is the term used for the particles retained at the inlet face of the porous medium. The retention of particles occurs through different phenomena, including size exclusion at the wellbore (Barkman and Davidson 1972) [3]. An alternative mechanism for the development of an external filter cake is the transition from internal filtration to external filter-cake buildup. An example of external filter-cake buildup by size exclusion is that associated with drilling: the circulating mud is designed to prevent fluid leak-off with minimal internal filtration by forming external filter cakes through size exclusion. Such cakes are easier to remedy and stimulate, whereas internal filtration is more difficult to remedy.
Literature Review
Davison et al. [4] conducted experiments with colloidal silica suspensions flowing through three types of core plugs taken from Berea, Noxie and Cleveland sandstones. Their results show that particles initially pass through the larger openings in the core and are stopped gradually by a combination of sedimentation, direct interception and surface deposition. The plugging of the core appears to be a combination of internal blockage of pores and cake buildup, and they found that the larger particles initiate cake formation. In most cases, the core itself was only partially plugged, but the cake formed at the face of the core restricted the flow of the suspension. Davidson [4] conducted experiments on the flow of particle suspensions through porous media. A major finding of this work was that the velocity required to prevent particle deposition is inversely related to the particle size; it is also the first indication in the literature of a relationship between particle movement through porous media and linear flow velocity. Todd et al. [5] conducted particle-plugging experiments on three different core materials. The results of these studies indicate that significant permeability impairment can be caused by inorganic solids, even in dilute systems. Todd [6] used aluminum-oxide particles in the size ranges 0 to 3, 4 to 6, and 8 to 10 microns and found the following: 1) the overall damage is related to the mean pore-throat size; 2) the cores damaged with 0-3 micron suspensions exhibit damage throughout their entire length; and 3) as particle size increases, the damage is gradually shifted toward the injection end of the core. When suspended particles in a carrier fluid flow through a porous medium, the operative plugging mechanism depends on the characteristics of the particles, the characteristics of the formation and the nature of the interaction between the particles and the various reservoir materials. The particle/pore size ratio is the most important parameter in the filtration process: larger particle/pore size ratios tend to cause rapid, but shallow, damage.
Particle retention and formation-damage accumulation take place according to the classical suspension-filtration theory until the retained concentration reaches the critical value defined by the torque balance; from then on, the particles do not deposit any more.
Particle Mechanical Equilibrium and the Maximum Retention Concentration Function
Particle capture by the rock takes place until the moment of the drag force, acting from the moving water on a particle at the surface of the growing internal cake, exceeds the moment of the attractive normal force. Particle capture and the filling of space in a pore result in a porosity decrease and in an increase of the interstitial fluid velocity in the pore, provided the same injection rate is maintained. Consequently, the drag force acting on a particle at the internal cake surface from the moving fluid increases (Figure 1). Following Jiao and Sharma [7], particle equilibrium on the cake surface is expressed as the equality of the torques of the normal and drag forces:

$F_d\, l_d = F_n\, l_n$.   (1)

Here the normal force $F_n$ is the total of the electric and lifting forces, and $F_d$ follows from the general expression for the drag force acting on a spherical particle in flow between two parallel plates. The lever arms $l_d$ and $l_n$ have the same order of magnitude; their ratio is equal to $3^{0.5}$ for the case of equal spherical particles (Figure 1). Appendix A provides all the above-mentioned forces. The flow velocity and the pressure gradient are linked by the Darcy law for the flow of the fluid (mud filtrate) through the porous medium:

$U = -\dfrac{k}{\mu}\dfrac{\partial p}{\partial x}$.   (2)

Here we introduce the dimensionless parameter $\varepsilon$, the ratio between the drag force and the normal force:

$\varepsilon = \dfrac{F_d}{F_n}$.   (3)

As follows from the torque balance and the velocity dependencies of the drag and lifting forces, for each flow velocity there exists a maximum retained concentration that corresponds to the equilibrium of torques acting on a single particle. The maximum retention concentration thus becomes a function of the particle-dislodging (erosion) number, expressing the relationship between $\sigma_{cr}$ and $\varepsilon$:

$\sigma_{cr} = \sigma_{cr}(\varepsilon)$.   (4)

The higher the velocity $U$, the higher the numerator in expression (3) for the erosion number $\varepsilon$; and the higher the lifting force, the lower the denominator in (3). So the erosion number is a monotonically increasing function of $U$. The dependence (4) is called the storage-capacity function.
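A minimal numerical sketch of this torque-balance check is given below. The force values are illustrative, and the placement of the lever-arm ratio $3^{0.5}$ on the drag side is our reading of the equilibrium condition above, not a transcription from the paper.

import math

def erosion_number(F_drag, F_normal, lever_ratio=math.sqrt(3)):
    # Dimensionless ratio of the drag torque to the attaching (normal) torque.
    return lever_ratio * F_drag / F_normal

def particle_detaches(F_drag, F_normal):
    # Equilibrium fails, and the particle is dislodged, once the drag torque
    # exceeds the normal-force torque (erosion number > 1).
    return erosion_number(F_drag, F_normal) > 1.0

print(particle_detaches(F_drag=9.4e-13, F_normal=2.0e-12))   # illustrative values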
Basic Equations of the Classical Filtration Theory with Retained-Particle Release
The particle-capture rate is given by the classical filtration theory [8], assuming that the retention rate is proportional to the particle flux $cU$:

$\dfrac{\partial \sigma}{\partial t} = \lambda\, c\, U$.   (5)

Here the deposited concentration is significantly lower than the concentration of vacancies in the pore space, which means that the filtration coefficient $\lambda$ is constant. The mass balance for suspended and retained particles during 1D linear filtration is

$\dfrac{\partial}{\partial t}\left(\phi c + \sigma\right) + U\,\dfrac{\partial c}{\partial x} = 0$.   (6)

The Darcy equation accounting for formation damage describes the flux and pressure distribution:

$U = -\dfrac{k_0}{\mu\left(1 + \beta\sigma\right)}\dfrac{\partial p}{\partial x}$.   (7)

The derivation of the dimensionless system, along with the initial and boundary data for the under-saturated case $\sigma < \sigma_{cr}$, is presented in Appendix B.
Analytical Solution of Erosive Classical Filtration with One Capture Mechanism for Constant-Rate Mud Invasion
The initial-boundary-value problem (B-6,7), (B-2,5) allows for an exact analytical solution, which is derived below. This solution for $S < S_{cr}$ is well known [9]. Expressing the suspended concentration from the kinetics equation (B-3), substituting it into (B-2) and integrating in $t_D$ with regard to the initial condition (B-6), we obtain the retained-concentration profile. From the kinetics equation (B-3) and the boundary condition (B-7) we derive the boundary condition (10) for the retained concentration. The solution is then obtained by the method of characteristics [9]. Both concentrations are zero ahead of the concentration front $x_D = t_D$: $C = S = 0$ for $t_D < x_D$. Together with the boundary condition (10), this leads to the solution behind the front. As follows from the rate equation (B-3), the concentration $S$ is always continuous. Let us then derive the condition of particle-flux continuity on the erosion front during a core flood with particle suspension. The balance condition on the shock contains three unknowns, namely both partial derivatives of $S$ and the velocity of the erosion front, while equation (9) also contains the two partial derivatives of $S$. The shock balance leaves two possibilities: either the front is a characteristic or $C$ is continuous. Since the velocity of the erosion front is less than unity, the suspended concentration is continuous, and therefore $C = 1$ along the erosion front. This allows calculating the time derivative of $S$. Equations (9), (16) and (17) then form a linear system of three equations for three unknowns; solving this system, we finally obtain the equation of the erosion front.
The analytical model developed allows one to match the experimental impedance curve and to calculate from it three injectivity parameters: the filtration and formation-damage coefficients, together with the maximum retention concentration. Near-wellbore mud invasion and the resulting formation damage are among the most important problems of petroleum-reservoir exploitation. The maximum retention concentration is based on the assumption that for each flow velocity there exists a maximum amount of retained particles that the electric-molecular forces can hold. The dimensionless erosion number, the ratio between the crossflow drag force and the total of the normal forces, is proportional to the flow velocity. The stabilization phenomenon is characterized by the so-called storage capacity, i.e., the maximum retention concentration as a function of the erosion number; this maximum retention function closes the system of governing equations for colloid transport through porous media. The analytical solution is obtained for mud filtration with one particle-capture mechanism and damage stabilization. Matching coreflood pressure data during suspension injection allows for complete characterization of the filtration-damage system: the mud-filtration and formation-damage coefficients, together with the maximum retention concentration, can be calculated. The introduction of just one new parameter, the maximum retained concentration, into the classical suspension-filtration model thus significantly enriches the physical schema of suspension transport and retention.
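Below is a small Python sketch of the classical part of this solution (before the retained concentration reaches $S_{cr}$), using the standard dimensionless result behind the concentration front $x_D = t_D$; $\lambda_D$ is the dimensionless filtration coefficient. This is the textbook deep-bed-filtration solution, not a transcription of the eroded-front formulas lost from the source.

import numpy as np

lam_D = 5.0   # dimensionless filtration coefficient (illustrative value)

def C(x_D, t_D):
    # Suspended concentration: zero ahead of the front, exponential behind it.
    return np.where(t_D > x_D, np.exp(-lam_D * x_D), 0.0)

def S(x_D, t_D):
    # Retained concentration grows linearly in time behind the front,
    # following the kinetics dS/dt_D = lam_D * C.
    return np.where(t_D > x_D, lam_D * (t_D - x_D) * np.exp(-lam_D * x_D), 0.0)

x = np.linspace(0.0, 1.0, 5)
print(C(x, 0.5), S(x, 0.5))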
Figure 1. Forces and force-moment balance on the particle at the cake surface.
Nomenclature
φ: effective porosity
c: volumetric concentration of suspended particles with respect to the pore volume
σ: volumetric concentration of retained particles with respect to the bulk volume
λ: filtration coefficient
ΔS: surface-to-surface separation
ε₀: permittivity of vacuum
ε_w: relative dielectric permittivity of the solute (water)
ψ₁, ψ₂: surface potentials of the particles and collectors, respectively
κ: inverse Debye length
Δρ: difference between the densities of the suspended particles (hematite) and the suspending fluid (water)
Appendix A. Forces Acting on the Captured Particle on the Surface of the Internal Cake on the Pore Wall
Consider the forces acting on the captured particle (Figure 1): the crossflow drag force $F_{cf}$ exerted by the bypassing viscous water, the electric-molecular force $F_e$, the lifting force $F_l$ and the gravity force $F_g$. A general form of the crossflow drag force in Hele-Shaw flow in a rectangular channel geometry with a single permeable wall is

$F_{cf} = \dfrac{\omega \pi \mu a^2 u_{cf}}{H - h}$,

where ω is a proportionality factor in the range [10, 60], μ is the viscosity, a is the particle radius, $u_{cf}$ is the average crossflow velocity at a given cross-section, H is the height of the channel, and h is the thickness of the cake at the given cross-section.
The electrostatic (also called surface, or colloidal) force is directly proportional to the radius of the particles and is given by the DLVO expression (10), where $\lambda_r$ is the characteristic retardation wavelength of the interaction and is often assumed to be 20-100 nm; $A_H$ is the Hamaker constant of the interacting media; ΔS is the surface-to-surface separation; ε₀ and ε_w are the permittivity of vacuum and the relative dielectric permittivity of the solute (water), respectively; ψ₁ and ψ₂ are the surface potentials of the particles and collectors, respectively; and κ is the inverse Debye length. It should be noted that at certain salinities and pH values, decreasing the surface-to-surface separation ΔS between one hematite particle and another from infinity to zero yields two energy minima. The secondary minimum usually corresponds to a separation of >15 nm, while the primary minimum corresponds to a surface-to-surface separation of 4 nm. By considering only the forces acting along the axis parallel to the permeate flow, a check can be made to see whether the particle has enough energy to overcome the activation energy and position itself within the primary minimum.
The lifting force is a shear-induced (Saffman-type) force whose proportionality coefficient χ was given as 89.5 by Kang et al. [11]. The gravity force is

$F_g = \dfrac{4}{3}\pi a^3\, \Delta\rho\, g$,

where Δρ is the difference between the densities of the suspended particles (hematite) and the suspending fluid (water). The normal force in Figure 1 is the total of the electric-molecular and lifting forces.
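As a rough order-of-magnitude check of these forces, the sketch below evaluates the crossflow drag expression as reconstructed above and the standard buoyant weight for a micron-size hematite particle in water; all parameter values are illustrative, not taken from the paper.

import numpy as np

a = 1e-6          # particle radius, m
mu = 1e-3         # water viscosity, Pa*s
u_cf = 1e-4       # average crossflow velocity, m/s
H_minus_h = 1e-5  # open channel height H - h, m
d_rho = 4300.0    # hematite minus water density, kg/m^3
omega = 30.0      # drag proportionality factor, inside the quoted range [10, 60]

F_cf = omega * np.pi * mu * a**2 * u_cf / H_minus_h   # crossflow drag (Appendix A form)
F_g = 4.0 / 3.0 * np.pi * a**3 * d_rho * 9.81         # buoyant weight
print(f"F_cf ~ {F_cf:.2e} N, F_g ~ {F_g:.2e} N")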
Appendix B. Basic Equations for Suspension Transport with One Particle-Capture Mechanism and Deposit Erosion
Let us introduce the dimensionless length and time,

$x_D = x/L$, $t_D = U t/(\phi L)$, $C = c/c_0$, $S = \sigma/(\phi c_0)$, $\lambda_D = \lambda L$,

into the system (5)-(7) of classical filtration with particle dislodging. Equations (B-3) and (B-4) can then be written in a generalized form. The initial conditions for system (B-2,5) correspond to the absence of either suspended or deposited particles in the core before the suspension injection, and system (B-2,5) subject to the initial and boundary conditions (B-6,7) describes the injection of a suspension into a virgin core.
References
[1] F. Civan, Reservoir Formation Damage: Fundamentals, Modeling, Assessment, and Mitigation, Gulf Professional Publishing.
3,515.4
2013-06-05T00:00:00.000
[ "Engineering", "Environmental Science" ]
SN 2022joj: A Peculiar Type Ia Supernova Possibly Driven by an Asymmetric Helium-shell Double Detonation We present observations of SN 2022joj, a peculiar Type Ia supernova (SN Ia) discovered by the Zwicky Transient Facility (ZTF). SN 2022joj exhibits an unusually red $g_\mathrm{ZTF}-r_\mathrm{ZTF}$ color at early times and a rapid blueward evolution afterward. Around maximum brightness, SN 2022joj shows a high luminosity ($M_{g_\mathrm{ZTF},\mathrm{max}}\simeq-19.7$ mag), a blue broadband color ($g_\mathrm{ZTF}-r_\mathrm{ZTF}\simeq-0.2$ mag), and shallow Si II absorption lines, consistent with those of overluminous, SN 1991T-like events. The maximum-light spectrum also shows prominent absorption around 4200 Å, which resembles the Ti II features in subluminous, SN 1991bg-like events. Despite the blue optical-band colors, SN 2022joj exhibits extremely red ultraviolet minus optical colors at maximum luminosity ($u-v\simeq0.6$ mag and $uvw1 - v\simeq2.5$ mag), suggesting a suppression of flux at ∼2500-4000 Å. Strong C II lines are also detected at peak. We show that these unusual spectroscopic properties are broadly consistent with the helium-shell double detonation of a sub-Chandrasekhar mass ($M\simeq1\,\mathrm{M_\odot}$) carbon/oxygen (C/O) white dwarf (WD) from a relatively massive helium shell ($M_s\simeq0.04$-$0.1\,\mathrm{M_\odot}$), if observed along a line of sight roughly opposite to where the shell initially detonates. None of the existing models could quantitatively explain all the peculiarities observed in SN 2022joj. The low flux ratio of [Ni II] $\lambda$7378 to [Fe II] $\lambda$7155 emission in the late-time nebular spectra indicates a low yield of stable Ni isotopes, favoring a sub-Chandrasekhar-mass progenitor. The significant blueshift measured in the [Fe II] $\lambda$7155 line is also consistent with an asymmetric chemical distribution in the ejecta, as is predicted in double-detonation models.
INTRODUCTION
Type Ia supernovae (SNe Ia) come from thermonuclear explosions of carbon/oxygen (C/O) white dwarfs (WDs) in binary systems. While there is broad consensus about this fact, specifics about the binary companion and the conditions that spark ignition remain uncertain (e.g., Maoz et al. 2014; Liu et al. 2023b, for reviews). Multiple explosion channels have been proposed, though none of them can fully explain the diversity in the SN Ia population. Recent attention has been focused on the helium-shell double-detonation scenario as a potential explanation for some normal SNe Ia (e.g., Polin et al. 2019; Shen et al. 2021b) as well as a growing subclass of peculiar, red SNe Ia (e.g., Jiang et al. 2017; De et al. 2019; Liu et al. 2023a). In a double detonation, the detonation moving along the base of a helium shell (accreted from a helium-rich companion) atop the primary WD sends an oblique shock wave inward (e.g., Fink et al. 2010), which eventually converges somewhere within the core, triggers a secondary detonation, and inevitably explodes the entire WD (Nomoto 1982a,b; Woosley et al. 1986; Livne 1990; Woosley & Weaver 1994; Livne & Arnett 1995). This mechanism can dynamically ignite WDs well below the Chandrasekhar mass (M_Ch; ∼1.4 M⊙). It has been proposed that a substantial fraction of SNe Ia could result from double detonations based on observations of (i) the intrinsic event rate and delay-time distribution (DTD) of the SN Ia population (Ruiter et al. 2011
, 2014); (ii) the nucleosynthetic yields of SNe Ia as measured in their late-time spectra (Maguire et al. 2018; Flörs et al. 2020); (iii) the chemical-enrichment history of various galaxies (Kirby et al. 2019; de los Reyes et al. 2020; Sanders et al. 2021; Eitner et al. 2022); and (iv) the hypervelocity Galactic WDs, which are likely surviving donors from double-degenerate binaries where the primary WD exploded in a double detonation (Shen et al. 2018b; El-Badry et al. 2023). The remarkable observational properties of SNe Ia from double detonations are mostly associated with the helium shell. Shortly after the shell detonation, the decay of the radioactive species synthesized during the helium burning may power a flux excess in the early-time light curves (Woosley & Weaver 1994; Fink et al. 2010; Kromer et al. 2010). Afterward, the iron-group elements (IGEs) in the helium-shell ashes may provide significant line blanketing blueward of ∼5000 Å (Kromer et al. 2010), efficiently suppressing flux in the blue optical. In general, progenitors with a thin helium shell would show minimal detectable effects from the shell detonation, and reproduce "normal" (e.g., that of SN 2011fe; Nugent et al. 2011) luminosity and spectroscopic properties around maximum luminosity (e.g., Polin et al. 2019; Townsley et al. 2019; Magee et al. 2021; Shen et al. 2021b). Normal SNe Ia with a red flux excess shortly after the explosion may be associated with this scenario (e.g., SN 2018aoz; Ni et al. 2022). Meanwhile, objects involving a more massive helium shell exhibit peculiarities, such as a strong flash at early times and an extremely red color around maximum luminosity (Polin et al. 2019). Several peculiar SNe Ia have been interpreted as double-detonation SNe, including SN 2016jhr (Jiang et al. 2017), SN 2018byg (De et al. 2019), OGLE-2013-SN-079 (Inserra et al. 2015, interpreted as either a pure helium-shell detonation or a double detonation), SN 2016hnk (Jacobson-Galán et al. 2020; De et al. 2020; but see Galbany et al. 2019 for an alternative interpretation), SN 2019ofm (De et al. 2020), SN 2016dsg (Dong et al. 2022), SN 2020jgb (Liu et al. 2023a), and SN 2019eix (Padilla Gonzalez et al. 2023a). Multidimensional considerations are especially important for double detonations because the explosion of the C/O core is triggered off-center, and, as a result, all the observables (e.g., luminosities, colors, and absorption-line features) are subject to viewing-angle effects (Fink et al. 2010; Shen et al. 2021b). Asymmetries in the chemical distribution of the SN Ia ejecta have been invoked to explain SN spectropolarimetric measurements (e.g., Wang et al. 2003; Kasen et al. 2003; Patat et al. 2012; see Wang & Wheeler 2008 for a review) and the kinematics of Ni and Fe in the innermost ejecta (Motohara et al. 2006; Maeda et al. 2010a,b; Maguire et al. 2018; Li et al. 2021). To accurately infer the progenitor of a double-detonation SN, one needs to compare the observations with multidimensional models.
In this paper, we present observations of a peculiar SN Ia, SN 2022joj, which shows a remarkable color evolution, starting with red optical colors that quickly evolve to the blue as the SN rises to maximum luminosity. Its photometric and spectroscopic features are qualitatively consistent with those of a double-detonation SN. In Section 2, we summarize the observations of SN 2022joj, which are analyzed in Section 3, where we show its peculiarities in various aspects. In Section 4.1, we discuss existing scenarios that can lead to a red color in the early light curves of an SN Ia, of which the helium-shell double-detonation scenario is the most reasonable explanation. We also show that multidimensional effects must be taken into account to explain the spectroscopic peculiarities. The indication of a sub-M_Ch progenitor and an asymmetric explosion is supported by the late-time spectra of SN 2022joj, which we describe in Section 4.2. In Section 4.3, we discuss possible origins of the carbon features in SN 2022joj at maximum brightness. We draw our conclusions in Section 5. After the submission of this paper, a separate study of SN 2022joj by Padilla Gonzalez et al. (2023b) was posted on the arXiv, drawing similar broad conclusions. Along with this paper, we have released the data utilized in this study and the software used for data analysis and visualization. They are available online at Zenodo under an open-source Creative Commons Attribution license (10.5281/zenodo.8331024) and in our GitHub repository, https://github.com/slowdivePTG/SN2022joj. SN 2022joj was discovered in images processed with the image-subtraction algorithm of Zackay et al. (2016), which is utilized by the automated ZTF discovery pipeline (Masci et al. 2019). It was first detected with r_ZTF = 19.13 ± 0.06 mag at α_J2000 = 14h41m40.08s, δ_J2000 = +03°00′24.14″ and announced to the public by Fremling (2022). A real-time alert (Patterson et al. 2019) was generated as the candidate passed internal machine-learning thresholds (e.g., Duev et al. 2019; Mahabal et al. 2019), and the internal designation ZTF22aajijjf was assigned. The follow-up observations of the SN were coordinated using the Fritz Marshal (van der Walt et al. 2019; Coughlin et al. 2023). The last 3σ nondetection limits the brightness to r_ZTF > 21.48 mag on 2022 May 03.27 (MJD 59702.27; 5.03 days before the first detection), using the ZTF forced photometry from the ZTF Forced Photometry Service (ZFPS; Masci et al. 2023). SN 2022joj was also independently monitored by the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al. 2018; Smith et al. 2020). With the forced photometry obtained from the ATLAS forced-photometry server (Shingles et al. 2021), we identify the last 3σ nondetection with ATLAS on 2022 May 04.26, 0.99 days after the last nondetection in r_ZTF, and put a limit on the brightness in the orange filter of o > 19.84 mag. The first spectrum was obtained on 2022 May 11.288 by Newsome et al. (2022), who found a best fit to a young Type I SN at redshift z = 0.03 using the Supernova Identification (SNID) algorithm (Blondin & Tonry 2007). In this early-time spectrum, prominent Si II λ6355 and Ca II infrared triplet (IRT) absorption suggest an SN Ia classification, but the overall spectral shape, featuring a relatively red continuum (see Figure 2), is atypical for a normal SN Ia at this phase. Chu et al. (2022) used a peak-luminosity spectrum to indisputably classify SN 2022joj as an SN Ia based on its blue color and persistent Si II features.
In addition, the SN field was observed in grizy as part of the wide survey of the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; Aihara et al. 2018). We retrieved the stacked science-ready images from the HSC-SSP data archive using the HSC data-access tools. The photometry was extracted with the aperture-photometry tool presented by Schulze et al. (2018). The measurements were calibrated against a set of stars from the Pan-STARRS catalog (Chambers et al. 2016), and we applied color terms from the HSC pipeline version 8 to correct for differences between the Pan-STARRS and HSC filters. Table 1 summarizes all measurements. To determine the redshift of the host, we obtained two spectra about 300 days after the SN maximum brightness. On 2023 March 14, we took a spectrum of both the SN and the host using Binospec (Fabricant et al. 2019) on the 6.5 m MMT telescope with a total integration time of 5400 s. We placed the slit across both the center of the galaxy and the position of the SN (Figure 3). On 2023 April 26, we took another spectrum using the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck I 10 m telescope. The slit was placed at the same position angle, with the Cassegrain Atmospheric Dispersion Compensator (Cass ADC; Phillips et al. 2006) module on. The total integration time was 3600 s. The LRIS spectrum has a higher signal-to-noise ratio (S/N), in which we detected a potential host emission line at 6742.4 Å (see Figure 3) with a S/N of 3.4. We associated this feature with Hα emission, meaning the corresponding redshift of the host galaxy is z = 0.02736 ± 0.0007, in agreement with the initial estimate, z = 0.03, from matching the SN spectra to SNID templates (Newsome et al. 2022). In the coadded two-dimensional (2D) spectrum, the trace is dominated by the light of the SN in the nebular phase, while the center of this emission feature has an offset of ∼3-4 pixels from the center of the trace. The CCDs on LRIS have a scale of 0.135″ pixel⁻¹, so this offset corresponds to an angular offset of ∼0.4″-0.5″, consistent with the astrometric offset when comparing the LS detection of the host and the ZTF detection of the SN. The Binospec spectrum has a lower S/N, and we cannot identify this emission line at the same position in the 2D spectrum via visual inspection. Nevertheless, we still marginally detect an emission feature in the 1D spectrum with a S/N of 1.5 at the same wavelength. All the evidence indicates that the Hα detection is real. We estimate the distance modulus of SN 2022joj in the following way. We first use the 2M++ model (Carrick et al. 2015) to estimate the peculiar velocity of the host galaxy to be 244 ± 250 km s⁻¹. Then the peculiar velocity is combined with the recession velocity in the frame of the cosmic microwave background (CMB), v_CMB = 8424 km s⁻¹, which yields a net Hubble recession rate of 8193 ± 250 km s⁻¹. Using cosmological parameters H₀ = 70 km s⁻¹ Mpc⁻¹, Ω_M = 0.3, and Ω_Λ = 0.7, the estimated luminosity distance to SN 2022joj is 119.5 Mpc, equivalent to a distance modulus of 35.39 ± 0.03 mag.
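The arithmetic of this distance estimate can be checked with a few lines of Python; note that the zeroth-order v/H₀ distance comes out slightly below the quoted 119.5 Mpc, which includes the cosmological corrections for the adopted Ω_M and Ω_Λ.

import numpy as np

H0 = 70.0                  # km/s/Mpc
v_hubble = 8193.0          # km/s, net Hubble recession rate from the text

d_zeroth = v_hubble / H0   # ~117 Mpc, low-z approximation D ~ v/H0
d_lum = 119.5              # Mpc, quoted luminosity distance (with cosmology)
mu = 5.0 * np.log10(d_lum * 1e6 / 10.0)
print(f"D ~ {d_zeroth:.1f} Mpc (v/H0), mu = {mu:.2f} mag")   # mu = 35.39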
Optical Photometry
SN 2022joj was monitored in the ZTF gri bands as part of its ongoing Northern Sky Survey (Bellm et al. 2019b). The i_ZTF data do not cover the rise. We use the forced-photometry light curves from ZFPS, reduced using the pipeline from A. A. Miller et al. (2023, in prep.); see also Yao et al. (2019). We adopt a Galactic extinction of E(B−V)_MW = 0.032 mag (Schlafly & Finkbeiner 2011), and correct all photometry using the extinction model from Fitzpatrick (1999) assuming R_V = 3.1. We do not find any Na I D absorption at the redshift of the host galaxy (we put a 3σ upper limit on the equivalent width of Na I D₁ + D₂ of <0.5 Å), indicating that the extinction from the host is negligible. The blue g_ZTF − r_ZTF color (∼−0.2 mag) near maximum luminosity after correcting for the Galactic extinction is also consistent with no additional reddening from the host. Therefore we assume E(B−V)_host = 0. The dereddened g_ZTF and r_ZTF forced-photometry light curves in absolute magnitudes are shown in Figure 1. Additional observations of SN 2022joj were obtained in the o and c filters in the ATLAS survey, the griz filters with the optical imaging component of the Infrared-Optical suite of instruments (IO:O) on the Liverpool Telescope, the griBVRI filters and the clear filter on the 0.76 m Katzman Automatic Imaging Telescope (KAIT; Filippenko et al. 2001) at Lick Observatory, and the BVRI filters on the 1 m Anna Nickel telescope at Lick. ZTF, ATLAS, LT, KAIT, and Nickel observations are reported in Table 2. The ZTF gri magnitudes, ATLAS o and c magnitudes, and Sloan griz magnitudes are reported in the AB system, while the BVRI and clear magnitudes are reported in the Vega system.
Swift Ultraviolet/Optical Telescope (UVOT) Observations
Ultraviolet (UV) observations of SN 2022joj were obtained using UVOT (Roming et al. 2005) on the Neil Gehrels Swift Observatory (Swift; Gehrels et al. 2004) following a target-of-opportunity (ToO) request by E. Padilla Gonzalez. Prior to the SN, UVOT images of the field had been obtained in the u, uvw1, and uvw2 filters. In each of these reference images the flux at the location of the SN is consistent with 0, and the 3σ upper limits correspond to ≲10% of the SN flux measured in the UV. In the optical bands, the LS photometry (g = 22.05 ± 0.05 mag) shows that the host galaxy contributes ≲1% of the total observed flux. We therefore conclude that the host galaxy can be neglected when estimating the SN flux in UVOT images. We determine the flux in the u, b, v, uvw1, uvm2, and uvw2 filters using a circular aperture with a radius of 5″ centered at the position of the SN. The magnitudes are also reported in Table 2 in the Vega system.
Optical Spectroscopy
We obtained a series of optical spectra of SN 2022joj using the Spectral Energy Distribution Machine (SEDM; Blagorodnova et al. 2018) on the automated 60 inch telescope (P60; Cenko et al. 2006) at Palomar Observatory, the Kast double spectrograph (Miller & Stone 1994) on the Shane 3 m telescope at Lick Observatory, the Andalucia Faint Object Spectrograph and Camera (ALFOSC) installed at the 2.56 m Nordic Optical Telescope (NOT), the SPectrograph for the Rapid Acquisition of Transients (SPRAT; Piascik et al. 2014) on the 2 m Liverpool Telescope (LT; Steele et al. 2004) under program PL22A13 (PI: Dimitriadis), the FLOYDS spectrograph on the 2 m Faulkes Telescope South (FTS) at Siding Spring as part of the Las Cumbres Observatory (LCO; Brown et al. 2013), Binospec on the 6.5 m MMT telescope, and LRIS on the Keck I 10 m telescope. The SEDM spectra were reduced using the custom pysedm software package (Rigault et al. 2019).
The Shane/Kast spectra, obtained with the slit near the parallactic angle to minimize differential slit losses (Filippenko 1982), were reduced following standard techniques for CCD processing and spectrum extraction (Silverman et al. 2012), utilizing IRAF (Tody 1986) routines and custom Python and IDL codes. The NOT/ALFOSC, Keck I/LRIS, and MMT/Binospec spectra were reduced using the PypeIt package (Prochaska et al. 2020). The LT/SPRAT spectra were reduced with a dedicated pipeline for bias subtraction, flat fielding, derivation of the wavelength solution, and flux calibration, with additional IRAF/PyRAF routines for proper extraction of the spectra. The FTS/FLOYDS spectrum was reduced using the FLOYDS pipeline. We also attempted to obtain a near-infrared spectrum with the Triple Spectrograph (TSpec) installed at the 200 inch Hale telescope (P200; Oke & Gunn 1982) at Palomar Observatory, but the observations are mostly characterized by low S/N and a few broad undulations with no immediately identifiable lines. Thus, the TSpec data were excluded from our analysis. Details of the spectroscopic observations are listed in Table 3. The resulting spectral sequence is shown in Figure 2. All of the spectra listed in Table 3 will be available on WISeREP (Yaron & Gal-Yam 2012). We also include the spectrum uploaded to the Transient Name Server (TNS) by Newsome et al. (2022) in our analysis, which was obtained using the FLOYDS spectrograph on the 2 m Faulkes Telescope North (FTN) at Haleakala.
Figure 2. Optical spectral sequence of SN 2022joj showing the red-to-blue color transition as the SN rises to its maximum luminosity and the development of prominent absorption features around 4200 Å post-maximum. The rest-frame phase relative to the B-band peak and the instrument used to observe the SN are listed for each spectrum. Spectra have been corrected for E(B−V)_MW = 0.032 mag. All spectra are binned with a bin size of 10 Å, except for the low-resolution SEDM spectra. The corresponding wavelengths of the Si II λ6355 line (with an expansion velocity of 10,000 km s⁻¹) and the Ca II IRT (with expansion velocities of both 10,000 km s⁻¹ and 25,000 km s⁻¹) are marked by vertical dashed lines. The strong optical telluric features (Fraunhofer A and B bands) are marked by the shaded region.
To estimate the time of first light (t_fl), we assume an initial power-law rise in the broad-band flux, f(t) = A (t − t_fl)^α, where A is a constant and α is the power-law index. We only include the forced-photometry light curves with flux ≤40% of the peak luminosity (Miller et al. 2020) in the bands with observations conducted on more than three nights between −20 days and −10 days. Light curves in other bands are excluded because the coverage is significantly worse at this phase (see Section 3.2). We assume that t_fl is the same in both bands, then estimate α and A in each band with a Bayesian approach. We adopt flat priors for t_fl and log A, and a normal prior for each α centered at 2 (the fireball model) with a standard deviation of 1. We sample their posterior distributions with Markov Chain Monte Carlo (MCMC) using the package PyMC (Salvatier et al. 2016).
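A minimal single-band sketch of this fit in PyMC is shown below (the actual fit ties t_fl across the two bands). The data arrays are placeholders, and the flat priors are left unbounded for brevity.

import numpy as np
import pymc as pm

# Placeholder early-time data: days and g-band flux (<= 40% of peak).
t = np.array([3.0, 4.5, 6.1, 7.9])
f = np.array([0.05, 0.11, 0.19, 0.33])
f_err = 0.02 * np.ones_like(f)

with pm.Model():
    t_fl = pm.Flat("t_fl")                          # flat prior on time of first light
    log_A = pm.Flat("log_A")                        # flat prior on log amplitude
    alpha = pm.Normal("alpha", mu=2.0, sigma=1.0)   # centered on the fireball model
    rise = pm.math.maximum(t - t_fl, 0.0) ** alpha  # f(t) = A (t - t_fl)^alpha
    pm.Normal("obs", mu=10.0**log_A * rise, sigma=f_err, observed=f)
    trace = pm.sample(2000, tune=2000)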
In addition, we run another model with a fixed α = 2. The estimated model parameters are listed in Table 4. We find that both light curves are consistent with a power-law rise since MJD 59703.16 (+0.70/−0.58). This estimate is consistent with the α = 2 fireball model. When fixing α = 2, the model also fits the light curve well, but the estimated t_fl is ∼0.5 day later (MJD 59703.66, +0.10/−0.11). We do not find any correlated residuals as evidence for a flux excess after ∼4 days since t_fl, although a flux excess before the first detection cannot be ruled out.
Photometric Properties
The basic photometric properties of SN 2022joj are listed in Table 4. The times of maximum luminosity and the corresponding magnitudes in the ZTF gr bands and the KAIT/Nickel BVRI bands are estimated using a fourth-order polynomial fit. We do not include the maximum i_ZTF-band properties, which are relatively uncertain owing to the low cadence in i_ZTF around peak. SN 2022joj shows a few peculiar photometric features compared to normal SNe Ia. In Figure 1, we compare the g_ZTF and r_ZTF light curves and the g_ZTF − r_ZTF color evolution of SN 2022joj with those of the well-observed normal SN Ia, SN 2011fe, as well as SN 2018cnw (ZTF18abauprj) and SN 2018cuw (ZTF18abcflnz) from a sample of SNe Ia with prompt observations within 5 days of first light by ZTF (Yao et al. 2019; Bulla et al. 2020). SN 2018cnw is slightly overluminous at peak, and belongs to either the SN 1999aa-like (99aa-like; Garavini et al. 2004) or SN 1991T-like (91T-like; Filippenko et al. 1992a) subclass of SNe Ia, while SN 2018cuw is a normal SN Ia with a red g_ZTF − r_ZTF color comparable to that of SN 2022joj ∼15 days prior to peak. Around maximum brightness, SN 2022joj is overluminous, comparable to SN 2018cnw, and ∼0.5 mag brighter than SN 2011fe in both g_ZTF and r_ZTF. But SN 2022joj clearly stands out owing to its fast evolution in g_ZTF. While SN 2022joj and SN 2018cnw show a similar maximum brightness in g_ZTF, upon the first detection of SN 2022joj in g_ZTF at ∼−12 days, its corresponding absolute magnitude (−17.2 mag) is ∼0.8 mag fainter than that of SN 2018cnw at a similar phase. This means that, on average, SN 2022joj rises faster than SN 2018cnw by ∼0.06 mag day⁻¹ in g_ZTF during that period of time. On the decline, the ∆m15(g_ZTF) of SN 2022joj is 1.03 ± 0.03 mag, which is significantly greater than that of the overluminous SN 2018cnw (∆m15(g_ZTF) = 0.77 mag) or the normal SN 2011fe (∆m15(g_ZTF) = 0.80 mag). The rapid decline of SN 2022joj is atypical for overluminous SNe Ia, which are usually the slowest decliners in the SN Ia population (Phillips et al. 1999; Taubenberger 2017). The rapid decline is probably due to the unusual and fast-developing absorption feature near 4200 Å (see Section 3.3). The color evolution of SN 2022joj does not match that of normal SNe Ia, as shown by the trail traced by SN 2022joj in the right panel of Figure 1. We overplot all the SNe from the ZTF early SN Ia sample (Bulla et al. 2020) with z ≤ 0.05.
They are corrected for Galactic extinction, but K-corrections have not been performed, for consistency. Given the peculiar nature of SN 2022joj, we cannot use models trained on normal SNe Ia to reliably estimate its K-correction. Nevertheless, given its relatively low redshift (z ≲ 0.03), the K-corrections are not expected to be large (K(g_ZTF − r_ZTF) ≃ −0.05 mag around maximum, estimated using the Kast spectrum at −0.3 days). For the same reason we only include the SNe Ia with the lowest redshifts from the sample of Bulla et al. (2020). SN 2022joj is remarkably red (g_ZTF − r_ZTF ≃ 0.4 mag) at ∼−12 days (∼7 days after t_fl), and is clearly an outlier compared to the normal SN Ia sample. During the ensuing week, SN 2022joj quickly evolves to the blue, and is among the bluest objects in the sample at ∼−5 days (∼14 days after t_fl; g_ZTF − r_ZTF ≃ −0.3 mag). SN 2018cuw has a comparable g_ZTF − r_ZTF color at early times, but the blueward evolution of SN 2018cuw is slower than that of SN 2022joj. No later than when SN 2022joj reaches its peak luminosity, it starts to evolve redward. While other SNe Ia show qualitatively similar redward evolution, this usually happens much later (∼+10 days). When g_ZTF − r_ZTF reaches maximum (∼0.8 mag) at ∼+30 days, SN 2022joj is again bluer than most of the SNe Ia in the ZTF sample. Eventually, as SN 2022joj enters the transitional phase, its color evolution follows the Lira law (Lira 1996; Phillips et al. 1999) and shows no significant difference from that of the SNe Ia in the ZTF sample. The B − V color evolves in a similar way to the g_ZTF − r_ZTF color: it starts red (B − V ≃ 1.2 mag at −12 days) and quickly turns bluer, reaching B − V ≃ 0.0 mag at ∼−5 days before turning red again. While SN 2022joj shows a blue g_ZTF − r_ZTF color near maximum brightness, its UV − optical colors are unusually red. In Figure 4, we show the locations of SN 2022joj in the UVOT color-color diagrams compared to those of 29 normal SNe Ia with UV observations around maximum from Brown et al. (2018). SN 2022joj stands out owing to its red u − v and uvw1 − v colors. After correcting for Galactic extinction, SN 2022joj shows u − v = 0.58 ± 0.06 mag and uvw1 − v = 2.48 (+0.23/−0.19) mag. As a comparison, none of the objects in the normal SN Ia sample has u − v > 0.5 mag or uvw1 − v > 2.1 mag. This cannot be a result of unknown host reddening, since the amount of host extinction needed to account for the red near-UV colors of SN 2022joj would require the intrinsic b − v color of the SN to be unphysically blue (shifted along the opposite direction of the arrows in Figure 4). Interestingly, SN 2022joj exhibits a moderately blue mid-UV color (uvm2 − uvw1 = 1.59, +0.55/−0.36 mag), while most of the normal SNe Ia in the sample from Brown et al. (2018) show uvm2 − uvw1 ≳ 2 mag. This might indicate that, for some reason, the flux in the near-UV (∼2500-4000 Å) of SN 2022joj is suppressed near maximum brightness. To conclude, despite a luminosity and color similar to those of 99aa-like/91T-like events at maximum brightness, the rapid photometric rise and decline and the unusual color evolution of SN 2022joj both indicate that it exhibits some peculiarities relative to normal and 99aa-like/91T-like SNe Ia. Note (Table 4): Parameters are defined in the text. The absolute magnitudes have been corrected for Galactic extinction. The uncertainty in the distance modulus (0.03 mag) and the systematics in the polynomial models are not included.
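A sketch of the fourth-order polynomial peak measurement (and of ∆m15) mentioned above is given below, with synthetic data standing in for the actual photometry.

import numpy as np

# Synthetic light curve around peak (magnitudes; brighter = more negative).
t = np.linspace(-8.0, 20.0, 15)
mag = -19.7 + 0.002 * (t - 1.0)**2 + 5e-4 * (t - 1.0)**3

poly = np.polynomial.Polynomial.fit(t, mag, deg=4)   # fourth-order polynomial
tt = np.linspace(t.min(), t.max(), 2001)
t_max = tt[np.argmin(poly(tt))]                      # time of maximum light
m_max = poly(t_max)
dm15 = poly(t_max + 15.0) - m_max                    # decline within 15 days
print(f"t_max = {t_max:.2f} d, m_max = {m_max:.2f} mag, dm15 = {dm15:.2f} mag")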
Optical Spectral Properties
In Figure 2, we show the optical spectral sequence of SN 2022joj. The −12 days spectrum exhibits prominent absorption lines associated with Si II λ6355 and the Ca II IRT (this spectrum was obtained and posted on TNS by Newsome et al. 2022). It also displays a strong suppression of flux blueward of ∼5000 Å, confirming the unusually red photometric colors at early times. Near maximum brightness, the Kast spectrum and the two SEDM spectra show a very blue continuum in the range ∼5000-8000 Å with shallow absorption features, indicating a high photospheric temperature. The Si II λλ5972, 6355 lines, the S II W-trough, and the Ca II IRT are detectable but not prominent. The C II λλ6580, 7234 lines are prominent at maximum brightness (+0 day), and quickly disappear afterward (+3 days). A wide, asymmetric absorption feature appears at ∼4000-4500 Å (the 4200 Å features hereafter). There is a break on the blue edge of this feature, which we associate with the Si II λ4128 line that is often seen in other SNe Ia. The 4200 Å features become even wider and deeper in another SEDM spectrum at +7 days and in the ALFOSC spectrum at +9 days. Weeks after maximum, in the FLOYDS spectrum (+29 days) and the LRIS spectrum (+36 days), the bottom of the 4200 Å features becomes flat, reminiscent of the Ti trough in subluminous, SN 1991bg-like (91bg-like; Filippenko et al. 1992b; Leibundgut et al. 1993) SNe. The nebular-phase spectra are dominated by [Fe II] and [Fe III] emission lines, but the [Fe II] features (e.g., the complex around ∼7300 Å) are weaker than in other SNe Ia (Figure 9), suggesting that the ejecta remain highly ionized about a year after the explosion. The nebular spectra are discussed in detail in Section 4.2. In Figure 5, we compare the maximum-light and transition-phase spectra of SN 2022joj to those of other SNe Ia. Around peak, the blue continuum and shallow absorption features of SN 2022joj are similar to those of overluminous objects, including SN 1991T, SN 1999aa, and SN 2000cx. The asymmetric 4200 Å features are not seen in normal (SN 2011fe) or overluminous (SN 1991T and SN 1999aa) SNe Ia, which all show another maximum at ∼4100 Å redward of the narrow Si II λ4128 feature. In SN 2000cx, a similar (but narrower) absorption feature is interpreted as high-velocity Ti II (Branch et al. 2004). The 4200 Å features are actually much more similar to the well-known "Ti trough" that is ubiquitous in subluminous 91bg-like objects, e.g., SN 1999by (Arbour et al. 1999). Prior to peak, SN 1999by also shows this asymmetric absorption at about the same wavelength, and it becomes more prominent, with a nearly flat-bottomed trough, about a week after maximum. This absorption is caused by a blend of multiple species dominated by Ti II (Filippenko et al. 1992b; Mazzali et al. 1997). It remains prominent in the spectrum up to one month after maximum. Similarly, we find such features in the spectra of SN 2022joj at +29 days and +36 days. Other normal/overluminous SNe Ia, unlike SN 2022joj, all exhibit a dip around ∼4500 Å. Aside from the 4200 Å features, SN 2022joj is otherwise entirely dissimilar from 91bg-like objects, which are ≳2 mag fainter at peak and exhibit much stronger Si II, Ca II, and O I absorption from a cooler line-forming region (Filippenko et al. 1992b). The Ti trough in 91bg-like SNe is interpreted as the result of a low photospheric temperature (Mazzali et al. 1997). SN 2022joj also shows remarkably shallow Si II absorption at maximum brightness.
Following the techniques elaborated by Liu et al. (2023a) (see also Childress et al. 2013, 2014; Maguire et al. 2014), we fit the Si II and Ca II IRT features with multiple Gaussian profiles. We find that modeling the Ca II IRT absorption requires two distinct velocity components: the photospheric-velocity features (PVFs) and the high-velocity features (HVFs). In Table 5 we list the estimates of the expansion velocities and the pseudo-equivalent widths (pEWs) of the major absorption lines from −12 days to +9 days. In Figure 6 we show the peak absolute magnitude in the B band (M_B,max) vs. the velocity and pEW of Si II for SN 2022joj and a sample of normal SNe Ia from Zheng et al. (2018) and Burrow et al. (2020). Figure 6 highlights that SN 2022joj has a normal M_B,max with a relatively low Si II λ6355 expansion velocity (hereafter v_Si II). The Si II λ5972 and Si II λ6355 pEWs in SN 2022joj are lower than those of most normal SNe Ia. In fact, SN 2022joj sits at the extreme edge of the shallow-silicon group proposed by Branch et al. (2006), which mainly consists of overluminous 91T-like/99aa-like objects. This is consistent with the high luminosity and the blue color of SN 2022joj at maximum light, since a high photospheric temperature results in higher ionization, reducing the abundance of singly ionized atoms (e.g., Si II). Interestingly, the pEW of Si II λ6355 near peak is significantly lower than that in the first spectrum. In typical 91T-like/99aa-like objects, the Si II features are weak or undetectable at early times because the ejecta are even hotter, and they only start to emerge around maximum light (Filippenko et al. 1992a). In the early spectrum of SN 2022joj, in contrast, stronger absorption features from singly ionized Si and Ca indicate a cooler line-forming region at early times compared to that at maximum brightness. Prominent C II features are also detected in the overluminous SN 2003fg-like (03fg-like; Howell et al. 2006) SNe at their maximum brightness. However, 03fg-like objects show blue colors in the early light curves (Taubenberger et al. 2019) and appear even brighter in the near-UV compared to normal SNe Ia at peak (Brown et al. 2014), dissimilar from SN 2022joj. Many 03fg-like objects also show strong oxygen features both in their maximum-light and late-time spectra (Taubenberger et al. 2019), while in SN 2022joj we do not find evidence for oxygen. Consequently, the explosion mechanism, as well as the origin of carbon, in SN 2022joj is likely very different from that of 03fg-like objects. In conclusion, the spectral evolution of SN 2022joj shows some similarities to 91T-like/99aa-like objects, as well as peculiarities. A reasonable model for SN 2022joj needs to reproduce (i) a strong suppression in flux blueward of ∼5000 Å at early times followed by a rapid evolution to blue colors; (ii) the seemingly contradictory observables at peak, namely the 4200 Å features similar to the Ti trough in 91bg-like objects and the blue continuum/shallow Si II features at maximum, which separately indicate low and high photospheric temperatures, respectively; and (iii) prominent C II features at maximum brightness.
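A simplified version of the line measurements described above is sketched below: a single Gaussian fit to a normalized Si II λ6355 profile with synthetic data (the actual fits use multiple components, e.g., for the Ca II IRT PVFs and HVFs).

import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5
LAM0 = 6355.0   # Si II rest wavelength, Angstrom

def gauss_abs(lam, depth, center, width):
    # Pseudo-continuum-normalized flux with one Gaussian absorption component.
    return 1.0 - depth * np.exp(-0.5 * ((lam - center) / width)**2)

wl = np.linspace(6000.0, 6350.0, 200)        # synthetic spectrum around the line
flux = gauss_abs(wl, 0.15, 6143.0, 40.0)

popt, _ = curve_fit(gauss_abs, wl, flux, p0=[0.2, 6150.0, 50.0])
v_exp = C_KMS * (LAM0 - popt[1]) / LAM0      # ~10,000 km/s for this example
pEW = np.trapz(1.0 - gauss_abs(wl, *popt), wl)   # pseudo-equivalent width, Angstrom
print(f"v = {v_exp:.0f} km/s, pEW = {pEW:.1f} A")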
2021).The LS grz photometry, which is consistent with the HSC-SSP results but has a lower S/N, is excluded from this modeling.We assume a Chabrier initial-mass function (IMF; Chabrier 2003) and approximate the star-formation history (SFH) by a linearly increasing SFH at early times followed by an exponential decline at late times (functional form t × exp (−t/τ ), where t is the age of the SFH episode and τ is the e-folding timescale).The model is attenuated with the Calzetti et al. (2000) model.The priors of the model parameters are set identically to those used by Schulze et al. (2021). The SED is adequately described by a galaxy template with a mass of log(M * /M ⊙ ) = 7.13 +0.15 −0.28 , suggesting that the host is a dwarf galaxy.The modeled starformation rate (SFR) is consistent with 0, but we note that measuring low SFRs with SED fitting is not robust and subject to systematics (Conroy 2013).In addition, the Hα emission detected in the late-time spectrum of the SN (Section 2.2) indicates at least some level of star formation in the host. SN 2022joj Compared to Model Explosions There are several physical mechanisms that can produce blue colors during the early evolution of SNe Ia, including heating of the SN ejecta following the decay of radioactive 56 Ni, interaction of the SN ejecta with a 18 Prospector uses the Flexible Stellar Population Synthesis (FSPS) code (Conroy et al. 2009) to generate the underlying physical model and python-fsps (Foreman-Mackey et al. 2014) to interface with FSPS in python.The FSPS code also accounts for the contribution from the diffuse gas based on the Cloudy models from Byler et al. (2017).We use the dynamic nested sampling package dynesty (Speagle 2020) to sample the posterior probability. In contrast, there are few proposed scenarios that can produce red colors up to a week after explosion, as in the case of SN 2022joj.If the newly synthesized 56 Ni is strongly confined to the innermost SN ejecta, then an SN may remain red for several days after explosion as the heating diffuses out toward the photosphere (Piro & Morozova 2016).Even the most confined 56 Ni configuration considered in Piro & Morozova (2016) converges to blue colors, similar to explosions with more extended 56 Ni distributions, within ∼6 days after explosion.SN 2022joj is observed to have very red colors ∼7 days after t fl (meaning more than 7 days after explosion, since SNe Ia have a "dark phase" before photons diffuse out of the ejecta; Piro & Nakar 2013).Dessart et al. (2014) considered more realistic delayed-detonation scenarios.Some of their 1D unmixed delayed-detonation models still show a red B − R color (B − R ≳ 0.5 mag) 7 days after the explosion (see the DDC20, DDC22, and DDC25 models in their Figure 1), comparable to that of SN 2022joj.However, these models never appear as blue as SN 2022joj at peak (B −R ≃ 0.0 mag), and the 56 Ni yields are relatively low (M Ni56 ≲ 0.3 M ⊙ ), so they would result in subluminous events.In addition, we do not know any multidimensional explosion models that fail to produce any 56 Ni mixing within the ejecta, and therefore disfavor this scenario. Alternatively, in the double-detonation scenario, a layer of IGEs in the ashes of the helium shell can produce significant opacity in the outer layers of the bulk ejecta, producing a red color (Polin et al. 2019).This scenario has been proposed for a few normal-luminosity SNe with red colors at early times, including SN 2016jhr (Jiang et al. 2017) and SN 2018aoz (Ni et al. 
2022).In Figure 7 we compare the spectra of SN 2022joj at −12 days and +0 day with 1D double-detonation models from Polin et al. (2019) and 2D double-detonation models from Shen et al. (2021b).To create synthetic spectra, both models use Sedona (Kasen et al. 2006), a multi-dimensional radiative transfer (RT) simulator that assumes local thermodynamical equilibrium (LTE). In the 1D models, the most important parameters are the mass of the C/O core (M c ) and the mass of the helium shell (M s ).The maximum luminosity depends on the amount of 56 Ni synthesized in the explosion, which is predominantly determined by the total progenitor mass (M c +M s ; Polin et al. 2019).We find that the maximum brightness in B band (M B,max = −19.46mag) is reproduced by the 1D models with relatively massive progenitors (∼1.1 M ⊙ ).However, models with such massive progenitors tend to produce blue, featureless spectra at early times (e.g., the M c = 1.1 M ⊙ , M s = 0.05 M ⊙ model in Figure 7), inconsistent with the observations.Less-massive models provide a better match to the lineblanketing seen in the early spectra, but fail to reproduce the maximum brightness as well as the 4200 Å features in the observed spectra.The 1D models overestimate the pEW and the expansion velocity of the Si II λ6355 line at peak.As a reference, we overplot the theoretical M B,max -v Si ii relation of 1D double-detonation models for thin helium shells (M s ≃ 0.01 M ⊙ ) across a spectrum of progenitor masses in Polin et al. (2019) as the dashed black curve in the left panel of Figure 6, which is not in agreement with the properties of SN 2022joj.Besides, none of the models produce detectable C II features in the spectra at peak.While the 1D models do not fully reproduce the observed properties of SN 2022joj, some of these tensions can be resolved when considering viewing-angle effects in multi-dimensional models.In Figure 6 we also show the properties of three 2D double-detonation models from Shen et al. (2021b) with a variety of M c and M s .For consistency in comparing the synthetic brightness with the observations of SN 2022joj, of which the Kcorrections are unknown, the synthetic fluxes in the B filter of these models are evaluated after shifting the synthetic spectra to z = 0.02736, the redshift of SN 2022joj.We obtain the Si II line properties using the same fitting techniques as in Section 3.3. 19We again find that moremassive progenitors generally lead to higher-luminosity SNe, but different viewing angles produce significantly different spectral properties as a result of the asymmetry of the ejecta -materials closer to the point of helium ignition are less dense and expand faster (Shen et al. 2021b).In the plot, the cosine value of the polar angle relative to the point of helium ignition, µ, ranges from +0.93 (near the helium ignition point) to −0.93 (opposite to the helium ignition point).When the SN is observed along a line of sight closer to the detonation point in the shell (greater µ), it will appear fainter at maximum brightness and show a higher line velocity in Si II λ6355.For a relatively high progenitor mass (≳1.1 M ⊙ ), a high v Si ii (≳13, 500 km s −1 ) is predicted in 1D models.However, all of the 2D models with µ ≲ 0 show a lower v Si ii (≲12, 000 km s −1 ), much closer to SN 2022joj in the M B,max -v Si ii phase space.Models with µ > 0 are more consistent with 1D model predictions.It is suggested by Polin et al. 
(2019) that high-velocity SNe that follow the dashed line in Figure 6 result from sub-M Ch double detonations, while SNe in the clump centered at M B,max ≃ −19.5 mag and v Si ii ≃ 11, 000 km s −1 are likely near-M Ch explosions.Based on the 2D models, however, we should expect a similar number of high-velocity and normalvelocity double-detonation SNe Ia.A substantial fraction of the objects within the clump on the M B,maxv Si ii diagram may be sub-M Ch double-detonation events viewed from certain orientations (Shen et al. 2021b).SN 2018aoz is a double-detonation candidate that, like 19 The v Si ii is systematically higher than the values displayed in Figure 20 from Shen et al. (2021b).In Shen et al. (2021b), the v Si ii is determined using the minimum of the Si II λ6355 absorption without subtracting the continuum, such that the estimated minimum is systematically redshifted with respect to the actual line center, whereas we fit the absorption features with Gaussian profiles on top of a linear continuum.SN 2022joj, exhibits early red colors before evolving to normal luminosity and blue colors (Ni et al. 2022).Interestingly, SN 2018aoz also resides in the high-luminosity, low-velocity clump in the M B,max -v Si ii space (Ni et al. 2023), and thus, it too may be an example of a doubledetonation SN Ia viewed from the hemisphere opposite to the helium ignition point. In the bottom panels of Figure 7 we show two 2D double-detonation models (each with two viewing angles) with a total progenitor mass of ∼1 M ⊙ from Shen et al. (2021b).These models qualitatively match the ob-served spectra at maximum light.In the M c = 0.96 M ⊙ , M s = 0.042 M ⊙ model, we find that when µ = 0 (viewed from the equator), it predicts a reasonable level of line blanketing in the blue side of the spectrum at early times (lower-left panel of Figure 7).Near maximum brightness, the model also reproduces the overall shape of the observed spectrum, though the strength of nearly all the absorption lines (4200 Å features, S II, and Si II) is overestimated, and v Si ii is also overestimated (∼12,000 km s −1 ).When viewed from the hemisphere opposite to the helium ignition point (e.g., µ = −0.4), the model yields an asymmetric profile of 4200 Å features that matches the observations better.The Si II features are also predicted to be shallower, though still not as shallow as that in the observations.Nonetheless, the spectra at early times are expected to be much bluer than the observations.The M c = 0.90 M ⊙ , M s = 0.100 M ⊙ models, especially when µ = −0.4,produce even shallower and slower-expanding Si II features at maximum brightness.This is in agreement with the trend observed in the right panel of Figure 6: models with a thicker helium shell tend to exhibit shallower Si II features.However, the level of line blanketing blueward of ∼5000 Å is also underestimated at early times.In addition, none of the models produce significant C II features, though we will show in Section 4.3 that this discrepancy does not necessarily invalidate the doubledetonation interpretation. To investigate the origin of the 4200 Å features at maximum luminosity, we run additional 1D Sedona RT simulations for the M c = 0.96 M ⊙ and M s = 0.042 M ⊙ from Boos et al. (2021, adopted in the calculations of Shen et al. 
2021b). We adopt the density and chemical profile of the slice between the viewing angles µ = −0.467 and µ = −0.333 in the 2D ejecta at t_B,max estimated in the original 2D RT simulations (16.25 days after the explosion) as the input of the 1D model. The synthetic spectrum is generally consistent with the 2D RT outcomes with a viewing angle µ = −0.4. Then we run another 1D simulation with all the titanium isotopes in the slice removed. The resultant synthetic spectrum is still broadly consistent with the 1D and 2D results redward of ∼5000 Å, but shows a significant peak at ∼4100 Å resembling those in normal SNe Ia. This indicates that, in an explosion that yields an extended titanium distribution in the ejecta (such as the example double-detonation model shown here), a blend of Ti lines reshapes the spectrum around 4200 Å, leading to the absence of the peak at ∼4100 Å and a deep, asymmetric absorption feature.

The extremely red UV−optical colors near maximum luminosity are also broadly consistent with the double-detonation scenario, since the heavy elements (e.g., Ti, V, Cr, and Fe) in the outer ejecta could effectively absorb the UV photons with wavelengths around 3000 Å. However, no existing models accurately reproduce the UVOT light curves, especially in the u band (∼3100-3900 Å in the observed frame, or ∼3000-3800 Å in the rest frame of the host galaxy), where the flux is dominated by the re-emission of Ti, V, and Cr, and non-LTE effects could be important.

While none of the models presented here provide a strong match to SN 2022joj at every phase, we draw the broad conclusion that the spectroscopic properties of SN 2022joj are qualitatively consistent with a sub-M_Ch WD (≳1.0 M⊙) double detonation viewed from the hemisphere opposite to the ignition point. Observers from such a viewing angle would observe strong absorption features in the blue portion of the spectrum dominated by Ti, as well as relatively shallow and slowly expanding Si lines in the red portion. We emphasize that none of the models considered here was specifically developed and tuned to explain SN 2022joj. Customized models specifically tuned for SN 2022joj may reproduce all the observed features simultaneously, and we suggest that more 2D double-detonation simulations be performed. Furthermore, additional improvements can be made via an improved handling of the radiative transfer (e.g., non-LTE effects; see Shen et al. 2021a).

The 7300 Å Region in Nebular-Phase Spectra

In Figure 9 we compare the two nebular-phase spectra of SN 2022joj with the overluminous SNe Ia (SN 1991T, SN 1999aa, and SN 2018cnw) and the normal-luminosity SN 2011fe. Compared to that of other SNe Ia, the nebular spectra of SN 2022joj show a relatively low flux ratio between the complex at ∼7300 Å (hereafter the 7300 Å features), dominated by [Fe II] and [Ni II], and the complex at ∼4700 Å dominated by [Fe III]. This suggests high ionization in the ejecta (Wilk et al. 2020). In addition to the smaller flux ratio, the profile of the 7300 Å features in SN 2022joj is also distinct from that of other SNe. Most SNe Ia show a bimodal structure in their 7300 Å features (e.g., Graham et al. 2017; Maguire et al.
2018). The bluer peak is dominated by [Fe II] λλ7155, 7172, while [Ni II] λλ7378, 7412 usually have non-negligible contributions to the redder peak (see Figure 9). In some peculiar SNe Ia (mostly subluminous ones), the detection of [Ca II] λλ7291, 7324 has also been reported (e.g., Jacobson-Galán et al. 2020; Siebert et al. 2020). The bimodal morphology is prominent in the spectra of SN 1999aa and SN 2011fe. SN 1991T is well known for its broader emission lines in the nebular phase, so the composition of its 7300 Å features is ambiguous. In the spectra of SN 2022joj and SN 2018cnw, however, the redder peak is absent and the 7300 Å features show an asymmetric single peak, which seems to indicate a low abundance of Ni in the ejecta.

To investigate the relative contributions of [Fe II] and [Ni II] to the 7300 Å features in SN 2022joj, we model this region with multiple Gaussian emission profiles using the same technique as in Section 3.3. We include four [Fe II] lines (7155, 7172, 7388, 7453 Å) and two [Ni II] lines (7378, 7412 Å) in the fit. For each species, the relative flux ratios of the lines are fixed, with values adopted from Jerkstrand et al. (2015). For [Fe II], we set L7155 : L7172 : L7388 : L7453 = 1 : 0.24 : 0.19 : 0.31, and for [Ni II], we set L7378 : L7412 = 1 : 0.31. These line ratios are calculated assuming LTE, but the departure from LTE should not be significant under the typical conditions in the ejecta (Jerkstrand et al. 2015). We allow the amplitudes A of these Gaussian profiles to be either positive or negative. The velocity dispersions σ_v of the different lines of each species are set to be the same. For both A and log σ_v we adopt flat priors, and only allow σ_v to vary between 1000 and 6000 km s−1. In addition, we adopt wide Gaussian priors for the radial velocities v of [Fe II] and [Ni II], both centered at −1000 km s−1 with a standard deviation of 2000 km s−1. The fitted models are shown in Figure 10, where the colored curves correspond to the [Fe II] and [Ni II] emission adopting the mean values of the posterior distributions of the model parameters sampled with MCMC. The fitted parameters are listed in Table 6. In both spectra, the flux of [Ni II] is consistent with 0 (L([Ni II] λ7378)/L([Fe II] λ7155) = 0.13 ± 0.14 at +287 days and 0.18 ± 0.16 at +330 days; the flux ratios of the lines are estimated from the ratios of their pEWs, and the uncertainties are the robust standard deviations estimated with the median absolute deviation of the drawn sample). We also find that the [Ni II] velocities cannot be constrained by the data, with uncertainties in both spectra greater than the standard deviations of their Gaussian priors (2000 km s−1), again disfavoring a solid [Ni II] detection. The 7300 Å features can be well fit with [Fe II] emission only. In addition, we have tested fitting this complex with [Ca II] in addition to [Fe II] and [Ni II], but find no evidence for [Ca II].

The relative abundance of Ni and Fe, which probes the mass of the progenitor WD, can be estimated via the flux ratio of their emission lines. At >300 days after explosion, 56Fe is the dominant isotope of Fe following the decay of 56Ni through the chain 56Ni → 56Co → 56Fe. Consequently, the Fe abundance primarily depends on the yield of 56Ni. The Ni abundance, however, is sensitive to both the progenitor mass and the explosion scenario. The stable Ni isotopes (58Ni, 60Ni, and 62Ni) are more neutron-rich than the α-species 56Ni, and can only be formed in high-density regions with an enhanced electron-capture rate during the explosion (Nomoto 1984; Khokhlov 1991). Consequently, SNe Ia from sub-M_Ch WDs, with central densities that are lower than near-M_Ch WDs, are expected to show a lower
abundance of stable Ni isotopes (Iwamoto et al. 1999; Seitenzahl et al. 2013; Shen et al. 2018a).

To estimate the relative abundance of Ni and Fe, we use the equation adopted in Jerkstrand et al. (2015) and Maguire et al. (2018), which relates the flux ratio L7378/L7155 of the [Ni II] λ7378 and [Fe II] λ7155 lines to the number-density ratio n(Ni II)/n(Fe II), the temperature, and the ratio dc(Ni II)/dc(Fe II) of the departure coefficients from LTE for the two ions. Since both Ni II and Fe II are singly ionized species with similar ionization potentials, we assume that n(Ni II)/n(Fe II) is a good approximation of the total Ni/Fe ratio. As illustrated in Maguire et al. (2018), this assumption proves to be valid by modeling nebular-phase spectra at similar phases (Fransson & Jerkstrand 2015; Shingles et al. 2022), with the relative deviation from the ionization balance ≲20%. We handle the uncertainties due to the unknown temperature, ratio of departure coefficients, and ionization balance in a Monte Carlo way. We randomly generate N = 4000 samples of the temperature (3000-8000 K), the ratio of departure coefficients (1.2-2.4), and the ionization-balance factor (0.8-1.2), assuming uncorrelated uniform distributions. These intervals are again adopted from Maguire et al. (2018). Combining these quantities with the samples of line-profile parameters drawn with the MCMC, we obtain N estimates of Ni/Fe, which are effectively drawn from its posterior distribution. We find that Ni/Fe is consistent with 0, and we obtain a 3σ upper limit of Ni/Fe < 0.03. Such a low Ni abundance is more consistent with the yields of sub-M_Ch double-detonation scenarios (Shen et al. 2018a), and is much lower than the expected outcomes of near-M_Ch delayed-detonation models (Seitenzahl et al. 2013) or pure deflagration models (Iwamoto et al. 1999).

Alternatively, it is proposed in Blondin et al. (2022) that for high-luminosity SNe Ia, the absence of [Ni II] lines can be a result of high ionization of Ni in the inner ejecta, despite the fact that a significant amount of Ni exists. It is shown that the [Ni II] λλ7378, 7412 lines can be strongly suppressed even in a high-luminosity, near-M_Ch delayed-detonation model, once the Ni II/Ni III ratio at the center of the ejecta is artificially reduced by a factor of 10. Nevertheless, it remains questionable whether a physical mechanism exists to boost the ionization in the inner ejecta, where the stable Ni dominates the radioactive 56Ni and 56Co, and the deposited energy per particle due to the radioactive decay is usually low. One possible scenario is inward mixing, which brings 56Co into the innermost ejecta such that the ionization would significantly increase. However, in this case calcium would inevitably be mixed inward as well, and the resultant Ca II λλ7291, 7324 lines would stand out and dominate the 7300 Å features (Blondin et al. 2022). Other physical mechanisms are thus required to reduce the Ni II/Ni III ratio at the center of a near-M_Ch explosion.

We also find that the [Fe II] lines are significantly blueshifted (v([Fe II]) = −(2.46 ± 0.38) × 10^3 km s−1 at +287 days and −(1.56 ± 0.60) × 10^3 km s−1 at +330 days). This is consistent with other SNe Ia showing low v_Si II at maximum brightness (Maeda et al. 2010b; Maguire et al. 2018; Li et al. 2021), and is also in qualitative agreement with the asymmetric sub-M_Ch double-detonation scenario. Specifically, along a line of sight opposite to the shell detonation point, observers would see intermediate-mass elements (IMEs) with low expansion velocities, including Si II. In the meantime, the IGEs at the center of the ejecta would have a bulk velocity toward the observer (see Figure 1 and Figure 2 of Bulla et al. 2016; see also Fink et al. 2010).
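The Monte Carlo propagation described above maps onto a short calculation. The sketch below, a minimal illustration rather than the actual analysis code, reproduces the sampling structure (N = 4000 draws of temperature, departure-coefficient ratio, and ionization-balance factor over the quoted intervals) and derives a 3σ upper limit. The flux-ratio samples are mocked with the quoted +287-day mean and scatter rather than taken from the real MCMC chain, and the conversion function uses placeholder constants that are not the published Jerkstrand et al. (2015) relation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 4000  # number of Monte Carlo samples, as in the text

# Flux-ratio samples L([Ni II] 7378)/L([Fe II] 7155). In the actual analysis these come
# from the MCMC line-profile fit; here they are mocked with the quoted +287 d values
# (0.13 +/- 0.14) purely for illustration.
flux_ratio = rng.normal(0.13, 0.14, N)

# Nuisance parameters drawn from the uniform intervals adopted from Maguire et al. (2018).
temperature = rng.uniform(3000.0, 8000.0, N)   # K
dc_ratio    = rng.uniform(1.2, 2.4, N)         # dc(Ni II) / dc(Fe II)
ion_balance = rng.uniform(0.8, 1.2, N)         # ionization-balance factor

def ni_fe_from_flux_ratio(L_ratio, T, dc, ion):
    """Placeholder for the Jerkstrand et al. (2015) relation converting the
    [Ni II] 7378 / [Fe II] 7155 luminosity ratio into n(Ni)/n(Fe).
    A and E_over_k below are ILLUSTRATIVE, hypothetical constants; the published
    expression must be substituted before using this for science."""
    A, E_over_k = 4.9, 3250.0
    return L_ratio / (A * np.exp(E_over_k / T)) * dc * ion

ni_fe = ni_fe_from_flux_ratio(flux_ratio, temperature, dc_ratio, ion_balance)

# 3-sigma (99.87th percentile) upper limit on Ni/Fe
upper_limit = np.percentile(ni_fe, 99.87)
print(f"Ni/Fe 3-sigma upper limit: {upper_limit:.3f}")
```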
Carbon Features at Maximum Luminosity The spectrum at maximum luminosity exhibits strong C II λλ6580, 7234 lines at a velocity of ∼10,000 km s −1 (Figure 11).In the −12 days spectrum, we do not find strong evidence for the C II λ6580 line, though there is an absorption feature that might be associated with the C II λ7234 line at an expansion velocity of ∼14, 000 km s −1 (see the right panel in Figure 11).The feature shows a velocity dispersion that is a factor of ∼3 greater than the unambiguous C II λ7234 line at maximum luminosity, and its nature remains vague.In this section we briefly discuss the possible origin of the carbon features. In double detonations of sub-M Ch WDs, the burning of carbon is expected to be efficient, with only a small fraction of carbon left unburnt.In 1D doubledetonation models from Polin et al. (2019), only a negligible (≲10 −5 M ⊙ ) amount of carbon is unburnt in normal and overluminous SNe, which should not cause any noticeable spectroscopic features.Multidimensional simulations produce a greater amount (∼10 −4 -10 −3 M ⊙ ) of leftover carbon (Fink et al. 2010;Boos et al. 2021), which is still roughly two orders of magnitude lower than the yields of IMEs.Nevertheless, most of the unburnt carbon will be concentrated on the surface of the C/O core, forming a sharp carbon-rich shell No evident C II λ6580 lines are found in the earliest spectrum (−12 days; though there is one absorption feature that might be associated with C II λ7234 at ∼14,000 km s −1 ) or the post-maximum spectrum (+3 days).All spectra are binned with a bin size of 5 Å, except for the low-resolution SEDM spectrum (+3 days).The dashed lines correspond to wavelengths of the C II lines assuming a range of velocities. (see Figures 5-8 in Boos et al. 2021) with a high expansion velocity (>10,000 km s −1 ).Both 1D and 2D doubledetonation models have low carbon yields in the helium ashes, so the carbon features may be strengthened over time, as the photosphere moves inward to the carbonrich regions.Nevertheless, with the current resolution in the RT simulations, the contribution of such a sharp shell is unlikely to be captured in the resultant synthetic spectra. Another possible origin of unburnt carbon in a doubledetonation SN is the stripped material from a degenerate companion.In the Dynamically Driven Double-Degenerate Double-Detonation (D 6 ) model (Shen et al. 2018b), the primary C/O WD could detonate following the dynamical ignition of a helium shell from its WD companion (either a helium WD or a C/O WD with a substantial helium envelope) during the unstable mass transfer (Guillochon et al. 2010;Pakmor et al. 2013).In this case, a significant amount (∼10 −3 M ⊙ ) of materials can be stripped from a C/O WD companion, following the explosion of a 1 M ⊙ primary WD (Tanikawa et al. 2018;Boos et al. 2023, in prep.), although the amount of carbon would likely be significantly less if the companion still holds a substantial helium envelope (Tanikawa et al. 2019).The velocity of the stripped carbon is expected to be low (e.g., centered at ∼3000 km s −1 in Tanikawa et al. 2018), though more studies will need to be done to test the robustness of this estimate.Explosion mechanisms that would result in a greater amount of unburnt carbon in the ejecta include the pure deflagration (Nomoto et al. 1984) or pulsating delayed detonation (Hoeflich et al. 1995;Dessart et al. 2014) of a near-M Ch WD, and the violent merger of binary WDs (Raskin et al. 
2014).None of these models reproduce the unusual red color in the early light curves or the peculiar spectroscopic features of SN 2022joj as well as the helium-shell double-detonation scenario does. We conclude that while it remains to be questioned if double detonations could really produce strong carbon features, the detection of C II λλ6580, 7234 lines in SN 2022joj does not necessarily invalidate our hypothesis of its double-detonation origin. CONCLUSIONS We have presented observations of SN 2022joj, a peculiar SN Ia.SN 2022joj has an unusual color evolution, with a remarkably red g ZTF − r ZTF color at early times due to continuous absorption in the blue portion of its SED.Absorption features observed around maximum light simultaneously suggest high (a blue continuum and shallow Si II lines similar to those of overluminous, 99aa-/91T-like SNe) and low (the tentative Ti II features resembling those of subluminous, 91bg-like SNe) photospheric temperatures.The nebular-phase spectra of SN 2022joj suggest a high ionization and low Ni abundance in the ejecta, consistent with a sub-M Ch explosion. The early red colors are most likely due to a layer of IGEs in the outermost ejecta as products of a heliumshell detonation, in the sub-M Ch double-detonation scenario.If the asymmetric ejecta are observed from the hemisphere opposite to the helium ignition point, we find that the resultant synthetic spectra could qualitatively reproduce some of the observed properties, including (i) significant line-blanketing of flux due to IGEs at early phases; (ii) strong absorption features around 4200 Å as well as relatively weak Si II features near maximum brightness, and (iii) blueshifted [Fe II] λ7155 accompanied with a relatively low expansion velocity of Si II at peak.Current double-detonation models cannot reproduce the strong C II lines in SN 2022joj at its maximum luminosity, but the double-detonation hypothesis is not necessarily disfavored and an improved RT treatment will be needed in modeling the observational effects of unburnt carbon.No existing double-detonation model can fully explain all the observational properties of SN 2022joj.As a result, it is possible that some alternative model is superior, though we find that the early red colors are difficult to explain with alternative explosion scenarios.Further refinement of multidimensional models covering a finer grid of progenitor properties may answer the question if the peculiar SN 2022joj is really triggered by a double detonation. Figure 3 . Figure 3.The LRIS spectrum reveals an Hα emission line from the host galaxy at 6742.4 Å, corresponding to z = 0.02736.Left: the Hα emission in the observed 2D and 1D spectra.This emission line sits in the region free of strong night-sky lines and is unlikely due to bad sky subtraction.Right: image of the host galaxy and the position of SN 2022joj, with the orientation of the LRIS slit overplotted. Figure 4 . Figure 4.The color-color diagrams using UVOT photometry show that SN 2022joj (black star) has very unusual UV−optical colors at maximum luminosity compared to normal SNe Ia (gray dots; a sample of 29 normal SNe Ia from Brown et al. 2018).The arrows mark how SN 2022joj would move in each color-color space if there were host reddening with the visual extinction of A V,host = 1 (assuming RV = 3.1). Figure 6 . 
SN 2022joj (black star) is an SN Ia showing a normal brightness in B and remarkably shallow Si II features with a relatively low expansion velocity at maximum luminosity, compared to a sample of normal SNe Ia (gray dots) from Zheng et al. (2018) and Burrow et al. (2020). Left: the B-band absolute magnitude vs. the expansion velocity of Si II λ6355 at maximum brightness. Right: the pseudo-equivalent widths (pEWs) of the two Si II lines, Si II λλ5972, 6355, at maximum brightness. The dashed black line corresponds to the theoretical MB,max-v_Si II relation of 1D double-detonation models for thin helium shells across a spectrum of progenitor masses in Polin et al. (2019). The colored × symbols show 2D double-detonation models from Shen et al. (2021b) with different C/O core masses (Mc), helium shell masses (Ms), and viewing angles. For each model, multiple symbols are shown to summarize the effect of different viewing angles, from µ = −0.93 to µ = +0.93 (µ defined as the cosine of the polar angle relative to the point where the helium-shell detonation occurs). Parameters of the candidate double-detonation normal SN Ia, SN 2018aoz (Ni et al. 2023), are also overplotted as an orange diamond.

Figure 7. Comparisons of an early spectrum (−12 days) and a maximum-light spectrum (+0 days) of SN 2022joj (black) with two sets of double-detonation models. For each synthetic spectrum, the phase relative to tB,max is listed (the time since explosion is shown in parentheses). Top: comparison with 1D models from Polin et al. (2019) with different C/O core (Mc) and He shell (Ms) masses. Bottom: comparison with 2D models from Shen et al. (2021b) with different masses and viewing angles µ. The observed spectra have been corrected for Galactic extinction.

Figure 8. Comparison of the Sedona synthetic spectra at maximum luminosity indicates that Ti dominates the 4200 Å features in a double-detonation model. Two 1D Sedona models (blue and salmon pink) are run using the density and chemical profile of a slice between the viewing angles µ = −0.467 and µ = −0.333 of the Mc = 0.96 M⊙ and Ms = 0.042 M⊙ model from Boos et al. (2021). While the synthetic spectrum in the original 2D Sedona model can be well reproduced in a 1D run (blue), the model with all the titanium isotopes removed (salmon pink) cannot reproduce the remarkable 4200 Å features and exhibits a peak at ∼4100 Å.

Figure 10. Fits to the 7300 Å region containing [Fe II] and [Ni II] features are consistent with a low Ni II abundance. The observed spectra are shown in gray. The colored lines correspond to the models of the [Fe II] (orange) and [Ni II] (purple) features. For the model parameters we adopt the mean values of their posterior distributions. The black solid lines are the overall models.

Figure 11. Prominent C II λλ6580, 7234 absorption lines at maximum luminosity. No evident C II λ6580 lines are found in the earliest spectrum (−12 days; though there is one absorption feature that might be associated with C II λ7234 at ∼14,000 km s−1) or the post-maximum spectrum (+3 days). All spectra are binned with a bin size of 5 Å, except for the low-resolution SEDM spectrum (+3 days). The dashed lines correspond to the wavelengths of the C II lines assuming a range of velocities.

Table 2. Optical and UV Photometry of SN 2022joj. Note: Phase is measured relative to the B-band peak in the rest frame of the host galaxy. The resolution R is reported for the central region of the spectrum.

Table 5.
Fits to the expansion velocities and pEWs of Si II λλ5972, 6355 and the Ca II IRT of SN 2022joj.

Table 6. Fits to the late-time spectra around 7300 Å with [Fe II] and [Ni II] emission.
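For reference, the velocity and pEW measurements summarized in Table 5 amount to fitting each absorption feature with a Gaussian on a pseudo-continuum and integrating the fractional depth. The sketch below shows one such measurement for a single Si II λ6355 profile on synthetic data; the single-Gaussian model, the linear continuum, and all numerical values are illustrative simplifications of the multi-Gaussian procedure used in the paper, not the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458      # speed of light in km/s
LAMBDA_REST = 6355.0    # Si II rest wavelength in Angstrom

def absorption_model(wave, m, b, depth, center, sigma):
    """Linear pseudo-continuum minus one Gaussian absorption component."""
    continuum = m * wave + b
    return continuum - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def measure_line(wave, flux):
    """Fit one Gaussian on a linear continuum; return velocity (km/s) and pEW (Angstrom)."""
    p0 = [0.0, np.median(flux), 0.3 * np.ptp(flux), wave[np.argmin(flux)], 30.0]
    popt, _ = curve_fit(absorption_model, wave, flux, p0=p0)
    m, b, depth, center, sigma = popt

    # Blueshift of the absorption minimum relative to rest (positive = expansion velocity)
    velocity = C_KMS * (LAMBDA_REST - center) / LAMBDA_REST

    # pEW: integrate (1 - F_model/F_continuum) over the fitted region
    continuum = m * wave + b
    model_flux = absorption_model(wave, *popt)
    pew = np.trapz(1.0 - model_flux / continuum, wave)
    return velocity, pew

# Example with synthetic data (hypothetical numbers, for illustration only)
wave = np.linspace(6000.0, 6400.0, 400)
clean = absorption_model(wave, 0.0, 1.0, 0.4, 6130.0, 40.0)
flux = clean + np.random.default_rng(0).normal(0.0, 0.01, wave.size)
v, pew = measure_line(wave, flux)
print(f"v(Si II 6355) ~ {v:.0f} km/s, pEW ~ {pew:.1f} A")
```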
Planetary period oscillations in Saturn ’ s magnetosphere : comments on the relation between post-equinox periods determined from magnetic field and SKR emission data We discuss the properties of Saturn planetary period oscillations (PPOs) deduced from analysis of Saturn kilometric radiation (SKR) modulations by Fischer et al. (2014), and from prior analysis of magnetic field oscillations data by Andrews et al. (2012) and Provan et al. (2013), with emphasis on the post-equinox interval from early 2010 to early 2013. Fischer et al. (2014) characterize this interval as showing single phase-locked periods in the northern and southern SKR modulations observed in polarizationseparated data, while the magnetic data generally show the presence of separated dual periods, northern remaining shorter than southern. We show that the single SKR period corresponds to the southern magnetic period early in 2010, segues into the northern period in late 2010, and returns to the southern period in mid-2012, approximately in line with changes in the dominant magnetic oscillation. An exception occurs in mid-February to late August 2011 when two periods are again discerned in SKR data, in good agreement with the ongoing dual periods in the magnetic data. Fischer et al. (2014) discuss this change in terms of a large jump in the southern SKR period related to the Great White Spot storm, which the magnetic data show is primarily due instead to a reappearance in the SKR data of the ongoing southern modulation in a transitory interval of resumed southern dominance. In the earlier interval from early April 2010 to mid-February 2011 when Fischer et al. (2014) deduce single phase-locked periods, we show unequivocal evidence in the magnetic data for the presence of separated dual oscillations of approximately equal amplitude. We suggest that the apparent single SKR periods result from a previously reported phenomenon in which modulations associated with one hemisphere appear in polarization-separated data associated with the other. In the following interval, mid-August 2011 to early April 2012, when Fischer et al. (2014) again report phaselocked northern and southern oscillations, no ongoing southern oscillation of separate period is discerned in the magnetic data. However, the magnetic amplitude data show that if a phase-locked southern oscillation is indeed present, its amplitude must be less than ∼ 5–10 % of the northern oscillation. 
Introduction The phenomenon of planetary period oscillations (PPOs) has formed an important topic in studies of Saturn's magnetosphere during the Cassini era.Observations have shown that two seasonally changing oscillations are present with slightly different periods, one associated with the northern hemisphere and the other with the southern (Galopeau and Lecacheux, 2000;Kurth et al., 2008;Gurnett et al., 2009).The physical picture emerging is one in which rotating current systems are driven from the two ionospheres, most probably through coupling to rotating twin-vortex flows in the polar thermosphere (Smith, 2010;Jia et al., 2012;Southwood and Cowley, 2014).The planetary field communicates this force to the magnetospheric plasma via a current system that rotates wave-like through the sub-corotating plasma, thus leading to rotating magnetic field and plasma perturbations throughout the magnetosphere (e.g.Carbary et al., 2007;Andrews et al., 2008Andrews et al., , 2010a;;Burch et al., 2009).The force from one hemisphere may also be communicated to the other if the associated field-aligned currents (FACs) extend Published by Copernicus Publications on behalf of the European Geosciences Union.S. W. H. Cowley and G. Provan: Planetary period oscillations in Saturn's magnetosphere into an inter-hemispheric system (Southwood and Kivelson, 2007).PPO-modulation of the FACs associated with plasma sub-corotation and the solar wind interaction also leads to modulated auroral electron acceleration in regions of upward FAC (Nichols et al., 2008(Nichols et al., , 2010a, b;, b;Southwood and Kivelson, 2009;Badman et al., 2012;Hunt et al., 2014), together with associated modulated radio emissions at kilometre wavelengths generated via the cyclotron maser instability (Zarka, 1998;Lamy et al., 2010Lamy et al., , 2011)).The latter emission is termed Saturn kilometric radiation (SKR), through which the PPO phenomenon was first observed in Voyager data (Warwick et al., 1981(Warwick et al., , 1982;;Gurnett et al., 1981). To date, the phases and periods of the PPOs have been tracked near-continuously during the Cassini mission using both SKR data (e.g.Kurth et al., 2008;Lamy, 2011;Gurnett et al., 2011) and magnetic field data obtained when the spacecraft is located inside the magnetosphere (e.g.Andrews et al., 2012;Provan et al., 2013Provan et al., , 2014)).These analyses are in good general agreement that the two periods were well-separated at ∼ 10.6 h for the northern period and ∼ 10.8 h for the southern period during southern summer conditions early in the Cassini mission, and that they converged towards ∼ 10.7 h over a ∼ 1 Earth-year interval spanning Saturn equinox in mid-August 2009.Since both magnetic perturbations and SKR modulations are believed to be associated with the same rotating current systems, the rotation periods are expected in principle to be identical.However, somewhat different results have been reported for the post-equinox interval (Provan et al., 2013(Provan et al., , 2014;;Fischer et al., 2014), which it is the purpose of this paper to discuss. 
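Both the SKR and magnetic analyses referred to above derive periods from the drift of measured oscillation phases with time. As a concrete illustration of that step, the sketch below fits running segments of unwrapped phase measurements with straight lines and converts the slope into a period, using the ~200-day window and ~10-day step quoted later for the magnetic analysis. The sampling cadence, noise level, and the use of total unwrapped phase (rather than phase measured relative to a guide period, as in the actual analyses) are simplifying assumptions for illustration only.

```python
import numpy as np

def running_periods(t_days, phase_deg, window=200.0, step=10.0):
    """Linear fits to running segments of unwrapped phase data.
    Returns window centres (days) and fitted periods (hours)."""
    centres, periods = [], []
    start = t_days.min()
    while start + window <= t_days.max():
        sel = (t_days >= start) & (t_days <= start + window)
        if sel.sum() >= 5:                                    # need enough revs for a stable fit
            slope = np.polyfit(t_days[sel], phase_deg[sel], 1)[0]   # phase rate in deg/day
            periods.append(360.0 / slope * 24.0)                    # rotation period in hours
            centres.append(start + window / 2.0)
        start += step
    return np.array(centres), np.array(periods)

# Synthetic example (hypothetical sampling): one phase measurement per ~10-day rev,
# for an oscillation with a true period of 10.68 h plus measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0, 10.0)
true_period_h = 10.68
phase = 360.0 * (t * 24.0 / true_period_h) + rng.normal(0.0, 10.0, t.size)
centres, periods = running_periods(t, phase)
print(periods.round(3))   # values should scatter tightly around 10.68 h
```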
A potentially significant issue arises in the determination of phases and periods from SKR modulation data from the "dual modulations" that are sometimes observed in the emission apparently from a given hemisphere.The conically beamed SKR sources in one hemisphere generally illuminate the region to ∼ 20 • below the equator in the other (Lamy et al., 2008a;Kimura et al., 2013), with the exception of a nearplanet shadow zone extending ∼ 4 R S from the planet in the equatorial plane (Lamy et al., 2008b) (R S is Saturn's 1 bar equatorial radius equal to 60 268 km).Consequently, nearequatorial SKR observations outside this radius, corresponding to a significant proportion of all Cassini observations, will generally observe modulated emission from both hemispheres.However, as first shown by Lamy (2011), these can be distinguished by separating the emissions with respect to their sense of circular polarization, emissions from the north and south being predominantly right-hand (RH) and lefthand (LH) polarized, respectively, corresponding to the extraordinary mode.It is notable, however, that during some intervals, modulations associated with one hemisphere are also observed in the polarization-separated data from the other hemisphere so that dual modulations are present (e.g.Fig. 1 of Lamy, 2011).The origin of this effect is at present uncertain: it is not clear whether it is a real physical phenomenon due, e.g. to unstable auroral electron distributions propagat-ing along the field from one hemisphere to the other or to inter-hemispheric currents as mentioned above, or whether it is an effect of the polarization-separation technique.It is notable, however, that the effect appears to be more prevalent in near-equatorial data, corresponding to the majority of that discussed here, rather than in high-latitude data when the spacecraft is largely located in the shadow zone of emissions from the opposite hemisphere. Under most circumstances the resulting ambiguity in the hemispheric origin of the SKR modulations is readily resolved from consideration of the continuity of the data in both polarization channels, as well as comparison with data from adjacent intervals at higher latitudes.This is certainly true of the pre-equinox data studied by Lamy (2011).However, the dual modulation effect gives rise to the potential for confusion post-equinox, when the northern and southern periods are closer together, with a difference in period of ∼ 0.05 h compared with ∼ 0.2 h during southern summer, and where the two modulations are found in magnetic field data to undergo large simultaneous abrupt (revolutionto-revolution; rev-to-rev) changes in relative amplitude from southern to northern dominance, and vice versa, at intervals of ∼ 100-200 days (Provan et al., 2013(Provan et al., , 2014)).If in a particular interval the "secondary" modulation from a particular hemisphere (i.e. a particular sense of circular polarization) at the period of the opposite hemisphere is of lesser amplitude than that of the "primary", then following the principal peaks of the emission in time, as, e.g. in Fig. 5 of Fischer et al. 
(2014), will reveal the primary phase and hence period associated with that hemisphere.However, if we suppose that the relative amplitude of the modulations changes significantly, such that the amplitude of the secondary should exceed that of the primary, then the same procedure will reveal the period associated with the opposite hemisphere, regardless of whether or not a lesser primary modulation is still present.Not only this, but the phase of the secondary emission will correspond exactly (within observation uncertainties) to that in the opposite hemisphere, i.e. it will appear that the modulations from the two hemispheres are "phaselocked" at a common period.If changes in the relative amplitudes of the two modulations then result in an abrupt switch between these two conditions in the emissions from a given hemisphere (i.e.circular polarization state), the modulations from that hemisphere will appear to undergo an abrupt switch to or from the phase-locked period of the opposite hemisphere.As we will suggest below, just such conclusions have recently been drawn by Fischer et al. (2014) studying SKR modulations in the post-equinox interval, while analyses of the magnetic field oscillations clearly show two oscillations with separated periods generally to be present. Comparison of PPO periods determined from SKR and magnetic field data Figure 1 provides an overview of the PPO periods determined from Cassini SKR modulation and magnetic field oscillation data spanning the interval from the beginning of 2004 to mid-2013.Saturn vernal equinox in mid-August 2009 is marked by the vertical arrow.The coloured spectrogram is taken from Fig. 1 of Fischer et al. (2014), here inverted to show the period vs. time in days, where throughout this paper t = 0 corresponds to 00:00 UT on 1 January 2004.This shows the peak-to-peak modulation power of the complete SKR emission integrated over 80-500 kHz (i.e.not separated by circular polarization), thus in principle exhibiting modulations associated with both hemispheres.Analysis details are given by Gurnett et al. (2009Gurnett et al. ( , 2011)), but we note here that the powers were derived using a 240-day data window shifted in 30-day steps.The over-plotted red and blue lines show the southern and northern periods, respectively, derived from magnetic field data over the same interval by Andrews et al. (2012) and Provan et al. (2013Provan et al. 
( , 2014)).These periods were determined from rev-to-rev phase measurements of the PPO magnetic oscillations observed within the quasi-dipolar magnetospheric "core" region (dipole L ≤ 12) on few-day near-equatorial Cassini periapsis passes where the northern and southern oscillations combine to produce beats, augmented with polar phase data obtained on highly inclined orbits where the oscillations are found to be purely characteristic of the hemisphere concerned (Andrews et al., 2010b(Andrews et al., , 2012)).The nature of the orbits, and hence of the magnetic phase data, is indicated by the coloured bar at the top of the panel, where blue indicates near-equatorial orbits and green highly inclined, lettered sequentially from the start of the inorbit mission (mid-2004).The periods were determined from running ∼ 200-day segments of phase data evaluated every ∼ 10 days for intervals A-E1 (Andrews et al., 2012), thus similar to the intervals employed in the SKR analysis.Beyond E1, however, a sequence of abrupt (rev-to-rev) changes in the magnetic PPO properties were observed at ∼ 100-200day intervals shown by the white dotted lines, characterized principally by simultaneous changes in the amplitudes of the two oscillations, together with small changes in the oscillation periods and phases.Only single linear fits to the phase data yielding single values for the northern and southern periods have been determined in these short intervals (Provan et al., 2013(Provan et al., , 2014)).The vertical dashed lines in Fig. 1 also show the interval of the Saturn Great White Spot (GWS) storm, a central theme of the discussion by Fischer et al. (2014), specifically the interval of storm-associated lightning strokes observed in radio data, from initiation on 5 December 2010 to complete cessation on 28 August 2011. Prior to equinox two clearly separated periods were present in both data sets, the longer ∼ 10.8 h corresponding to the southern oscillation, and the shorter ∼ 10.6 h to the northern, thus separated by ∼ 0.2 h (i.e.∼ 2 %).The agree-ment between SKR and magnetic determinations is generally excellent (Andrews et al., 2012), with the exception of some minor deviations in intervals of weak northern magnetic oscillations indicated by the blue and white dashed lines.During the ∼ 1-year interval across the equinox, however, the two oscillations converged towards a common period of ∼ 10.67 h, and remained in this vicinity, separated by ∼ 0.05 h, until the end of the interval shown.Fischer et al. (2014) interpret the SKR data as indicating a crossing of the two periods post-equinox until early 2010, following which there is generally only one SKR period until early 2013, shown in Fig. 1 by the black dashed line.An exception occurs in an interval starting mid-February 2011 (E2 in the top bar) in which separated periods are again apparent in the SKR data, which the authors associate with the GWS.However, the magnetic data presented by Andrews et al. (2012) and Provan et al. (2013Provan et al. ( , 2014)), and referenced but not discussed by Fischer et al. 
(2014), shows northern and southern periods to be present essentially throughout the post-equinox interval, separated by ∼ 0.05 h, with the southern period remaining longer than the northern.An exception in this case occurs during the second half of 2011 to early 2012 (interval E3) when no southern oscillations were detected in the magnetic data, such that no southern period is then shown in the figure.Comparison with the magnetic periods shows that in E1 the single period identified by Fischer et al. (2014) corresponds approximately to the southern magnetic period in early-to-mid-2010 but segues into the northern magnetic period in late 2010 and early 2011, remains at the northern period throughout 2011 and into 2012 (E2 and E3), and then segues back to the southern magnetic period in mid-2012 (interval E4).This variation approximately mirrors the behaviour of the relative amplitudes of the northern and southern oscillations observed in the equatorial magnetic data by Provan et al. (2013Provan et al. ( , 2014)).The amplitudes were modestly southern-dominant in the initial part of E1 but became modestly northern-dominant towards its end, and then became strongly northern-dominant in E3 before relaxing again from less strongly northern-dominant to weakly southern-dominant across E4, F1 and F2 (see Sect. 3).The exception is interval E2, where, as discussed below, the single period indicated by the black dashed line in Fig. 1 continues to correspond to the northern period, while the magnetic data show the resumption of strong southern dominance (k ≈ 0.3), in line with the second SKR period that is evident in Fig. 1.Although the post-equinox SKR and magnetic periods are thus often seen to be in good agreement in Fig. 1, it is notable that there is no obvious counterpart in the SKR data of the northern magnetic period in the post-equinox interval from late 2009 to late 2010 (i.e.throughout much of E1), or of the southern magnetic period between late 2010 and early 2011 (i.e. the end of E1), or of the northern magnetic period late in 2012 in E4, despite the magnetic oscillations providing clear evidence of their existence.2014), shown here as modulation period vs. time in days since 00:00 UT on 1 January 2004, over the interval from the beginning of 2004 to mid-2013.At the top we show the start of each year, the Cassini rev numbers defined from apoapsis to apoapsis and plotted at periapsis, and a coloured bar which indicates the nature of the Cassini orbit either near-equatorial (blue) or highly inclined (green), lettered from A to F, which determines the nature of the data employed to undertake the magnetic field PPO analysis.Over-plotted are the PPO periods determined from the magnetic field analysis of Andrews et al. (2012) and Provan et al. (2013Provan et al. ( , 2014)), red for the southern period and blue for the northern (shown dashed when the northern oscillations were not prominent in these data).The vertical white dotted lines indicate the times of six abrupt transitions in the PPO properties observed in the magnetic field data post-equinox (indicated by the white vertical arrow), defining intervals E2-F2 (as in Provan et al., 2013Provan et al., , 2014)).No southern magnetic field oscillation was discerned in E3 such that no southern period is shown in that interval.The white vertical dashed lines indicate the interval of the GWS, shown as in Fischer et al. (2014) from the start of storm-associated lightning activity to its complete cessation (5 December 2010 to 28 August 2011). 
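Because the post-equinox northern and southern periods differ by only ~0.05 h, their combined signal beats slowly, and the number of cycles accumulated by the north-minus-south phase difference over a given interval follows directly from the two periods. The short sketch below evaluates this beat period and cycle count; the input values are round illustrative numbers, not the fitted periods of Andrews et al. (2012) or Provan et al. (2013, 2014).

```python
def beat_cycles(tau_n_hours, tau_s_hours, interval_days):
    """Beat period of two close PPO periods and the number of beat cycles
    (cycles of the N-S phase difference) accumulated over an interval."""
    rate_n = 360.0 * 24.0 / tau_n_hours      # northern phase rate in deg/day
    rate_s = 360.0 * 24.0 / tau_s_hours      # southern phase rate in deg/day
    beat_rate = rate_n - rate_s              # > 0 when the northern period is shorter
    tau_beat_days = 360.0 / beat_rate
    n_cycles = interval_days / tau_beat_days
    return tau_beat_days, n_cycles

# Illustrative post-equinox values: periods separated by ~0.05 h, ~200-day interval.
tau_beat, cycles = beat_cycles(10.63, 10.68, 200.0)
print(f"beat period ~ {tau_beat:.0f} days, ~{cycles:.1f} beat cycles over 200 days")
```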
Results using polarization-separated SKR data are also shown by Fischer et al. (2014) in their Figs.5-7, where emission intensity modulations were tracked by eye, augmented by directional statistics analysis, to yield phases vs. time and hence periods.As above, they report corresponding modulations in both polarization channels throughout the interval indicated by the black dashed line in Fig. 1 indicative of a single phase-locked period, with the exception of interval E2, when dual modulations become evident.Figure 2 compares these periods with those derived by Andrews et al. (2012) and Provan et al. (2013Provan et al. ( , 2014) ) over the interval from the beginning of 2010 to mid-2012 (later E1 to early E4), where the circles joined by dotted lines show the SKR periods (Fischer et al., 2014, Fig. 6) and the solid lines the magnetic periods, where red data show the southern period and blue the northern.The solid squares during E1 also show magnetic periods derived by Provan et al. (2013) using an alternative multi-parameter fit which employs 150-day intervals of phase data separated by 50 days.Figure 3 also shows the phase difference (modulo 360 • ) between the two oscillations over the same interval, northern phase minus southern phase, where the green line shows the SKR data (Fischer et al., 2014, Fig. 7) and the purple line the magnetic.No magnetic data are shown in E3 since southern oscillations were not discerned in this interval as indicated above, but resume in E4 with an arbitrarily assigned modulo 360 • value. At the beginning of the interval in Figs. 2 and 3, the SKR modulations indicate separate northern and southern periods, with the northern longer than the southern, but become essentially the same near t ≈ 2300 days (mid-April 2010) and remain so until the beginning of E2 near t ≈ 2600 days (mid-February 2011).The SKR phase difference in Fig. 3 also varies about zero during the latter interval, such that the phases of the northern and southern (polarization-separated) oscillations are approximately equal (modulo 360 • ), i.e. phase-locked, as well.The two SKR periods then separate in E2, the southern being longer than the northern such that the phase difference in Fig. 3 increases by three cycles over the interval, before coalescing again at the beginning of E3 just prior to t ≈ 2800 days (late August 2011) to a common value similar to that at the end of E1 and the northern period in E2.The phase difference in E3 is again near zero (modulo Fischer et al., 2014), while values derived from magnetic field phase data are shown by solid lines (from Andrews et al., 2012 andProvan et al., 2014) and squares (Provan et al., 2013).Blue data correspond to the northern oscillations and red to the southern.Year boundaries, rev numbers, and interval identifiers are shown at the top of the plot as in Fig. 1, together with vertical dashed and dotted lines as also in Fig. 1.B , plotted vs. time between the start of 2010 and mid-2012.The green line was derived from SKR modulation data (from Fig. 7 of Fischer et al., 2014), corresponding to the circles joined by dotted lines in Fig. 2, while the purple line and squares were derived from magnetic field data by Andrews et al. (2012) and Provan et al. (2013Provan et al. ( , 2014)), corresponding to the solid lines and squares in Fig. 2. 
No magnetic data are shown in E3 since no southern oscillation was discerned in these data, such that the phase in E4 is shown starting at an essentially arbitrary modulo 360 • value.Year boundaries, rev numbers, and interval identifiers are shown at the top of the plot as in Fig. 2, together with vertical dashed and dotted lines as also in Fig. 2. 360 • ), i.e. the two modulations are again phase-locked.Comparison with the magnetic periods in Fig. 2 shows little correspondence at the start of the interval early in 2010, where the SKR periods are apparently "reversed" (northern longer than southern) compared with the magnetic, but the phase-locked SKR periods in E1 are seen to correspond to the southern magnetic period between t ≈ 2300 and ∼ 2450 days (late April to mid-September 2010), and to the northern magnetic period between ∼ 2500 and ∼ 2600 days (late October 2010 to mid-February 2011), as indicated above.The phase difference of the magnetic oscillations in Fig. 3 undergoes ∼ two clear beat cycles during the interval in E1 in which the SKR modulations remain phase-locked.Overall, the northern and southern SKR and magnetic periods then agree well in E2, and show closely similar phase gradients in Fig. 3. Following this, in E3, the resumed phase-locked SKR modulations have a closely similar period to the single northern magnetic period, while in E4 the similarly phase-locked SKR modulations have a closely similar period to the northern magnetic period and not the southern.As discussed in Sect. 1, we here suggest that the phase-locked single periods deduced from the SKR data, but not the magnetic data, are likely due to the phenomenon first noted by Lamy (2011), in which the SKR modulation associated with a dominant hemisphere also appears in the (polarization-separated) data for the other. The different perceptions to which these analyses give rise, however, can lead to significant consequences.In discussing the SKR periods in relation to the GWS in their Sect.2, Fischer et al. ( 2014) characterize the interval in Fig. 2 as showing a "large jump discontinuity" of the southern modulations to a longer period in E2 compared with the southern phaselocked periods they deduce at the end of E1 and in E3, the magnitude of the jump being ∼ 0.05 h.They suggest that this "unique feature" is associated with the presence of the GWS, with which it is approximately contemporaneous.However, the magnetic results instead suggest that these effects result primarily from changes in the relative amplitude of the northern and southern oscillations, from northern-dominant at the end of E1, to southern dominant in E2, and then back to northern-dominant in E3, thus leading to the temporary reappearance of southern modulations in the SKR data, with the oscillation periods, to a first approximation, ongoing.The magnetic data do indicate an increase in the southern period across the E1/E2 boundary, but by ∼ 0.02 h, less than half that discussed by Fischer et al. (2014), while no conclusions can be drawn at all concerning changes in the southern period at the following E2/E3 boundary since no southern oscillations were observed in E3.We note, however, that when southern oscillations resume in interval E4 the period is again similar to that determined in E2 (and E1), within ∼ 0.01 h.The variation in the northern oscillation periods between the end of E1 and E4 is of similar magnitude, less than ∼ 0.01 h. 
In the following sections we further address issues related to the discrepancies noted above in the PPO periods derived from magnetic and SKR data, a topic on which Fischer et al. (2014) did not comment. Specifically, in Sect. 3 we review the magnetic PPO data, focussing on the evidence for separated dual periods in E1 when single phase-locked northern and southern periods were deduced by Fischer et al. (2014). In Sect. 4 we also quantify and strongly limit the possible amplitude of a phase-locked southern oscillation in E3.

Post-equinox magnetic oscillations and relation to SKR modulations in interval E1

The amplitudes and phases of the oscillations in the three spherical polar field components i (referenced to the planetary spin and magnetic axes), observed during each few-day periapsis pass through the core magnetosphere, are obtained by fitting the data to the azimuthally rotating function

B_i(ϕ, t) = B_0i cos(Φ_g(t) − ϕ − ψ_i),   (1)

where Φ_g(t) is a guide phase corresponding to a fixed period τ_g close to that of the oscillations (Φ_g(t) = (360 t / τ_g) deg), and ϕ is azimuth measured from noon increasing in the sense of planetary rotation, thus determining B_0i and ψ_i for each component i = (r, θ, ϕ) on each pass. The time variations of the amplitudes and phases are then given by the rev-to-rev data, with total phase

Φ_i(t) = Φ_g(t) − ψ_i(t).   (2)

The observed oscillations are taken to consist of the sum of northern (N) and southern (S) contributions,

B_i(ϕ, t) = B_0Ni cos(Φ_N(t) − ϕ − γ_iN) + B_0Si cos(Φ_S(t) − ϕ − γ_iS),   (3)

where B_0N,Si are the amplitudes of the two oscillations with north/south ratio k = B_0Ni / B_0Si assumed the same for each field component, Φ_N,S(t) are the magnetic phases with corresponding periods τ_N,S = 360/(dΦ_N,S/dt) if the phases are expressed in degrees, and the constant angles γ_iN,S describe the relative phases of the three field components of the two systems, where by definition we take γ_rN,S ≡ 0° such that Φ_N,S(t) corresponds explicitly to the phases of the r components. Observations then show that γ_ϕN,S = 90° for both systems, such that the ϕ components are in lagging quadrature with r in the equatorial region, forming a rotating quasi-uniform perturbation field, while γ_θS = 0° and γ_θN = 180°, such that the θ components are in phase with r for the southern system, but in anti-phase with r for the northern. The phase Φ_i(t) and the amplitude B_0i(t) of the observed combined oscillations are then modulated at the beat period of the two oscillations τ_B(t) = 360/(dΦ_B/dt), where Φ_B is the beat phase

Φ_B(t) = Φ_N(t) − Φ_S(t).   (4)

The form of the modulations depends on both Φ_B and k (Andrews et al., 2012; Provan et al., 2013), such that in general both northern and southern phases can be determined by fits to the modulated phase data, together with the amplitude ratio. The individual component amplitudes can then be found by fits to the observed amplitude data. Practically, the limit at which one oscillation can be discerned in the presence of the other, determined by uncertainties of ∼ 10° in the fitted phases, is that the amplitude of the weaker oscillation should not be smaller than a factor of ∼ 5 less than that of the stronger, i.e. dual oscillations can be discerned if 0.2 ≤ k ≤ 5 (Provan et al., 2013). For k outside this range only the properties of the stronger oscillation can be determined.

Results adapted from Figs. 6 and 7 of Provan et al. (2013) are shown in Fig. 4, spanning the same interval as Figs. 2 and 3. In Fig.
Results adapted from Figs. 6 and 7 of Provan et al. (2013) are shown in Fig. 4, spanning the same interval as Figs. 2 and 3. In Fig. 4a we show the rev-by-rev phase data relative to a guide phase of period 10.64 h, where specifically we plot $(\Phi_i + \gamma_{iN}) - \Phi_g = -(\psi_i - \gamma_{iN})$ corresponding to "N format", such that the phases of each field component would fall on a common line for a pure northern oscillation. Red, green, and blue circles correspond to the r, θ, and ϕ field components, respectively, shown for clarity over two phase cycles. Figure 4b similarly shows the same phase data relative to a guide phase of period 10.69 h, where now we plot $(\Phi_i + \gamma_{iS}) - \Phi_g = -(\psi_i - \gamma_{iS})$ corresponding to "S format", such that the phases of each component would fall on a common line for a pure southern oscillation. When both oscillations are present, however, the beat-modulated phases are clustered about the northern phase in N format, and about the southern phase in S format, allowing both to be determined when k lies within the above limits. The black lines show these phases determined using directional statistics methods, where in E1 we show the results of a running determination taking 25 data points at a time, typically spanning ∼ 200 days, at steps of ∼ 10 days (Andrews et al., 2012), while in E2–E4 we show the results of single linear fits similarly spanning ∼ 200 days between each abrupt transition (Provan et al., 2013). The northern and southern periods follow from the slopes of these lines, as shown by the blue and red lines for the northern and southern oscillations in panel (d) (and previously for the intervals concerned in Figs. 1 and 2). The beat modulation of the data about these phases allows subsequent determination of the amplitude ratio k shown by the black lines in panel (e), where the centre dotted line indicates k = 1 (equal amplitudes), with k being plotted below the line (0 ≤ k ≤ 1), and (1/k) above (1 ≥ (1/k) ≥ 0). The modelled beat-modulated phases are shown in panels (a) and (b) by the green lines for the θ component and the purple lines for (r, ϕ). Alternatively, and equivalently, k may be determined from fits to the beat-modulated difference between the θ and (r, ϕ) component phases shown in panel (c), where the red circles show $\Phi_{\theta-r} = \Phi_\theta - \Phi_r = \psi_r - \psi_\theta$ and the blue circles $\Phi_\theta - (\Phi_\varphi + 90°) = (\psi_\varphi - 90°) - \psi_\theta$. The squares in E1 also show the results of an alternative fit methodology using simultaneous five-parameter fits to the phase data (two linear phases and k), employing 150-day segments of data separated by 50 days, as discussed in Sect. 2 in relation to Figs. 2 and 3, which show closely similar results (Provan et al., 2013).
Figure 4f–h then show the oscillation amplitude data for the three field components, together with the beat-modulated model fitted to the data using the northern and southern phases and the amplitude ratio k.

These results clearly show that k ≈ 1 conditions (near-equal amplitudes) pertain throughout E1, with indications of slight southern dominance prior to the near-coalescence of the periods near t = 2450 days (mid-September 2010) and slight northern dominance thereafter. Here the central value k = 1 has been used to derive the beat-modulated model values in E1 in Fig. 4a–c and f–h. Provan et al. (2013) then find k ≈ 0.32 in E2 (a return to southern dominance), k > 5 in E3 (strongly northern dominant with no evident southern oscillation), and k ≈ 1.56 in E4 (northern dominant but not as strongly as in E3), as indicated in Sect. 2. These changes are entirely evident in the varying beat modulations of the phase data shown in Fig. 4a and b, with the phases in E1 rastering near-equally between −90° and +90° about the northern and southern phases in N and S format, respectively, while in E2 the phases are much more closely clustered in S format than in N format, and vice versa in E3, with no evident clustering in S format in this case. Correspondingly, $\Phi_{\theta-r}$ in Fig. 4c varies about 0° (the southern value) in E2 and about ±180° (the northern value) in E3, while switching between −90° and +90° in the two halves of the beat cycle in E1, this being characteristic of near-equal amplitudes. Similarly, the model beat-modulated amplitudes in Fig. 4f–h show good overall agreement with the amplitude data, in particular showing antiphase variations in the amplitudes of the θ and (r, ϕ) field components, with the θ amplitude decreasing and the (r, ϕ) amplitude increasing during the half-beat interval when $\Phi_{\theta-r} = +90°$, and vice versa when $\Phi_{\theta-r} = -90°$, this being the unequivocal signature of the beat phase given by Eq. (4) increasing with time, i.e. the northern period being shorter than the southern (Provan et al., 2013).

We note in passing, however, one brief interval in E1 where the amplitude data do not behave in the manner just described.
If we examine the θ and ϕ component amplitude data in Fig. 4g and h, which have the largest values and are less affected by measurement fluctuations than the r component, we note that the data near t ≈ 2440 days (rev 137 in early September 2010), marked by the vertical arrows, behave in the opposite sense to that expected above. That is, during an interval when $\Phi_{\theta-r} \approx +90°$, the θ component amplitude briefly increased with time while the ϕ component amplitude simultaneously decreased, opposite to the behaviour of the surrounding data. This represents evidence that the beat phase $\Phi_B$ decreased with time during the previous ∼ 20-day interval, implying that the two periods briefly crossed, northern longer than southern, and then crossed back again. If so, this behaviour is not quite captured by either of the magnetic period analyses shown, which use either ∼ 200-day (Andrews et al., 2012) or 150-day (Provan et al., 2013) segments of phase data. Instead, these show the two periods as almost coalescing at a closely adjacent time, as noted above. Although clearly tentative, this behaviour represents the only evidence in the magnetic field data for "crossed" periods during the interval shown (or indeed to date). Otherwise, the combined phase and amplitude data require the southern period to remain longer than the northern throughout.

Overall, these results are clearly inconsistent with those derived in interval E1 from SKR data by Fischer et al. (2014) as shown in Fig. 2, particularly the crossed periods, northern longer than southern, at the beginning of the interval shown, from mid-January to late March 2010. They are also inconsistent with the Fischer et al. (2014) interpretation of the SKR data as showing a phase-locked single period between early April 2010 and mid-February 2011, which we have shown in Sect. 2 segues from the magnetic southern period early in this interval into the magnetic northern period later in the interval, approximately in line with the modestly dominant oscillations then observed in the equatorial core magnetosphere. Instead, the magnetic field data demonstrate the presence of two oscillations with separated periods, with the southern longer than the northern (with the possible brief exception mentioned above), and with near-equal amplitudes throughout E1. The key features are (a) near-equal clustering of the phase data about the northern and southern phases in N and S formats, with component phases that raster near-linearly between −90° and +90° about these phases at the beat period, (b) altered relative phases between the oscillations in the θ and (r, ϕ) field components, with $\Phi_{\theta-r}$ varying between −90° and +90° in the two halves of the beat cycle rather than around 180° for northern-dominant oscillations and 0° for southern-dominant oscillations, and (c) corresponding antiphase modulations of the θ and (r, ϕ) component amplitudes. We again suggest that the reported apparent phase-locking of the SKR modulations in this interval results instead from the phenomenon reported by Lamy (2011), in which modulations associated with one hemisphere can appear in the polarization-separated data associated with the other. Indeed, Fischer et al. (2014) themselves report such behaviour in their data for interval E2, when two separate periods are again recognized in the SKR data (see their Fig. 5 and related discussion), with magnetically dominant "southern" signals being evident in "northern" (i.e. RH-polarized) emissions.

Relation of magnetic field oscillations with SKR data in interval E3
The results in Fig. 4 thus provide unequivocal evidence in the magnetic field data for northern and southern PPO oscillations of separate periods essentially throughout interval E1 (and in interval E4 from the extended data set), contrary to the single phase-locked periods deduced from SKR data by Fischer et al. (2014). However, the same arguments do not hold for E3, a ∼ 240-day interval beginning in mid-August 2011 for which Fischer et al. (2014) again deduce phase-locked northern and southern modulations in SKR data (Figs. 2 and 3), but for which the magnetic phase data provide evidence only for oscillations with northern polarization, with no detectable clustering in S format (Fig. 4). Instead, the N-format data are tightly clustered about the northern phase, with the exception of two adjacent phase data sets mid-interval that show greater scatter indicative of short-term phase variability that is found to occur from time to time (e.g. the S-format data in E2). With the exception of these points, the $\Phi_{\theta-r}$ phase difference data in Fig. 4c show variations of ∼ ±20° about a mean value of ∼ 170°, these being essentially at the noise level given a typical uncertainty in the individual phase determinations of ∼ 10°. This implies that any second oscillation producing beat modulations in the phase data must be smaller in amplitude than the northern oscillations by a factor of at least ∼ 5, i.e. that (1/k) ≤ 0.2 (Provan et al., 2013).

As noted by Provan et al. (2014), however, this does not immediately preclude the presence of a southern oscillation of larger relative amplitude, provided that it remains either closely in phase or in antiphase with the northern oscillations throughout E3, that is to say, a nearly phase-locked oscillation of nearly the same period as the northern oscillation, as suggested by the results of Fischer et al. (2014). In order to examine the requirements more closely, in Fig. 5 we show a contour plot of $\Phi_{\theta-r}$, plotted against (1/k) over the range 0 ≤ (1/k) ≤ 1 (for northern-dominant oscillations) on the vertical axis and beat phase $\Phi_B$ over the full range 0° ≤ $\Phi_B$ ≤ 360° on the horizontal axis. It can be shown from Eq. (3) that this phase difference is given by

$$\tan \Phi_{\theta-r} = \frac{-2\,(1/k)\sin\Phi_B}{(1/k)^2 - 1}$$

(see, for example, Provan et al. (2013), their Eq. 2g). In this expression the signs of the numerator and denominator are considered separately to define the arctangent function over the full 360° range. The value of $\Phi_{\theta-r}$ is exactly 180° on the bottom and vertical black bounding lines of the figure as indicated, i.e. the polarization is that of a pure northern oscillation if either (1/k) = 0, corresponding to no southern oscillation, or if $\Phi_B$ = 0° or 180° (modulo 360°), corresponding to a southern oscillation of lesser amplitude than the northern that is either exactly in phase or in antiphase with the northern oscillation. The red contours on the left of Fig. 5 for 0° ≤ $\Phi_B$ ≤ 180° then show $\Phi_{\theta-r}$ increasing vertically with (1/k) at 10° steps from 180° at the bottom to 270° (equivalent to −90° modulo 360°) at the top, while the blue contours on the right similarly show $\Phi_{\theta-r}$ decreasing vertically from 180° at the bottom to 90° at the top. Thus for small values of (1/k), i.e. strongly northern-dominant oscillations, $\Phi_{\theta-r}$ varies modestly about 180° over the beat cycle ($\Phi_B$ varying between 0° and 360°), while when (1/k) approaches unity, i.e. near-equal amplitudes, $\Phi_{\theta-r}$ flips between −90° and +90° in the two halves of the beat cycle, as indicated in Sect. 3.
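As a hedged numerical cross-check of the Fig. 5 surface, the same phase difference can be evaluated directly from the phasor form of Eq. (3) rather than the closed-form arctangent expression; the grid resolution and variable names are illustrative.

```python
import numpy as np

inv_k = np.linspace(0.0, 1.0, 101)[:, None]     # (1/k), vertical axis
phi_B = np.linspace(0.0, 360.0, 361)[None, :]   # beat phase [deg]
b = np.deg2rad(phi_B)

z_r = 1.0 + inv_k * np.exp(-1j * b)       # r: southern adds in phase with N
z_theta = -1.0 + inv_k * np.exp(-1j * b)  # theta: N in antiphase with S
dphi = np.rad2deg(np.angle(z_theta / z_r)) % 360.0   # Phi_{theta-r}

assert np.allclose(dphi[0], 180.0)       # (1/k) = 0: pure northern polarization
assert np.allclose(dphi[:-1, 0], 180.0)  # Phi_B = 0: exactly in phase
# (the corner (1/k) = 1, Phi_B = 0 is degenerate and excluded above)
# For (1/k) = 1, dphi[-1] rasters between 270 and 90 deg (i.e. -/+90 about
# 180) in the two halves of the beat cycle, as described in the text.
```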
The shaded region in Fig. 5 shows where the value of $|\Phi_{\theta-r}|$ lies within ±20° of 180°, consistent with the majority of the E3 data in Fig. 4. If we then consider the presence of a secondary southern oscillation of a significantly different period to the northern, such that the interval contains at least one beat cycle, then (1/k) must be less than ∼ 0.2 if $\Phi_{\theta-r}$ is to remain within this range, i.e. the southern oscillation must be smaller than the northern by a factor of at least ∼ 5, as indicated above. For larger values of (1/k), restriction of $|\Phi_{\theta-r}|$ to this range requires the relative phase of the oscillations $\Phi_B$ to be restricted to increasingly narrow ranges about $\Phi_B$ = 0° or 180° (modulo 360°) as (1/k) increases toward unity, as indicated by the shaded regions. The two periods must therefore be sufficiently close that $\Phi_B$ does not move out of these ranges throughout E3, with the oscillations closely either in phase or in antiphase, i.e. they must constitute essentially phase-locked single periods, as proposed by Fischer et al. (2014) (we note, however, that the magnetic oscillations would have to be in antiphase to produce SKR modulations that are in phase from the two hemispheres as observed, associated with upward field-aligned currents that are co-located in local time in the two hemispheres, rather than on opposite sides of the planet). The difference in period allowed is given approximately by

$$\delta\tau \approx \frac{\tau_N^2}{360°\,T}\,\frac{1 - (1/k)^2}{2\,(1/k)}\,\delta\Phi_{\theta-r},$$

where τ_N is the northern period in E3, ∼ 10.63 h, T is the duration of E3, ∼ 240 days (expressed in hours), and the deviation $\delta\Phi_{\theta-r}$ in $\Phi_{\theta-r}$ is expressed in degrees. If we suppose, for example, that the southern amplitude is half the northern, such that (1/k) = 0.5, then for a maximum phase deviation of ∼ 20° we require δτ ≤ 8 × 10⁻⁴ h, i.e. less than 3 s. We note that this limiting period difference is approximately 2 orders of magnitude smaller than those measured in adjacent intervals E2 and E4, where the differences are ∼ 3–4 min, in addition to which the phase difference has to lie closely either in phase or in antiphase. Thus very particular conditions are required to have been maintained throughout this interval in this scenario.

The presence of such a southern oscillation will not only affect the phases, however, but also the relative amplitudes of the component oscillations. As shown by Eq. (3), if $\Phi_B \approx 0°$ (modulo 360°) the southern oscillation will add to the northern for the r and ϕ components, but will subtract for the θ component, and vice versa if $\Phi_B \approx 180°$. The presence of such a southern oscillation at significant amplitude relative to the northern will thus result in unusual values of the component amplitude ratios $R_{r/\theta} = (B_{0r}/B_{0\theta})$ and $R_{\varphi/\theta} = (B_{0\varphi}/B_{0\theta})$, both becoming larger than typical for $\Phi_B \approx 0°$, and smaller than typical for $\Phi_B \approx 180°$. The modified ratios are given by

$$R'_{(r,\varphi)/\theta} = R_{(r,\varphi)/\theta}\,\frac{1 \pm (1/k)}{1 \mp (1/k)}, \tag{7}$$

where $R_{(r,\varphi)/\theta}$ represents the amplitude ratios for the individual northern and southern oscillations, $R'_{(r,\varphi)/\theta}$ represents the corresponding ratios for the combined oscillations given by Eq. (3), and the upper signs correspond to $\Phi_B \approx 0°$ and the lower to $\Phi_B \approx 180°$. Thus, for example, if (1/k) has the significantly large value of ∼ 0.5, say, we find R′ ≈ 3R for $\Phi_B \approx 0°$, while correspondingly R′ ≈ R/3 for $\Phi_B \approx 180°$, effects that should be readily discernible in the amplitude data.
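Two quick arithmetic checks of the expressions above may be useful; note that the linearized coefficient (1 − (1/k)²)/(2(1/k)) in the period bound is our reconstruction of the dropped formula, verified only against the worked numbers quoted in the text.

```python
tau_N = 10.63            # northern period in E3 [h]
T = 240.0 * 24.0         # duration of E3 [h]
r = 0.5                  # assumed amplitude ratio (1/k)
dphi_max = 20.0          # allowed deviation of Phi_{theta-r} [deg]

# Period-difference bound: convert the theta-r deviation into an allowed
# beat-phase drift, then into a period difference over the interval T.
dphi_B = dphi_max * (1.0 - r**2) / (2.0 * r)
dtau = tau_N**2 * dphi_B / (360.0 * T)
print(f"dtau <= {dtau:.1e} h ({dtau * 3600:.1f} s)")   # ~8e-4 h, under 3 s

# Amplitude-ratio distortion of Eq. (7): for (1/k) = 0.5 the combined
# ratios become 3R (Phi_B ~ 0) and R/3 (Phi_B ~ 180), as quoted.
print((1 + r) / (1 - r), (1 - r) / (1 + r))            # 3.0, 0.333...
```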
It is already evident from Fig. 4f and h, however, that no large effects of this nature are present in E3; the amplitudes, both in absolute and relative terms, are comparable to those in adjacent intervals (with the exception of interval E4, where all amplitudes are severely reduced over an initial interval of ∼ 100 days; Provan et al., 2013). More quantitatively, the ratios of the amplitudes found for earlier intervals A and C by Andrews et al. (2012) (see Fig. 1), together with those for adjacent intervals E1–E4 by Provan et al. (2013), excluding E3, are $R_{r/\theta} = 0.65 \pm 0.19$ and $R_{\varphi/\theta} = 1.22 \pm 0.29$. The values for E3 are $R_{r/\theta} = 0.56$ and $R_{\varphi/\theta} = 0.96$, both thus lying within the usual ranges. Noting, however, that both are slightly smaller than the overall mean values, we can use Eq. (7b) for $\Phi_B \approx 180°$ (consistent with in-phase SKR emissions) to find the value of $(1/k) = (R - R')/(R + R')$ that would account for these small deviations. From the ratio of r and θ we find (1/k) ≈ 0.06, while from the ratio of ϕ and θ we find (1/k) ≈ 0.12. This analysis thus shows that any such phase-locked southern oscillation must be much smaller in amplitude than the primary northern oscillation, being restricted to values ∼ 5–10 % of the latter. We thus again conclude it to be likely that the phase-locked oscillations observed by Fischer et al. (2014) in the RH-polarized SKR emissions in E3 are due to the dual modulation phenomenon discussed above.

Summary and conclusions

In this paper we have discussed the divergent results on the nature of the PPOs in Saturn's magnetosphere in the post-equinox interval presented from analysis of magnetic field oscillations by Andrews et al. (2012) and Provan et al. (2013, 2014), and from analysis of SKR modulations by Fischer et al. (2014). The comments on this topic presented subsequently by Fischer et al. (2015) will be the subject of separate discussion in due course. As shown by Andrews et al. (2012), the SKR and magnetic data sets are in good agreement in the pre-equinox southern summer interval, showing well-separated periods of ∼ 10.6 h for the northern oscillations and ∼ 10.8 h for the southern, then converging towards a common period of ∼ 10.7 h over a ∼ 1-year interval centred near equinox in mid-August 2009. In the post-equinox interval, however, Fischer et al. (2014) report that the northern and southern SKR modulations, as determined from polarization-separated near-equatorial data, are characterized by phase-locked single periods over the interval from early 2010 to early 2013, with the exception of an interval from mid-February to late August 2011 when two separate periods were again observed, which they link to the occurrence of the GWS. By contrast, Andrews et al. (2012) and Provan et al. (2013, 2014) characterize this interval as being generally associated with continuing dual PPO periods that nearly coalesce in mid-September 2010, but then separate again, with the northern period value of ∼ 10.64 h continuing shorter than the southern value of ∼ 10.69 h. The magnetic oscillations are also shown to undergo abrupt (rev-to-rev) changes in relative amplitude at ∼ 100–200-day intervals, as well as small changes in period and phase. The abrupt changes noted by Fischer et al. (2014) that occur after the beginning and near the end of the GWS event represent the first two of these transitions (defining interval E2), six of which have now been reported up until the end of 2013, as shown in Fig. 1 (Provan et al., 2014; Cowley and Provan, 2013).
Noting these differences in reported PPO properties, we make the statement of principle that the SKR and magnetic field phenomena cannot fundamentally show contrary properties, since both are believed to be directly related to the same underlying phenomenon, i.e. rotating magnetosphere-ionosphere coupling current systems that produce the magnetic field signatures directly via Ampère's law, and the SKR emission signatures indirectly by acceleration and precipitation of electrons in upward-directed FAC sheets, leading to cyclotron maser unstable distributions. Indeed, we have shown that the Fischer et al. (2014) single phase-locked period corresponds to the southern magnetic period early in 2010, segues into the northern magnetic period in late 2010 (both in interval E1), and then returns back to the southern magnetic period again in mid-2012 (interval E4), in line, to a first approximation, with the variations in the dominant oscillation observed in the magnetic data. The exceptional interval in 2011 (E2), in which the southern modulation separates from the ongoing (northern magnetic period) modulations in the SKR data, leading to the re-emergence of dual periods, corresponds in the magnetic data to an interval of resumed southern dominance, in which the derived periods again agree well with the dual periods deduced from the SKR data. The jump in period by ∼ 0.05 h identified by Fischer et al. (2014) in this interval, which they relate to the GWS, is thus associated with the reappearance of ongoing southern oscillations in the SKR data related to the amplitude change in E2, and hence corresponds to the difference between the northern and southern periods rather than to an actual change in the PPO periods. The magnetic data do suggest an increase in the southern period in this interval, but by less than half that suggested by Fischer et al. (2014) on the basis of their misidentification of the nature of the changes involved. The topic of the possible connection of these abrupt PPO changes with the GWS storm has previously been addressed by Cowley and Provan (2013), and need not be repeated here. We note, however, that while such changes certainly began during the GWS storm, many similar events have continued afterwards with ∼ 100–200-day cadence, as illustrated in Fig. 1. Since the most recent of these events could clearly have no connection with the GWS, it seems reasonable to suppose that those occurring in February and August 2011 were similarly unconnected.

We have also reviewed the properties of the magnetic oscillations in the two intervals adjacent to E2 in which Fischer et al. (2014) report single phase-locked northern and southern SKR modulations.
For the earlier interval in E1, spanning early April 2010 to mid-February 2011, we have shown that the magnetic data provide unequivocal evidence for the presence of northern and southern oscillations with near-equal amplitudes and separated periods throughout, the northern period remaining shorter than the southern (Provan et al., 2013, 2014). The evidence consists of clear variations of the oscillation phases and amplitudes at the beat period of the two oscillations, specifically phases that raster near-linearly through ±90° about the northern and southern phases, corresponding changes in the relative phases of the three field components, and antiphase modulations of the θ and (r, ϕ) component amplitudes. The analysis also shows unequivocally that the northern magnetic period remains shorter than the southern throughout E1 (thus contradicting the opposite conclusion of Fischer et al. (2014) for the interval early in 2010 before apparent phase-locked SKR modulations appeared; see Fig. 2), though the two converge closely in August–September 2010. Indeed, we have newly pointed out tentative evidence from the magnetic amplitude data that the two periods may briefly have crossed and re-crossed in late August to early September 2010. Though certainly tentative, this remains the only evidence for crossed periods, northern longer than southern, in the magnetic field data analysed to date (to late 2013). These results strongly suggest that the phase-locked single periods deduced by Fischer et al. (2014) are instead due to the phenomenon first reported by Lamy (2011), in which the period associated with one hemisphere is also found in the polarization-separated data associated with the other, particularly in near-equatorial data as is relevant here. The origin of this effect, whether physical or technique-associated, however, remains unclear.

Following interval E2, Fischer et al. (2014) again report the occurrence of phase-locked single northern and southern periods to mid-2012 (Fig. 2), corresponding to interval E3 and the beginning of E4 of Provan et al. (2013, 2014). While separated northern and southern periods are again clearly present in the magnetic field data in E4, following a near-complete suppression of the oscillations early in that interval, only a single magnetic period is present in E3, the field component phases indicating a pure northern oscillation. In particular, no oscillations are evident near the ongoing southern periods before or after this interval (i.e. in E2 and E4) within an upper limit of ∼ 20 % of the northern amplitude. This finding, however, does not directly preclude the presence of a southern oscillation of larger relative amplitude provided, first, that its period is very close to that of the observed northern period, such that the beat period is long compared with the ∼ 240-day length of E3, and, second, that it is near to either in phase or in antiphase with the northern oscillation, i.e. it is nearly phase-locked in this condition as suggested by Fischer et al. (2014).
However, the presence of such an oscillation would then produce clear signatures in the relative amplitudes of the three spherical polar field components, since a southern oscillation would either have its (r, ϕ) components in phase with the northern oscillation and its θ component in antiphase, or vice versa. No significant signatures of this nature are found in the amplitude data for this interval, however, showing that if such a southern oscillation is present, its amplitude is at most ∼ 5–10 % of the dominant northern oscillation. We thus suggest it to be far more likely that the dual phase-locked oscillations found by Fischer et al. (2014) in E3 are again due to the presence of a single northern period of typical strength combined with the above SKR phenomenon discussed previously by Lamy (2011).

Figure 1. Spectrogram of the normalized peak-to-peak power of SKR modulations taken from Fig. 1 of Fischer et al. (2014), shown here as modulation period vs. time in days since 00:00 UT on 1 January 2004, over the interval from the beginning of 2004 to mid-2013. At the top we show the start of each year, the Cassini rev numbers defined from apoapsis to apoapsis and plotted at periapsis, and a coloured bar which indicates the nature of the Cassini orbit, either near-equatorial (blue) or highly inclined (green), lettered from A to F, which determines the nature of the data employed to undertake the magnetic field PPO analysis. Over-plotted are the PPO periods determined from the magnetic field analysis of Andrews et al. (2012) and Provan et al. (2013, 2014), red for the southern period and blue for the northern (shown dashed when the northern oscillations were not prominent in these data). The vertical white dotted lines indicate the times of six abrupt transitions in the PPO properties observed in the magnetic field data post-equinox (indicated by the white vertical arrow), defining intervals E2–F2 (as in Provan et al., 2013, 2014). No southern magnetic field oscillation was discerned in E3, such that no southern period is shown in that interval. The white vertical dashed lines indicate the interval of the GWS, shown as in Fischer et al. (2014) from the start of storm-associated lightning activity to its complete cessation (5 December 2010 to 28 August 2011).

Figure 2. Plot of the PPO periods vs. time from the start of 2010 to mid-2012. Values derived from SKR modulations are shown by circles joined by dotted lines (from Fig. 6 of Fischer et al., 2014), while values derived from magnetic field phase data are shown by solid lines (from Andrews et al., 2012 and Provan et al., 2014) and squares (Provan et al., 2013). Blue data correspond to the northern oscillations and red to the southern. Year boundaries, rev numbers, and interval identifiers are shown at the top of the plot as in Fig. 1, together with vertical dashed and dotted lines as also in Fig. 1.

Figure 3. Phase difference (modulo 360°) between the northern and southern PPO oscillations (northern minus southern), $\Phi_B$, plotted vs. time between the start of 2010 and mid-2012. The green line was derived from SKR modulation data (from Fig. 7 of Fischer et al., 2014), corresponding to the circles joined by dotted lines in Fig. 2, while the purple line and squares were derived from magnetic field data by Andrews et al. (2012) and Provan et al. (2013, 2014), corresponding to the solid lines and squares in Fig. 2. No magnetic data are shown in E3 since no southern oscillation was discerned in these data, such that the phase in E4 is shown starting at an essentially arbitrary modulo-360° value. Year boundaries, rev numbers, and interval identifiers are shown at the top of the plot as in Fig. 2, together with vertical dashed and dotted lines as also in Fig. 2.
Figure 4. Summary plot of magnetic field oscillation data over the same interval as shown in Figs. 2 and 3, from the start of 2010 to mid-2012. From top to bottom we show (a) oscillation phase data in N format (see text) relative to a guide phase of 10.64 h, where red, green, and blue data correspond to the r, θ, and ϕ field components, shown for clarity over two phase cycles with all data plotted twice, (b) the same phase data in S format (see text) relative to a guide phase of 10.69 h, (c) the phase difference between the θ and (r, ϕ) field components $\Phi_{\theta-r}$ (Eq. 6), red corresponding to $\Phi_\theta - \Phi_r$ and blue to $\Phi_\theta - (\Phi_\varphi + 90°)$, (d) the oscillation periods, blue for the northern period and red for the southern, (e) the oscillation amplitude ratio north/south k, where the centre dotted line indicates k = 1, with k plotted directly below the line (0 ≤ k ≤ 1) and (1/k) above (1 ≥ (1/k) ≥ 0), and (f)–(h) the amplitudes of the r, θ, and ϕ field components. The black lines in (a) and (b) show the northern and southern phases, respectively, determined from fits to the N- and S-format phase data (Andrews et al., 2012), while in E2–E4 we show the results of single linear fits between each abrupt PPO transition (Provan et al., 2013). The green (θ component) and purple (r and ϕ components) lines in (a) and (b) and the black lines in (c) show the model beat-modulated phase data derived using the northern and southern phases and k values (using the central value k = 1 in E1). Similarly, the coloured lines in (f)–(h) show the model beat-modulated oscillation amplitudes for the three field components. The squares in (a), (b), (d), and (e) in E1 show the results of simultaneous five-parameter fits to the phase data (Provan et al., 2013). Year boundaries, rev numbers, and interval identifiers are shown at the top of the plot as in Figs. 2 and 3, together with vertical dotted lines as also in Figs. 2 and 3.
13,825
2015-07-24T00:00:00.000
[ "Environmental Science", "Physics" ]
3DMMS: robust 3D Membrane Morphological Segmentation of C. elegans embryo

Background: Understanding cellular architecture is a fundamental problem in various biological studies. C. elegans is widely used as a model organism in these studies because of its unique fate determination. In recent years, researchers have worked extensively on C. elegans to uncover the regulation of cell mobility and communication by genes and proteins. Although various algorithms have been proposed to analyze nuclei, cell shape features are not yet well recorded. This paper proposes a method to systematically analyze three-dimensional morphological cellular features.

Results: Three-dimensional Membrane Morphological Segmentation (3DMMS) makes use of several novel techniques, such as statistical intensity normalization and region filters, to pre-process the cell images. We then segment membrane stacks based on the watershed algorithm. 3DMMS achieves high robustness and precision over different time points (developmental stages). It is compared with two state-of-the-art algorithms, RACE and BCOMS. Quantitative analysis shows that 3DMMS performs best, with an average Dice ratio of 97.7% at six time points. In addition, 3DMMS also provides time series of internal and external shape features of C. elegans.

Conclusion: We have developed the 3DMMS-based technique for embryonic shape reconstruction at the single-cell level. With cells accurately segmented, 3DMMS makes it possible to study cellular shapes and to bridge morphological features and biological expression in embryo research.

Electronic supplementary material: The online version of this article (10.1186/s12859-019-2720-x) contains supplementary material, which is available to authorized users.

C. elegans conserves many genes that play a significant role in the cell development of more advanced animals [16]. More importantly, a C. elegans embryo develops via an essentially invariant pattern of divisions, termed fate determination [17, 18]. The cell division information provides a road map that includes the ancestry and future of each cell at every time point of development [19]. Therefore, C. elegans is used extensively as a model organism to study biological phenomena, such as the genes that influence cell fate decisions. It is also important to consider cell shapes during cell division, in addition to the timing of the division. Some existing algorithms perform cell morphological segmentation and provide cell shape information, but they are often error-prone where the membrane lies in the focal plane, and are exposed to segmentation leakage when the membrane signal is missing. In RACE [20], layer-by-layer results are fused into a 3D cell shape, making RACE a high-throughput cell-shape extractor. However, RACE segments the membrane surface into one cell instead of an interface when the membrane is parallel to the focal plane, which leads to confused boundaries between two cells in the 3D segmentation results. By adding multiple embryos with weak signal, Azuma et al. prevented segmentation leaking into the background in BCOMS [21]. However, leakage still exists in channel-connected regions caused by cavities in the incomplete membrane surface; even a small cavity might lead to totally indistinguishable segmentations. This paper develops a method for 3D Membrane-based Morphological Segmentation (3DMMS) to extract cell-level embryonic shapes. Novel methods are used to guarantee the precision and robustness of 3DMMS in segmenting a wide range of membrane images.
First, intensity degeneration along the slice depth is statistically adjusted through normalization. A Hessian matrix transformation is used to enhance the membrane surface signal. Then, a region filter is adopted to remove noisy regions by calculating the location relationship between different components. Subsequently, surface regression is utilized to recover missing surfaces. For the sake of computational efficiency, a membrane-centered segmentation is implemented. Finally, time-lapse fluorescent embryos are segmented at the single-cell level. Combined with the nucleus lineage, 3DMMS can further perform name-based retrieval of cell shape features. Source code is publicly available at [22].

In this paper, the "Methods" section presents the critical steps in 3DMMS, including pre-processing, membrane-centered watershed segmentation and division correction. The "Results" section provides experimental results and a comparison with different algorithms. The "Discussion" section explains the advantages and limitations of 3DMMS and points out other possible applications. The "Conclusion" section summarizes our contributions and describes our future work.

Results

Segmentation results from 3DMMS were quantitatively evaluated and compared with two state-of-the-art methods, RACE and BCOMS. To demonstrate the performance of 3DMMS, time points with a large number of cells are preferred. However, the membrane signal becomes blurry as the number of cells increases, especially for slices at the top of the stack. This prevents experts from annotating high-density cells confidently. To enhance the reliability and feasibility of manual annotation, semi-manual segmentation was applied. Six membrane stacks corresponding to time points t = 24, 34, 44, 54, 64, 74 were selected. When annotated by experts, all membrane stacks were overlaid with pre-segmentations, which came from a nucleus-seeded watershed algorithm. After one expert finished the annotation in ITK-SNAP [23], two other experts checked the results individually. All annotations are available at the source code repository.

Comparison with RACE and BCOMS

To obtain the results from RACE and BCOMS, all images were resampled and resized into 205 × 285 × 134. In RACE, parameters such as Max 2D Segment Area and Min 3D Cell Volume were tuned for optimal performance. For BCOMS, three consecutive stacks were concatenated into one stack because BCOMS requires summing a 4D image to generate a single 3D stack for embryonic region segmentation. Only the results at the middle time points were used for comparison. For example, we concatenated stacks at t = 23, 24, 25 into one stack with size 205 × 285 × 402; slices 135 to 268 were extracted as the segmentation results of the stack at t = 24. More details about the parameter settings are given in [see "Additional file 1"].

The Dice ratio is universally used to measure the overlap between segmentation results I_seg and the ground truth I_truth. In this paper, the multi-label average

$$\mathrm{Dice} = \frac{1}{n}\sum_{i=1}^{n} \frac{2\,\left|I_{seg}^{\,i} \cap I_{truth}^{\,i}\right|}{\left|I_{seg}^{\,i}\right| + \left|I_{truth}^{\,i}\right|}$$

is adopted to evaluate the segmentation with multiple cell labels, where n is the number of cells in I_truth.
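A minimal sketch of this multi-label Dice evaluation follows, assuming integer-labelled volumes with background label 0 (an illustrative convention, not specified above):

```python
import numpy as np

def multi_label_dice(seg, truth):
    """Average per-cell Dice over the n cells of the ground truth;
    label 0 is assumed to be background."""
    labels = np.unique(truth)
    labels = labels[labels != 0]
    scores = [2.0 * ((seg == l) & (truth == l)).sum()
              / ((seg == l).sum() + (truth == l).sum()) for l in labels]
    return np.mean(scores)
```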
Evaluation results are shown in Fig. 1. 3DMMS achieves better segmentation precision and robustness over different time points than the other methods. A deeper insight into the differences among 3DMMS, RACE and BCOMS is illustrated in Fig. 2. RACE provides segmentation with clear and smooth boundaries among neighboring cells, but it reconstructs 3D segmentations by fusing results slice-by-slice, making it difficult to distinguish boundaries parallel to the focal plane. In Fig. 2f, cells are sliced off in the top and bottom areas. Slice-by-slice segmentation is error-prone in keeping boundary details in 3D because inter-slice information is lost when a 3D object is segmented in 2D. The fusion stage in RACE unifies the labels of fragments, but hardly revises segmentation boundaries. In BCOMS, fewer parameter settings are involved owing to the biological constraints. Moreover, the embryonic eggshell is extracted first to prevent the segmented area leaking into the background. This strategy relies on the assumption that the embryonic surface closely attaches to the eggshell. However, the embryo is not always closely attached to the eggshell, as shown by the manual annotation at t = 54 in Fig. 3. Constrained by a static eggshell boundary, a cell region may flow into the gaps between the eggshell and the embryonic surface if a cavity occurs on the embryo surface. 3DMMS shows an advantage in such regions.

Segmentation of cells on the boundary

During cell imaging, an embryo is stained with a fluorophore and then illuminated through a high-energy laser. The membrane signal intensity is determined by the number of photons available to each voxel. The image quality is strongly limited by photobleaching, fluorophore concentration, and the small exposure time for acquiring stacks. A membrane image inevitably suffers from lost information, especially for cells at the boundary of the embryo. The incomplete embryonic surface is a major factor influencing the overall precision. To check the accuracy of the segmentation of boundary cells, we calculated the Dice ratio for cells inside and at the boundary of the embryo, respectively, as shown in Fig. 4. Comparing Fig. 4a and b, we find that all three methods produce a higher Dice ratio inside the embryo, particularly BCOMS. This observation meets our expectations because inside the embryo the image has a higher signal-to-noise ratio. The primary error of BCOMS originates from the leakage around the embryonic surface. In 3DMMS, the embryonic surface is well repaired in the surface regression procedure, effectively preventing cell regions from flooding into the background. To emphasize the necessity of repairing cavities, the Dice ratio of the results from 3DMMS without cavity repair is also shown in Fig. 5.

Discussion

In the "Results" section, 3DMMS is compared with two state-of-the-art methods and provides better segmentation results for the whole embryo. Note that our contributions focus on processing membrane stack images and producing the 3D embryo structure. In order to fully elaborate the benefits of 3DMMS, nucleus lineage information is utilized from AceTree [24]. After integrating cell shapes into the lineage, researchers can not only obtain cell morphological features, such as volume, surface area and neighboring cells, but also make a longitudinal comparison of cellular shapes. To the best of our knowledge, 3DMMS is the first software that achieves cell-name-based retrieval of shape features, such as volumes and the interfaces between neighboring cells. This dramatically expands our study from the nucleus to the whole cell. In this section, we discuss other potential applications of 3DMMS.

Applications to the study of internal features

Recent studies indicate that gene expression and protein synthesis are influenced by the nuclear shape [25]. In fact, 3DMMS provides a way to study whether biological expression modulates cell shape. Previous algorithms are designed for either individual cell images or time-lapse nucleus images.
They neglect the shape deformation of a cell over time. Although AceTree provides cell trajectories, it is limited to the nuclei, without any cell shape information. Segmentation in 3D is essential for tracking the whole dynamic cell across multiple slices. With the cell shape lineage, we can track time series of cellular shape deformation. One cell division process is demonstrated in Fig. 6 as an example. Thus, our method is useful for the study of temporal morphological deformations of cells.

Applications to the study of external features

Ratajczak et al. reported that information can be transferred through the cell membrane, further affecting the cell's development [26]. Various works have qualitatively analyzed the communication between cells, but few of them have measured the interface between two cells. Statistical analysis is also needed to enhance the reliability of shape deformation studies. This leads to a demand for the 3D shape information in 3DMMS. With the region of each cell clearly identified, we can easily infer a cell's contextual information, such as its neighboring cells. The example in Fig. 7 presents the interface ratio of cell "ABala" to its neighboring cells.

Applications to other types of images

This paper utilizes C. elegans to explain the implementation of 3DMMS. However, the methods in 3DMMS are not confined to the segmentation of C. elegans embryos. Our algorithm provides a systematic procedure for cell segmentation, and no assumptions dependent on C. elegans are made in the entire process. It can be combined with algorithms, such as TGMM [27] and MaMuT [28], which can produce the required nucleus lineage.

Weakness of the 3DMMS

Based on the watershed algorithm, 3DMMS builds boundary lines if and only if two basins contact each other. Therefore, 3DMMS might fail to detect gaps inside the embryo. In our experiments, most cells were closely attached to their neighbors. However, some cases did appear where a small gap arose between neighboring cells, as shown in Fig. 8. We will conduct many more experiments and study different configurations of various gaps to improve the performance of 3DMMS in the future.

Conclusion

This paper reports an effective method, 3DMMS, to analyze embryonic morphological features at the single-cell level. 3DMMS is robust and can adapt to images at different time points. Based on this method, it is feasible to analyze cell shape longitudinally and transversally. Our future work will include designing specific geometric models, such as the formulation proposed by Kalinin et al. [29]. Then, we will carry out statistical analysis on a large dataset of C. elegans embryos. We envision that 3DMMS could help biologists investigate morphological features related to biological regulation.

Methods

The optical appearance of the cell membrane is variable due to the different size, number, and position of fluorescent signals on the focal plane. In our method, a membrane image is preprocessed in multiple steps. A fluorescent microscope produces the membrane stack (red) and the nucleus stack (blue) simultaneously. While the nucleus channel is used to generate the (nucleus-level) seed matrix by existing methods, we obtain the cellular shapes by leveraging the membrane channel. The framework of 3DMMS can be divided into three parts, membrane image preprocessing, membrane-centered segmentation and division correction, as illustrated in Fig. 9.

Data

C. elegans was first stained with dual labelling of the cell nucleus and membrane. All animals were maintained on NGM plates seeded with OP50 at room temperature unless stated otherwise.
The membrane marker and lineaging marker were rendered homozygous for automated lineaging. To improve the overall resolution, 4D imaging stacks were sequentially collected on both green and red fluorescent protein (mCherry) channels at a 1.5-min interval for 240 time points, using a Leica SP8 confocal microscope with a 70-slice resonance scanner. All images were acquired at a resolution of 512 × 712 × 70 per stack (with voxel size 0.09 × 0.09 × 0.43 μm). All images were deconvolved and resized into 205 × 285 × 70 before analysis.

Membrane image preprocessing

Statistical intensity normalization

Fluorescent images are often corrupted by noise, such as Poisson-distributed incoming photons. Besides, the signal intensity decreases along the z-axis because of the attenuation of laser energy. To achieve parameter generalization throughout the whole stack, the Gaussian-smoothed membrane image was adjusted by statistical intensity normalization, which balances the intensity distribution of symmetrical slices in each stack. First, the pixel intensity histogram of each slice was embedded into an intensity distribution matrix as a row; background pixels were ignored for computational stability. An example of a Gaussian-smoothed intensity distribution matrix is shown in Fig. 10a. A threshold on the pixel number was applied, forming a threshold line (red in Fig. 10a) across all slices. Each slice in the deeper half of the stack was multiplied by the ratio of its intensity on the red line to that of its symmetrical slice. The stack intensity distribution after the adjustment is shown in Fig. 10b. Additionally, the membrane stack was resampled to 205 × 285 × 134 with linear interpolation on the z-axis.

Hessian matrix enhancement

Cell surfaces are composed of plane components, so membrane signals can be enhanced by selecting all pixels that belong to a plane structure. We take the associated quadratic form to exploit the intensity changes surrounding a pixel, and further determine its structure components. By diagonalizing the quadratic form, the Hessian descriptor is defined as

$$H = \lambda_1 e_1 e_1^{T} + \lambda_2 e_2 e_2^{T} + \lambda_3 e_3 e_3^{T},$$

where λ₁, λ₂, λ₃ are eigenvalues with |λ₁| < |λ₂| < |λ₃|, and e₁, e₂, e₃ are the corresponding eigenvectors. Pixels can be allocated to three structures according to the eigenvalues: (1) when |λ₁|, |λ₂| < 1 and |λ₃| ≥ 1, the pixel is located on a plane; (2) when |λ₁| < 1 and |λ₂|, |λ₃| ≥ 1, the point is located on a stick; and (3) when |λ₁|, |λ₂|, |λ₃| ≥ 1, the point is located in a ball. The membrane surface signal can thus be enhanced with

$$I_{en}(x, y, z) = \frac{\left|\lambda_3(x, y, z)\right|}{\max\limits_{(x, y, z)\,\in\,\mathrm{stack\ voxels}} \left|\lambda_3(x, y, z)\right|},$$

where I_en is the stack image after enhancement.

Region filter

Preliminary experiments show that the membrane-based EDT (see the "Membrane-centered segmentation" section) is highly dependent on the quality of the binary membrane image. The region filter is designed to screen noise regions out of I_en. After suppressing noise and enhancing the membrane signal, we choose a threshold to convert I_en into a binary image I_bn.
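Returning to the Hessian enhancement above, a minimal sketch follows; the Gaussian pre-smoothing scale and the finite-difference construction of the Hessian are our illustrative assumptions, not the published implementation.

```python
import numpy as np
from scipy import ndimage

def enhance_membrane(stack, sigma=1.5):
    """Enhance plane-like structures: sort Hessian eigenvalues by magnitude
    and normalize |lambda_3| over the stack, as in the formula above."""
    smoothed = ndimage.gaussian_filter(stack.astype(float), sigma)
    grads = np.gradient(smoothed)
    # Hessian via repeated finite differences; H[..., i, j] = d2I/dxi dxj
    H = np.empty(stack.shape + (3, 3))
    for i, g in enumerate(grads):
        for j, gg in enumerate(np.gradient(g)):
            H[..., i, j] = gg
    eigvals = np.linalg.eigvalsh(H)                # per-voxel eigenvalues
    order = np.argsort(np.abs(eigvals), axis=-1)   # sort by |lambda|
    lam3 = np.take_along_axis(eigvals, order, axis=-1)[..., 2]
    return np.abs(lam3) / np.abs(lam3).max()
```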
The binary image I_bn is composed of disconnected regions, denoted as Φ = {φ_i}, some of which are noise spots. The largest connected region φ_max belongs to the valid cell surface signal χ, but the other regions need to be screened. Keeping noise spots would introduce erroneous cell boundaries, whereas missing valid signal results in segmentation leakage. Herein, principal component analysis (PCA) is employed to analyze the location relationship between φ_max and the small regions in {Φ \ φ_max}. Noise and valid regions have a different influence on the Euclidean distance transformation (EDT) of the membrane surface φ_max: when a noise spot is added into the largest membrane surface (path (a)-(c) in the figure), the influenced region R (transparent white mask in (c) and (e)) in the EDT tends to be round, whereas path (a)-(d)-(e) indicates that if a valid membrane region is added into the membrane surface, the influenced region shows notable polarization. (The noise spot, yellow in (b), and the valid membrane region, blue in (d), both exist in the binary filtered membrane I_bn, but are shown separately for better demonstration.) The flow chart of the region filter is shown in Fig. 11. The cell surface signal was initialized as χ = {φ_max}, and the following steps were repeatedly used to update χ: (1)-(2) compute the EDT of the surface before and after adding φ_i, to obtain the influenced EDT region R when φ_i is added into χ; (3) use PCA to analyze the polarization features of R. The variance percentages in the three principal directions are γ₁, γ₂, γ₃ with γ₁ < γ₂ < γ₃, and the coefficient for adding φ_i into χ is measured by γ₁/(γ₁ + γ₂ + γ₃). Our experiments show that if this coefficient is larger than 0.1, φ_i can be regarded as membrane signal and should be grouped into χ; otherwise, φ_i is ignored. An example result is shown in Fig. 12. The filtered membrane stack I_fm is a binary image whose points in χ are positive.

Surface regression

The embryonic surface cannot be imaged completely because of the balance between phototoxicity and signal intensity. Moreover, the stain concentration is much lower at the boundary, where only one layer of membrane exists. An incomplete surface degrades the performance of 3DMMS because of leakage between different targets, as shown in Fig. 13b. We use surface regression to recover the boundary surface signal around the missing embryonic surface area, referred to as a surface cavity. In surface regression, we only modify surfaces in the cavities, which differs from the embryonic region segmentation in BCOMS. We first apply an active surface model to obtain the initial surface of the entire embryo. The smoothing factor is tuned to a large value to prevent the segmented surface from dropping into the cavity. From Fig. 14, we see that the cavity surface can be found according to the vertical distance between the segmented embryo surface and the membrane signal I_fm. We define a distance matrix of the same size as one slice. For the upper half of the segmented embryonic surface S_eu, the distance matrix delineates the vertical distance between S_eu and the membrane signal I_fm; the distance is set to zero where there is no corresponding signal. The distance matrix is smoothed, and further thresholded using Otsu's method [30], to construct a binary mask R_cavity. Positive entries in R_cavity indicate the locations where the membrane signal should be modified with S_eu. We used R_cavity to repair I_fm: partial surfaces with positive mask were added into I_fm, shown as gray points in Fig. 13c.
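A minimal sketch of this cavity-mask construction follows, assuming the upper surface S_eu is given as a per-column z-height map; the orientation convention and smoothing scale are our assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def cavity_mask(S_eu, I_fm, sigma=3.0):
    """S_eu: (X, Y) z-heights of the smooth upper embryo surface;
    I_fm: binary filtered membrane stack of shape (X, Y, Z).
    Returns the binary mask R_cavity over the (X, Y) plane."""
    X, Y, Z = I_fm.shape
    has_signal = I_fm.any(axis=2)
    # z index of the topmost membrane voxel in each (x, y) column
    top = Z - 1 - np.argmax(I_fm[:, :, ::-1], axis=2)
    dist = np.where(has_signal, np.abs(S_eu - top), 0.0)  # zero if no signal
    dist = ndimage.gaussian_filter(dist, sigma)
    return dist > threshold_otsu(dist)                    # binary R_cavity
```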
Membrane-centered segmentation

Watershed segmentation is a fast algorithm that groups points under different labels according to a specific terrain map based on image intensity. Along the steepest descent, all pixels are classified into different catchment-basin regions by tracing points down to the corresponding local minima [31], which are also termed seeds. After the watershed transformation, each region consists of the points whose geodesic descent paths terminate at the same seed. The number of seeds controls the number of regions: redundant seeds result in over-segmentation, where one region is split, whereas absent seeds lead to under-segmentation, with two regions combined together. The terrain map plays a dominant role in generating region boundaries. In 3DMMS, a well-defined terrain map, combined with the nucleus channel, accommodates the difficulty of lost information and membrane perception. The nucleus image is acquired simultaneously with the membrane image and can be used to provide seeds that eliminate merge-or-split mistakes. Generally, the terrain map is a linear combination of the membrane intensity and the EDT in nucleus-centered watershed segmentation [21, 32-34]. However, it is difficult to make a trade-off between the two sources of influence on the final region boundary, as shown in Fig. 15 (combination of EDT and membrane). To overcome this problem, we combine the nucleus and membrane stacks in a different way, termed the membrane-centered watershed. The nucleus stack is processed by AceTree to generate the nucleus matrix, which is constructed as

$$I_n(x_i, y_i, z_i) = l_i, \tag{6}$$

where (x_i, y_i, z_i) and l_i are the location and label of each nucleus in the lineage, respectively. We denote by D_m the membrane-centered EDT on I_fm. Then D_m is reversed and normalized by

$$\tilde{D}_m = 1 - \frac{D_m}{\max(D_m)}.$$

The nucleus matrix I_n, plus a background minimum, is used as seeds for the watershed segmentation on the new terrain map $\tilde{D}_m$. This map can, to a certain extent, relieve segmentation leakage by building a ridge at the holes of the binary membrane signal, as demonstrated in Fig. 15 (membrane-centered EDT). Channel-connected cells are well separated from each other, and reasonable boundaries are produced in both blurry areas and surface cavities.

Cell division revision

Two nuclei in a dividing cell would lead to a split, indicated by the red lines in Fig. 16b. We resolve this problem by considering the membrane signal distribution over the interface between the two cells. First, we analyze the nucleus lineage information and find the daughter cells (or nuclei); details on the rules for finding daughter cells can be found in ["Additional file 1"]. For each pair of daughter cells, the intensity of their interface is examined to determine whether the division has finished. The membrane-centered segmentation yields cell boundaries at the membrane signal or at ridges in the EDT. We calculate the average intensity of the two cells' interface to determine whether this interface is located at a ridge with a hole: if the interface includes a hole, the division is still in process and the two cells should be merged. The average intensity threshold was experimentally determined to be 40. Segmentation results after cell division correction are shown in Fig. 16c.
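A compact sketch of the membrane-centered watershed and the division-revision test described above, using standard scipy/scikit-image routines in place of the authors' implementation; the seed layout and the raw-intensity interface test are simplified assumptions, with the threshold 40 taken from the text.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def segment(I_fm, nuclei):
    """I_fm: binary membrane stack; nuclei: list of ((x, y, z), label)."""
    D_m = ndimage.distance_transform_edt(~I_fm.astype(bool))  # EDT from membrane
    terrain = 1.0 - D_m / D_m.max()        # reversed, normalized terrain map
    seeds = np.zeros(I_fm.shape, dtype=int)
    for (x, y, z), label in nuclei:        # nucleus matrix I_n as seeds
        seeds[x, y, z] = label
    seeds[0, 0, 0] = max(l for _, l in nuclei) + 1   # background minimum
    return watershed(terrain, markers=seeds)

def should_merge(labels, membrane, l1, l2, thresh=40):
    """Merge daughter cells l1, l2 if their interface carries little
    membrane signal, i.e. the division has not yet completed."""
    near_l1 = ndimage.binary_dilation(labels == l1)
    interface = near_l1 & (labels == l2)
    return interface.any() and membrane[interface].mean() < thresh
```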
5,276.6
2019-04-08T00:00:00.000
[ "Biology", "Computer Science" ]
Road marking abrasion defect detection based on video image processing

Pavement marking occupies an important position in safe road traffic operation. A large body of domestic and foreign research and practice has proved that the effective use and maintenance of road signs and markings reduce traffic accidents and improve capacity. The main content of this paper is the design of a mobile video image acquisition device and the detection of wear and damage defects of road traffic markings from video images. The process is: random image acquisition of pavement markings, followed by image pre-processing, image binarization, marking angle correction, and marking extraction and detection. Finally, software for detecting the degree of marking wear and damage was developed in Matlab, and image processing experiments were carried out. The results show that the software can accurately detect road marking wear and damage.

Introduction

With the rapid development of science and technology, the car plays a more and more important role in people's lives. However, while the development of the automobile has improved people's living standards, it has also brought many traffic accidents. Related materials show that in 2009 alone, thousands of people died in traffic accidents in China, accounting for one fifth of the world's total [1]. This makes us alert and aware of the urgency of solving the problem of traffic safety.

A large body of domestic and foreign research and practice has proved that the effective use and maintenance of road signs and markings can reduce traffic accidents, increase traffic capacity, save energy, and protect the road environment [2]. Thanks to the rapid development of artificial intelligence and computer vision technology, image and video techniques have been widely used in intelligent transportation systems and unmanned vehicle systems. Carnegie Mellon University developed the Navlab.X [3] series of systems, which are based on a flat-road assumption and use features parallel to the road, such as lane lines, ruts, and road boundaries. An important part of the University of Parma's PROMETHEUS project is GOLD (Generic Obstacle and Lane Detection) [4]; the basic principle of this visual system is to use the camera as a navigation tool, analyzing the captured images to detect and extract the lanes and to avoid obstacles using stereo vision. The Citroen technology center and the automation and electronics laboratory of Pascal University in France cooperated in the development of the Puegoct system [5], whose main principle is to use Gaussian filtering and gradient derivatives to locate the left (or right) lane edge in the presence of noise; its real-time performance and robustness are good. Xin Zhou [6] proposed a lane-line detection method. This article studies road marking wear detection from video images: the image is pre-processed (gray-scale conversion, gray stretch to adjust the dynamic range, with slope > 1 expanding and slope < 1 compressing the gray levels, and noise reduction), binarized, and then treated with mathematical morphology, whose basic idea is to probe the image with structuring elements and whose basic operations are expansion (dilation), erosion, and the derived opening and closing operations, before the markings are extracted and measured.
Conclusions The road marking defect detection based on video image processing technology mainly involves image pre-processing, binarization, morphological processing, marking extraction and angle correction, implemented with Matlab. From these steps, the wear and damage condition of the marking is obtained, and on this basis the marking wear detection software was developed. Experiments prove that the software processes the markings well and measures marking wear and breakage accurately.
880.6
2016-12-31T00:00:00.000
[ "Engineering", "Computer Science" ]
AN INTERACTIVE PLATFORM FOR ENVIRONMENTAL SENSORS DATA ANALYSES The usage of environmental monitoring systems and sensors, installed on a day-to-day basis to explore information and monitor cities' environment and pollution conditions, is in growing demand. Advances in sensor networking, together with the quality and quantity of environmental data, have given rise to a growing set of techniques and methodologies supporting interactive visualisation analyses of spatiotemporal data. Moreover, Visualisation (Vis) and Visual Analytics (VA) of spatiotemporal data have become essential for researchers, policymakers, and industries to improve energy efficiency, environmental management, and cities' air pollution planning. A platform covering Vis and VA of spatiotemporal data collected from a city, which helps to portray such techniques' potential in exploring crucial environmental insights, is still required. Therefore, this work presents a Vis and VA interface for spatiotemporal data represented in terms of location, time, and several measured attributes, namely Particulate Matter (PM) PM2.5 and PM10, along with humidity and wind (speed and direction), to assess the detailed temporal patterns of these parameters in Stuttgart, Germany. The time series are analysed using unsupervised HDBSCAN clustering on a series of the (above mentioned) parameters. Furthermore, building on the in-depth understanding of the sensors' nature and trends, a Machine Learning (ML) approach called the Transformers Network predictor model is integrated, which takes successive time values of parameters as input, together with sensors' locations, and predicts the future dominant (highly measured) values with location in time as the output. The selected parameters' variations are compared and analysed in the spatiotemporal frame to provide detailed estimations of how average conditions would change in a region over time. This work helps to get a better insight into the urban system and enables the sustainable development of cities by improving human interaction with spatiotemporal data. Hence, the increasing environmental problems of big industrial cities could be flagged and reduced in the future with the proposed work. INTRODUCTION Cities generate and store a lot of spatial and temporal information continuously, using sensors that collect a large set of real-time spatial data streams and responses. Managing spatial data for monitoring and keeping track of the surroundings covers cities, rivers, roads, and countries, with increasing demand for environmental monitoring, smart city planning and resource management. Development and industrial advancement for uplifting human standards have contributed to a comfortable life on one hand, with the consequences of environmental changes and pollution on the other. Chimney discharge, waste from industries, vehicle smoke, and releases from construction sites consist of tiny air pollutants that can be inhaled with the air, leading to heart disease, lung and respiratory problems worldwide. Therefore, the meteorological parameters, i.e., humidity and wind (speed and direction), along with air pollutants like Particulate Matter (PM) PM2.5 and PM10, require regular monitoring. The surrounding air quality and well-being fluctuate with these parameters' atmospheric concentrations (Chen and Zhao, 2011). Over developed areas, the elevated levels of pollution parameters are associated with both local emission sources and regional transportation (Chen and Zhao, 2011, Jasen et al., 2013).
Regional transportation with diesel vehicles is a primary source of particulate matter and contributes significantly to its levels (Wallace and Hobbs, 1977, Hardin and Kahn, 1999). Moreover, sensors' (spatial and temporal) data is a combination of georeferenced geographical entities represented in terms of location, dimensions, attributes, and time, forming continuous data of considerable size. Data Visualisation (Vis) is not merely an instrument for Visual Analytics (VA); rather, VA is a sub-field of Vis which integrates data analyses with highly interactive visualisations. Furthermore, in the scientific domain, it helps to place geospatial data in a visual context by identifying trends and patterns that usually go unrecognised in text-based data (Sun et al., 2013, Sun and Liu, 2016, Harbola and Coors, 2018). Some existing studies have been performed to infer the seasonality and inner patterns of meteorological and pollution parameters independently (Garrett and Casimiro, 2011, Harbola and Coors, 2020). Integrating interactive Vis techniques helps in representing geospatial data and the attached environmental information together in one frame, beyond the typical spreadsheets, charts and graphs, and in presenting it in more sophisticated formats using infographics, maps, detailed bar, pie and heat maps to communicate the relationships between them (Horvitz, 2007, Aigner, 2013, Liu et al., 2017). However, VA combines automated analysis techniques with interactive Vis, thereby assisting in an easier understanding of the temporal sensor data along with decision-making capabilities, by dividing the cities into several components varying over space, time, and different spatial scales (Kurkcu et al., 2017, Panagiotopoulou and Stratigea, 2017). Several of the above-discussed studies used smoothing and filtering techniques, ignoring the data noise and modifying the originality of the temporal dataset. Interactively visualising the sensors and their data over a time frame helps monitor these parameters. A comprehensive study of the meteorological parameters and their contribution to PM10-2.5 could be helpful. The above research suggests that several questions remain to be addressed, such as temporal wind variations and PM10-2.5 concentration fluctuations in connection with a user-desired time frame, without modifying the authenticity of the original temporal dataset. An interactive system, Air Quality Temporal Analyser (AQTA), was developed to support the visual analyses of air quality data over time (Harbola et al., 2021). It discovers temporal relationships among complex air quality data, interactively in different time frames, by harnessing the user's knowledge of the factors influencing the behaviour with the aid of Machine Learning (ML) models, but on a small scale, detailed for each sensor individually and lacking the attached spatial knowledge. A better insight into the sensor system, obtained by improving human interaction with the recorded measurements and spatiotemporal information, is still required. This motivates the current research. Concerns about climate fluctuation and meteorological data monitoring have increased the demand for such a Web interface to study the measured data history interactively, along with monitoring the sensors' nature for the future. This idea is implemented and expanded as a case study for Stuttgart (Germany).
Thus, an unsupervised clustering algorithm based on Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) and a Machine Learning (ML) model using a Transformers Network are designed for monitoring the sensors' nature and highlighting the dominant sensor locations, working on the original temporal datasets while taking the above-listed gaps into consideration and addressing solutions. These frameworks combine together to form the Environmental Sensors Visual Prediction Assessment (ESVPA), shown in Figure 1, an interactive visualisation platform with the freedom of user-chosen parameter selection, delivering temporal variations of spatiotemporal information. Therefore, the current study proposes HDBSCAN clustering and sensor nature monitoring queries with the following contributions: (i) interactive temporal visualisation of unsupervised cluster identifications to support the user in the interpretation of the meteorological and pollution parameters, (ii) prediction of sensor nature using a Transformers Network, supported with visualisation of the designed model's dynamic training, testing and accuracy metric assessments, thereby highlighting the respective model's success and failure on inference data, (iii) visual preservation of the spatial and non-spatial context and historical dataset information on a user-selected temporal frame, and (iv) unboxing the complexities of ML design with visualisation to make the concepts more explainable and straightforward. This interactive visualisation platform would help to infer smart decisions for planning the surrounding quality, which would support proficient management and development of the city's resources. The remainder of the paper is organised as follows: Section 2 presents the methodology; the datasets and the results, including the discussion, are explained in Sections 3 and 4, respectively; this is followed by the conclusion in Section 5. METHODOLOGY The proposed interactive web interface provides a platform to view and analyse in detail several sensors and their measurements in the city of Stuttgart, along with spatial and temporal information. Each sensor measures parameters like PM2.5 and PM10, humidity, and wind (speed and direction). The following section explains the proposed system architecture, comprising unsupervised HDBSCAN clustering in 2.1, sensor nature prediction using a Transformers Network in 2.2, and the interactive visualisation platform in 2.3. Unsupervised HDBSCAN Clustering All sensors' time series measurements (for each sensor location) are studied using unsupervised clustering and sensor location queries. Initially, the values of each parameter are preprocessed before applying the clustering. The preprocessing involves normalising the data followed by temporal filtering. The mean and standard deviation of a parameter are calculated; the values of the parameter are then shifted by subtracting the mean and divided by the standard deviation to obtain the normalised values. Further, temporal filtering is applied on these normalised values. In the current study, interactive temporal filtering based on user selection within a year is applied. These user-selected temporal query divisions help in a detailed analysis of the considered parameters as per the user's desires. HDBSCAN, which extends Density-Based Spatial Clustering of Applications with Noise (DBSCAN) by converting it into a hierarchical clustering algorithm, is applied in this study to the sensors' measurements including noise.
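As a small illustration of the preprocessing described above, the sketch below applies z-score normalisation and a user-selected temporal window to one parameter's time series with pandas; the file, column and date strings are assumptions, not the platform's code.

```python
import pandas as pd

def preprocess(series: pd.Series, start: str, end: str) -> pd.Series:
    """Z-score normalise one parameter and restrict it to a user-chosen period.

    `series` is assumed to carry a DatetimeIndex, e.g. 30-minute measurements.
    """
    normalised = (series - series.mean()) / series.std()
    return normalised.loc[start:end]   # interactive temporal filtering

# Example (hypothetical file and column names):
# pm10 = pd.read_csv("sensor_01.csv", index_col="timestamp", parse_dates=True)["PM10"]
# pm10_2019 = preprocess(pm10, "2019-01-01", "2019-12-31")
```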
It performs DBSCAN over varying epsilon (eps) values (i.e., the eps-neighbourhood of a point X, defining the radius of the neighbourhood around X) and integrates the results to find a clustering that gives the best stability over eps (Campello et al., 2013). This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN). Therefore, for historical data HDBSCAN returns good clusters with little parameter tuning. The minimum cluster size parameter is intuitive and was decided empirically in this study. Values of k1 and k2 (Table 1) were empirically taken as 0.75 and 0.35, respectively (the same for all the parameters), so that a sufficient number of samples occur in each class (as shown in Table 1). Thus, with these advantages HDBSCAN is one of the strongest clustering options; it is applied to the temporal measurements of the sensors and produces interactive filtering output. The distance matrix generated in hierarchical clustering helps in identifying the similarities of the clusters and combines the most similar clusters hierarchically until the desired number of clusters is obtained, minimising the variance within the clusters by using the error sum of squares as the objective function (McInnes and Healy, 2017). The sum of squares over the three designed clusters (low, mild and high) is kept minimised. This gives a hint through the merging cost: the number of clusters is kept fixed until the merging cost increases, and the clustering (value ranges) right before the merging cost increases is then used (Paul and Murphy, 2009). Transformers Network In order to provide a more detailed comparison and trend analysis, sensor nature monitoring was designed and integrated for each sensor using a Machine Learning (ML) approach called the Transformers Network predictor model, which takes successive time values of the parameters as input, together with sensors' locations, and predicts the future dominant (high measurement) value and location with time as the output (Wu et al., 2020). A Transformers Network is an encoder-decoder architecture, where the encoder consists of a set of encoding layers that process the input iteratively, one layer after another, and the decoder consists of a group of decoding layers that do the same to the encoder's output (Vaswani et al., 2017). Each encoder layer's function is to process its input to generate encodings containing information about which parts of the inputs are relevant to each other. It passes its set of encodings to the next encoder layer as inputs. Each decoder layer does the opposite, taking all the encodings and processing them, using their incorporated contextual information to generate an output sequence. Each decoder layer also has an additional attention mechanism that draws information from previous decoders' outputs before the decoder layer draws data from the encodings (Parmar et al., 2018). The encoder and decoder layers have a feedforward neural network for additional processing of the outcomes and contain residual connections and layer normalization steps. The dataset used comprises wind (speed and direction), humidity, PM2.5 and PM10 with a temporal resolution of one epoch, and epoch_j (j = 1 to n) denotes wind (speed and direction), humidity, PM2.5 and PM10 at time j, where 1 and n are the first and last values in the dataset, respectively. Multiple samples are designed using the dataset for training and testing the proposed algorithms. A sample consists of a feature vector as an input with a corresponding output from three classes.
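A minimal sketch of applying HDBSCAN to one normalised parameter with the open-source hdbscan package is shown below; the minimum cluster size and the input shape are illustrative choices, not the values tuned for the paper's data.

```python
import numpy as np
import hdbscan  # scikit-learn-contrib/hdbscan

# values: normalised measurements of one parameter, shaped (n_samples, 1)
values = np.random.randn(500, 1)             # placeholder for real sensor readings

clusterer = hdbscan.HDBSCAN(min_cluster_size=25)   # illustrative setting
labels = clusterer.fit_predict(values)             # label -1 marks noise points
print(np.unique(labels, return_counts=True))       # cluster sizes, including noise
```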
Window_b (a scalar) consecutive values of wind (speed and direction), humidity, PM2.5 and PM10 from epoch_j to epoch_{j+Window_b} form a feature vector of dimension Window_b × 1, which is the input of the sample for each parameter. Window_f (a scalar) successive values of the considered five parameters after the last value in the input, i.e., epoch_{j+Window_b}, are used to define the sample's output class. The mean (µ) and standard deviation (σ) of the wind (speed and direction), humidity, PM2.5 and PM10 over the entire dataset are calculated. Various class boundaries are designed using µ and σ, as shown in Table 1. Among the Window_f values, the count of values occurring in each class of Table 1 is noted, and the class that has the maximum count, i.e., the dominant class, is assigned to the sample. Similarly, multiple samples based on each of the parameters are created by taking Window_b values in the corresponding input from epoch_j to epoch_{j+Window_b}, varying j from 1 to n − Window_f in increments of 1. The outputs of these samples are designed as discussed above. Thus, at this stage, for the Window_b values in the input from epoch_j to epoch_{j+Window_b}, there are five sets of samples: one each based on humidity, wind speed and wind direction, and the others based on PM2.5 and PM10. In this analysis, the sizes of Window_b and Window_f are kept equal, with a user option to predict the next 6 hours. These conditions ensured a comprehensive and accurate analysis of the data with respect to independent and different parameter selections. Visualisation Platform Moreover, an interactive platform is developed to provide in-depth analytics and clarity on the nature of the patterns among the meteorological and pollution parameters for user-desired inputs in the desired time frame. This platform is called the Environmental Sensors Visual Prediction Assessment (ESVPA) for sensor nature monitoring. ESVPA also provides tool-tipping, brushing and linking to maintain transparency and combine different visualisation methods for efficient user-computer interactions (Shneiderman, 1996, Horvitz, 2010). Figure 1 provides an overview of the ESVPA workflow, highlighting the system-user interfaces of visual sensor prediction and analyses. The System combines the historical temporal database of meteorological and pollution parameters, the unsupervised clustering outputs, the sensor nature monitoring Transformers Network and the structure of various graphs and charts, and accepts user queries. The User raises queries, selects, inspects and views the states of the parameters interactively. ESVPA uses a dynamic time series stack chart with a calendar selection to visually interact with the parameters and the attached spatial information with the help of the interactive map. This is accompanied by the map to provide a detailed inspection of the time series data and parameter magnitudes with line chart, calendar chart and heat-map view options, in order to compare the trends among the sensors' parameters by month during a year, by week and day-wise. Figure 2 and Figure 3 provide a glimpse of the designed Web interface, where the selected sensors are visualised with spatial and non-spatial information over the map in the specified temporal frame. The user can select the parameters over the desired time frame and compare the patterns interactively (as shown in Figure.
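The sliding-window sample construction described above can be sketched as follows; the class boundaries built from µ, σ and the constants k1, k2 are an assumption about how Table 1 is formed, so this illustrates the scheme rather than reproducing it.

```python
import numpy as np

def build_samples(x, window_b, window_f, k1=0.75, k2=0.35):
    """x: 1-D array of one parameter's time series. Returns (inputs, labels)."""
    mu, sigma = x.mean(), x.std()
    # Three classes (low / mild / high); the exact Table 1 boundaries are assumed here
    boundaries = [mu - k2 * sigma, mu + k1 * sigma]

    inputs, labels = [], []
    for j in range(len(x) - window_b - window_f):
        feature = x[j:j + window_b]                        # Window_b input values
        future = x[j + window_b:j + window_b + window_f]   # Window_f values after the input
        classes = np.digitize(future, boundaries)          # class index 0, 1 or 2 per value
        labels.append(np.bincount(classes).argmax())       # dominant class becomes the label
        inputs.append(feature)
    return np.array(inputs), np.array(labels)
```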
DATASET The temporal datasets of meteorological and pollution parameters are used and analysed in this study. The luftdaten selber messen project provides city sensor measurements at several locations in Stuttgart, Germany. Moreover, the historical data from 2016 to 2020 from the Hauptstaetter Strasse 70173 Stuttgart corner station sensor was also considered. These datasets contain in total eleven city centre sensor locations with wind (speed and direction) and humidity along with PM10-2.5, measured in a 30-minute time interval (Figure 6 shows the selected sensors on the map). The area's datasets were organised separately into individual years for each parameter, with spatial information attached, using the time information with past data first, followed by current data. This helps to perform trend and in-depth analyses of the pollution and meteorological parameters' temporal datasets, along with the sensors' nature monitoring. RESULTS The designed algorithms and platform help to perform an in-depth study of the sensor measurements and also to estimate their nature for 6 hours into the future. ESVPA is implemented as a Web-based application using Altair, D3.js, kepler.gl, Streamlit and the Keras library (Chollet, 2017) with TensorFlow as the backend in Python, and executed on an Intel Core i7-4770 CPU @ 3.40 GHz with four cores. The results in the following section, followed by a discussion subsection, analyse and validate the outcome of the proposed framework. In order to provide a more detailed comparison of the selected parameters, the visualisation of historical measurements with the attached spatiotemporal information was explored using the available interactive data overview option in the designed platform. Figure 2 and Figure 3 show the historical data visualisation for the user-selected time frame of PM10 and wind flow on the map with the help of line charts, which helps to connect the spatiotemporal information with the respective sensor measurements visually. The platform was also used to visualise the output of the HDBSCAN clustering (as shown in Figure 4 and Figure 5). The unsupervised hierarchical clustering here is directed at inferring the trends and inner structure of the meteorological and pollution parameters dynamically. Figure 4 shows the obtained selection for clusters in the temporal dataset for wind flow (WS) in the selected time frame. Similar analyses were conducted for the rest of the parameters. Here the value ranges of each assigned class were also displayed and compared. Moreover, the performed clustering with visualisation helped the user unbox the complexities of the datasets and their trends in the best possible way dynamically. Furthermore, the obtained wind rose plot helped visualise wind speed and direction in a circular format in the same graph. The length of each spoke around the circle indicates the number of times (count) that the wind blows from the indicated direction. Colors along the spokes indicate classes of wind speed. Figure 5 shows the generated wind rose plot for the selected temporal frame.
Besides, each color denoted a wind speed class defined by value range boundaries (the maximum and minimum values assigned to the class), with varying spoke length and direction highlighting the count of wind blowing from the indicated directions in this study. Figure 7 shows the output of sensor nature monitoring using the Transformers Network predictions. These options were integrated together with the selection of the desired user query. Here, the map highlights the location of the respective selected sensor, with the day-wise sensor measurements visualised with a bar chart. Moreover, in the attached heat map, the intensity of the color is governed by the magnitude of the parameter values. A similar heat map display exists for the other parameters as well. The selected parameter (any one of wind speed and humidity along with PM10-2.5) having higher values (range) over time was assigned a darker color in the respective heat map. In order to provide a more detailed comparison and trend analysis, user-desired time frames were considered for all the parameters. Figure 8 and Figure 10 show the accuracy achieved by the designed network with the selection of the desired user query. Here, precision and recall values for predicting the dominant speed for the various classes for the month of January (in Stuttgart) are represented in Figure 8, with values for all the classes above 61%, and the total accuracy achieved was 96.33%. Figure 9 shows a randomly selected date for model validation, highlighting the obtained Transformers Network visual prediction accuracy analyses with the model's success and failure presented (red rows). Furthermore, this supported sensitivity analyses for calculating the success and failure of the model, highlighted with color and dynamic interaction. Here the network's success was represented by yellow color and its failure with red. These color combinations were used to deliver more insights, making the understanding for the user more straightforward and unboxing the complexities of ML. Thus, this platform (all together) helped to discover all the possible changes by enhancing the ability to dig in detail inside the data with accuracy for each of the considered meteorological and pollution parameters, as per the user's choice, visually. Discussion The hierarchical clustering for meteorological and pollution parameters highlighted the trends at which any selected parameter was analysed in the clustering diagram, with each class assigned set lower and upper value ranges. HDBSCAN performed the exploratory data analysis, as it is a fast and robust algorithm that helped to work over the unsmoothed temporal meteorological and pollution parameters and return meaningful clusters. A sequential ColorBrewer scale was used as the color map of the rose plot for showing the classes (low, mild and high), with the color frequency differentiating the low-value class from the high-value class. The blended progression, typically of a multi-hue palette, from the least to the most opaque shades with respect to the value ranges lying in the clusters, represents low to high values. The 2D map view of all the selected sensors, along with a time-based data filtering query and tool-tipping, helps to easily interact with and visualise all the information together in one domain (Figure 6).
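The wind rose plot described above can be drawn with matplotlib's polar axes by binning directions into sectors and stacking the counts per speed class; the sector count, speed classes and colours below are illustrative choices only, not the platform's settings.

```python
import numpy as np
import matplotlib.pyplot as plt

def wind_rose(direction_deg, speed, n_sectors=16):
    """direction_deg, speed: 1-D arrays of wind direction (degrees) and wind speed."""
    edges = np.linspace(0, 360, n_sectors + 1)
    centers = np.deg2rad((edges[:-1] + edges[1:]) / 2)
    speed_classes = [(0, 2), (2, 5), (5, np.inf)]          # low / mild / high (assumed)
    colors = ["#c6dbef", "#6baed6", "#2171b5"]

    ax = plt.subplot(projection="polar")
    ax.set_theta_zero_location("N")                        # 0 degrees points north
    ax.set_theta_direction(-1)                             # compass-style rotation

    bottom = np.zeros(n_sectors)
    for (lo, hi), color in zip(speed_classes, colors):
        in_class = (speed >= lo) & (speed < hi)
        counts, _ = np.histogram(direction_deg[in_class], bins=edges)
        label = (f">= {lo} m/s" if hi == np.inf else f"{lo}-{hi} m/s")
        ax.bar(centers, counts, width=np.deg2rad(360 / n_sectors),
               bottom=bottom, color=color, label=label)
        bottom += counts
    ax.legend(loc="lower right")
    plt.show()
```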
Yearly datasets for the considered parameter over the selected time frame that are joined together sooner (in the clustering) are more similar to each other than those that are joined together later. The total within-cluster variance is minimised during clustering. At each step, the pair of clusters with the minimum between-cluster distance is merged. As a result, it is observed that a higher magnitude of wind flow occurs in February across 2016 to 2020. On the other hand, the Transformers Network helped to estimate the sensors' nature interactively. The input sample, composed of Window_f consecutive values from the data with the five features of PM2.5, PM10, humidity, and wind (speed and direction), provides temporal information, and the Transformers Network operations are able to detect trends and features. During the sample designing phase, the output classes were decided statistically using the µ and σ of the total samples particular to a year's dataset of the respective parameter (i.e., any one out of the five), thereby representing the dataset better. Moreover, the total samples for a given year were divided into training and testing samples with a ratio of 7:3 (i.e., 70% of the total for training and the rest for testing). The dynamic network metric analyses (total accuracy, precision and recall) of the Transformers Network, supported with interactive visualisation, helped the user verify and understand the results for the selected parameter. Visual exploration has also contributed to making the ML more easily understandable and explainable with respect to the network's inner workings. Moreover, the developed ESVPA for sensor nature monitoring is used to provide interactive selections of the considered meteorological and pollution parameters to analyse the concurrent patterns in the dataset within a time frame. ESVPA is also compared with the existing literature that is closest to the proposed framework. The Air Quality Temporal Analyser (AQTA) proposed by (Harbola et al., 2021) has provided a visual analysis platform for air quality data over time but lacks sensor nature monitoring. It discovers temporal relationships among complex air quality data on a small scale for each sensor individually, with the attached spatial knowledge missing. However, the developed ESVPA connects temporal, spatial and non-spatial information together visually. Further, it enhances the time series analyses using unsupervised HDBSCAN clustering on a series of the (above mentioned) parameters. Therefore, with the in-depth understanding of the sensors' nature and trends, the ML approach called the Transformers Network predictor model is integrated, which takes successive time values of parameters as input with the sensors' locations and predicts the future dominant (highly measured) values with the location in time as the output. Thus ESVPA, as a work extension, provides a big picture of sensors' nature monitoring and temporal data measurement analyses. This has helped in making the data trend analyses and the sensor nature monitoring easy, user interactive and comparable in the time domain. CONCLUSION Geovisualisation, together with visual analytics, encourages a better understanding of geospatial data by identifying trends, patterns, and contexts, making the economy, mobility, environment, people, and governance of a city smarter. In this paper, ESVPA, an interactive visualisation Web platform, was successfully designed and demonstrated for time series of meteorological and pollution parameters.
The temporal data were analysed using unsupervised HDBSCAN clustering on a series of these parameters. Furthermore, for an understanding of the sensors' nature and trends, the Machine Learning (ML) approach called the Transformers Network predictor was also integrated, which takes successive time values of parameters as input with the sensors' locations and predicts the future dominant (highly measured) values with location in time as the output. The interactive platform for meteorological and pollution parameters would help to plan the future with more awareness and understanding of renewable resources. The visualisation platform designed in this work (a small demonstration version) could be further improved with an ensemble of advanced visualisation approaches. The future focus for the authors would be to improve the visual analysis and utilise more advanced deep learning models. Meanwhile, the devised work can help create environmental awareness among humankind and provide foreknowledge for better city planning.
5,874.4
2021-09-03T00:00:00.000
[ "Computer Science" ]
Systematic functional interrogation of human pseudogenes using CRISPRi Background The human genome encodes over 14,000 pseudogenes that are evolutionary relics of protein-coding genes and commonly considered as nonfunctional. Emerging evidence suggests that some pseudogenes may exert important functions. However, to what extent human pseudogenes are functionally relevant remains unclear. There has been no large-scale characterization of pseudogene function because of technical challenges, including high sequence similarity between pseudogene and parent genes, and poor annotation of transcription start sites. Results To overcome these technical obstacles, we develop an integrated computational pipeline to design the first genome-wide library of CRISPR interference (CRISPRi) single-guide RNAs (sgRNAs) that target human pseudogene promoter-proximal regions. We perform the first pseudogene-focused CRISPRi screen in luminal A breast cancer cells and reveal approximately 70 pseudogenes that affect breast cancer cell fitness. Among the top hits, we identify a cancer-testis unitary pseudogene, MGAT4EP, that is predominantly localized in the nucleus and interacts with FOXA1, a key regulator in luminal A breast cancer. By enhancing the promoter binding of FOXA1, MGAT4EP upregulates the expression of oncogenic transcription factor FOXM1. Integrative analyses of multi-omic data from the Cancer Genome Atlas (TCGA) reveal many unitary pseudogenes whose expressions are significantly dysregulated and/or associated with overall/relapse-free survival of patients in diverse cancer types. Conclusions Our study represents the first large-scale study characterizing pseudogene function. Our findings suggest the importance of nuclear function of unitary pseudogenes and underscore their underappreciated roles in human diseases. The functional genomic resources developed here will greatly facilitate the study of human pseudogene function. Supplementary Information The online version contains supplementary material available at 10.1186/s13059-021-02464-2. public available data from genome wide CRISPR-Cas9 recessive screens, available on project score (PMID: 33068406) and the cancer dependency map portal (https://depmap.org/portal/), to this aim. -The authors do not specify if the screens were performed in duplicated batches and how reproducibility across these was assessed. -The authors should discuss the rationale beyond the selection of the two models employed in their screen and also mention in title, abstract and introduction that their results are specific to breast cancer (they might actually be model specific, i.e. linked to genomic background of MCF7 and MDA-MB-231, this should be mentioned in the discussions) -the legends of the supplementary figures are sloppy and including repetitions/errors, I would suggest a proper revision. Furthermore what is the difference between supplementary figure 1B and 1C ? Finally, effects on fitness are hardly readable if plotting sgRNAs read counts individually for the two condition. Histograms of sgRNA representation fold-changes between day 21 and day 0 should be used. -did the author observed any enriched guides? i.e. targeting pseudo genes that upon inactivation, increased cellular fitness? They mention only their filtering strategies, accounting for depleted guides. 
-it is not clear how/if the author checked that observing negatively selected parent genes was not due to off-target effects of the sgRNA targeting the corresponding pseudo-genes (due to their sequence similarity). The number of pseudo-genes and parent-genes hits is the same (69). Is it the case that for all the pseudo-gene hit also the corresponding parent gene is called as significantly impacting cellular fitness? combined with the larger effect magnitude of the parent genes with respect to their corresponding pseudo genes, this looks a bit suspicious to me and suggesting that it might actually be due sgRNA off targets effect. How the authors can exclude this? This control is performed only for the handful of guides put forward for further validation. -Why the authors picked and decided to put forward only hits from screening the MCF7 cell line? What about those from the MDA-MB-231 cell line? shouldn't a consensus set of hits have been selected and put forward for further validation? pseudogenes that have a possible cellular fitness effect. Among these, they investigate the MGAT4EP pseudogene more deeply, suggesting a role for its regulation of FOXA1 and FOXM1. Overall, the study is interesting. The experiments are well executed and done with careful consideration. In particular, the sgRNA library appropriately targets pseudogenes and parent genes, and the authors attempt to de-convolute bidirectional promoters. The authors also take effort to define appropriate 5' transcriptional start sites prior to sgRNA design. Overall, this is an intriguing study that tackles a challenging question in genome biology. The data are promising, and the concept is good. However, I have reservations about the execution of several key experiments that need to be addressed by the authors. 1. The negative controls for the CRISPR screen are unbalanced and unusual. There are reportedly 267 sgRNAs to AAVS1 and 83 non-targeting sgRNAs. This is not typical. Generally, there are hundreds of random genome-cutting sgRNAs to non-coding/nontranscribed loci, not just AAVS1. Likewise, typically there are hundreds of non-targeting sgRNAs. While I recognize that the authors cannot easily re-design their sgRNA library and reperform the screens, I think that more negative control sgRNAs are required for their validation experiments (eg. Figure 3B-H). Specifically, these experiments should include two different genome-targeting sgRNA controls from different genomic regions (e.g. not two AAVS1targeting sgRNAs). 2. The authors should investigate putative off-target effects of the sgRNAs more fully. The authors can analyze the number of genomic targeting sites by BLAT (as sgRNAs targeting many sites are confounded) as well as use CasOFFinder or another similar tool to predict off target effects of sgNRAs. 3. Rescue experiments should be performed for 4 pseudogenes validated by CRISPR assays in Figure 3C-F. This would presumably be overexpression of a sgRNA-resistant cDNA, which would obviate the loss-of-viability phenotype. 4. How do the authors know that the pseudogenes targeted here do not encode a protein or peptide? A subset of pseudogenes are thought to be translated by ribosomes (see Ji, eLife, 2015) and some have mass spectrometry peptides to support translation. The authors should ensure that MGAT4EP does not translate a protein via in vitro transcription/translation assays, and analysis of publicly-available mass spectrometry data. 5. Why is GAPDH mRNA not significantly enriched in the cytoplasm in Figure 4A-B? 
It seems that these experiments lack a cytoplasmic control. 6. For Figure 4D-E, the authors use GAPDH mRNA as a negative control for FOXA1 pulldown. GAPDH mRNA is not an appropriate negative control since it should be in the cytoplasm, and MGAT4EP should be located in the nucleus. The correct negative controls would be other nuclear RNAs that do not bind FOXA1. With the current data, it is possible that all nuclear RNAs bind FOXA1 non-specifically. 7. The concept of a sponge/ceRNA effect is controversial, with many publications on this topic but mostly of low-quality data. There is highly-quality evidence clearly suggesting that the ceRNA hypothesis is largely invalid (Denzler, Molecular Cell 2014). Moreover, the data from original ceRNA paper (Poliseno, Nature 2010) has been difficult to reproduce (Kerwin, Replication Study in eLife, 2020). Some researchers suggest that any ceRNA effects are highly nuanced and highly context-specific, and do not occur with many miRNAs at all (Bosson, Molecular Cell, 2014). To address this, the authors should (1) ensure that MGAT4EP does not encode a protein responsible for FOXA1 regulation, and (2) generate a mutant control MGAT4EP that mutates the putative binding site between FOXA1 and MGAT4EP and include this for their pull-down assays in addition to the antisense RNA as a control. Reviewer #1 General comments: "The authors present results from the first CRISPRi screen performed to date for interrogating human pseudogenes' function. They devised an effective bioinformatics pipeline to design a library of single guide RNA targeting pseudo genes, combining a CAGE analysis to identify transcription starting sites at a genome scale (including TSS of pseudogenes) with Sequence scan for CRISPR to identify possible target regions for sgRNAs. Secondly, they selected two breast cancer cell lines and focused only on pseudogenes targeted by at least 3 sgRNAs and expressed at the basal level in the two models (using public available RNA-seq data from the cancer cell line encyclopedia). The final library included also guides targeting pseudo-gene parent genes, plus positive and negative controls, i.e. core-fitness genes and non targeting guides. Using previously described experimental setting, the authors screened the two selected models with their library and were able to identify a number of pseudo-gene hits, that upon inactivation reduce cellular fitness. They selected the top hits and put them forward to further experimental validation, functional characterization and clinical relevance estimation. Among these, they extensively characterize the role of MGAT4EP as a cancer-testis unitary pseudogene promoting cellular fitness in breast cancer. Briefly, this is a great piece of work describing an unprecedented effort, setting standards for future studies employing CRISPRi screens to characterize pseudo genes and reporting results of potential impact. Particularly, the bioinformatics pipeline used to design the library is well thought, comprehensively described and the author included appropriate positives and negative controls. A number of points should be addressed and others clarified, in my opinion before proceeding with the publication of this manuscript in genome biology." Response: We have performed additional analyses to address the reviewer's concerns. Please find our responses to individual comments/points as follows. 
Specific points: Comment 1: "The authors should mention that the final library is specific to the two screened models (as they excluded pseudogenes not expressed in these two models)." Response: Following the reviewer's suggestion, we have added a clarification in the Results section that as a proof-of-principle systematic study of human pseudogene function with CRISPRi screens, our focus is breast cancer and the library used in our CRISPRi screens is specific to the two breast cancer cell line models that represent distinct breast cancer subtypes: luminal A and triple negative/basal-like breast cancer in the revised manuscript. Comment 3: "Among the parent genes included in the library is there any MCF7 or MDA-MB-231 specific essential gene? if so these could be used as further positive controls. The authors might use public available data from genome wide CRISPR-Cas9 recessive screens, available on project score (PMID: 33068406) and the cancer dependency map portal (https://depmap.org/portal/), to this aim." Response: Following the reviewer's valuable suggestion, we have used the essential parent genes that were identified by previous CRISPR-Cas9 knockout screens in MCF7 cells as further positive controls (Supplementary Fig. 1D and E), based on the publicly available data curated in the project score database (PMID: 33068406). Indeed, the CRISPRi sgRNAs that target the essential parent genes identified by previous CRISPR-Cas9 knockout screen, showed a statistically significant larger fold change of decrease between day 21 and day 0, compared with the ones targeting all parent genes (Mann-Whitney U test, p<1.21x10 -4 , Supplementary Fig. 1D). We only performed CRISPRi screens in the MCF7 cell line, although we originally planned to perform CRISPRi screens in both MCF7 and MDA-MB-231 cell lines to gain insight into the common and different hits from these two cell lines that represent different breast cancer subtypes. It was very unfortunate that, due to family reason during COVID-19 pandemic, the first author Dr. Ming Sun decided to end his postdoctoral training and went back to China to stay with his families during the pandemic. Therefore we were unable to start the CRISPRi screen in the MDA-MB-231 cell line, although we included the sgRNAs that target expressed pseudogenes in MCF7/MDA-MB-231 cell lines in our CRISPRi sgRNA library. We have added a clarification that the screens were only performed in luminal A breast cancer cells in the revised manuscript. Comment 4: "The authors do not specify if the screens were performed in duplicated batches and how reproducibility across these was assessed." Response: Like many published CRISPR/Cas9 knockout screens or CRISPRi screens, our screens were conducted in triplicates in a single batch. We have added a clarification about this and added an analysis of the correlation in sgRNA abundance between different replicates to demonstrate data reproducibility ( Supplementary Fig. 1B). Comment 5: "The authors should discuss the rationale beyond the selection of the two models employed in their screen and also mention in title, abstract and introduction that their results are specific to breast cancer (they might actually be model specific, i.e. linked to genomic background of MCF7 and MDA-MB-231, this should be mentioned in the discussions)." Response: Following the reviewer's valuable suggestions, we have added a description of the rationale for the selection of the two models employed in our screen in the Results section. 
We have also mentioned in abstract and introduction that our proof-of-principle CRISPRi screens for pseudogenes are specific to breast cancer. Furthermore, we have added a discussion mentioning that the results we obtained might be model specific in the Discussion section. Although the CRISPRi screens we performed are specific to breast cancer, the scope of our study is more general. First, we developed a genome-wide library of CRISPR interference (CRISPRi) sgRNAs that target human pseudogene and will greatly facilitate the study of human pseudogene function under diverse biological contexts. Second, we performed integrative analyses of multi-omic data from the Cancer Genome Atlas (TCGA) and revealed many unitary pseudogenes, whose expressions are significantly dysregulated and/or associated with overall/relapse-free survival of patients in diverse cancer types. Last but not least, our study is not only the first systematic functional interrogation of human pseudogene in breast cancer, but also the first one in any biological contexts. Therefore we keep the same title in the revised manuscript to reflect the general scope of our study that is beyond breast cancer. Fig. 1D and E). Comment 7: "Did the author observed any enriched guides? i.e. targeting pseudo genes that upon inactivation, increased cellular fitness? They mention only their filtering strategies, accounting for depleted guides." Response: Yes, we observed some enriched guides, but did not identify any significant positively selected pseudogene hits with the filters of p<0.05, FDR<0.25 and log 2 Fold-Change≥log 2 (1.5). In the revised manuscript, we mentioned that we did not find any significant positively selected pseudogenes using the above-mentioned criteria. We also added the positive selection information provided by MAGeCK program for pseudogenes/parent genes to the Supplementary Table 3. Comment 8: "It is not clear how/if the author checked that observing negatively selected parent genes was not due to off-target effects of the sgRNA targeting the corresponding pseudo-genes (due to their sequence similarity). The number of pseudo-genes and parent-genes hits is the same (69). Is it the case that for all the pseudo-gene hit also the corresponding parent gene is called as significantly impacting cellular fitness? combined with the larger effect magnitude of the parent genes with respect to their corresponding pseudo genes, this looks a bit suspicious to me and suggesting that it might actually be due to sgRNA off targets effect. How the authors can exclude this? This control is performed only for the handful of guides put forward for further validation." biology--that of the function of pseudogenes. The authors design a custom CRISPR screen to systematically identify pseudogenes potentially implicated in cellular fitness. They identify 69 of 850 targeted pseudogenes that have a possible cellular fitness effect. Among these, they investigate the MGAT4EP pseudogene more deeply, suggesting a role for its regulation of FOXA1 and FOXM1. Overall, the study is interesting. The experiments are well executed and done with careful consideration. In particular, the sgRNA library appropriately targets pseudogenes and parent genes, and the authors attempt to de-convolute bidirectional promoters. The authors also take effort to define appropriate 5' transcriptional start sites prior to sgRNA design. Overall, this is an intriguing study that tackles a challenging question in genome biology. The data are promising, and the concept is good. 
However, I have reservations about the execution of several key experiments that need to be addressed by the authors." Response: We have performed additional analyses and experiments to address the reviewer's concerns. Please find our responses to individual comments/points as follows. Specific points: Comment 1: "The negative controls for the CRISPR screen are unbalanced and unusual. There are reportedly 267 sgRNAs to AAVS1 and 83 non-targeting sgRNAs. This is not typical. Generally, there are hundreds of random genome-cutting sgRNAs to non-coding/non-transcribed loci, not just AAVS1. Likewise, typically there are hundreds of non-targeting sgRNAs. While I recognize that the authors cannot easily re-design their sgRNA library and re-perform the screens, I think that more negative control sgRNAs are required for their validation experiments (eg. Figure 3B-H). Specifically, these experiments should include two different genome-targeting sgRNA controls from different genomic regions (e.g. not two AAVS1-targeting sgRNAs)." Response: Thanks for raising this important issue. Unlike CRISPR-Cas9 knockout, CRISPRi mediated knockdown is based on the sgRNA guided dCas9-KRAB binding to the promoter region of target gene for transcription repression and does not involve genome cutting. Based on the published screens so far, a typical genome-wide CRISPR-Cas9 screen indeed includes more negative control sgRNAs than our screen. However, our screen is not a genome-wide screen and the library only includes ~9,400 sgRNAs that target pseudogene/parent gene, which is only ~1/10 of the number of gene-targeting sgRNAs (~100,000) in a genome-wide screen. Therefore less negative control sgRNAs were included in our screens compared with a genome-wide screen. For example, the popular human GeCKO v2 libraries (https://media.addgene.org/cms/files/GeCKOv2.0_library_amplification_protocol.pdf) contain ~120,000 sgRNAs that target 19,052 human genes and <2000 negative control sgRNAs designed not to target the genome (non-targeting sgRNAs). The ratio between the number of negative controls vs. the number of gene-targeting sgRNAs is actually higher in our library than GeCKO. v2 libraries. The negative control sgRNA used in our validation experiments is a non-genome-targeting sgRNA (sg-NT). To control for the effect of different negative control sgRNAs, we have performed additional validation experiments for the 4 pseudogenes with two new negative control sgRNAs, including one AAVS1-targeting sgRNA (sg-AAVS1) and one (sg-nAAVS1) that targets the region on chromosome 4 and is distant from AAVS1 on chromosome 19 (Supplementary Table 2). We selected the sg-AAVS1 because it showed insignificant fold-change in abundance in our CRISPRi screens. The sg-nAVVS1 was selected because it showed insignificant fold-changes in abundance across multiple cell lines in previous CRISPR-Cas9 knockout screens (PMID 28162770). Using these two genome-targeting negative controls, we successfully confirmed our previous findings. We have added these new results to the revised manuscript (Supplementary Fig. 2A-C). Comment 2: "The authors should investigate putative off-target effects of the sgRNAs more fully. The authors can analyze the number of genomic targeting sites by BLAT (as sgRNAs targeting many sites are confounded) as well as use CasOFFinder or another similar tool to predict off target effects of sgNRAs." 
Response: Following the reviewer's valuable suggestion, we further investigated the putative off-target effects of the sgRNAs on the results of our CRISPRi screen. Similar to BLAT, Cas-OFFinder performs a sequenced-based search for putative off-target sites, but provides a more flexible and powerful framework for executing this task than BLAT. Therefore we used Cas-OFFinder to identify the putative off-target sites of individual sgRNAs in the human genome. Because the off-target effect is much weaker when there are >1 nucleotides (nt) of mismatches (Nature Biomedical Engineering 2020; PMID: 31937939; Figure 5d) or there is any RNA/DNA bulge (Nature Biotechnology 2016, PMID: 26780180; Figure 5c, d and e) in the potential off-target sites, we focused on the predicted off-target sites with only 1-nt mismatch from a given sgRNA sequence. We found that most sgRNAs targeting pseudogene/parent gene were associated with no or very small number (≤1) of predicted off-target sites in the human genome (Supplementary Fig. 1F). Moreover, the number of predicted genomic off-target sites associated with off-targeting sgRNAs did not show significant difference between pseudogene/parent gene hits and non-hits (Mann-Whitney U test, p≥0.18, Supplementary Fig. 1G). Importantly, we found that the pseudogene/parent gene hits did not have a significantly larger proportion of off-targeting sgRNAs with a large number (≥10) of predicted off-target sites, compared with the other pseudogenes/parent genes (Fisher's exact test, p>0.32). Collectively, these results suggest that in overall, the potential off-targeting sgRNAs may have little impact on differentiating the screen hits from the other genes and thus the results of our CRISPRi screens. Aside from the global analysis of off-target effect, we further analyzed the potential off-target effects between identified pseudogene and parent gene hits and found the impact of putative off-target effects on the identified pseudogene/parent gene hits is little (see details in our response to Comment 8 from reviewer #1). One caveat associated with our analysis of CRISPRi off-target effect is that it was based on the established knowledge about the off-target effect of CRISPR-Cas9 system that involves genome cutting, whereas CRISPRi is based on catalytically inactive Cas9 and does not involve genome cutting. Given our limited knowledge about CRISPRi-mediated off-target effect, the predictive power of such in silico analysis remains unclear. We have added a discussion about potential limitations of the in silico CRISPRi off-target effect analysis in the Discussion section. Figure 3C-F. This would presumably be overexpression of a sgRNA-resistant cDNA, which would obviate the loss-of-viability phenotype." Response: Following the reviewer's suggestion, we performed the rescue experiments for the 4 pseudogenes validated by CRISPRi assays in Figure 3C-F, by overexpressing the corresponding cDNA upon CRISPRi-mediated knockdown. In these rescue experiments, no mutations was introduced into the cDNAs because CRISPRi mediated knockdown does not involve the cutting of the gene body region, but only affects the promoter region that is not part of the cDNA. Unlike CRISPR/Cas9 knockout, CRISPRi mediated knockdown is based on the sgRNA guided dCas9-KRAB binding to the promoter region of target gene for transcription repression. 
We found that overexpression of the corresponding pseudogene cDNA was able to rescue the loss-of-viability phenotype caused by the CRISPRi-mediated knockdown ( Supplementary Fig. 2D). Furthermore, we found that cDNA overexpression rescued the loss-of-function phenotype in colony formation assay ( Supplementary Fig. 2E). These results support that the observed loss-of-function phenotypes for these 4 pseudogenes are not due to off-target effect. Comment 3: "Rescue experiments should be performed for 4 pseudogenes validated by CRISPR assays in Comment 4: "How do the authors know that the pseudogenes targeted here do not encode a protein or peptide? A subset of pseudogenes are thought to be translated by ribosomes (see Ji, eLife, 2015) and some have mass spectrometry peptides to support translation. The authors should ensure that MGAT4EP does not translate a protein via in vitro transcription/translation assays, and analysis of publicly-available mass spectrometry data." Response: To rule out the possibility that MGAT4EP translates a protein, we first analyzed publically available and in-house (unpublished) ribosome profiling (ribo-seq) data in MCF7 cell line, and found no ribo-seq reads that support ribosome occupancy on MGAT4EP RNAs. Second, we predicted putative ORFs encoded by MGAT4EP using an ORF prediction module that solely relies on the sequence information and is implemented in our Ribo-TISH package (Nature Communications 2017 PMID: 29170441), and searched the publically available mass-spectrometry (MS) data in MCF7 and T47D cells for the MS/MS spectra that matched the protein sequences corresponding to these putative ORFs. We found no MS evidence of the candidate proteins encoded by these putative ORFs. Finally, we performed an in vitro translation assay and found no evidence of any protein products generated by MGAT4EP translation, with the appropriate positive and negative control ( Supplementary Fig. 3E). Taken together, these results indicate that MGAT4EP does not translate a protein and functions as an ncRNA. Comment 5: "Why is GAPDH mRNA not significantly enriched in the cytoplasm in Figure 4A-B? It seems that these experiments lack a cytoplasmic control." Response: To verify the quality of the nuclear and cytoplasmic fractionation experiments, we collected both RNAs and proteins from the nuclear and cytoplasmic fractions. We used β-tubulin as the cytoplasmic protein control and histone H3 as the nuclear protein control. The western blot results of these two protein markers ( Supplementary Fig. 3H) supports the good quality of our nuclear/cytoplasmic fractionation. We have also changed the way in which we present the results for the cytoplasm/nucleus enrichment of RNAs based on the new batch of data so that it is clearer that there is a significant enrichment of GAPDH mRNA in the cytoplasm ( Figure 4A). Figure 4D-E, the authors use GAPDH mRNA as a negative control for FOXA1 pull-down. GAPDH mRNA is not an appropriate negative control since it should be in the cytoplasm, and MGAT4EP should be located in the nucleus. The correct negative controls would be other nuclear RNAs that do not bind FOXA1. With the current data, it is possible that all nuclear RNAs bind FOXA1 non-specifically." Comment 6: "For Response: To rule out the possibility that all nuclear RNAs bind to FOXA1 non-specifically, we included two additional nuclear RNAs as negative controls: the U1 for the short nuclear RNAs (<200bps) and the MALAT1 for the long nuclear RNAs (>200bps). 
We found that FOXA1 did not bind to U1 or MALAT1 based on RIP experiments (see figure below). Because MGAT4EP is a long pseudogene RNA, MALAT1 is a more relevant negative control, and thus only the result for MALAT1 was included in the revised manuscript (Figure 4E). Comment 7: "The concept of a sponge/ceRNA effect is controversial, with many publications on this topic but mostly of low-quality data. There is high-quality evidence clearly suggesting that the ceRNA hypothesis is largely invalid (Denzler, Molecular Cell 2014). Moreover, the data from the original ceRNA paper (Poliseno, Nature 2010) has been difficult to reproduce (Kerwin, Replication Study in eLife, 2020). Some researchers suggest that any ceRNA effects are highly nuanced and highly context-specific, and do not occur with many miRNAs at all (Bosson, Molecular Cell, 2014). To address this, the authors should (1) ensure that MGAT4EP does not encode a protein responsible for FOXA1 regulation, and (2) generate a mutant control MGAT4EP that mutates the putative binding site between FOXA1 and MGAT4EP and include this for their pull-down assays in addition to the antisense RNA as a control." Response: Thanks for the valuable comments and suggestions. For clarification, our data suggested that MGAT4EP did not function via a sponge/ceRNA mechanism but rather supported its nuclear localization/function, because for sponge/ceRNA regulation to be effective, the RNA has to be mainly localized in the cytoplasm to be accessible to miRNAs. However, we agreed with the reviewer that it is important to point out the limitations of the ceRNA mechanism/hypothesis, and we have added a description, including the relevant literature, of the limitations and controversial aspects of sponge/ceRNA regulation in the Introduction section. To ensure that MGAT4EP does not encode a protein responsible for FOXA1 regulation, we performed an analysis of publicly available ribosome profiling and mass-spectrometry data. We also performed an in vitro translation assay. Our results indicate that MGAT4EP does not translate a protein and functions as an ncRNA (see details in our response to Comment 4). To identify the regions in the MGAT4EP RNA that were required for its interaction with FOXA1, we generated four serial deletion mutants with deletions of 1-700, 700-1400, 1400-2100 or 2100-2819 bps, respectively. RNA pull-down of the antisense, full-length and serial deletion mutants of MGAT4EP RNA followed by anti-FOXA1 western blotting showed that deletion of 1400-2100 bps of MGAT4EP abolished its interaction with FOXA1 (Fig. 4D), suggesting that this region is critical for the MGAT4EP-FOXA1 interaction. We have added these new results to the revised manuscript (Fig. 4D).
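For readers who wish to reproduce the screen-level off-target comparison described in our response above (number of predicted 1-nt-mismatch off-target sites for hits versus non-hits), a minimal illustrative sketch is given below. The per-sgRNA count lists (counts_hits, counts_nonhits) are hypothetical placeholders standing in for values exported from the Cas-OFFinder output; this is not the exact analysis code used for the manuscript.

# Illustrative sketch: compare predicted off-target burdens between screen hits and non-hits.
# Assumes hypothetical per-sgRNA counts of 1-nt-mismatch off-target sites from Cas-OFFinder.
from scipy.stats import mannwhitneyu, fisher_exact

counts_hits = [0, 0, 1, 0, 2, 0, 1]        # hypothetical: sgRNAs targeting hit genes
counts_nonhits = [0, 1, 0, 0, 0, 3, 1, 0]  # hypothetical: sgRNAs targeting non-hit genes

# Do off-targeting sgRNAs differ in off-target burden between hits and non-hits?
u_stat, p_mwu = mannwhitneyu(counts_hits, counts_nonhits, alternative="two-sided")

# Do hits carry a larger proportion of heavily off-targeting sgRNAs (>=10 predicted sites)?
heavy_hits = sum(c >= 10 for c in counts_hits)
heavy_nonhits = sum(c >= 10 for c in counts_nonhits)
table = [[heavy_hits, len(counts_hits) - heavy_hits],
         [heavy_nonhits, len(counts_nonhits) - heavy_nonhits]]
odds_ratio, p_fisher = fisher_exact(table)

print(f"Mann-Whitney U p = {p_mwu:.3f}, Fisher's exact p = {p_fisher:.3f}")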
6,307
2021-08-23T00:00:00.000
[ "Biology", "Computer Science" ]
UAV Object Tracking Application Based on Patch Color Group Feature on Embedded System : The discriminative object tracking system for unmanned aerial vehicles (UAVs) is widely used in numerous applications. While an ample amount of research has been carried out in this domain, implementing a low computational cost algorithm on a UAV onboard embedded system is still challenging. To address this issue, we propose a low computational complexity discriminative object tracking system for UAVs approach using the patch color group feature (PCGF) framework in this work. The tracking object is separated into several non-overlapping local image patches then the features are extracted into the PCGFs, which consist of the Gaussian mixture model (GMM). The object location is calculated by the similar PCGFs comparison from the previous frame and current frame. The background PCGFs of the object are removed by four directions feature scanning and dynamic threshold comparison, which improve the performance accuracy. In the terms of speed execution, the proposed algorithm accomplished 32.5 frames per second (FPS) on the x64 CPU platform without a GPU accelerator and 17 FPS in Raspberry Pi 4. Therefore, this work could be considered as a good solution for achieving a low computational complexity PCGF algorithm on a UAV onboard embedded system to improve flight times. Introduction Unmanned aerial vehicles (UAVs) have rapidly evolved, and there are lots of examples of UAV analysis applications in various fields, such as transportation engineering systems [1], UAV bridge inspection platforms [2], UAV-based traffic analysis [3], and oil pipeline patrol factory inspection [4], etc. In recent years, several object monitoring strategies have been proposed for discriminative object tracking. The solutions for lowcost computational complexity, accurate object tracking, real-time operation speed, and embedded-system implementation must turn out to be necessary problems. In general, the UAV object tracking algorithms are categorized into the deep learning (DL) method and the generic method. The DL method has been repeatedly used in many works for data compression [5], noise reduction [6], image classification [7], speech recognition [8], disease diagnosis [9], and so on. To extract the important features from input data, the DL model consists of several convolutional layers, activation functions, and pooling layers. The loss function is compared with the DL output error and the ground truth. To reach higher accuracy, the model needs to pre-train before using real cases [10]. In the DL tracking [11][12][13][14][15][16][17][18][19][20][21][22], the CNN [11][12][13]16,22] and RNN [18,19,21] are used to achieve higher accuracy on the discriminative object tracking. The DL model presents a powerful tracking performance. In addition, high-end computational platform, high-performance X86-64 multi-core CPU, GPU accelerator and huge computational complexities are required, which are also powerconsuming. Therefore, it is hard to achieve DL tracking on UAV onboard embedded systems because of its limitations. In the generic method, the image features of each frame are obtained through the filters. These features are further used to acquire the object's location. Recently, the discriminative correlation filter (DCF) method has turned out to be a popular and efficient approach to obtain features for discriminative object tracking [23][24][25][26][27][28][29][30][31][32][33]. 
The objective of DCF is to find a suitable parameter to maximize the features extracted from the object. Because of the cyclic correlation operation, the boundary effect causes periodic expansion in the DCF, which degrades tracking performance [24]. In [25], the regularization condition is applied to suppress the boundary impact and enhance performance to overcome the disadvantage and improve performance. However, when the object moves faster than the variation of local response in the frame, the lack of information and reformation are restricted by the regularization condition. Additionally, more computation increases the complexity, and the decrease in the FPS tends significantly in deformed DCF. The authors suggested the PCGF algorithm, which is a general approach that works well. It can minimize computational complexity and enhance the object tracking control system's efficiency on UAVs. The PCGF is made up of four GMMs [34] derived from the hue, saturation, and value (HSV) color model. The tracking object is divided into numerous non-overlapping patches, which are then converted into PCGFs, to represent attributes. By comparing the PCGFs from previous and current frames, the object's location is calculated. The backdrop feature of the item has been removed from the picture, ensuring that the current PCGFs have no background feature. In addition, the object is not moved very fast, and difference in the pixels from one frame to another is almost similar. The object position in the sequence frames only needs to be searched around the previous position in order to find the features matching. The main contributions of this work are summarized as follows: O An efficient PCGF algorithm for a generic approach is proposed to present the object features and compare the features matching score of the previous frame and the current frame. O We introduced a background subtraction technique for PCGF to eliminate background characteristics and ensure the object features are reserved. O The proposed method has been implemented on an embedded system, in addition, the proposed approach achieved the real-time speed on a CPU platform without a GPU accelerator. The rest of this paper is organized as follows: Section 2 revisits the introduction of the previous works. Section 3 describes the proposed method. Section 4 exhibits experimental results as well as related discussions. Finally, the concluding remarks are drawn in Section 5. Deep Learning-Based Tracking The DL-based method consists of neural network layers, which contain millions of parameters. The DL-based process finds values for every parameter to minimize the error between output and the ground truth. In recent years, the DL-based method has been broadly used for discriminative tracking. The object location is predicted by the features, which are extracted through hidden layers of neural network layers. In [11,12,14,17,18,23], a fully convolutional neural network was used to extract the feature from the input image. The pre-trained VGG Net [7] was implemented as the first layer to improve the object's feature performance [11,12,17,18,21,22]. In [11], two networks, the generic network (GNet) and the specialized network (SNet) were used to produce two heat maps from the input image's area of interest (ROI). The object position was then identified by distracter detection, which was determined by the two heat maps. According to the tracking sequence, Yun et al. [21] presented an action-decision network to anticipate the object's action. 
The video's recurrent neural network (RNN) shows a strong correlation between frames [19,20,22]. The object feature was used to compare the changing temporal features in multiple spatial LSTM cells in [22]. Multi-directional RNNs were used to calculate the spatial confidence map in [19]. Background suppression is evident in the results. The authors of [20] utilized the quicker R-CNN to discover all probable item candidates in the immediate search area. In addition, the three-stage LSTM cascade decided whether the tracker should update its location based on the search region. The coarse-tracker and fine-tracker [12,15,22] were utilized to execute the coarse-to-fine searching method in [15]. The object location was predicted using a combined response map based on the divided parts of the object in [18]. The Siamese network, which learns the similarity response with template and search region, was proposed in [16]. In addition, various methods to the DL-based tracking system included correlation filters (CF) [12,[17][18][19]22]. The accuracy of discriminative tracking can be improved by combining the CF and DL-based methods. In short, DL-based discriminative tracking methods have high performance, but they are not practical on the UAV onboard platform due to the enormous computing required. Generic Method To enhance the object features in the generic approach, a filter is designed to reinforce the discriminative object characteristics. The object location is determined by the filter response. Furthermore, the filter's coefficient updates every frame to detect object position. For object tracking, the discriminative correlation filters (DCF) are worked as the framework. In [23], the CF process the 256 × 256 image for object tracking, and a good result was achieved using a single-core 2.4 GHz CPU platform. In [24], the spatially regularized correlation filters (SRDCF) are proposed to eliminate the effect of the boundary effect. Fu. et al. [25] kept images into background-aware correlation filter (BACF) through different levels to obtain multiple features. Shi et al. [26] proposed an effective visual object-tracking algorithm based on locally adaptive regression kernels and implemented it using two support vector machines. In [27], the software de Reconhecimento de Cor e Contorno Ativo (SRCCA) is presented as target recognition and tracking system that uses the hue, saturation, and value color space for color-based object identification in the active component model. The Kalman filter was used for preliminary tracking in [28], and the findings were improved using a saliency map for local detection. Choi et al. [29] created a visual tracking algorithm for attention-modulated disintegration and integration, which they used to disintegrate and integrate target data. A target item is fragmented into various structural characteristic components, such as size, colors, and forms. The dual regularized correlation filter (DRCF) is presented in [30], which is employed to more effectively mitigate the boundary effect. The time slot-based distillation algorithm (TSD) was introduced by Li et al. [31] to improve the BACF. The log-Gabor filter is utilized to encode texture information during tracking in [32], and boundary distortion may be mitigated via saliency-guided feature selection. In conclusion, the DCF-based tracking system achieves a good mix of speed and accuracy. The described method for UAV tracking achieved real-time operation in [24][25][26][27][28]. 
However, once the target is detected missing, this may affect the tracking in the other frame severely. The accuracy is also affected by object deformation, rapid motion, illumination variation, and background complexity. Figure 1 depicts the suggested method's working procedure schematic flowchart. Feature extraction, feature matching, and background removal are the three portions of the schematic flowchart, respectively. The feature extraction portion retrieves the target object's characteristics from each current frame. Furthermore, the RGB pixel values are transformed to HSV color space and then divided into numerous non-overlapping blocks. Proposed Method In addition, the object patches are segmented, and each patch's value combines with GMM to obtain the hue and saturation (µ, σ) values into four groups. The GMM enhances the feature extraction dynamically, and also the K means++ algorithm is developed to acquire the center point of Gaussian distribution before analyzing image features. In addition, the obtained features reinforce for improving target image object precision as well as object deformation, illumination variation, rapid motion, and visual obstruction. The features of the object are extracted from the first frame to evaluate object characteristics, then block patches segmented for the next operation. Before background subtraction, object features are compared, and the matching score is calculated for the current frame and the previous frame. If similar characteristics were identified, the object information of the current frame is sent into the background removal section; otherwise, the search window is expanded, and the target image object can be attempted to be found again from the input window. The properties of the target object and its background patches were compared to determine whether the patches belonged to the same target object or not. The segmented object image patches are scanned in four directions to eliminate the background from the object window: half-length from up to down and down to up, similarly half-width from left to right and right to left. The target picture object's coordinates, width, height, and patch characteristic values, such as dimensions, colors, and forms, are then changed before proceeding to the next frame assessment. The Function of the Input Window Initially, the object window uses the input window shown in Figure 2 to locate the target item (a). The upper-left side of the input window extracted (Xobj, Yobj) feature and the object window recognized the object's height (Hobj) and width (Wobj). The characteristic values of the target object and its surrounding pixels were retrieved from the input window and object window to decrease computational complexity. The initial frame determines the object's information. Because the target object is moving, the search window is enlarged by 72 pixels on either side of the object window in Figure 2b to identify the target object's current frame. Patch Segmentation The object window is captured using the input window, and 12 pixels are placed around it, as seen in Figure 3. Both the object window and the enlarged region are divided into non-overlapping 6 × 6-pixel patches, which are divided into two groups: the object patch (Po) and the outer backdrop patch (Out) (POB). The height and width of the new window are (Hobj+24) and (Wobj+24), respectively. 
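To make the windowing concrete, the following is a minimal Python sketch (the paper's implementation is in C++ with OpenCV) of how an object window expanded by 12 pixels on each side could be cut into non-overlapping 6 × 6 patches and converted to HSV. The variable names (frame_bgr, x_obj, y_obj, w_obj, h_obj) are placeholders of this sketch, not identifiers from the authors' code.

# Illustrative sketch: expand the object window by 12 px and split it into 6x6 HSV patches.
import cv2

def extract_patches(frame_bgr, x_obj, y_obj, w_obj, h_obj, margin=12, patch=6):
    h_img, w_img = frame_bgr.shape[:2]
    x0 = max(x_obj - margin, 0)
    y0 = max(y_obj - margin, 0)
    x1 = min(x_obj + w_obj + margin, w_img)
    y1 = min(y_obj + h_obj + margin, h_img)
    window = frame_bgr[y0:y1, x0:x1]
    hsv = cv2.cvtColor(window, cv2.COLOR_BGR2HSV)   # only hue and saturation are used later
    patches = []
    for py in range(0, hsv.shape[0] - patch + 1, patch):
        for px in range(0, hsv.shape[1] - patch + 1, patch):
            patches.append(hsv[py:py + patch, px:px + patch, :2])  # keep H and S channels
    return patches  # list of (6, 6, 2) arrays

# usage example: patches = extract_patches(frame, 100, 80, 48, 96)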
To minimize object tracking complexity and computational cost, and to speed up the performance, the RGB color model has converted to HSV color and just hue and saturation were processed for the computation. PCGF Extraction of K means++ and GMM After converting the HSV color gamut and determining the patch color, the next step is the GMM clusters represent several feature values from the patches that are named patch color group feature (PCGF). Afterward, the hue and saturation values of each patch were captured, which requires the initial center value to be defined, in order to analyze the performance. Initial center values were determined by using the K means++ algorithm. The K means method occasionally produced a poor cluster because of excessively random beginning settings. To solve the problem, Arthur et al. [33] increased the distance between each group's original center locations. We utilized K-means++ to obtain four cluster center points (K0, K1, K2, K3) in each patch, allowing us to determine the maximum distance from each group's starting center location. The detailed procedure is as follows: As illustrated in Figure 4, the HSV histogram is transformed first, then one index of the non-zero value is chosen to represent the initial cluster center point (K0) (a). Following the discovery of the original cluster center point K0, the second, third, and fourth cluster points (K1, K2, K3) were discovered. K1 is the result of calculating the distance between the non-zero index and the center point K0. After determining the center values, the closest distance (Dn) between each pixel's hue and saturation value is calculated using Equation (1) for the non-zero index value and the i-th group. Figure 4 depicts the calculated distance, center point, and center values because of the computations (b) (The distance between two locations is denoted by d i ). The D n of each pixel is used to determine the new center point of a group. A higher D n value is more likely to become the new center value K i . The patch pixels are divided into four groups, therefore Equations (1) and (2) must be performed four times to calculate the center values of the four groups. The values of these four groups are applied in the GMM to determine the initial conditions. The GMMs [33] are prevalent in background subtraction, pattern recognition, and performance. The image distribution cannot be described only by using Gaussian distribution, a GMM enables the analysis of data distribution to improve the precision. The GMM consists of multiple Gaussian models and is expressed as follows: where x represents the input vector data, K denotes the total number of single Gaussian models (four are used to represent the characteristic values of a single block), k is the group index value of a single Gaussian model, w k represents the probability of an input belonging to the k-th single Gaussian model, and φ(x|θ k ) denotes the Gaussian distribution probability density of the k-th Gaussian model [θ k = (µ k , σ 2 k )]. In this study, the color blocks of the input image were grouped. The K means++ algorithm is first employed to identify the group center values of a color block, and the GMM is used to categorize each pixel of the color block to a specific group. To reduce the calculation complexity, the optimization equations proposed by Lin et al. [34], as expressed in Equations (4)-(6), are applied. 
Here, α k,t = α w /ω k,t , ω k,t represents the probability value of the k-th single Gaussian model in the t-th frame, µ k,t represents the mean value of the single Gaussian model, σ 2 k,t represents its standard deviation, and α w represents the learning rate. Figure 5b shows the hue value computed using the K means++ method and GMM for one of the patches seen in Figure 5a. There is a total of four Gaussian distributions found, and the characteristic values of the patch's hue value are stated in Equation (7), where m and n are the patch's index values. Both hue and saturation values are required for each patch. In the first frame, just the object window and background window's characteristic values are extracted; in subsequent frames, all the search window's characteristic values are extracted. Background Patch Removal from the Object Window After defining the characteristic values (P m,n ) of every patch, the next step is removing the background patches from the object window. The proposed algorithm extracts patches and finds a threshold window from the background window for every vertical and horizontal patch. The threshold values are calculated by subtracting the background characteristic values of P OB (i.e., thr(m, n) = |P m,n − P m+1,n |), which are depicted in Figure 6a,b. The picture is split and scanned in four directions. As indicated in Figure 6, the half-length height from up to down and down to up was scanned at the same time, as was the half-length height from left to right and right to left Figure 6c,d. Figure 7 illustrates patch acquisition based on the upper half of object characteristics. The upper half part of the object patches is scanned and expressed as the label by the following Equation (8). When the upper half part of the object patch is scanned, (m, n) is the uppermost threshold (thr) coordinate, and k represents the displacement value. The thr value is compared with the characteristics of the upper half of the object patch (H obj ). The Label m,n+k is marked as 1 if the deviation between any characteristic values of the patch and the threshold are greater than the threshold, otherwise it is set to 0. To reduce the computation, when Label m,n+k−1 is determined as object patch, the remaining blocks require assessment that can be directly identified as the object patch. Label m,n+k = 1, thr m,n − P m,n+k > thr m,n Label m,n+k−1 == 1 0, otherwise Figure 8b shows each column pixel after scanning and computing the upper half, with 1 and 0 patches indicated in white and black, respectively. After finishing the upper-and lower-part scanning, the vertical scanning mask is subjected to a "OR" operation, as shown in Figure 8d. As shown in Figure 8, the left and right half of the patches are scanned in the same way, and a "OR" operation is calculated for horizontal mask scanning (Figure 8g). Finally, the entire mask is obtained by performing an "AND" operation on the vertical and horizontal scan results. Figure 8 shows the acquisition outcome of all patches collected by four directional scanning's Figure 8h. The object's characteristic values are updated to the next frame to monitor the object's location. Figure 8a depicts the original frame, whereas Figure 8i depicts the background removal mask in combination with the image generated from the scanning results. Consequently, obtaining the object's upper-left coordinate, width, and height may be used to define the frame. 
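The four-direction scan and the mask combination can be summarized in a short sketch. The version below is a simplified Python/NumPy illustration of the OR/AND mask logic described above; it treats each patch as already reduced to a scalar feature and uses hypothetical per-column and per-row threshold arrays, so it is a sketch of the idea rather than a line-for-line port of the authors' C++ code.

# Illustrative sketch of the four-direction background scan:
# scan each half of the patch grid from the outside in, mark patches whose feature
# deviates from the background threshold, then combine the masks with OR / AND.
import numpy as np

def directional_mask(feat, thr_col, thr_row):
    """feat: (rows, cols) per-patch feature; thr_col/thr_row: hypothetical background thresholds."""
    rows, cols = feat.shape
    vert = np.zeros((rows, cols), dtype=bool)
    horiz = np.zeros((rows, cols), dtype=bool)
    half_r, half_c = rows // 2, cols // 2

    for c in range(cols):
        hit = False
        for r in range(half_r):                      # top-down over the upper half
            hit = hit or abs(feat[r, c] - thr_col[c]) > thr_col[c]
            vert[r, c] = hit                         # once an object patch is found, keep marking
        hit = False
        for r in range(rows - 1, half_r - 1, -1):    # bottom-up over the lower half
            hit = hit or abs(feat[r, c] - thr_col[c]) > thr_col[c]
            vert[r, c] = vert[r, c] or hit           # OR of the two vertical scans

    for r in range(rows):
        hit = False
        for c in range(half_c):                      # left-to-right over the left half
            hit = hit or abs(feat[r, c] - thr_row[r]) > thr_row[r]
            horiz[r, c] = hit
        hit = False
        for c in range(cols - 1, half_c - 1, -1):    # right-to-left over the right half
            hit = hit or abs(feat[r, c] - thr_row[r]) > thr_row[r]
            horiz[r, c] = horiz[r, c] or hit

    return vert & horiz                              # AND of the vertical and horizontal masks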
Feature Matching Based on PCGF After all the characteristic values are defined in the search window, all patch characteristics within the window are compared with the previous frame to determine their similar mask ( Figure 9). Equation (9) is used to find the coordinate which makes the minimal difference between the search window and object window in Equation (10). Specifically, OW SET represents the patch set of the object window in the previous frame, SW SET indicates the search window group, (Q, V) is the searching window coordinates, and (m, n) is the block scan coordinate. After identifying the searching coordinate (MatchPos), the feature matching scores are calculated by the following Equation (11). Specifically, DiffMAX is the maximum difference for color value (set as 255 in this paper), and NS represents the number of object patches defined as an object in the previous frame. If the feature matching score is lower than 40%, then the detection will be invalid, and the searching system requires additional assessment; otherwise, the system enters the background-removal block to update the upper-left coordinate, width, height, and patch characteristics of the object in the current frame. Object Out of Range (Boundary Area) If the feature matching is insufficient (<40%), it means the object is out of the boundary area. The searching system assesses whether the object is out of the boundary area or not. While the object is out of bounds, it may appear again from any edge of the boundary area. Therefore, the searching window switches the boundary area, and its height and width are determined using the object window, as shown in Figure 10. Object Occlusion If the feature matching score is low, but the object is still inside of the window, then the object may be obstructed by other objects illustrated in Figure 11b. The search window then expands to twice the size of the object window to increase the searching range, and the frame maintains its existing specifications to find the object. Figure 11. (a) The searching window extends 72 px from the object when unobscured; (b) object occlusion. The yellow box represents the object label of the last mark which was detected successfully; the blue box represents the size of searching windows increases since the PCGF doesn't detect the object. Experimental Results and Analysis The method is written in C++ and implemented with the OpenCV library to assess the experimental results. Following that, the code is translated to ARM architecture on the x86-64 platform to evaluate the embedded system's speed and correctness. The correctness of several algorithms was assessed in Section 3.2, and the execution efficiency of the algorithms was evaluated in Section 3.3. Measurement Results The multiple video test performance and tracking results for UAVs are compared with the various algorithms. The UAV123 dataset by Mueller et al. [35] is used to perform this experiment. The most difficult patterns are used to recognize the objects. Figure 12a,b depicts the precise tracking of a fast-moving and deforming object. The method is shown in Figure 12c looking for the pictured item that is out of the window. When the object reenters the window, the algorithm correctly selects the item, as seen in Figure 12d. Figure 13 depicts a case in which the tracking item is covered by a tree. #1399 is the final frame that PCGF can follow the object, as seen in Figure 13a. Because of the item covered by the tree in #1400-1446, the matching score is less than 40. 
As illustrated in Figure 13b, the PCGF expands the search window to rediscover the item. The matching score in #1446 is more than 40; as a result, the object label can be re-marked, as illustrated in Figure 13c. Figure 14 shows how the system correctly tracked an item while a human was dressed in the same color as the backdrop (grass color), enhancing the object's resemblance to the background. Figure 15 depicts the algorithm accurately identifying an object that is partially obscured by leaves and crosses the intersection between sunlight and shade, causing partial obstruction and illumination changes. Furthermore, we also compared the result of PCGF with some other experiments. This work makes a fair comparison with some state-of-the-art tracking algorithms, which include three DL algorithms: C2FT [14], PBBAT [17], and ADNET [20], and one generic algorithm: SCT [29]. The five test patterns and the results are shown in Figure 16. All of the algorithms perform well for objects with simple backgrounds in person1, person7, and bike1. However, in more complex scenery, PBBAT [17] and SCT [29] tracked poorly and lost the object location in wakeboard4 #0417 because of the complicated waves. In car18, the car moved rapidly; most algorithms could still frame the target, but with a significant error in the size of the bounding box. Figure 16. Examples of the UAV dataset tracking results. The first, second, third, fourth, and fifth columns show the challenging image sequences. From left to right are person1, person7, wakeboard4, car18, and bike1, respectively. Algorithm Precision Assessment Wu et al. [36] defined the center point location error in Equation (12) as the Euclidean distance between the ground-truth center (x_gt, y_gt) in each frame and the center position (x_rs, y_rs) of the predicted object window; this error is computed for each algorithm. To assess tracking quality more completely, the overlapping rate is calculated by the following Equation (13), where Win_gt represents the ground-truth object frame and Win_rs the tracking object frame; a higher overlapping rate indicates that the tracking result is closer to the ground truth. OverlapRate = (Win_rs ∩ Win_gt)/(Win_rs ∪ Win_gt) (13) After defining the center point location error and overlap rate of each frame, the precision rate is calculated using Equation (14), and the success rate is obtained using Equation (15). The label value FT indicates whether the center point location error and overlap rate of the frame exceed their threshold values; if so, FT is set to 1, otherwise FT is set to 0. SuccessRate(OverlapThr) = ΣFT(OverlapRate, OverlapThr) (15) In terms of precision and success, the proposed approach has been compared to those used in C2FT [14], PBBAT [17], ADNet [20], and SCT [29]. One-pass evaluation (OPE) was used in this experiment to compute the precision over 12 distinct patterns and various threshold levels; the results are shown in Figure 17. The precision is shown on the y-axis, while the threshold value is represented on the x-axis. Except for person18, person20, car18, and wakeboard5, the object center error between the proposed PCGF's predicted results and the ground truth supplied by UAV123 is less than 50 pixels.
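A compact Python sketch of the evaluation metrics above is given below (the paper's code is C++). It computes the center location error, the overlap rate of Equation (13) as an intersection-over-union of axis-aligned boxes, and threshold-swept precision/success values whose area corresponds to the AUC used in the later comparison. Normalizing the rates by the number of frames and the (x, y, w, h) box format are conventions of this sketch, not statements from the paper.

# Illustrative sketch of the tracking metrics: center error, overlap rate (IoU),
# and precision/success rates swept over thresholds. Boxes are (x, y, w, h).
import numpy as np

def center_error(box_rs, box_gt):
    cx_rs, cy_rs = box_rs[0] + box_rs[2] / 2, box_rs[1] + box_rs[3] / 2
    cx_gt, cy_gt = box_gt[0] + box_gt[2] / 2, box_gt[1] + box_gt[3] / 2
    return np.hypot(cx_rs - cx_gt, cy_rs - cy_gt)

def overlap_rate(box_rs, box_gt):          # Equation (13): |A ∩ B| / |A ∪ B|
    x1 = max(box_rs[0], box_gt[0]); y1 = max(box_rs[1], box_gt[1])
    x2 = min(box_rs[0] + box_rs[2], box_gt[0] + box_gt[2])
    y2 = min(box_rs[1] + box_rs[3], box_gt[1] + box_gt[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = box_rs[2] * box_rs[3] + box_gt[2] * box_gt[3] - inter
    return inter / union if union > 0 else 0.0

def precision_success(results, ground_truth, err_thr=20, ov_thrs=np.linspace(0, 1, 21)):
    errs = np.array([center_error(r, g) for r, g in zip(results, ground_truth)])
    ovs = np.array([overlap_rate(r, g) for r, g in zip(results, ground_truth)])
    precision = np.mean(errs <= err_thr)                   # fraction of frames within 20 px
    success = np.array([np.mean(ovs >= t) for t in ov_thrs])
    auc = np.trapz(success, ov_thrs)                       # area under the success curve
    return precision, success, auc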
Images of person20, car18, and wakeboard5 reveal that the precision is not closed to 1 for all the compared algorithms, even threshold set as 50 pixels. It means that the object center error is greater than 50 pixels in some frames because the background is complicated and the filming angle of the UAV changes continuously. Table 1 shows the precision result of 5 algorithms when the threshold is set to 20, which is the same evaluation terms as in [20,29]. For relatively simple patterns such as person1, person7, and person16, significant differences were not detected between the algorithms in their precision. However, more complex patterns such as person16 and person20, car18, and boat3, have observed significant differences. The center position predicted by SCT [29] has a big gap with other algorithms and has the same appearance as that shown in frame 417 of wakeboard4 in Figure 16. Under the average rating, although the PBBAT [17] which is a deep learning method has shown the highest average precision that mentions in the green color in Table 1, the operational function for real-time applications is difficult because of the advanced training procedure and highly complex calculation process. However, the proposed generic method has achieved the second-highest average precision that mentions in the blue color in Table 1, and the proposed generic method can operate at real-time applications on the UAV object tracking application with less calculation process. Figure 18 illustrates the success plot of OPE, which investigated the overlapping rates of the algorithms; x represents the threshold value of the overlapping rate, and y represents the success rate. The result shows that in all patterns there exists an intersection between the PCGF result and the UAV123 ground truth. In person18, car18, and boat7, these algorithms reveal significantly different curves in boat3, boat7, wakeboard2, and wakeboard4; the SCT [29] shows that the success rate doesn't reach 1 even though the threshold set in a small value. The overlapping rate is measured by the areas under the curves (AUCs), which were calculated by using the Equation (16). Total AUCs are displayed in Table 2, and it should be mentioned that the proposed algorithm yielded good results, written in green in the table, which are better than the deep learning-based PBBAT [17] method. For the overall evaluation of the average precision result and the average AUC score with the threshold set to 20, our algorithm reveals a great performance on object tracking. The result indicates that our approach achieved a very near result to PBBAT [17], which is a DL method, and its computational cost is more complex than our proposed approach. Algorithm Speed Assessment The suggested method was tested on the i5-6400 CPU architecture, which has a 14 nm lithography process, four cores and four threads, a maximum processing speed of 3.3 GHz, and a TDP of 65 W. In terms of performance, the PCGF is implemented using C++ and the Open CV library, and it achieves 32.5 FPS on the CPU platform. This work may also be carried out on the Raspberry Pi 4, which has a low-power BCM2711 SoC processor with an ARM-based cortex A72 4 core. On the high-configurational GPU platform, the ADNet [20] Table 3. On the other hand, the proposed PCGF algorithm achieved 32.5 FPS on the CPU platform and 17 FPS when executed on Raspberry Pi 4. 
To compare with the power consumption, the algorithms in [17,20,29] require a GPU accelerator for achieving the proposed, which takes 65 W (GTX 650) extra power consumption compared with the CPU-only platform. In addition, the Raspberry Pi 4 only requires 6.4 W on stress benchmark testing, which saves 97% of the power consumption of desktop computers with a 300 W power supply. It is the better solution for achieving the PCGF on the UAV onboard platform for discriminative tracking. Discussion In terms of precision comparability, the PCGF average result is displayed in Table 1. Achieving 100 percent precision is difficult because the UAV123 [35] dataset has some limitations corresponding to object occlusion, boundary, visibilities, and so on. In addition, for the field which may request higher precision such as defense, the PCGF can further combine with the re-identification or recognition function to reach the higher standard. However, there is still the opportunity to continue further research on the following areas: (1) To reduce the impact of light effects on PCGF, an additional process needs to convert the pixels into HSV color gamut. (2) When the object color and its neighboring area are similar colors, then the object feature value similar characteristics. (3) The proposed technique incorporates several conditional clauses that cause the parallel process to accelerate at a slower rate. (4) When the UAV is close to the object, the number of PCGF increases, affecting the FPS result due to computational costs. Conclusions In conclusion, in the general method, a novel algorithm for the discriminative tracking system has been developed. The target object features have been separated by the PCGFs which represent the object and its location. The object location is predicted from the searching window and calculating the PCGF matching score. Thereby, object pixels and the background pixels distinguished successfully with the strong resistance of object deformation, rotation, distortion. The proposed technique considerably reduced the computational complexity and increased the performance which achieved real-time proceeding speed in the CPU platform without the GPU accelerator. Lastly, we implemented the proposed technique on Raspberry Pi 4 as a discriminative tracking system for UAV applications. Data Availability Statement: Publicly available datasets-UAV123 [35] were analyzed in this study. This data can be found here: https://cemse.kaust.edu.sa/ivul/uav123 (accessed on 8 July 2021). Conflicts of Interest: The authors declare that no conflict of interest.
7,491.4
2021-08-03T00:00:00.000
[ "Computer Science" ]
Zn(II) and Co(II) 3D Coordination Polymers Based on 2-Iodoterephtalic Acid and 1,2-bis(4-pyridyl)ethane: Structures and Sorption Properties Metal-organic frameworks [M2(2-I-bdc)2bpe] (M = Zn(II) (1), Co(II) (2), 2-I-bdc = 2-iodoterephtalic acid, and bpe = 1,2-bis(4-pyridyl)ethane) were prepared and characterized by X-ray diffractometry. Both compounds retain their 3D structure after the removal of guest DMF molecules. Selectivity of sorption of different organic substrates from the gas phase was investigated for both complexes. The sorption ability of MOFs is closely related to the features of supramolecular contacts between organic ligands and guest molecules. These non-covalent interactions can be different. Usually, the main role is played by hydrogen bonding, but reports also appear where other types of supramolecular interactions, in particular, halogen bonding (XB) [39], have a major contribution. These examples are yet rare [40][41][42], but, in our opinion, they demonstrate that this topic can evolve further. Results and Discussion The structures of 1 and 2 are based on similar building blocks. In both cases, there form dimeric paddlewheel-type {M2(OOCR)4(bpe)2} units (Figure 1), very common for carboxylate complexes. The M-O bond lengths in 1 and 2 are 2.029-2.051 and 2.013-2.036 Å, respectively; the M-N bonds are 2.019-2.030 and 2.044 Å, respectively. The iodine atoms of the 2-I-bdc ligands are disordered over three positions in both structures (the occupancies are 0.444:0.307:0.249 and 0.641:0.160:0.199, respectively). In structure 1, one of the two arene rings of the bpe ligand is disordered as well (two positions with 0.68:0.32 occupancy). The {M2(I-bdc)4} secondary building units are further connected via bpe ligands to a cuboid framework with rhombic-rod pores. The previously reported non-iodinated congener, [Zn2(bdc)2(bpe)] [63], reveals a tetragonal structure with bpe disordered over eight positions due to its proximity to a special position (4/mmm). Compound 1 comprises more ordered bpe (two crystallographically independent ligands) and two independent I-bdc ligands (Figure 2), thereby exhibiting a less symmetrical structure of ca. 4-fold increased cell volume as compared to [Zn2(bdc)2(bpe)]. For 1, the non-equivalence of the bpe and I-bdc ligands manifests in a different relative arrangement of the aromatic rings and (in the case of I-bdc) in a different disorder pattern of the I atoms. The structure of 2 is more symmetrical (namely, it has fewer translational symmetry elements per the same fragment of the structure), comprising only one independent I-bdc and half of a bpe ligand (Figure 3).
In the structures of 1 and 2, the frameworks show two-fold interpenetration (Figure 4), which reduces the volume of voids accessible for the inclusion of solvate molecules. The potential volume is estimated to be 22% and 25% of the total volume of structures 1 and 2, respectively, although the actual values are higher due to the unaccounted volume taken by the disordered atoms. The topology of MOFs 1 and 2 is shown in Figure 5. The experimental PXRD pattern for 1 resembles that for the sub-structure akin to 2 (Figure S1); compound 1 retains its structure after the removal of guest DMF molecules (see Experimental for details). The notable difference between the experimental and simulated patterns of 1 may arise from a slight change of the structure upon partial loss of solvate molecules during sample preparation and/or from the difference in temperature between the single-crystal (150 K) and powder (298 K) experiments. To clarify this, we carried out PXRD experiments at 150 K for an as-synthesized sample under a small amount of the mother liquor; the data were in better agreement with the simulated pattern of the sub-structure. A few superstructural reflections were observed; however, they did not correspond to super-structure 1. The presence of sub- and super-structures for this family of MOFs seemed to be common.
For instance, related MOFs formulated as [Zn2(bdc)2(dabco)] (dabco is 1,4-diazabicyclo[2.2.2]octane) revealed a variety of cuboid structures of the same topology that differ from each other mainly by the spatial geometry of the {Zn(bdc)} layers [62]. The relatively poor quality of the PXRD pattern of 1 did not allow us to evaluate the superstructural motif. The PXRD for a sample of 1 dried naturally, measured at 150 K (Figure S3, SI), differed slightly from that with the mother liquor, implying minor structural transformations upon partial loss of the solvate. It became similar to the room-temperature patterns for as-synthesized and activated samples (Figure S1). The same trend was observed for the PXRD patterns of sample 2 (Figures S2 and S4). Upon activation of sample 2, the structure changes more noticeably, which was reflected in the appearance of strong superstructural reflections. Sorption selectivity of 1 and 2 towards different organic substrates was examined using NMR (see Supplementary Information for NMR spectra). Results are given in Table 1. Both 1 and 2 absorbed 1,2-dichloroethane from its mixture with benzene more readily than MOFs of the dabco family. Although the difference was not large, these results are rather inspiring. The best selectivity was demonstrated by 1 for benzene/cyclohexane mixtures; it exceeded that of [Zn2(bdc)2dabco]. Interestingly, the selectivity in the 1-butanol/1-bromobutane pair was reversed for the bpe and dabco series; the reasons for this effect are unclear. Materials and Methods All reagents were obtained from commercial sources and used as purchased. Solvents were purified according to standard procedures. 2-Iodoterephtalic acid was prepared according to the method reported earlier [61] and identified by its 1H and 13C NMR spectra. Synthesis of 1. 74.5 mg (0.25 mmol) of Zn(NO3)2·6H2O, 73 mg (0.25 mmol) of 2-I-bdc, 23 mg (0.125 mmol) of bpe, and 8 mL of anhydrous DMF were placed into a glass ampoule, which was sealed, kept in an ultrasonic bath for 10 min and then kept at 120 °C for 48 h with slow cooling to room temperature, forming colorless crystals of 1 in 79% yield. Sorption of organic substrates. The method was identical to the one described by us earlier [62]. Prior to the sorption experiments, samples of 1 or 2 were kept in an excess of acetone for 48 h, followed by drying in vacuo (4 h, 60 °C) in order to eliminate guest DMF molecules. After that, the MOF sample (50 mg) was placed into an open smaller vial, which was then placed into a closed bigger vial containing a mixture of organic substrates in a 1:1 molar ratio, so that the liquid phase level was below the edges of the smaller vial. In this way, the MOF sample was allowed to absorb organic substrates from the gas phase. After 48 h, the MOF sample was placed into a DMF:d6-DMSO mixture (1:1) and kept for 48 h again in order to desorb the substrates from the pores. The organic solvents and the ratio between the extracted organic substrates were established by comparison of the relevant 1H NMR intensities (see SI). X-ray Diffractometry Data sets for single crystals of 1 and 2 were obtained at 150 K on a Bruker D8 Venture diffractometer (Bruker, Billerica, MA, USA) with a CMOS PHOTON III detector and an IµS 3.0 source (mirror optics, λ(MoKα) = 0.71073 Å). Absorption corrections were applied with the use of the SADABS program. The crystal structures were solved using SHELXT [64] and refined using SHELXL [65] with the OLEX2 GUI [66].
Atomic displacement parameters for non-hydrogen atoms were refined anisotropically. The available volume for solvate molecules is estimated using the OLEX2 Solvent Mask procedure to be a minimum of 214 Å³ and 240 Å³ per [M2(I-bdc)2(bpe)] formula unit for 1 and 2, respectively; the actual space is larger due to the disorder of the iodine atoms. Details of the XRD experiments are given in the SI (Table S1). CCDC 2142027-2142028 contain the supplementary crystallographic data for this paper. Powder X-ray Diffractometry (PXRD) Powder XRD data for the compounds were collected at 150 K on a Bruker D8 Venture diffractometer (Bruker, Billerica, MA, USA) with a CMOS PHOTON III detector and an IµS 3.0 microfocus source (CuKα radiation, collimating Montel mirrors). Powder samples were mounted on a nylon loop with a small amount of epoxy resin [67,68]. By ϕ-scanning (360°), Debye diffraction patterns with continuous diffraction arcs were measured. To diminish the effect of preferred orientations, five scans were made at different goniometer positions for ω from -240° to 0°. The external standard (Si) correction and integration were performed using the Dioptas program [69]. At 298 K, PXRD analysis was performed on a Shimadzu XRD-7000 diffractometer (Shimadzu Scientific Instruments Incorporated, Kyoto, Japan) (CuKα radiation, Ni filter, linear OneSight detector, 5-50° 2θ range, 0.0143° 2θ step, 2 s per step). NMR Spectroscopy All NMR experiments were performed on a Bruker Avance III 500 MHz spectrometer (Bruker, Billerica, MA, USA) at room temperature (25 °C). Conclusions We prepared two new metal-organic frameworks based on M(II), 2-iodoterephtalic acid, and bpe linkers. Both complexes reveal interpenetrating 3D structures. The Zn-based complex 1 features a relatively high gas-phase sorption selectivity for the benzene:cyclohexane mixture. Despite the overall structural similarity, their adsorption selectivity towards various organic substrates is different, demonstrating that even minor changes in structure can strongly affect these characteristics. Experiments aimed at the preparation of other M(II)-based MOFs with the same ligand combination, as well as with other iodine-substituted carboxylates, are underway in our group.
2,783.2
2022-02-01T00:00:00.000
[ "Chemistry", "Materials Science" ]
Dynamic analysis and optimal control of stochastic information cross-dissemination and variation model with random parametric perturbations Information dissemination has a significant impact on social development. This paper considers that there are many stochastic factors in the social system, which will result in the phenomena of information cross-dissemination and variation. The dual-system stochastic susceptible-infectious-mutant-recovered model of information cross-dissemination and variation is derived from this problem. Afterward, the existence of the global positive solution is demonstrated, sufficient conditions for the disappearance of information and its stationary distribution are calculated, and the optimal control strategy for the stochastic model is proposed. The numerical simulation supports the results of the theoretical analysis and is compared to the parameter variation of the deterministic model. The results demonstrate that cross-dissemination of information can result in information variation and diffusion. Meanwhile, white noise has a positive effect on information dissemination, which can be improved by adjusting the perturbation parameters. Reviewer #3: 1. Response to comment: (In abstract the author claims that white noise has a positive effect on information dissemination, which can be improved by adjusting the perturbation parameters.They may provide the evidence for this either theoretically or numerically.In the current version I did not see any evidence for this claim.) Response: Thank you very much for your comments!Your suggestions are of great help to the improvement of the paper.In the original manuscript, our expression is not comprehensive and careful enough.We would like to make a further explanation on this point.According to your request, we have further explained how white noise effect on information dissemination in numerical simulations in the revised manuscript (Page 20, line 413 -420).The details are as follows: "In Figure 5 and Figure 7, the fluctuating lines represent the population density changes after adding random disturbances.And the stable lines represent the population density changes without adding random disturbances.It can be seen that the population densities with added random perturbations are higher than that without added random perturbations.From Figures 5 (c "In social systems multiple pieces of information sometimes coexist.The phenomenon of coexistence and intersection in the process of multiple information dissemination is similar to the spread of viruses in biology.For example, when the SARS-CoV-2 virus spread in 2020, the presence of its original strain did not cause the extinction of other strains in the body.Instead, there were multiple strains coexisting, and even new mutated strains were produced through cross transmission, such as Omicron (B.1.1.529).The phenomenon of coexisting and producing mutated strains of this virus can be analogized and applied to the spread of multiple information.Multiple similar information may have coexisting relationships, and after prolonged cross propagation, the content expressed by each information may deviate from the original information, similar to the phenomenon of multiple viruses coexisting and producing mutated viruses.This paper proposes a stochastic susceptible-infectious-mutantrecovered model that considers information cross-dissemination and variation, and then demonstrates the existence of global positive solutions.After calculating the sufficient conditions for information 
disappearance and information stationary distribution, the appropriate parameters are selected as control variables. The rationality of the proposed method is finally validated through numerical simulation.

5. Response to comment: (From page 7 onwards there are too many mathematical equations. The author may delete the unnecessary mathematical equations.) Response: Thank you very much for the attentive and earnest comments! Your suggestions will help improve the readability of the manuscript. Because we want the article to reflect the work of this manuscript as fully as possible, the necessary steps were retained in the main text. Your reminder will help us adopt more concise proofs in future research, and we will continue to study the papers you have provided and learn from the proof methods in these excellent papers. "Because the range of the parameters has not been explicitly given in previous studies, the values of the parameters in the model can be chosen according to the conditions given by Theorem 3.1 and Theorem 4.1."

8. Response to comment: (Which parameter is more sensitive and why in the numerical simulations section. The author must write some sentences about this.) Response: Thanks very much for your comments. We regret that the omission of the necessary explanation of parameter sensitivity in the original manuscript gave you an unsatisfactory reading experience. According to your comments and suggestions, we have added some sentences explaining parameter sensitivity in the revised manuscript (Page 20, lines 421-430). The details are as follows: "Next, in order to verify the effectiveness of the optimal control strategy, we observe the change in the densities of !() and "() when the optimal control strategy is adopted. Here, optimal control is applied to the control variables !!, !", "", "!, while the other parameters remain unchanged. Figures 9 and 10 confirm how the densities of !() and "() change over time when ( ( = 1~8) = 0.001 and ( ( = 1~8) = 0.0001 under the constant control measure and under optimal control. From Figures 9 and 10, it can be seen that adopting optimal control for the control parameters !!, !", "", "! can further enhance information dissemination. That is to say, the cross-contact rate is the parameter most sensitive for the dissemination of information."

9. Response to comment: (Author may look for some punctuation, typos and editing issues.) Response: Thanks very much for your comments. They encourage us to be more careful and to improve our scientific writing. According to your request, we have checked all punctuation, typos and editing issues once again and corrected them in the revised version. As these minor errors are numerous, we have not marked each of them with the corresponding page and line numbers.

(3) The information dissemination may be effectively facilitated by controlling the perturbation parameters; unlike earlier research, the optimal control strategy provided in this paper is based on the optimal values calculated for the control variables. To sum up, for positive information it is necessary to give full play to the activity of the social system itself and to introduce a large number of stochastic components into the social system to improve information dissemination. For negative information, on the other hand, it is crucial to eliminate the uncertain factors of the external environment and reduce the impact of uncontrollable factors on the social system in order to inhibit its dissemination. In future studies, the role of stochastic environmental factors in information dissemination in social systems will be further investigated, and a v process-driven information dissemination model will be developed." Special thanks to you for your good comments!

"...(c)-(d) and Figure 7 (c)-(d), the white noise disturbance enhances the population densities of the information dissemination groups ! and ". From this, it can be seen that white noise disturbance has a positive effect on information dissemination."

2. Response to comment: (The introduction should also make a compelling case for why the study is useful along with a clear statement of its novelty or originality by providing relevant information and providing answers to basic questions such as: i. What is already known in the literature? ii. What was done and how it was done?) Response: Thank you very much for your incisive comments and thorough reminders. Your comments are extremely important for improving the practical application of the model established in this paper. According to your comments and suggestions, we have added a clear statement of its novelty and originality in the revised manuscript (Page 3, line 108 - Page 4, line 124). The details are as follows:

3. Response to comment: (The author may also add some recent works related to the current study in the introduction part. They may add the following, but this is not mandatory. https://doi.org/10.1038/s41598-023-41861-4 https://doi.org/10.1063/1.5016680 https://doi.org/10.1016/j.aej.2023.01.027 doi: 10.3934/math.2023210) Response: Thanks for your comments and suggestions. We have downloaded and read these references. They are closely related to the research of this paper, they deepen our understanding, and they help us understand the application of stochastic differential equations more clearly. Therefore, we have added references to these useful recent papers in the revised manuscript (Page 2, lines 60-69). The details are as follows: "Chu et al. (2023) [30] constructed a malnutrition model with random perturbations and crossover effects, and analysed the deterministic-stochastic model using fractional differential equations. Rashid et al. (2023) [31] constructed a stochastic model of the co-infection of fractional pneumonia and typhoid fever disease with cost-effective techniques and crossover effects. Ali et al. (2023) [32] gave the dynamics analysis and simulations of a stochastic COVID-19 epidemic model using the Legendre spectral collocation method. In addition, Khan et al. (2018) [33] studied the application of the Legendre spectral-collocation method to delay differential and stochastic delay differential equations."

4. Response to comment: (Even though all the figures are available at the end, there is no figure in the text and only the captions of the figures are there; please see from page 3 onwards.) Response: Thank you very much for your careful reading, helpful comments and constructive suggestions, which have significantly improved the presentation of our manuscript. In the original manuscript we did not put the figures in the text, and we are very sorry for the inconvenience this caused to your reading. According to your comments and suggestions, we have inserted all the figures into the main text in the revised manuscript (Figures 1 to 10).

6. Response to comment: (Improve the quality of the figures (from figure 4 and onwards).) Response: Thank you very much for your incisive comments and thorough reminders. We are very sorry that our image resolution was indeed very low, so that the displayed information was not clear enough. According to your comments and suggestions, we have further improved the resolution of all the figures in the revised manuscript (Figures 1 to 10).

7. Response to comment: (In the numerical simulation section, what are the criteria to choose the values of the parameters involved in model equation (1) and equation (5)?) Response: Thank you very much for the attentive and earnest comments! We regret that the omission of the necessary instructions in the original manuscript disrupted the logical narration and gave you an unsatisfactory reading experience. According to your comments and suggestions, we have added instructions on the criteria for choosing the values of the parameters involved in model equation (1) and equation (5) in the revised manuscript (Page 18, line 381 - Page 19, line 384). The details are as follows:

10. Response to comment: (The conclusion section is too long. In general the conclusion consists of what is claimed to be achieved.) Response: Thanks for your sincere comments and reminders, which are indeed to the point. We agree that the conclusion of this article is indeed too long. According to your comments and suggestions, we have reduced the textual description in the conclusion section and highlighted the key points in the revised manuscript (Page 21, lines 442-460). The details are as follows: "This study investigates the influence of multi-population information cross-dissemination, mutation, and white noise disturbance on information dissemination, and develops the corresponding stochastic model. The following results are achieved through the examination in this paper: (1) White noise disturbance can facilitate information dissemination, and stochastic environmental factors play a positive role in information dissemination. (2) As the disturbance intensity increases, the stochasticity of the model gradually strengthens, and the fluctuation of the information dissemination trend becomes more apparent.
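The responses above repeatedly refer to white-noise disturbance intensities and optimal control of an information-dissemination model whose equations are not reproduced in this excerpt. Purely as an illustrative sketch of how such disturbance intensities are typically explored numerically, the Euler–Maruyama loop below simulates a toy one-dimensional stochastic spread equation; the equation, the parameter names (beta, delta, sigma) and all values are assumptions, not the manuscript's actual model.

```python
import numpy as np

def simulate_info_sde(beta=0.4, delta=0.1, sigma=0.05, x0=0.01,
                      T=100.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a toy stochastic information-dissemination
    equation dX = (beta*X*(1-X) - delta*X) dt + sigma*X dW.
    The model and parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = beta * x[k] * (1.0 - x[k]) - delta * x[k]   # logistic spread minus forgetting
        diffusion = sigma * x[k]                            # multiplicative white noise
        x[k + 1] = max(x[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(), 0.0)
    return x

# Larger sigma -> stronger fluctuations of the disseminating population,
# mirroring the qualitative effect of increased disturbance intensity.
paths = {s: simulate_info_sde(sigma=s) for s in (0.001, 0.05, 0.2)}
```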
2,601.2
2024-05-23T00:00:00.000
[ "Mathematics", "Economics" ]
Interplay of negative electronic compressibility and capacitance enhancement in lightly-doped metal oxide Bi0.95La0.05FeO3 by quantum capacitance model Light-sensitive capacitance variation of Bi0.95La0.05FeO3 (BLFO) ceramics has been studied under violet to UV irradiation. A reversible capacitance enhancement of up to 21% under 405 nm violet laser irradiation has been observed, suggesting a possible degree of freedom to dynamically control the capacitance of high-dielectric materials for light-sensitive capacitance applications. By using ultraviolet photoemission spectroscopy (UPS), we show here that exposure of BLFO surfaces to UV light induces a counterintuitive shift of the O2p valence state to lower binding energy of up to 243 meV, which is a direct signature of negative electronic compressibility (NEC). A decrease of the BLFO electrical resistance agrees strongly with the UPS data, suggesting the creation of a thin conductive layer on the insulating bulk under light irradiation. By exploiting the quantum capacitance model, we find that the negative quantum capacitance due to this NEC effect plays an important role in this capacitance enhancement. Bismuth Ferrite (BiFeO 3 ) is a multiferroic oxide material which has been extensively studied due to its ability to simultaneously exhibit both magnetic and strong ferroelectric properties at room temperature 1,2. As such, BiFeO 3 has recently drawn much interest in potential applications spanning spintronics, magnetoelectric sensors and photovoltaic devices [3][4][5]. Several studies also attempt to enhance the dielectric constant of BiFeO 3 , which could effectively improve its ferroelectricity 6. In fact, pure BiFeO 3 is difficult to synthesize, and its dielectric constant was reported to be only 50-100 at 10 kHz 7,8. Slightly modified BiFeO 3 ceramics have been reported featuring both a giant dielectric constant (>10 4 at room temperature) and a dissipation factor low enough 9,10 to be suitable for magnetodielectric applications 11. Recently, efforts to improve the dielectric behavior of BiFeO 3 have been reported, such as varying preparation methods 12 and dopants 13. Such extrinsic dielectric constant enhancement can be controlled by lattice distortions, particle sizes, domains, or impurities 12,14. In addition to extrinsic dielectric constant enhancement, the capacitance of oxide materials can intrinsically be improved by tuning carrier densities (i.e., n-type doping or an applied gate electric field). Such dielectric tunabilities have potential applications in various microwave devices, such as phase shifters and varactors 15. Recently, a capacitance enhancement at the LaAlO 3 /SrTiO 3 interface in excess of 40% was found to originate from negative electronic compressibility (NEC) at low electron density (n) 16, enabled by the accumulation of all mobile electrons in the interfacial region, which makes quantum conduction, and therefore quantum capacitance, the dominant model [16][17][18]. A negative thermodynamic density of states (dμ/dn < 0), where μ is the chemical potential, has been observed in several 2D materials and interfaces 16,19, carbon nanotubes 20, and bulk materials 21. Alternatively, carrier densities on metal oxide surfaces can be controlled by the creation of oxygen vacancy states induced by light irradiation [22][23][24][25]. Recently, a capacitance enhancement induced by surface charge accumulation has been reported on CaCu 3 Ti 4 O 12 26. This is similar to the case of applying an electric field (i.e.,
introduction of the quantum conductive interfaces) which might indicate the analogous microscopic origin. In this work, we observe a striking capacitance enhancement in lightly-doped metal oxide Bi 0.95 La 0.05 FeO 3 (BLFO) under 405 nm violet laser irradiation. By using ultraviolet photoemission spectroscopy (UPS) and transport measurements, a signature of NEC has been revealed. By light irradiation, the experimentally-observed changes indicate that the quantum capacitance model plays a major role, which therefore supports claims for a strong interplay between NEC and capacitance enhancement. These findings are critical in understanding the fundamental nature of such system as well as establishing a new synthetic route to light-sensitive capacitive devices. Methods Sample preparation and characterisation. Our BLFO polycrystals were prepared by a simple co-precipitation method 9 . The dried precursors were calcined in air at 600 °C for 3 h. Sample powders were pressed into pellets with 1 cm diameter and sintered at 800 °C in air for 3 h. A small decrease of grain size (compared to the pure BiFeO 3 ) and structural distortion were revealed in BLFO samples by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The remarkable enhancement of BLFO capacitance might solely be affected by the reduction of electrical conductivity and leakage current 27 . We chose BLFO because of its high initial capacitance, for example, its dielectric constant was more than three times that of pure BiFeO 3 ceramics prepared by the same method 9,14,28 . Details of sample preparation and characterisation are shown in Supplementary Information. Capacitance measurement. Instead of a metal top electrode, our capacitor was fabricated with transparent conductive indium tin oxide (ITO) allowing light irradiation on this side. The BLFO sample was mechanically compressed against ITO and metal electrode as shown in Fig. 1(a). The capacitance measurements were performed using a standard impedance analyzer (Agilent: model 4294A) with alternating voltage output (V ac = 0.5 V) and frequency ranging from 1 kHz to 1 MHz. A 405 nm laser corresponding to 3.06 eV photon energy (with fixed intensity of 1.8 W.cm −2 and 3 × 3 mm 2 beamsize) was used throughout the experiment. This photon energy is smaller than the ITO optical band gap and work function of most materials (≈4eV), hence, competing effects occurring upon light irradiation such as light absorption and photoelectron ejection would not be expected 29 . Ultraviolet photoemission spectroscopy. To understand the microscopic mechanism that drives this light-sensitive behavior in BLFO, the electronic structure of the BLFO sample was measured by ultraviolet photoemission spectroscopy (UPS) using Scienta R4000 electron analyzer located at BL 10.0.1 of the Advanced Light Source (USA) and BL 3.2a of the Synchrotron Light Research Institute (SLRI), Thailand. The measurements were performed at room temperature with base pressure better than 5 × 10 −8 mbar. Photon energy was set to be 60 eV with 0.3 × 0.1 mm 2 beam size. Results and discussion The BLFO capacitance measured at f = 2 kHz as a function of light irradiation is shown in Fig. 1(b). The initial capacitance measured in this setup is found to be around 200 pF. After irradiation for 240 s, the capacitance increased gradually and then became saturated at 236 pF. After turning off the irradiation, its capacitance decayed slowly and then reached the value close to the initial value. 
The frequency dependent measurements of capacitance and loss tangent before and after irradiation are shown in Fig. 1(c,d). The dielectric behaviour can be described by the Debye type equation including electrode and grain-boundary effects 30 . Resistance measurement under light irradiation was performed on BLFO surfaces by preparing 1 mm-wide surface in between the gold electrodes ( Fig. 1(e)). After turning on the laser for 240 s, BLFO resistance decreased by 8%. Similar to the capacitance measurement, its resistance recovers close to the initial value after a 240 s absence of irradiation ( Fig. 1(f)). The features of the changes in capacitance and resistance suggest the existence of at least two effects occurring during the on-off process, including photogeneration of charge carriers (photoconductivity) and creation of oxygen vacancy. When light is on, photoconductivity is known to contribute to the changes in capacitance and resistance instantly and dynamically. It was found that both capacitance and resistance change quickly and immediately when turning off the light (at time ≈240 s of Fig. 1(b,f)) which could be described by the photogeneration of charge carriers. However, since the capacitance measured here does not instantly recover back to its original value when the irradiation is off, such changes are attributed to the creation of a thin conductive layer upon the insulating bulk referred to as a quantum-confined electron gas which is related to the creation of oxygen vacancies induced by light irradiation 26,31,32 . The increases of capacitance and loss tangent are shown in Fig. 1(g,h). We found that the capacitance enhancement can be measured as high as 21% at frequency around 2 kHz after 240s of irradiation. The capacitance enhancement lineshape is nicely-fitted with incorporating of a reduction of surface resistivity (due to photogeneration of charge carrier) described by Maxwell-Wagner model 33 (see Supplementary Information) and NEC effect described below. The UPS spectrum of the fresh sample was measured immediately (i. e. 0 min of irradiation, the black spectrum in Fig. 2(a)). It was found that the O 2p state located at a binding energy around 6-8 eV was consistent with other metal oxides 25,26,34 indicating a good electrical contact between our sample and a sample holder. A counterintuitive shift of the O 2p state to lower binding energy was initially observed by the first doping condition (5 min of UV light exposure) as illustrated by the red spectrum in Fig. 2(a). The UPS spectrum after zeroth order light irradiation, i. e., light with all frequencies, clearly indicates a significant shift of around 1 eV to lower binding energy (the blue spectrum in Fig. 2(a)). Moreover, an intergap peak emerged at around 4 eV binding energy which can be assigned to the oxygen vacancy state (V O ) 23,26 . In contrast, the C 1s state was also measured immediately after each UV irradiation (hν = 500 eV) which clearly indicates no binding energy shift of C 1s state (inset of Fig. 2(a)) confirming the distinctive character of BLFO sample. By using standard Gaussian fitting 35,36 , a continuous shift of O 2p state as a function of UV dosing was found to reach the value as high as 243 meV at a maximum UV dosing of 180 J.cm −2 as summarized in Fig. 2 (b) (see Supplementary Information). Capacitance enhancement induced by light irradiation has been studied in several metal oxides 26,37 . 
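The O 2p binding-energy shifts quoted above were obtained by standard Gaussian fitting of the UPS spectra (refs. 35,36). A minimal sketch of such a peak fit is given below, assuming the spectra are available as plain NumPy binding-energy/intensity arrays; the function names, the fitting window and the single-Gaussian-plus-constant-background form are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, amp, center, sigma, offset):
    # Single Gaussian peak on a constant background.
    return amp * np.exp(-0.5 * ((E - center) / sigma) ** 2) + offset

def o2p_center(binding_energy, intensity, window=(5.0, 9.0)):
    """Fit the O 2p region (assumed here to lie near 6-8 eV) and return the peak center."""
    m = (binding_energy >= window[0]) & (binding_energy <= window[1])
    E, I = binding_energy[m], intensity[m]
    p0 = [I.max() - I.min(), E[np.argmax(I)], 0.8, I.min()]
    popt, _ = curve_fit(gaussian, E, I, p0=p0)
    return popt[1]

# Shift relative to the fresh (0 min) spectrum, in meV:
# shift_meV = 1000.0 * (o2p_center(E, I_fresh) - o2p_center(E, I_dosed))
```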
To explain this capacitance enhancement, possible scenarios such as the filling of the material's mid-gap states 38, an enhancement of the total charge carrier density 39, and the creation of a two-dimensional electron gas at the surface 26,40 as a result of photo-illumination have been proposed. Considering these scenarios together with our observed UPS spectra, we introduce the quantum capacitance (C q ) model to explain the capacitance enhancement in BLFO. This quantum capacitance model is a consequence of the Pauli principle, which requires extra energy for filling a quantum well with electrons accumulated near the surface 18. In our case, the C q formed on the irradiated surfaces is manifested as a capacitor in series with the geometric capacitance (C geo ) between the two electrical plates 41. The total capacitance (C tot ) can then be calculated from 1/C tot = 1/C geo + 1/C q , where C geo = εA/d depends solely on the geometry 16,42. C q can be derived from the electron-electron interaction between layers (i.e., C q = C kin + C ex-corr ), which represents the capacitances due to kinetic and exchange-correlation energies, respectively. Notably, C q can be expressed in terms of the thermodynamic density of states, C q = Ae 2 (dn/dμ) 16, which strongly indicates that C q can be either positive or negative depending on the sign of dn/dμ 16,43. From the above equation, C tot can be increased only in the case of negative C q , which means dμ/dn < 0, whereby increasing the electron density leads to a decrease of the chemical potential. In view of the increase of C tot as a function of light irradiation (Fig. 1b), the absolute value of the measured C q has been plotted as circle symbols in Fig. 3(a). Note that similar values of C q estimated from the UPS spectra and from the impedance measurement are shown in the inset of Fig. 3(a). According to the available information on both NEC and quantum capacitance, the two-dimensional electron density (n 2D , corresponding to n) created upon UV light exposure can be estimated following ref. 16 from the measured chemical-potential shift μ = eV, where V is the shift of the O 2p state, and the light dose D (see Supplementary Information). As shown in Fig. 3(b), the calculated n 2D increases as a function of light dosing, reaching a value of 0.95 × 10 10 cm −2 at 180 J.cm −2 . This exhibits a similar trend to, but is three orders of magnitude smaller than, a previous report on a light-irradiated surface of bulk SrTiO 3 (the reproduced data are shown in the inset of Fig. 3(b) with permission from ref. 24). Regarding the expression for C q , dn/dμ (or dμ/dn) is found to be the crucial parameter that offers a quantitative capacitance calculation. As shown in Fig. 3(c), the calculated dμ/dn becomes increasingly negative, reaching 43 × 10 −12 meV cm 2 in magnitude at the maximum n 2D of 0.95 × 10 10 cm −2 . This value is about two times smaller than that associated with the previously observed 40% capacitance enhancement at LaAlO 3 /SrTiO 3 interfaces 16. A plot of the negative chemical potential shift versus n 2D is shown in Fig. 3(d). Note that the circle symbols are the measured data taken from Fig. 2(b). This negative value can be described by the random phase approximation (RPA) for exchange and correlation interactions, where negative compressibility (up to 600 meV) can occur in a two-dimensional electron system 19,44. Overall, we note that our observed NEC and the capacitance enhancement are generally described by the creation of a two-dimensional electron layer on the BLFO ceramic 16,45 induced by light irradiation.
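A minimal numeric sketch of the series quantum-capacitance picture described above is given below: it evaluates C_tot = (1/C_geo + 1/C_q)^-1 with C_q = Ae^2(dn/dμ) and shows how a negative dn/dμ (NEC) pushes C_tot above C_geo. The geometric capacitance, area and dn/dμ values are round illustrative numbers, not the paper's measured parameters.

```python
E = 1.602e-19  # elementary charge, C

def quantum_capacitance(area_m2, dn_dmu):
    # C_q = A e^2 dn/dmu, with dn/dmu in carriers per (J m^2)
    return area_m2 * E**2 * dn_dmu

def total_capacitance(c_geo, c_q):
    # Series combination: 1/C_tot = 1/C_geo + 1/C_q
    return 1.0 / (1.0 / c_geo + 1.0 / c_q)

c_geo = 200e-12   # ~200 pF, the order of the dark capacitance reported above
area = 9e-6       # 3 mm x 3 mm irradiated spot (illustrative)

for dn_dmu in (-4e33, -8e33, +4e33):   # negative values model NEC (illustrative magnitudes)
    c_q = quantum_capacitance(area, dn_dmu)
    print(f"dn/dmu = {dn_dmu:+.0e}  C_q = {c_q:+.2e} F  C_tot = {total_capacitance(c_geo, c_q) * 1e12:.0f} pF")
```

With |C_q| a few times larger than C_geo, the negative branch yields a C_tot above C_geo (capacitance enhancement), while the same magnitude with a positive sign reduces it.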
3,157
2020-03-20T00:00:00.000
[ "Materials Science", "Physics" ]
Comparative Studies on Solubility and Thermo Dynamics Properties of Natural Rubber Filled with CB/CPKS and CB/APKS Fillers In this research, comparative studies on the solubility and thermodynamic properties of natural rubber vulcanizates filled with blends of activated palm kernel shell and carbonized palm kernel shell have been carried out. Palm Kernel Shell (PKS) was locally sourced, washed and sun dried to remove accompanying dirt and moisture. The PKS was then pulverized to particle size, carbonized at 600°C for one hour (1 hr) using a Carbolite furnace and chemically activated using 0.1 M H3PO4 and 0.1 M KOH solutions. The NR-filler loading concentrations of CB/APKS and CB/CPKS were compounded using a two-roll mill. The solubility study was carried out using three different solvents: water, kerosene and petrol. The solubility results obtained for CB/APKS and CB/CPKS show no significant difference as the temperature varies when the samples are immersed in water. The solubility values observed for CB/APKS and CB/CPKS range from 1.06 g to 1.19 g and 1.03 g to 1.19 g across the samples, respectively. This shows that, since the filler is an organic substance, it has little or no affinity for water. In the case of kerosene and petrol, both are organic solvents and the filler is an organic substance, which follows the statement 'like-dissolves-like'; as the temperature increases, the absorption of kerosene is lower than that of petrol. The results recorded for kerosene across the samples of CB/APKS and CB/CPKS range from 1.18 g to 4.37 g and 2.02 g to 4.79 g, while the results for petrol range from 2.25 g to 4.92 g and 2.51 g to 4.88 g, respectively. This may be due to the fact that petrol is more volatile and flammable than kerosene. The results of the activation energy were a reflection of the solvents' permeability, except for water, which showed contrary results. The activation energies obtained for the three solvents across CB/APKS and CB/CPKS were 5.55 kJ/mol for water, 9.48 kJ/mol for kerosene and 13.61 kJ/mol for petrol, respectively. The result observed for water might be due to its nature as the universal solvent, being entirely different from other solvents in terms of reactivity and anomalous properties. This means polar solvents dissolve polar molecules while nonpolar solvents dissolve nonpolar molecules. This research shows that both CB/APKS and CB/CPKS possess great potential in rubber systems. Introduction Natural rubber exhibits the advantages of high elasticity, high strength, great toughness and manufacturing versatility. A rubber band can be stretched to 9 or 10 times its original length before returning to its original condition as soon as the outside pressure is released. Similarly, a block of rubber can also be compressed, and after the load is released the block will recover its original shape and dimensions in a very short time. As to the extent to which it can be distorted, the rapidity of recovery and the degree to which it recovers its original shape and dimensions, rubber is considered a unique material. Strength, toughness and elasticity are essential properties of rubber [1,10]. The high strength and great toughness of rubber provide powerful elastic qualities in some situations where most other elastic materials may fail. Due to these properties, rubber shows excellent resistance towards cutting, tearing and abrasion.
Furthermore, this combination of useful physical properties is well maintained over a wide range of temperature from low temperatures (-45°C) to relatively high temperatures (120°C) which covers the most commonly used range of climatic conditions. Also, rubber is relatively inert, resistant to the deteriorating effects arising from atmosphere and many chemicals. Therefore, it has a relatively long and useful life under a wide variety of conditions. Natural rubber when vulcanized possesses unique properties such as high tensile strength, comparatively low elongation, hardness and abrasion resistance which is useful in the manufacture of various products. The main use of natural rubber is in automobile tyres. They can also be used in houses, foot wears, battery boxes, balloons, toys and so many others [1,10]. The use of natural fibres as reinforcements or filler sin rubber systems has gained extra attention in recent years. Many studies have been carried out on the utilization of natural fillers such as sago, sisal, short silk fibre, oil palm, empty fruit bunch, rice husk ash, cornhub, jute fibre, rubber wood powders, hemp, kenaf and cellulosic fibres as reinforcement materials [21]. The presence of solvents in polymers upon blending may be assumed to be significant because most polymers after swelling in the solvent show reduction in their properties. The effects of these solvents are believed to be due to localized plasticization that allows the development of cracks at reduced stress [10]. Polymers for commercial applications should be chemically resistant and retain their mechanical integrity and dimensional stability on contacts with solvents [9]. Numerous literature sources have revealed excellent reports on the sorption processes as well as mechanical properties of elastomer/thermoplastic blends. Polymers swell if they interact with the solvents, and the degree of this interaction is determined by the degree of crosslink density. It has been reported that the degree of swelling can be measured or related to the thermodynamic properties of the system [11]. Considerable interest has been focused on the absorption and diffusion of organic solvents because their ability to permeate at different rate enhances the separation of component of their liquid mixture through polymeric membrane [7]. The physico-mechanical, solubility and thermodynamic studies of Natural Rubber -Neoprene Blends using variety of solvents has been studied. The results of swelling revealed that the blends with higher neoprene content showed better resistance to petrol (PMS), kerosene (DPK) and hexane compared to blends with lower neoprene contents. The order of increasing permeability of the solvents regardless of sample composition was; kerosene > hexane > petrol. The results of the thermodynamic studies showed that the sensitivity of reaction towards temperature as higher mass uptake values of the blends were recorded as temperature was increased in the order 30°C, 50°C and 70°C. The activation energy of the swelling process was in reverse order of the permeability of the solvents. The solvent with the least permeability (petrol) had the highest activation energies in all the selected blends. [2] investigated the equilibrium sorption properties of palm kernel husk and N330 filled natural rubber vulcanizates as a function of filler volume fraction. 
The result obtained showed that there was a decrease in sorption with increasing filler loading which was attributed to the fact that each filler particle behaves as an obstacle to the diffusing molecules. As concentration of filler increases in the rubber matrix, more and more obstacles are created to the diffusing molecules which ultimately reduce the amount of penetrant solvent. The effect of groundnut shell filler carbonizing temperature on the mechanical properties of natural rubber composite was studied [3]. They found that the tensile strength, modulus, hardness and abrasion resistance increased with increasing filler loadings while, other properties such as compression set, flexural fatigue and elongation decreased with increasing filler loading. The percentage swelling in benzene, toluene and xylene where found to decrease with increased carbonization. [16] studied the physicomechanical effects of surface-modified sorghum stalk powder on reinforced natural rubber, and found that fillers reduces the water absorption resistance which is in agreement with Ragumathen et al, (2011). In this study, Carbonized PalmKernel Shells (CPKS) and Activated Palm Kernel Shells (APKS) were considered as reinforcing fillers in rubber. The CPKS and APKS were blended with Carbon Black (CB) and used as fillers in Natural Rubber (NR) compounding. The aim of this research is to study the solubility of CPKS and APKS filled NR vulcanizates in some common solvents as well as determine the rapidity of these processes using thermodynamic parameters Materials The equipment and apparatus used for this study include: weighing balance RS232, model WT2203GH, Saumya Two roll mill (DTRM-50) for compounding rubber, Saumya Compression moulding machine 50TONS ( Carbonization Palm Kernel Shells (PKS) were obtained from Apomu, Osun State, Nigeria and washed to remove accompanying dirt, thereafter, sun dried for 2 days. The PKS was pulverized to particulate size, weighed and recorded. Carbonization was done using a modified method of Emmanuel et al., 2017. The dried sample was then carbonized for 1 hour at 500-600°C using the muffle furnace. The sample was removed from the furnace and placed in a bowl containing water for quenching and cooling. Then, the shell was drained, dried, weighed and recorded. Chemical Activation The palm kernel shell/carbonized palm kernel shell (CPKS/PKS) particles were activated using a modified method of Emmanuel et al., 2017. The sample was soaked in 0.1MH 3 PO 4 for 24 hours. Palm kernel shell /carbonized palm kernel shell (PKS/CPKS) particles were dried in oven to obtain the initial mass recorded. The activated sample is then washed with distilled water and 0.1MKOH to neutralize the material being activated to pH 7 and finally sun dried for 2-3 hours followed by oven drying for 1-2 hours at about 170°C. The activated/carbonized palm kernel shell particle was weighed and recorded. Formulation for Compounding The formulation used for compounding in this research is presented in Tables 1 & 2, measurements were carried out using part per hundred of rubber (Pphr). Compounding, Mastication and Mixing The compounding of the polymer was carried out using the two-roll-mill (DTRM-150). The mastication of the rubber was carried out first where the rubber was milled continuously to make it more elastic and soft for easy incorporation of ingredients and shaping process. The speed of the two roll mill are at ratio of 1:1.25. 
The nip-setting is 0.055-0.008 inch, at a temperature of 70°C and a speed of 24 rpm. Swelling Test This was done to determine the extent of solvent penetration in the blends. The solvents used were water, kerosene and petrol. 1.0 g of each sample was weighed and immersed in 20 ml of water for 1, 2 and 3 hours respectively. The weight of the samples was taken after each time interval. The same procedure was used for kerosene and petrol. Results were obtained in triplicate for each sample per solvent used, and the average value was taken and recorded [19,20]. Filled with CB/CPKS and CB/APKS Fillers Sorption All vulcanizate samples were immersed in water, kerosene and petrol at temperatures of 35°C, 45°C and 55°C for 1, 2, and 3 hours respectively, and the mass uptakes were taken and recorded. The percentage sorption was calculated using the relation given in [19]. Activation Energy of the Swelling Process The activation energy is the minimum energy required for a reaction to proceed. In determining the activation energy of the swelling process, all samples for both CB/CPKS and CB/APKS were immersed in water, kerosene and petrol at 35°C, 45°C and 55°C and their mass uptake readings were taken. The natural logarithm of the percentage sorption was plotted against the reciprocal of temperature for each sample, and the slopes of the graphs were substituted into the Arrhenius relation K = A·exp(−Ea/RT) to determine the activation energy Ea, where A is the pre-exponential factor, R is the molar gas constant (8.314 J·mol−1·K−1) and T is the thermodynamic temperature in kelvin (K) [1]. Discussion Solubility is the maximum amount of a substance that will dissolve in a given amount of solvent at a specific temperature. Temperature is one of the factors that affect the solubility of both solids and gases. The results for mass uptakes of the CB/APKS blends at 35°C, 45°C and 55°C at different time intervals are presented in Table 3. The measurements were carried out using three different solvents: water, kerosene and petrol. It was observed that the majority of the CB/APKS blends showed the same sorption pattern from 1 to 3 hours at 35°C, 45°C and 55°C respectively when immersed in water. The permeability of the majority of the blends from sample 1 to 7 increased from 1 to 2 hours, after which it either fell or remained stable after 3 hours of immersion. This trend was observed across samples 1 to 7. The sorption value of sample 1 at 35°C increased from 1.06 g to 1.09 g after 2 hours, after which it remained stable at 1.09 g after 3 hours. At 45°C the sorption value decreased from 1.19 g to 1.07 g after 2 hours, after which it increased to 1.08 g after 3 hours. At 55°C the sorption value increased from 1.05 g to 1.06 g after 2 hours, after which the stable value of 1.06 g was observed after 3 hours. Sorption values of 1.08 g and 1.07 g were recorded for sample 7 at 35°C and 55°C over the time intervals respectively, while an increase in sorption from 1.06 g to 1.09 g was observed at 45°C after 3 hours. It was also observed that the majority of the samples tend to reach equilibrium sorption at 2 and 3 hours at 45°C and 55°C. This may be due to the permeability reaching its maximum and the blends no longer tolerating the absorption of water. After 3 hours, the sorption of the majority of the blends decreased as the temperature was increased (Figure 1). This was seen for sample 1, which decreased from 1.09 g to 1.06 g at 55°C.
The same trend was observed for samples 3, 4 and 6 which decreased from 1.16g to 1.05g, 1.09g to 1.07g and 1.12g to 1.08g at 55°C respectively. A maximum sorption of 1.08g was recorded for sample 2, while an increase of 1.07g to 1.08g for sample 5 was recorded and decrease of 1.08g to 1.07g for sample 7 at 55°C [3,5,14,6,8]. For kerosene, the blends across sample 1-7 showed increase in permeability for the three temperature values of the experiment Table 3. It was observed that the sorption values increased as the time and the temperature were increased across the seven samples. Sample 1 at 35°C showed increase insorption value from 2.16g to 2.89g after 3 hours; 2.86g to3.34g at 45°C after 3 hours and 3.42g to3.85g at 55°C after 3 hours respectively, Figure 2. The same trend was observed across the other samples. This observation might be due to the nature of kerosene as a solvent having higher hydrocarbon content and greater compatibility, facilitating its ability to dissolve or penetrate the blends which are also having higher hydrocarbon content due to the presence of natural rubber. It could also be that the average kinetics energy of the solvent molecules was increased due to increase in temperature facilitating the solvent molecules to permeate the blends better [12,19]. For petrol, an appreciable increase was observed from 1 to 3 hours at 35°C for all the seven samples Table 3 and Figure 3. The sorption of sample 2 increased from 2.62g to 4.73g after 3 hours; sample 3 from 2.37g to 4.56g after 3 hours. This trend was observed across the seven samples. This observation might also be due to the non-polar nature of petrol making it to penetrate the blends which are also essentially non-polar due to the organic components. However, at 45°C and 55°C the sorption decreased from 1 to 3 hours. The sorption of sample 1 at 45°C decreased from 4.39g to 4.21g after 3 hours and also decreased at 55°C from 3.95g to 3.71g after 3 hours. The sorption also decreases in sample 7 from 3.36g to 2.94g and 2.35g to 2.46g after 3 hours for 45°C and 55°C temperature rise respectively. On the other hand, the sorption values at 45°C and 55°C were slightly greater when compared with those of 35°C. This might be due to the effect of temperature on the permeability of the solvent arising from greater mobility of solvents or kinetic energy at elevated temperature [5,6,13,14]. The results for mass uptakes by CB/CPKS Blends at 35°C, 45°C and 55°C at different time intervals are presented on Table 4. The results obtained were also carried out using three different solvents, which are water, kerosene and petrol [15]. The sorption for majority of the blends immersed in water at 35°C tends to increase as the CPKS values and CB values increased and decreased respectively. The sorption as the CPKS content increased from sample A to D was found to increase from 1.07g to 1.19g. A decrease was only observed at higher CPKS composition and this might be due to the lower content of CB in the blends suggesting that higher CB loading might possess better reinforcing and strength impacting properties than CPKS. However, the sorption values of most of the blends either decreased from 1 to 3 hours or remain stable after an observable increase or decrease. This might also be because the blends no longer have capacity for absorption of the solvent making the sorption to be at maximum. 
At 45°C and 55°C most blends across sample A to F showed a stable sorption values after 3 hours indicating the reduction in absorption capacity of the blends, Figure 4 (18, 19). The sorption values across sample A to F increased from 1 to 3 hours when the samples were immersed in kerosene Table 4 and Figure 5 respectively. Sample A at 35°C increased from 2.02g to2.87g after 3 hours; sample C from 2.30g to 3.49g after 3 hours; sample F from 2.21g to 3.20g after 3 hours. The same trend was observed at 45°C and 55°C for most of the blends. This observation may be as a result of nonpolar solvents dissolving non polar molecules. Therefore, kerosene being a nonpolar solvent facilitates its penetrating power to penetrate the blends [17,18]. The sorption of the samples at 35°C when immersed in petrol showed an appreciable increase from 1 to 3 hours, Table 4 and Figure 6. This trend was observed for all the blends for example the sorption of sample A increased from 2.51g to 4.38g after 3 hours; sample D 2.79g to 4.53g after 3 hours; sample F 2.66g to 4.85g after 3 hours respectively. This observation may be as a result of non-polar solvents dissolving non-polar molecules. However, the sorption for most blends decreased from 1 to 3 hours at 45°C and 55°C. The sorption of sample A decreased from 3.67g to 3.10g at 45°C and 3.15g to 2.85g at 55°C after 3 hours; sample C from 4.22g to 3.84g at and 3.51g to 3.25g at 55°C after 3 hours; sample F 4.24g to 3.07g at 45°C and 3.23g to 3.03g at 55°C after 3 hours respectively. This observation could be due to the effect of temperature on the permeability of the solvent because at a given temperature the activation energy depends on the nature of the chemical transformation that takes place but not on the relative energy state of the reactants and products [8,9]. Therefore, the solubility of CB/APKS had no significant difference as the temperature is varied. This shows that since the filler is an organic substance, it has little or no affinity for water with highest absorption of 1.16 g after 3 hours (sample 3). In the case of kerosene and petrol, both are organic solvents and the filler is an organic substance which follows the statement that 'like-dissolves-like'. As the temperature increases, the absorption of kerosene is lower than that of petrol. This is evident that petrol is more volatile and flammable compared to kerosene as both are non-polar solvents [1,2,5,8,18]. In the case of CB/CPKS, there is no significant solubility in water, but petrol was absorbed better than kerosene which, may be due to its volatility and flammability. Also, increase in temperature allows the filler particles to become more mobile due to increase in kinetic energy which make the solvent molecules to interact more with the filler particles as observed in petrol and kerosene. Therefore the low solubility of the fillers in the different solvents may be due to low reaction surface of the filled vulcanizates using bio fillers used [15,17,18]. Also, the level of cross-link to filler dispersion, nature of solvent and type of fillers used are being considered [13,5,9]. Generally, petrol being a mixture of hydrocarbons with a lower molecular weight than Kerosene may be expected to diffuse faster and be accommodated in the rubber matrix with fewer hindrances. The decrease in sorption with increasing filler loading may arise from filler particles behaving as an obstacle to the diffusing molecule. 
As filler loading increases in the rubber matrix, more and more obstacles are created to the diffusing molecules, which reduces the amount of penetrating solvent. References [1,8,10] explain why higher sorption values were obtained for low-molecular-weight hydrocarbons. Activation Energy The activation energy, being the minimum energy required for a chemical reaction to occur, implies that the lower the activation energy, the easier it is for the reactant particles to overcome the energy barrier and form products, and vice versa. In this context, the permeability of the solvent is inversely proportional to the activation energy for most blends, i.e., the better the solvent permeates the blends, the lower the activation energy, and vice versa. For samples A-F, the solvent that permeated the blends most was kerosene, followed by petrol and water. The results of the activation energy were a reflection of the solvents' permeability, except for water, which showed a different pattern. The result observed for water might be due to its polar nature as a solvent and the wide difference in solubility parameters between water and the majority of the ingredients in the vulcanizates. The activation energies of sample A for kerosene, petrol and water were 23.99 kJ/mol, 25.06 kJ/mol and 11.96 kJ/mol; of sample C, 9.88 kJ/mol, 22.63 kJ/mol and 11.96 kJ/mol; and of sample F, 14.53 kJ/mol, 26.61 kJ/mol and 5.55 kJ/mol, respectively (Table 5). The permeability in kerosene and petrol is a result of nonpolar solvents dissolving nonpolar molecules [1,2,4,8,10,14,18]. A similar explanation can be given for samples 1-7 (Table 6). The activation energies for kerosene, petrol and water were 17.07 kJ/mol, 13.61 kJ/mol and 16.85 kJ/mol for sample 1, respectively, with petrol being the solvent that permeated the blend most for sample 1. The activation energies for kerosene, petrol and water for sample 4 were 23.10 kJ/mol, 32.83 kJ/mol and 10.44 kJ/mol respectively, with petrol having the highest activation energy. The results recorded for sample 7 were 9.48 kJ/mol, 47.18 kJ/mol and 5.55 kJ/mol for kerosene, petrol and water respectively. The same trend was observed for sample 5. The activation energy results for both CB/APKS and CB/CPKS may be due to the aggregation of carbon chains in the organic compounds as a result of the increase in the fillers, which reduces ignition and brings about an increase in modulus and tensile strength [4], making the reactions of petrol and kerosene with the sample variations more difficult [1,2,4,8,10,14,18]. Conclusion The solubility and thermodynamic properties of CB/APKS- and CB/CPKS-filled NR blends were investigated. The study showed that the blend loading composition and the nature of the organic molecule played a significant role in determining the mass uptake. Since the filler is an organic substance, it has little or no affinity for water. In the case of kerosene and petrol, both are organic solvents and the filler is an organic substance, which follows the statement 'like-dissolves-like'. As the temperature increases, the absorption of kerosene is lower than that of petrol. The results of the activation energy were a reflection of the solvents' permeability, except for water, which showed contrary results. The result observed for water might be due to its nature as the universal solvent, being entirely different from other solvents in terms of reactivity and anomalous properties. This means polar solvents dissolve polar molecules while nonpolar solvents dissolve nonpolar molecules. This research shows that both CB/APKS and CB/CPKS possess great potential in rubber science and technology.
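The Arrhenius procedure used for the activation-energy results above (the natural logarithm of percentage sorption plotted against 1/T, with the slope giving Ea) is easy to express in a few lines. The sketch below assumes the commonly used mass-uptake definition of percentage sorption, Q% = 100·(Mt − M0)/M0, since the exact relation of ref. [19] is not reproduced here; the numeric inputs loosely echo the sample 1 kerosene values but are used purely for illustration.

```python
import numpy as np

R = 8.314  # J / (mol K)

def percent_sorption(m_initial, m_swollen):
    # Commonly used mass-uptake form; the exact relation of ref. [19] may differ.
    return 100.0 * (m_swollen - m_initial) / m_initial

def activation_energy(temps_c, sorption_percent):
    """Fit ln(Q%) = ln(A) - Ea/(R*T) and return Ea in kJ/mol."""
    T = np.asarray(temps_c, dtype=float) + 273.15
    slope, _intercept = np.polyfit(1.0 / T, np.log(sorption_percent), 1)
    return -slope * R / 1000.0

# Illustrative numbers only (not a reproduction of the paper's measurements):
temps = [35, 45, 55]
uptake = [percent_sorption(1.00, m) for m in (2.16, 2.86, 3.42)]
print(f"Ea ~ {activation_energy(temps, uptake):.1f} kJ/mol")
```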
5,437.6
2021-05-14T00:00:00.000
[ "Materials Science" ]
Deep Learning-Based Energy Expenditure Estimation in Assisted and Non-Assisted Gait Using Inertial, EMG, and Heart Rate Wearable Sensors Energy expenditure is a key rehabilitation outcome and is starting to be used in robotics-based rehabilitation through human-in-the-loop control to tailor robot assistance towards reducing patients' energy effort. However, it is usually assessed by indirect calorimetry, which entails a certain degree of invasiveness and provides delayed data, which is not suitable for controlling robotic devices. This work proposes a deep learning-based tool for steady-state energy expenditure estimation based on more ergonomic sensors than indirect calorimetry. The study innovates by estimating the energy expenditure in assisted and non-assisted conditions and at slow gait speeds, similar to those of impaired subjects. This work explores and benchmarks the long short-term memory (LSTM) and convolutional neural network (CNN) as deep learning regressors. As inputs, we fused inertial data, electromyography, and heart rate signals measured by on-body sensors from eight healthy volunteers walking with and without assistance from an ankle-foot exoskeleton at 0.22, 0.33, and 0.44 m/s. The LSTM and CNN were compared against indirect calorimetry using a leave-one-subject-out cross-validation technique. Results showed the suitability of this tool, especially the CNN, which demonstrated a root-mean-squared error of 0.36 W/kg and a high correlation (ρ > 0.85) between target and estimation (R̄2 = 0.79). The CNN was able to discriminate the energy expenditure between assisted and non-assisted gait, and between basal and walking energy expenditure, throughout the three slow gait speeds. The CNN regressor driven by kinematic and physiological data was shown to be a more ergonomic technique for estimating the energy expenditure, contributing to the clinical assessment of slow and robotic-assisted gait and to future research concerning human-in-the-loop control. Introduction Gait disabilities are among the most frequent disabilities in European countries [1]. Whether caused by aging or by cardiovascular and/or neurological disorders, impaired gait strongly affects the walking energetic efficiency of persons [2]. Therefore, energy expenditure has gained importance in gait rehabilitation, being a golden marker of gait quality and a primary outcome for evaluating manual or robotics-assisted therapies. It is usually assessed by evaluating the exchanges of oxygen consumption (V̇O2) and carbon dioxide production (V̇CO2) through indirect calorimetry [3], using wearable gas analyzers. These gas exchanges are then commonly translated into energy using Brockway's equation [4]. This work addresses the energy expenditure in both assisted and non-assisted walking with an ankle exoskeleton while considering slow gait speeds (0.22, 0.33, and 0.44 m/s). For this purpose, we compared two deep learning regression networks, namely the long short-term memory (LSTM) and the CNN, following the results of Zhu et al. [22] and using HR, lower-limb kinematic, and EMG data as estimators. This study hypothesizes that a deep learning regressor fed by fused gait-related biomechanical and physiological data enables an accurate and timely energy expenditure assessment in basal, assisted, and non-assisted walking, even at slow speeds. We studied this hypothesis through the leave-one-subject-out cross-validation (LOOCV) technique by comparing the estimation of the deep learning tool against indirect calorimetry to verify its feasibility.
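For context on the indirect-calorimetry ground truth mentioned above, a minimal sketch of the gas-exchange-to-power conversion is shown below. The coefficients follow the commonly quoted form of Brockway's equation (about 16.58 kJ per litre of O2 and 4.51 kJ per litre of CO2, with the urinary-nitrogen term dropped); treat the exact constants and the body-mass normalisation as assumptions rather than the paper's implementation.

```python
def brockway_power_w(vo2_ml_per_s, vco2_ml_per_s):
    """Metabolic power in watts from gas-exchange rates in mL/s.
    Commonly quoted Brockway (1987) coefficients, protein term neglected:
    EE [kJ] = 16.58 * VO2 [L] + 4.51 * VCO2 [L]  ->  per-second rates give watts."""
    return 16.58 * vo2_ml_per_s + 4.51 * vco2_ml_per_s

def normalized_ee_w_per_kg(vo2_ml_per_s, vco2_ml_per_s, body_mass_kg):
    # Body-mass normalisation, as used for the W/kg figures quoted above.
    return brockway_power_w(vo2_ml_per_s, vco2_ml_per_s) / body_mass_kg

# Example with made-up slow-walking values: ~5 mL/s O2, ~4 mL/s CO2, 70 kg subject.
print(normalized_ee_w_per_kg(5.0, 4.0, 70.0))  # about 1.4 W/kg
```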
This study presents two-fold contributions: (i) in terms of clinical assessment, it proposes a more ergonomic and rapid technique to estimate the energy expenditure, which is reliable in slow gait, as commonly observed in gait impaired subjects; (ii) in terms of rehabilitation robotics, it supports future research insights regarding human-in-the-loop control by accurately and efficiently estimating the energy expenditure in robotic-assisted conditions through timely and reliable acquired data. Materials and Methods The development of the deep learning-based tool required a prior data collection. This was conducted under the ethical procedures of the Ethics Committee in Life and Health Sciences (CEICVS 006/2020), following the Helsinki Declaration and the Oviedo Convention. All participants gave their informed consent to be part of the study. Data were collected at the LABIOMEP, Porto Biomechanics Laboratory, University of Porto. Participants Eight healthy participants (four females and four males, see Table 1) were recruited and accepted to participate in this work. A list of eligibility criteria was outlined to conduct the experimental data collection. All subjects that had: (i) 18 or more years, (ii) body mass within 45 and 90 kg, and (iii) height within 150 and 190 cm were included in the study. The inclusion criteria regarding the anthropometric data were imposed given the exoskeleton's inherent requirements. The subjects were excluded if they reported any disturbance of locomotion or balance, caused by any known neurological or musculoskeletal injury. Table 1 presents the participants' detailed anthropometric data. Instrumentation The participants were instructed to wear shorts and standard sports shoes for better sensor accommodation. Subsequently, we instrumented each participant with the following sensor setup. First, the participants were instrumented with a wireless Polar H10 (Polar Electro Oy, Kempele, Finland, validated in ref. [24]) at the chest, that was used to monitor the HR. Second, they were instrumented with eight wireless EMG surface electrodes from the Trigno TM Avanti Platform (Delsys, Natick, MA, USA) on the tibialis anterior (TA), gastrocnemius lateralis (GL), bicep femoris (BF), and vastus lateralis (VL) muscles of both legs. The sensors were placed following the Surface ElectroMyoGraphy for the Non-Invasive Assessment of Muscles (SENIAM) recommendations [25], and fixed with a white strap, as illustrated in Figure 1. Third, we instrumented the participants with the wireless motion tracker system MVN Awinda (Xsens Technologies B.V., Enschede, The Netherlands, validated in ref. [26]), placing inertial measurement units (IMUs) on the feet, lower-leg, upper-leg, pelvis, and torso. Each IMU was secured with a black strap, as illustrated in Figure 1. Fourth, each participant was instrumented with a K4b 2 metabolic respiratory sensor (COSMED, Rome, Italy, validated in ref. [27]), covering the facial respiratory ways, to measure V̇O2 and V̇CO2. Lastly, the participants were instrumented with an electrical ankle-foot exoskeleton from the H2-Exoskeleton (Technaid S.L., Madrid, Spain) in the lateral side of the right lower limb. This device provides one degree-of-freedom in the sagittal plane. Figure 1. Representation of the setup's instrumentation on the human body: (a) example of a participant during an assisted walking trial, while wearing all sensor setup (kinematic, EMG, and HR sensors and the respirometer); (b) zoomed-in view of the ankle-foot orthosis from the H2-Exoskeleton during the walking trial; and (c) schematic illustrating the IMUs placement on sternum, pelvis, upper leg and lower leg; the EMG sensors on the vastus lateralis, bicep femoris, tibialis anterior, and gastrocnemius lateralis muscles; the Polar H10 heart rate sensor at the chest; the respirometer, covering the user's facial respiratory ways; and the ankle-foot orthosis fixed on the shank and foot segments of the right lower limb.
Experimental Protocol Immediately after the placement of the EMG sensors, the participants performed three maximum voluntary contractions (MVC) for each muscle to normalize EMG data. For the BF muscle, the participants laid on a stretcher in ventral decubitus position with their knee slightly bent. One researcher immobilized the participants' shank and asked them to perform maximum knee flexion. Regarding the VL muscle, the participants sat down on the same stretcher performing 90 degrees between thighs and shanks. The researcher immobilized their shank and asked the participants to perform maximum knee extension. For the TA and GL muscles, the participants laid on the stretcher, assuming dorsal decubitus position. To perform the MVC, the researcher immobilized the participants' foot and asked them to perform maximum dorsiflexion of the ankle articulation, in the case of TA muscle, and maximum plantar flexion, in the case of GL muscles. This procedure was repeated for both legs. Afterward, the MVN Awinda sensors were placed and the MVN biomechanical model was calibrated by following the software guidelines: each participant assumed the N-pose, which refers to a neutral position of body segments, in upright position, looking forward with the two arms stretched and hands near the thighs. Each participant held this position for four seconds, and then walked forward in a normal fashion, turned, and walk backwards to the initial position. Subsequently, some anthropometric data, namely gender, height, and body mass, were introduced in the respirometer. After this, each participant experienced a one-day protocol consisting of two sessions: one session with and another without the orthotic device. In both sessions, the subjects performed three walking trials of twelve minutes, one per each gait speed: 0.22, 0.33, and 0.44 m/s. In each trial, the participants were instructed to remain in standing position during the first three minutes to assess the basal energy expenditure, followed by a walking activity on a treadmill that lasted six minutes, and finishing with three minutes in standing position as a recovering period. The participants rested for ten minutes between each trial and/or session. In the assisted session, the ankle-foot exoskeleton assisted according to a position control strategy. Data Acquisition Data acquisition included: (i) . VO 2 and . VCO 2 , in mL/sec, with the K4b 2 respiratory sensor breath-by-breath; (ii) the muscular activation of the TA, GL, BF, and VL muscles for both lower limb at 1000 Hz using the 8-channel Trigno TM EMG sensors and the Delsys acquisition software; (iii) the acceleration and angular velocity of lower-leg, upper-leg, feet, pelvis, and torso at 100 Hz using the MVN Awinda; and (iv) the heart rate using a Polar H10. Data acquisition commenced simultaneously for all devices, ensuring time synchronization. We average each activity (i.e., standing for three minutes, walking for six minutes, and recovering for three minutes) to assess the steady-state energy expenditure considering the last three minutes of each [28]. Due to the high noise of indirect calorimetry, a 95% confidence interval was calculated to eliminate possible outliers. Additionally, the steady-state energy expenditure was normalized by body-mass, as performed in [10][11][12]. Following these processing techniques, a step-like signal of energy expenditure was obtained (Figure 2A-D). The HR signal was processed by following the same approach. 
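The steady-state processing just described (averaging the last three minutes of each activity, removing outliers outside a 95% confidence interval, and normalising by body mass) can be sketched as follows before the signal-specific processing continues below; the window length follows the protocol above, while the function names and the exact outlier rule are assumptions.

```python
import numpy as np
from scipy.stats import norm

def steady_state_value(samples, fs_hz, window_s=180.0, ci=0.95):
    """Mean of the last `window_s` seconds after discarding samples outside the
    mean ± z*std band -- one simple reading of the 95% interval mentioned in the text."""
    tail = np.asarray(samples, dtype=float)[-int(window_s * fs_hz):]
    z = norm.ppf(0.5 + ci / 2.0)
    lo, hi = tail.mean() - z * tail.std(), tail.mean() + z * tail.std()
    kept = tail[(tail >= lo) & (tail <= hi)]
    return kept.mean()

def steady_state_ee_per_kg(ee_watts, fs_hz, body_mass_kg):
    # Body-mass-normalised steady-state energy expenditure (W/kg).
    return steady_state_value(ee_watts, fs_hz) / body_mass_kg
```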
Additionally, the HR was normalized considering the maximum HR expected for the individual considering his/her age, in years. The kinematic data, namely the acceleration and angular velocity, were first filtered with a fourth-order zero-lag Butterworth filter with a cut-off frequency of 5 Hz [29]. Afterward, the total acceleration and angular velocity (vector's magnitude) were calculated and low-pass filtered at 0.1 Hz to preserve the low frequencies that belong to the start/stop walking transitions. EMG signals were processed as follows. First, they were filtered with a zero-lag band-pass filter with cut-off frequencies of 20 and 450 Hz [30,31]. Second, we calculated the envelope using a low-pass filter with a cut-off frequency of 2 Hz and normalized considering the user's MVC. Third, the signals were low-pass filtered at 0.1 Hz to preserve the low frequencies related to the start/stop walking transitions. Additionally, we calculated the EMG sum (Figure 2B), similar to Ingraham et al. [19], for both lower and upper leg muscles, to have more general information regarding the muscles' activation. Figure 2 illustrates an example of the post-processed data that were used to estimate the steady-state energy expenditure.
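A compact sketch of the filtering pipeline just described (zero-lag Butterworth filters via forward-backward filtering, an EMG band-pass plus envelope, and a final 0.1 Hz low-pass) is given below; the cut-off frequencies and the kinematic filter order follow the text, while the helper names, the band-pass order and the sampling rates passed in are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, fc, fs, order=4):
    b, a = butter(order, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)          # filtfilt -> zero-lag filtering

def process_kinematics(acc_xyz, fs=100.0):
    # 4th-order zero-lag low-pass at 5 Hz, then vector magnitude, then 0.1 Hz low-pass.
    filtered = np.stack([lowpass(acc_xyz[:, i], 5.0, fs) for i in range(3)], axis=1)
    magnitude = np.linalg.norm(filtered, axis=1)
    return lowpass(magnitude, 0.1, fs)

def process_emg(raw, mvc, fs=1000.0):
    # 20-450 Hz band-pass, rectified 2 Hz envelope, MVC normalisation, 0.1 Hz low-pass.
    b, a = butter(4, [20.0 / (fs / 2.0), 450.0 / (fs / 2.0)], btype="band")
    band = filtfilt(b, a, raw)
    envelope = lowpass(np.abs(band), 2.0, fs) / mvc
    return lowpass(envelope, 0.1, fs)
```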
AI-Based Regression Models
As regression models, we explored LSTM, suited for sequential data [32], and CNN, due to the reliable performance reported in Zhu et al. [22]. For both models, we fused the following estimators: (i) kinematic data, namely the acceleration (Figure 2A) and angular velocity (Figure 2B) of the lower legs, upper legs, feet, pelvis, and torso; (ii) lower and upper leg EMG sum (Figure 2C); (iii) HR (Figure 2D); (iv) gait speed; and (v) anthropometric data, as in [21], namely the gender (corresponding to a binary signal), age, and height. The input signals were rescaled to [−1, 1] using a min-max algorithm.
Regarding the LSTM, we implemented the following architecture. For the first layer, we set the sequence input layer with a sequence length of 300 samples. In the second layer, we implemented one LSTM layer and studied the best number of neurons, considering 10, 50, 100, 150, and 200 neurons. To the best model found in the last step, we studied the effect of adding a second LSTM layer, ranging the number of neurons from 10 to 100. We introduced one dropout layer (p = 0.5) after each LSTM layer. As the penultimate layer, we added one fully connected layer to the best model found. The last layer is the regression output layer. Figure 3 illustrates an example of an LSTM neural network that was explored in this work.
For the CNN, we first implemented the input layer. At the second layer, we added one convolution layer with 8 filters, and we ranged the filter's size from 5 [22] to 15 with a resolution of 5. We also tested increasing the number of filters to 16 and introducing a second convolution layer. The filters were initialized with the Glorot initializer and then optimized during the learning process. We used the ReLU layer as the activation function and an average pooling layer of size 2 for each convolution layer. We also introduced one dropout layer (p = 0.5) and one fully connected layer, ranging the number of neurons until a decrease in performance was detected. The last layer is the regression output layer. Figure 4 illustrates one example of a CNN configuration that was studied in this work.
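The original models were implemented in MATLAB; the sketch below re-expresses in Keras, for illustration only, the two configurations that later turn out to perform best (one LSTM cell with 150 neurons, and a CNN with two convolution layers of 8 filters of size 10 followed by a 10-neuron fully connected layer; see the Results). The number of fused input channels, the activation of the fully connected layer, and any option not stated in the text are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 300      # sequence length stated in the text
N_FEATURES = 24    # number of fused input channels -- assumption

def build_lstm():
    # One LSTM cell with 150 neurons, dropout p = 0.5, linear regression output
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.LSTM(150),
        layers.Dropout(0.5),
        layers.Dense(1),                      # regression output (W/kg)
    ])

def build_cnn():
    # Two 1-D convolutions (8 filters, size 10; Glorot init is the Keras default),
    # each followed by ReLU and average pooling of size 2
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.Conv1D(8, 10, activation="relu"),
        layers.AveragePooling1D(2),
        layers.Conv1D(8, 10, activation="relu"),
        layers.AveragePooling1D(2),
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(10, activation="relu"),  # fully connected layer (activation assumed)
        layers.Dense(1),                      # regression output (W/kg)
    ])

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")                     # Adam + MSE, initial learning rate 0.01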
For both CNN and LSTM, we used the dropout layer and the L2 regularization method (λ = 0.0001) to prevent overfitting and gradient vanishing. These regression models were trained with the Adam optimization algorithm using the mean-square error (MSE) as the loss function, with the initial learning rate set to 0.01. We recursively implemented several models by changing the neural networks' hyperparameters (number of layers, number of neurons of each layer, the introduction of fully connected layers for the LSTM and CNN, and the number of filters and their size for the CNN), aiming to find the best model. Data processing and model implementation were performed using MATLAB® R2019a (MathWorks Inc., Natick, MA, USA) on a machine with an Intel Core i7-3630QM with a maximum clock rate of 2.4 GHz.
Models' Evaluation
From the eight participants, we randomly left aside one participant for testing the accuracy of the best model on unseen data (test dataset), leaving the remaining seven participants for the training and validation process (training dataset). Given the user-specific variability of the energy expenditure, we implemented a leave-one-subject-out cross-validation technique (LOOCV) with the number of epochs fixed to 1000. This technique enabled us to identify the best model and the respective hyperparameters. To assess the performance of our models, the following metrics were used, taking the energy expenditure as the ground truth: (i) MSE, (ii) root-mean-squared error (RMSE), (iii) normalized MSE (NMSE), and (iv) Spearman's correlation coefficient (SCC). The MSE and RMSE were calculated to give a relative accuracy regarding the neural network's estimation; a value closer to 0 indicates that the neural network performs well. The SCC and NMSE were also evaluated to assess the correlation between target and estimation; a value closer to 1 indicates a perfect fit between both. Additionally, we computed the Bland-Altman plot (which compares two measurement techniques by plotting the difference between them against their average) and the linear regression plot with the respective coefficient of determination (R²) to assess, respectively, differences and linearity between the energy expenditure obtained with indirect calorimetry and the neural network estimation. Table 2 presents the MSE, RMSE, NMSE, and the SCC for the best architectures of LSTM and CNN, and their respective hyperparameters.
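A minimal sketch of the leave-one-subject-out loop and of the evaluation metrics named above follows. The NMSE formula (1 minus the ratio of residual to target variance, so that 1 is a perfect fit) matches the "closer to 1 is better" convention used here but is an assumption, as the paper does not spell it out; build_model and fit_and_predict are placeholder callables.

import numpy as np
from scipy.stats import spearmanr

def nmse(y_true, y_pred):
    # normalized MSE in the "closer to 1 is better" convention (assumed)
    num = np.sum((y_true - y_pred) ** 2)
    den = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - num / den

def evaluate(y_true, y_pred):
    mse = np.mean((y_true - y_pred) ** 2)
    rho, _ = spearmanr(y_true, y_pred)
    return {"MSE": mse, "RMSE": np.sqrt(mse), "NMSE": nmse(y_true, y_pred), "SCC": rho}

def loocv(subject_ids, build_model, fit_and_predict):
    # leave-one-subject-out loop over the seven training subjects
    scores = []
    for held_out in np.unique(subject_ids):
        train_mask = subject_ids != held_out
        val_mask = ~train_mask
        y_true, y_pred = fit_and_predict(build_model(), train_mask, val_mask)
        scores.append(evaluate(y_true, y_pred))
    return scores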
Regarding the LSTM, we verified that one cell with 150 neurons yields the best model for this network, with an average MSE of 0.25 W/kg and RMSE of 0.45 W/kg. The NMSE was high (NMSE = 0.67), which shows that this network performs reasonably well in estimating energy expenditure. Increasing the number of neurons to 200 or the number of LSTM layers did not improve the energy expenditure estimation, presenting a higher MSE (≥0.28 W/kg, RMSE ≥ 0.51 W/kg). The introduction of a fully connected layer also did not improve the tool's power estimation, presenting an MSE of 0.32 W/kg and RMSE of 0.54 W/kg. Figure 3 shows the best architecture found for the LSTM in this study.
Regarding the CNN, the best model was composed of two convolutional layers of 8 filters of size 10, followed by a fully connected layer of 10 neurons, as shown in Figure 4. The best model presented an average MSE of 0.14 W/kg and RMSE of 0.36 W/kg, being more accurate than the LSTM. With the hyperparameters tuned, it was verified that one convolution layer instead of two, regardless of the filter size (5, 10, or 15) or the number of filters (8 or 16), entailed a higher MSE (≥0.19 W/kg, RMSE ≥ 0.41 W/kg). Moreover, we observed that increasing the number of neurons of the fully connected layer to 50 did not improve the estimation's performance (MSE = 0.22 W/kg, RMSE = 0.44 W/kg).
Figure 5A illustrates the energy expenditure estimation considering the best model found in this work (CNN with the hyperparameters of Table 2) for the worst, medium, and best prediction subjects of the LOOCV algorithm. Figure 5A shows that the CNN has a good capacity to estimate the steady-state energy expenditure and to discriminate between basal and walking energy expenditure. This neural network was able to associate higher energy expenditure with the periods when the participants were walking, as can be inspected in Figure 5A. This result is supported by a high and positive SCC (above 0.80). These results are in accordance with Figure 5B,C, illustrating, respectively, the Bland-Altman and the linear regression plots for the worst, medium, and best subjects of the LOOCV.
From Figure 5B, we verify that most of the error's dispersion is within the 95% confidence interval, with a bias close to 0. The worst subject presented a positive bias of 0.48 W/kg, the medium subject presented a negative bias of −6.5 × 10⁻² W/kg, and the best subject presented a positive bias of 6 × 10⁻² W/kg. From Figure 5C, the subject with the worst prediction presented an R² of 0.73, with the linear fit (Figure 5C, black line) slightly deviated from the ideal line (Figure 5C, red line). The subjects with the medium and best predictions presented better linear fits, closer to the ideal line, with R² of 0.76 and 0.95, respectively. The mean value of R² was found to be 0.79 (R² = 0.79 ± 0.12), indicating a high fit regarding the target energy expenditure.
Performance Analysis in New Data
We also evaluated the best model's performance on unseen data (test dataset). It was verified that it estimated the test dataset reasonably well, as illustrated in Figure 6, presenting an MSE of 0.19 W/kg and an RMSE of 0.44 W/kg. The NMSE and the SCC were positive and above 0.7.
Dependency on Gait Speed and Walking Condition
We investigated whether the CNN accuracy depends on gait speed and walking condition (i.e., assisted vs. non-assisted walking). For this analysis, the MSE was evaluated for each of the three gait speeds, considering the trials in which the user walked with and without the ankle-foot exoskeleton. The MSE values were used to create a heatmap, showing the error's dispersion, as illustrated in Figure 7. By analyzing Figure 7, the CNN presented similar values of MSE for both assisted and non-assisted walking and across the three gait speeds. The MSE was considered low (≤0.17 W/kg) for all conditions.
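For readers unfamiliar with the Bland-Altman analysis behind Figure 5B, the sketch below shows how the bias and 95% limits of agreement are typically computed from the calorimetry reference and the network estimate; the ±1.96 SD limits are the conventional choice and an assumption about the authors' exact implementation.

import numpy as np

def bland_altman(reference, estimate):
    # bias and conventional 95% limits of agreement between two techniques
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    diff = reference - estimate
    mean_pair = (reference + estimate) / 2.0
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return mean_pair, diff, bias, (bias - loa, bias + loa)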
Discussion
This work presents and validates a deep learning-based tool for steady-state energy expenditure estimation without relying on indirect calorimetry. Although it is the gold standard, indirect calorimetry is an expensive technique, besides being cumbersome and slightly invasive for persons with motor disabilities. Furthermore, it is not adequate for human-in-the-loop control strategies, since it takes more than three minutes to reach the steady-state condition, which is a long time for real-time optimization problems. Our investigation sought alternative methods to estimate energy expenditure through smaller, ergonomic, and clinically validated sensors, namely inertial, EMG, and heart rate sensors. To estimate the subjects' energy expenditure, we implemented and compared two deep learning approaches, namely the CNN and LSTM neural networks. To find the best model, i.e., the one with the lowest estimation error, we evaluated the networks' hyperparameters with a LOOCV algorithm.
Comparative Analysis of Deep Learning Regression Models
Regarding the LSTM, we verified that increasing the number of neurons up to 150 improved the network's performance. However, we verified an increase of more than 13% in RMSE (∆error ≈ 0.06 W/kg) when exceeding 150 neurons or when introducing another LSTM cell. The introduction of a fully connected layer after the LSTM cells did not improve the network's performance, resulting in an increase of 19% in RMSE (∆error ≈ 0.08 W/kg). The best model yields one LSTM cell with 150 neurons.
Considering the CNN, we verified that one convolutional layer was not enough to attain the best energy expenditure estimation, resulting in an RMSE 13% higher than the best architecture found with two convolutional layers (∆error ≈ 0.05 W/kg). Moreover, increasing the number of neurons of the best CNN's fully connected layer did not improve the power estimation (RMSE increased 21%, ∆error ≈ 0.08 W/kg). The best model yields two convolutional layers of 8 filters each and one fully connected layer with 10 neurons.
By comparing the two deep learning architectures, we verified that the CNN yields a more accurate estimation in all metrics assessed in this work. It presents an improvement of 44% in MSE (∆error ≈ −0.11 W/kg), 20% in RMSE (∆error ≈ −0.090 W/kg), and 18% in NMSE (∆error ≈ 0.12, with values closer to 1.0 indicating a better fit). Furthermore, the CNN achieved better generalization than the LSTM, since a lower standard deviation was obtained for all metrics. This neural network was reported in ref. [22] as suitable for energy expenditure estimation when compared with an MLP.
Detailed Analysis of the Best Model
This study demonstrates the suitability of the CNN to accurately estimate the subjects' steady-state energy expenditure based on inertial, EMG, and heart rate sensors. A deeper analysis of Figure 5 reveals that the CNN was able to capture relevant information in the inputs to increase its generalization to different subjects and walking conditions, as supported by the low standard deviation observed. By analyzing Figure 5A, our model proves accurate in distinguishing the basal from the walking energy expenditure, and it was sensitive to changes in gait speed and walking condition, since an SCC above 0.85 was achieved, indicating that both target and estimation share the same monotony. Figure 5B, which illustrates the Bland-Altman plots, also highlights the suitability of the proposed CNN. The errors between target and estimation were within the 95% confidence interval and the bias was small and close to 0. For the subject with the worst estimations, a positive and higher bias (0.48 W/kg) was expected, since an underestimation is visible in Figure 5A, especially during the walking condition. For the subject with medium estimations, the CNN slightly overestimates the target energy expenditure and, thus, a negative bias would be expected. However, this overestimation does not hold for all trials, which explains the negative bias that is nonetheless close to 0 (−6.5 × 10⁻² W/kg). Regarding the subject with the best estimation, the CNN achieved an almost perfect fit, explaining the low bias of 6 × 10⁻² W/kg. These results are supported by Figure 5C, where high linearity is observed (R² = 0.79). Therefore, the steady-state energy expenditure estimation with the CNN is comparable to that obtained with indirect calorimetry.
When estimating energy expenditure for a novel subject, the CNN proved to be reliable. Despite increments of 36% in MSE (∆error ≈ 0.05 W/kg) and 22% in RMSE (∆error ≈ 0.08 W/kg) when compared to the LOOCV algorithm, these values are within the range observed in Table 2 (0.14 ± 0.1 and 0.36 ± 0.13 W/kg for MSE and RMSE, respectively). The test subject presented a higher variation in the baseline energy expenditure in comparison with the other subjects (visible in Figure 6), which introduced an additional challenge in estimating energy expenditure. This may explain the higher error when compared to the LOOCV algorithm. Nevertheless, the SCC was high (0.87) and consistent with the LOOCV algorithm (0.87 ± 0.043), indicating that the inertial, EMG, and heart rate sensors provide enough information to accurately estimate the energy expenditure, regardless of gait speed, walking condition, and basal vs. walking activity. This was also highlighted in Figure 7, which illustrates the MSE dependency on gait speed and walking conditions. There seems to be no dependency on gait speed, given the similar values of MSE. Further, the MSE mean values were close to 0 (MSE ≤ 0.17 W/kg), supporting the conclusion that our tool is accurate in estimating energy expenditure. Yet, we verified a slightly better estimation when the subjects were assisted by the ankle-foot exoskeleton. This may be explained by the fact that gait is more controlled when subjects walk with the ankle-foot exoskeleton. Considering these results, the proposed tool proved to be versatile for energy expenditure assessment in multiple rehabilitation scenarios.
Related Work
Previous works have shown a good performance of AI algorithms in estimating human energy expenditure.
For instance, [19] presented an approach to estimate energy expenditure using IMUs, EMG, HR, and, among other physiological signals, the SpO2. The users performed numerous tasks without gait assistance, with a minimum gait speed of 0.6 m/s and a maximum of 2.7 m/s, which resulted in high energy expenditure variation (~10 W/kg). The authors used linear regression models, with the best ones presenting errors below 1.5 W/kg. Our work innovates by exploring deep learning regressors along with orthosis-based gait assistance at slow gait speeds, which entailed a lower energy expenditure variation (~4.1 W/kg) when compared to [19]. This lower variation of energy expenditure, combined with both assisted and non-assisted gait conditions, may pose an added challenge in the estimation process of our model. In refs. [20,21], the authors found high correlations between the target and the oxygen uptake estimated by a random forest algorithm and an MLP, respectively. Zhu et al. [22] assessed the effectiveness of a CNN, an MLP, and an activity-specific linear regression model, using inertial, HR, and anthropometric data (age, height, weight), and the basal metabolic rate. The authors found that the CNN yields the best estimation, resulting in an improvement of more than 30% when compared to the MLP and the linear regression model. However, these state-of-the-art approaches may require several minutes (at least 3) to compute the steady-state energy expenditure. Thus, they would entail a higher optimization time for applications with human-in-the-loop control. Our work innovates over the studies from refs. [20][21][22] by (i) estimating the steady-state energy expenditure, which is useful for rapid estimation and reliable use in human-in-the-loop control strategies; and (ii) benchmarking two different deep learning regressors, namely the CNN and LSTM, along assisted and non-assisted gait conditions while considering the slow gait speeds commonly observed in persons with gait disabilities.
A recent study estimated the steady-state energy expenditure considering ankle-assisted walking at 1.25 m/s [15]. The authors used GRF and EMG sensors as inputs to a linear regression model and an LSTM. They found the LSTM suitable to estimate the end-user's energy expenditure, with an average RMSE of 0.40 W/kg for ankle-assisted walking, yielding a better model when compared with linear regression (RMSE = 0.43 W/kg). The results were comparable to those obtained in our work, especially for the LSTM neural network. However, our results suggest that using a CNN to estimate energy expenditure may improve the estimation power, given the lower RMSE (0.36 W/kg). Additionally, our work innovates over the study in [15] by relying exclusively on wearable sensor data and exploring slow gait conditions.
Limitations and Future Insights
The main limitation of our work is the low number of subjects used to train the models. Although our trials are long, involving more than 360,000 samples in total to develop the regression models, our algorithm may be affected by subject variability. Even so, an equal gender distribution was guaranteed, which is important to have a more representative population. In the future, we aim to collect more data with different subjects to augment the estimation power. Future insights also address estimator-selection algorithms, i.e., selecting the minimal combination of sensors that promotes an accurate estimation and fulfills usability guidelines.
Lastly, we aim to collect data with persons who exhibit gait disabilities to assess the differences regarding energy expenditure. The use of transfer learning will endow our models with the ability to estimate the energy expenditure of non-healthy persons.
Conclusions
This work presents and validates a deep learning tool for steady-state energy expenditure estimation in both assisted and non-assisted gait conditions, considering the slow gait speeds typically observed in persons with motor disabilities. Our approach relies on deep learning to promote an accurate estimation of the subjects' energy expenditure using data collected by reliable, ergonomic, and clinically validated on-body sensors. From the cross-validation results, we verified the suitability of the CNN to accurately estimate energy expenditure in basal, assisted, and non-assisted walking, resulting in an improvement of 20% in RMSE when compared to the LSTM. Therefore, we propose a versatile tool for estimating the energy expenditure in multiple gait conditions (free human walking and robotics-based gait rehabilitation) and in optimization problems arising in human-in-the-loop control strategies.
9,462.6
2022-10-01T00:00:00.000
[ "Engineering", "Medicine", "Computer Science" ]
A deep-learning workflow to predict upper tract urothelial carcinoma protein-based subtypes from H&E slides supporting the prioritization of patients for molecular testing
Abstract
Upper tract urothelial carcinoma (UTUC) is a rare and aggressive, yet understudied, urothelial carcinoma (UC). The more frequent UC of the bladder comprises several molecular subtypes, associated with different targeted therapies and overlapping with protein-based subtypes. However, if and how these findings extend to UTUC remains unclear. Artificial intelligence-based approaches could help elucidate UTUC's biology and extend access to targeted treatments to a wider patient audience. Here, UTUC protein-based subtypes were identified, and a deep-learning (DL) workflow was developed to predict them directly from routine histopathological H&E slides. Protein-based subtypes in a retrospective cohort of 163 invasive tumors were assigned by hierarchical clustering of the immunohistochemical expression of three luminal (FOXA1, GATA3, and CK20) and three basal (CD44, CK5, and CK14) markers. Cluster analysis identified distinctive luminal (N = 80) and basal (N = 42) subtypes. The luminal subtype mostly included pushing, papillary tumors, whereas the basal subtype mostly included diffusely infiltrating, non-papillary tumors. DL model building relied on a transfer-learning approach by fine-tuning a pre-trained ResNet50. Classification performance was measured via three-fold repeated cross-validation. A mean area under the receiver operating characteristic curve of 0.83 (95% CI: 0.67-0.99), 0.8 (95% CI: 0.62-0.99), and 0.81 (95% CI: 0.65-0.96) was reached in the three repetitions. High-confidence DL-based predicted subtypes showed significant associations (p < 0.001) with morphological features, i.e. tumor type, histological subtypes, and infiltration type. Furthermore, a significant association was found with programmed cell death ligand 1 (PD-L1) combined positive score (p < 0.001) and FGFR3 mutational status (p = 0.002), with high-confidence basal predictions containing a higher proportion of PD-L1 positive samples and high-confidence luminal predictions a higher proportion of FGFR3-mutated samples. Testing of the DL model on an independent cohort highlighted the importance of accommodating histological subtypes. Taken together, our DL workflow can predict protein-based UTUC subtypes, associated with the presence of targetable alterations, directly from H&E slides.
Introduction
Urothelial carcinomas (UCs) are malignant epithelial neoplasms arising from the urothelial lining of the urinary tract [1]. The rare upper tract UC (UTUC) represents 5-10% of all UCs, whereas the remaining 90-95% are urothelial bladder cancer (UBC). UTUC is frequently associated with poor prognosis, with two-thirds of patients being diagnosed at an invasive tumor stage [2]. Owing to the histopathological similarity between UTUC and UBC [3,4], and the preponderance of the latter, UTUC is an understudied disease. However, a better understanding of UTUC biology could allow the identification of distinctive molecular traits with potential strong impact on patient stratification and treatment [3][4][5].
Assessment of molecular subtypes via high-throughput sequencing is neither ubiquitously available nor cost-effective. Thus, alternative subtyping approaches should be considered. In MIBC, studies showed a substantial overlap between molecular and protein-based subtypes [18,19], identifiable via more widespread, routinely applicable immunohistochemistry (IHC) analyses. Moreover, AI-based approaches have recently emerged as novel research tools able to provide automated and accurate pathological diagnoses, leveraging the information residing in whole slide images (WSIs) [20]. Here, we identify UTUC protein-based subtypes via a set of markers able to well characterize the luminal/basal differentiation of the urothelium in both the upper and lower urinary tract. These protein markers have been used in previous studies to stratify UTUC and UBC patients [21][22][23][24] and have been shown to correlate well with RNA expression [18,19]. In addition, we propose a deep-learning (DL) approach that can predict the identified protein-based subtypes relying only on digitized hematoxylin and eosin (H&E) slides.
Patient cohorts
The 'German cohort' served as training cohort. It comprised formalin-fixed paraffin-embedded (FFPE) material of N = 163 retrospectively analyzed patients diagnosed with UTUC between 1995 and 2012 at University Hospital Erlangen-Nürnberg (Erlangen, Germany) and University Hospital Gießen and Marburg (Marburg, Germany). The 'Dutch cohort' served as independent test cohort. It comprised N = 55 samples diagnosed with UTUC between 2017 and 2020 as part of a multicenter, phase II, prospective trial conducted at University Medical Center Rotterdam (Rotterdam, The Netherlands) [25]. All patients underwent radical nephroureterectomy or partial ureterectomy, without any treatment before surgical tissue collection. All samples were invasive (i.e., with tumor stage ≥ pT1) and, for both cohorts, one WSI per patient was used, namely the one showing the most representative invasive
Tissue microarray analysis
H&E staining and tissue microarray (TMA) analysis were performed for the two cohorts in the respective pathology centers. For each patient, a total of four representative tissue cores (1 mm in diameter), two covering the tumor centrum and two covering the invasion front, were punched from the associated paraffin block and transferred to distinct recipient blocks using the TMA Grand Master (3DHistech, Budapest, Hungary).
IHC analysis
All IHC analyses were performed at the Institute of Pathology, University Hospital Erlangen (Erlangen, Germany) on a Ventana BenchMark Ultra (Ventana, Tucson, AZ, USA) autostainer accredited by the German Accreditation Office (DAkkS) according to DIN EN ISO/IEC 17020. For protein-based subtyping, we relied on a set of three basal (i.e., CK14, CK5, and CD44) and three luminal (i.e., CK20, FOXA1, and GATA3) markers that characterize the luminal/basal differentiation of the urothelium in both the upper and lower urinary tract. More specifically, our marker choice was based on the following considerations:
1. The urothelium of both the upper and lower urinary tract can be stratified into three major epithelial cell layers based on their localization and cell type [27]:
• the basal layer sits on the basement membrane. It is a proliferative cell layer containing stem cells and expressing the basal cytokeratins (CK) 5/6 and CK14, as well as the hyaluronic acid receptor (CD44);
• the intermediate layer contains moderately differentiated cells with variable expression of CD44, reduced expression of CK5/6, and high expression of CK18;
• the superficial layer contains the so-called umbrella cells, which are fully differentiated and express uroplakin proteins as well as CK20.
2. Starting from the differentiation of normal urothelium, urothelial neoplasms develop via two distinct oncogenic pathways [27,28]:
• the luminal pathway is driven by the main transcription factors (TFs) GATA3, FOXA1, and PPARG. Cancer cells express markers characteristic of the superficial cell layer;
• the basal pathway is driven by the TFs p63, STAT2, and EGFR. The resulting cancer cells express markers characteristic of the basal layer.
PD-L1 expression on immune and tumor cells was assessed on TMAs using the PD-L1 assay (clone SP263, Ventana) as previously described [30]. Quantification was performed by a pathologist (VB) using both the immune cells (IC) score and the combined positive score (CPS). The IC score was calculated as the percentage of the area occupied by PD-L1-positive IC relative to the total tumor area, whereas the CPS was calculated as the number of immune and tumor cells positive for PD-L1 out of the total number of tumor cells. Only samples with an IC score ≥5% or a CPS ≥10 were considered positive for PD-L1 [31,32].
DNA isolation and FGFR3 SNaPshot analysis
Tumor DNA was isolated using the Maxwell 16 LEV Blood DNA Kit (Promega, Mannheim, Germany) according to the manufacturer's instructions, as previously described [33]. FGFR3 mutational analysis was performed using the SNaPshot method, which simultaneously detects nine hot-spot mutations, as previously described [34].
Clustering-based protein-based subtype identification and statistical analyses
Hierarchical clustering and statistical analyses were performed within the R environment v.4.0.3 [35]. For protein-based subtyping, the expression of each luminal/basal marker in each patient was taken equal to the median H-score across the four TMA cores. Unsupervised hierarchical clustering was then performed on the standardized marker expression. Association between categorical variables was assessed using Fisher's exact test. To compare the distribution of continuous variables, the Wilcoxon rank-sum test for independent samples (two groups), the Kruskal-Wallis test (more than two groups), or the one-tailed Wilcoxon signed-rank test (paired samples) were used. Analyses of overall survival (OS) and disease-specific survival (DSS) were performed using the Kaplan-Meier estimator, and statistical differences were assessed through the log-rank test. p values <0.05 were considered statistically significant. Further details are provided in Supplementary materials and methods.
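As an illustration of the subtyping step described above (median H-score per marker across the four TMA cores, standardization, then unsupervised hierarchical clustering), the following Python sketch reproduces the general procedure. The linkage method and distance metric are not stated in the text and are assumptions; the original analysis was performed in R.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# h_scores: array of shape (n_patients, n_markers, n_cores) with IHC H-scores,
# markers ordered, e.g., as [CD44, CK5, CK14, CK20, FOXA1, GATA3]
def protein_subtypes(h_scores, n_clusters=3, method="ward"):
    # median H-score across the four TMA cores, per patient and marker
    per_patient = np.median(h_scores, axis=2)

    # standardize each marker across patients (z-score)
    standardized = zscore(per_patient, axis=0)

    # unsupervised hierarchical clustering of patients
    # (linkage method and metric are assumptions)
    tree = linkage(standardized, method=method)
    return fcluster(tree, t=n_clusters, criterion="maxclust")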
WSI annotation and preprocessing
Slides belonging to the two cohorts were digitized in the respective pathology centers using a Panoramic P250 scanner (3DHistech) at different resolution levels. For each WSI, tumor tissue was manually annotated in QuPath [36] (v.0.2.3) by a trained observer (MA) under expert supervision (VB) (see supplementary material, Figure S1). An automated Python-based pipeline (https://github.com/MiriamAng/TilGenPro) was implemented to tessellate the identified tumor areas into non-overlapping tiles of 512 × 512 pixel edge length, and to perform quality filtering and stain normalization (see supplementary material, Figure S2). Further details are provided in Supplementary materials and methods.
DL algorithm and its validation
Our DL framework relied on a transfer-learning approach by fine-tuning a ResNet50 [37] initialized with weights pre-trained on the ImageNet database [38]. A repeated three-fold cross-validation was used to estimate the model's generalization accuracy and error. Here, to ensure independence between training and validation folds, random splitting was performed at the patient level (see supplementary material, Figure S3). To account for class imbalance, the number of tiles belonging to each class within the training set was equalized. The DL model's predictions for single tiles of the validation set were averaged to obtain a WSI-level subtype prediction. For each repetition, the area under the receiver operating characteristic (AUROC) curve, accuracy, precision, recall, and F1-score were assessed as mean and 95% confidence interval (CI) across the three validation folds, relying on Student's t-distribution. Confusion matrices for a given repetition were obtained by concatenating the predictions for the three validation folds. The predicted class in the independent test cohort was taken equal to the class with the highest average prediction value across the three models from the best-performing repetition. Further details are provided in Supplementary materials and methods.
Hierarchical clustering of protein marker expression identifies luminal, basal, and indifferent UTUC subtypes
To characterize protein-based UTUC subtypes, the expression of three basal (CK5, CK14, and CD44) and three luminal (CK20, FOXA1, and GATA3) differentiation markers of the urothelium was assessed in a cohort of 163 invasive samples, referred to as the 'German cohort'. Hierarchical clustering of protein marker expression identified a 'luminal' cluster (80 samples, 49.1%), a 'basal' cluster (42 samples, 25.8%), and an 'indifferent' cluster (41 samples, 25.1%) with low expression of both basal and luminal markers (Figure 1A). Only 2 of the 41 indifferent samples had marker expression equal to zero, while for the others weak expression could be detected. The basal samples exhibited shorter OS and DSS than the luminal and indifferent samples (Figure 1B). Tumor stages differed across the three subtypes (p = 0.01), with almost half of the pT4 samples in the basal group (see supplementary material, Table S1). Both infiltration (p = 0.02) and tumor type (p = 0.02) differed between basal and indifferent cases (Figure 1C,D). In contrast, no significant differences were found when comparing the luminal and indifferent subtypes. Indeed, basal samples showed a clear prevalence of diffusely infiltrative and non-papillary tumors, whereas the other two subtypes showed a higher proportion of pushing and papillary tumors.
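As a concrete illustration of the transfer-learning and WSI-level aggregation steps described in the DL methods above, a minimal PyTorch sketch could look as follows. The replaced classification head, the use of softmax scores, and any training detail not stated in the text are assumptions (torchvision ≥ 0.13 API).

import torch
import torch.nn as nn
from torchvision import models

def build_model(n_classes=2):
    # ResNet50 pre-trained on ImageNet, with a new classification head for fine-tuning
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net

@torch.no_grad()
def wsi_prediction(model, tile_batches):
    # average tile-level softmax scores to obtain a WSI-level prediction
    model.eval()
    scores = []
    for tiles in tile_batches:              # tiles: (B, 3, H, W) stain-normalized tensors
        scores.append(torch.softmax(model(tiles), dim=1))
    return torch.cat(scores).mean(dim=0)    # p_WSI for each class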
Collectively, hierarchical clustering based on the IHC expression of six differentiation markers of the urothelium identified three distinctive protein-based UTUC subtypes, with high histopathological similarity between the indifferent and the luminal subtype.
DL successfully predicts luminal and basal protein-based subtypes from H&E slides and identifies candidate heterogeneous slides
We hypothesized that a DL model could predict the identified protein-based subtypes using only the information contained in the digitized H&E slides. With this aim, slides were annotated in QuPath (see supplementary material, Figure S1) and preprocessed via an automated Python-based pipeline (https://github.com/MiriamAng/TilGenPro; see supplementary material, Figure S2). A DL model was then learned on the 163 German samples in a weakly supervised way. Notably, each tile inherited as true class label the protein-based subtype (i.e., luminal, basal, or indifferent) assigned to the parent slide by hierarchical clustering of the expression of the chosen markers. A total of 100,178 luminal, 66,770 basal, and 57,874 indifferent tiles were used to train and validate a DL-based classifier in a three-time repeated three-fold cross-validation setting (see supplementary material, Figure S3). While most basal (AUROC = 0.77; 95% CI: 0.67-0.86; repetition three) and luminal (AUROC = 0.71; 95% CI: 0.44-0.99; repetition two) samples were correctly predicted, more than 55% of samples labeled as indifferent on the basis of protein expression were predicted luminal by our DL model on the basis of digitized H&E slides alone (see supplementary material, Figure S4 and Table S2). This difficulty of the DL model in predicting the indifferent subtype was consistent with the observed histomorphological similarity with the luminal subtype.
Therefore, we decided to train a new DL model focusing on those samples assigned, on the basis of protein expression, to the most distinctive luminal and basal subtypes (Figure 2A). Again, we relied on a repeated cross-validation setting. Our classifier achieved a mean AUROC of at least 0.8 (AUROC = 0.83; 95% CI: 0.67-0.99; repetition one) and a mean accuracy of ≥0.75 (mean accuracy = 0.79; 95% CI: 0.75-0.84; repetition two) (see Figure 2B and supplementary material, Table S3). For further analyses, we focused on the results of repetition 2, as it showed the best accuracy and the most consistent performance metrics across the three hold-out folds (see supplementary material, Table S3). First, the so-called 'high-confidence' slides were identified, i.e. those slides whose prediction score for the luminal/basal class, given as output by the model, was at least 0.7 (see supplementary material, Table S4). These slides were selected exclusively on the basis of the DL model prediction scores, irrespective of their protein-based subtype assigned via clustering of protein marker expression. The true positive rates in the high-confidence luminal and basal slides were respectively 86.2% (50/58) and 87.5% (14/16). Visual inspection of tile-level prediction maps of the top high-confidence slides confirmed the pathological description of these subtypes at the histopathological level, i.e.
dense nuclei with small stroma bridges as the distinctive feature of the luminal subtype (Figure 3A) and dense stroma and keratinization for the basal subtype (Figure 3B) [1]. In the top-scoring luminal slide, 99.9% of tiles were predicted luminal. In the top basal slide, 90% of tiles were predicted basal. These predictions were confirmed by whole slide level staining with the six luminal and basal markers (see Figure 3A,B and supplementary material, Figure S5A,B). Taken together, these results show that the DL model was able to successfully predict the most distinctive luminal/basal protein-based subtypes on the basis of digitized H&E slides alone.
Next, we focused on the 22 'low-confidence' slides, i.e. those slides whose prediction score for the luminal/basal class, given as output by the model, was between 0.4 and 0.6 (see supplementary material, Table S5). As for the high-confidence slides, the low-confidence slides were chosen exclusively on the basis of the DL model prediction scores, irrespective of their protein-based subtype. These slides showed no significant difference in the distribution of luminal and basal marker expression (p = 0.43). Tile-level prediction maps allowed categorization of these slides into 'heterogeneous slides', with distinguishable clusters of predicted luminal and basal tiles, and slides without any visible luminal/basal structured patterns. Visual inspection of a selected candidate heterogeneous slide supported the predictions, with basal and luminal tiles showing the characteristic histopathological features (Figure 3C). Furthermore, whole slide IHC validation showed positivity of the entire tumor area for the three luminal markers and the basal marker CK14 (see supplementary material, Figure S5C). Notably, in the luminal-predicted area, the CK14 basal marker appeared expressed only in the outer cellular layer, whereas in the basal-predicted area it appeared in all cell layers (Figure 3C). Taken together, the DL model was able to identify candidate heterogeneous slides showing co-presence of luminal and basal areas.
High-confidence predicted slides show the expected histopathological features and are significantly associated with PD-L1 and FGFR3 status
To further characterize the high-confidence predictions, we also examined their marker expression and morphological characteristics. High-confidence predicted luminal and basal slides showed higher expression of luminal (p = 6.62 × 10⁻⁹) or basal markers (p = 0.00241), respectively (Figure 4A). High-confidence luminal predictions were mainly characterized by papillary tumors, with not otherwise specified (NOS) histological subtype, and with pushing infiltration type. Instead, high-confidence basal predictions were mainly non-papillary tumors, either squamous or with subtype histology, and with diffuse infiltration (Figure 4B). Of note, five of the eight wrongly predicted luminal slides were samples characterized by papillary growth of the tumor with NOS histology. Instead, both wrongly predicted basal slides were diffusely infiltrating, non-papillary tumors with subtype histology. These results show that high-confidence predictions had morphologic features consistent with the DL-predicted subtype.
Next, we investigated the association of high-confidence slides with clinically relevant biomarkers (Figure 4C). The proportion of PD-L1-positive samples was higher in high-confidence predicted basal slides, both using the IC score (p = 0.01) and the CPS (p < 0.001). Vice versa, the proportion of FGFR3-mutated samples was higher in high-confidence predicted luminal slides (p = 0.002). Interestingly, one wrongly predicted basal slide was actually PD-L1 positive, whereas four wrongly predicted luminal slides were FGFR3 mutated.
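A small sketch of how the high- and low-confidence groups described above could be derived from the WSI-level prediction scores is given below; the thresholds (≥0.7 for high confidence, 0.4-0.6 for low confidence) come from the text, while the data structure and the handling of intermediate scores are assumptions.

def confidence_group(p_wsi_luminal, p_wsi_basal):
    # bucket a slide by its WSI-level prediction score (thresholds from the text)
    top = max(p_wsi_luminal, p_wsi_basal)
    if top >= 0.7:
        return "high-confidence"
    if 0.4 <= top <= 0.6:
        return "low-confidence"   # candidate heterogeneous or unstructured slides
    return "intermediate"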
External validation of the DL model highlights the importance of subtype histology
To test the generalization ability of the DL model, an external cohort of 55 invasive UTUC patients, referred to as the 'Dutch cohort', was used. Hierarchical clustering identified luminal (31 samples, 56.3%), basal (4 samples, 7.3%), and indifferent (20 samples, 36.4%) subtypes in this cohort as well, with expression profiles matching those in the German cohort (see supplementary material, Figure S6A). WSI-level predictions on the Dutch cohort were obtained employing an ensemble of the three DL models trained on repetition 2. The DL model correctly classified all luminal samples, with an average prediction score of 0.89, but not the four basal samples (see supplementary material, Figure S6B). Three of these samples showed histological subtypes with glandular, squamous, and sarcomatoid features; one sample was instead predominantly characterized by papillary growth of the tumor with NOS histology. However, in this fourth sample, basal features could be observed at the invasion front, where two out of four TMA cores were punched. Interestingly, the tile-level prediction map highlighted a small tumor area predicted basal in correspondence to the invasion front, whereas the remaining papillary area was mainly predicted luminal (see supplementary material, Figure S6C). Thus, although a satisfying performance of the DL model was reached in the prediction of the luminal samples, the presence of histological subtypes might have made prediction of the basal samples difficult.
Discussion
We identified three protein-based UTUC subtypes via a set of markers that are able to characterize the luminal/basal differentiation of the urothelium in both the upper and lower urinary tract. The three subtypes were identified in a completely unsupervised way, using hierarchical clustering of the protein marker expression. The samples belonging to the indifferent subtype had very low protein expression of both luminal and basal markers, thus clearly differing from both luminal and basal samples. However, our characterization in terms of infiltration type and tumor type highlighted the histopathological similarity between the indifferent and luminal subtypes. This similarity was reflected in the performance of a three-class DL model, which, utilizing only the digitized H&E slides, predicted as luminal a large number of samples assigned to the indifferent protein-based subtype. Future studies with molecular data might help to elucidate the molecular-level differences between the indifferent and the luminal protein-based subtypes. Instead, a two-class DL model trained on only the samples assigned to the luminal and basal protein-based subtypes could predict with high accuracy these most distinctive luminal/basal subtypes relying only on digitized H&E slides. Furthermore, the DL model identified candidate heterogeneous slides for which whole slide IHC validation confirmed the co-presence of luminal and basal areas.
At the histopathology level, invasive UCs present different morphological appearances [1]. As previously observed, MIBC histological subtypes are strong indicators of mRNA-/protein-based subtypes [18]. Notably, the high-confidence predictions by our model, even including those slides where the DL-predicted subtype did not match the protein-based subtype, showed morphological features consistent with the DL-predicted subtype. For example, we saw that five out of the eight WSIs labeled 'basal' on the basis of the protein expression, but predicted luminal by our DL model, showed a NOS histology, which is characteristic of the luminal subtype. Moreover, high-confidence predicted slides were significantly associated with FGFR3 mutation and PD-L1 expression. Indeed, basal predictions contained a higher proportion of PD-L1-positive samples and luminal predictions a higher proportion of FGFR3-mutated samples. Because of the implementation of anti-PD-L1 and PD-1 therapies, and specific FGFR3 inhibitors [26,39] in UCs, histopathology laboratories have been facing increased requests for PD-L1 assessment and FGFR3 alteration testing. Our DL model predictions, based exclusively on the information contained in digitized H&E slides, could thus offer valuable support to pathologists for the prioritization of UTUC patients who should undergo FGFR3/PD-L1 testing. This would also contribute to extending patient access to targeted therapies and improve their management and care in clinical practice. Yet, a fully digital diagnostics workflow would be required to implement such prioritization support in daily practice. In addition, testing on even larger UTUC cohorts would be needed.
Several challenges were encountered during our study. Hierarchical clustering in the independent Dutch cohort highlighted the same biological tendency observed in the German cohort, which is even more remarkable considering the prospective nature of the former. This strongly supported the existence of three distinct UTUC protein-based subtypes. However, the cluster membership of single samples might vary with varying samples being clustered. This uncertainty in the training sample labels might negatively affect the DL model performance. An additional level of uncertainty in training labels was due to the assessment of marker expression in selected TMA cores. It is common practice to stain several tumors at once, while also taking into account tumor heterogeneity. Yet, expression in TMA cores might not be representative of whole slide level expression, as clearly seen for the predominantly papillary case in the Dutch cohort with a basal-like morphology at the invasion front. RNA-sequencing analyses might offer more robust subtype assignment, thanks to genome-wide profiling, yet might still fail to correctly characterize heterogeneous samples. Another challenge was linked to histological subtypes. Given the rarity of UTUC, we had decided not to exclude them, as had been done in previous UBC studies [19]. However, as the results on the Dutch cohort showed, this might have impaired model performance. Histological subtypes have gained increasing importance, given their impact on pathological and clinical outcomes [40]. Thus, it would be very important to develop machine learning approaches able to accommodate the prediction of histological subtypes from H&E slides. This might be achievable with the future availability of an even more extensive UTUC cohort, with a sufficient number of samples for all
histological subtypes to ensure robust training of a DL model.
Furthermore, in the future it would be very interesting to investigate the extension of our DL framework to biopsy samples. Here, challenges will be related to whether an intrinsically small and superficial biopsy sample provides enough information to predict subtypes and ultimately offers support in deciding on the best treatment strategy. Finally, additional steps could be integrated into the workflow to facilitate the use of the developed DL model in a fully digital pathology laboratory. First, an upstream tumor segmentation step could be implemented to avoid the manual annotation of new samples. In addition, it would be useful to develop a downstream post-processing tool for the automatic detection of candidate heterogeneous slides based on luminal/basal pattern analysis of tile-level prediction maps.
Collectively, our results show that the most distinctive protein-based UTUC subtypes can be predicted directly from H&E slides and are associated with the presence of targetable alterations. Thus, our study lays the foundation for an AI-based tool to support UTUC diagnosis and extend patient access to targeted treatments.
Figure 1. Protein-based UTUC subtypes and their histopathological characterization. (A) Heatmap visualization of the hierarchical clustering analysis performed on the expression of the three basal (CD44, CK5, and CK14) and three luminal (FOXA1, GATA3, and CK20) markers in our UTUC German cohort (N = 163 invasive samples). Heatmap colors represent marker expression (quantified via the standardized H-score, i.e.
in terms of standard deviation differences with respect to the average H-score of the marker across all samples; white: equal to the average expression; red: higher than the average expression; blue: lower than the average expression). The color ribbon at the top of the heatmap indicates the three protein-based subtypes: luminal (red), indifferent (green), basal (blue). (B) Kaplan-Meier overall survival (top) and disease-specific survival (bottom) curves for the identified subtypes. p values from log-rank test. (C and D) Characterization of the identified subtypes in terms of infiltration type (C) and tumor type (D). Representative histopathology images are shown on the left and bar plot distributions on the right. The p values shown above the bar plots refer to two different Fisher's exact tests: one test to compare the distribution of histopathology features in basal versus indifferent subtypes (left) and one test to compare the distribution of histopathology features in luminal versus indifferent subtypes (right). The histopathology images are framed in the respective color used in the bar plot to the right (p < 0.05 in italic bold).
Figure 2. DL-based prediction of luminal and basal protein-based subtypes. (A) Steps of the DL framework: (1) Starting from 80 luminal and 42 basal WSIs, a library made up of 100,178 luminal (red) and 66,770 basal (blue) stain-normalized tiles is generated using an automated, custom-developed, pre-processing pipeline. (2) The tiles library is used for training the network using three-fold cross-validation (CV) (gray: training folds, yellow: validation fold). Tiles of the training set are balanced between the two classes. The CV is repeated three times. (3) For each training/validation set combination, a DL model is trained using a transfer-learning approach. (4) For each tile, the model outputs a prediction value for the luminal (p_luminal) and for the basal (p_basal) class. WSI-level predictions for the luminal (p_WSI_luminal) and basal (p_WSI_basal) subtypes are calculated by averaging the tile-level predictions for each class. The subtype associated with the highest prediction is assigned to the entire slide. In the schematization, color intensity is proportional to the prediction score. (B) AUROC for the three repetitions (with the basal subtype as the positive class). The mean AUROC ± standard deviation across folds is reported for each repetition. AUROC, area under the receiver operating characteristic; N_B, number of basal slides; N_L, number of luminal slides.
Figure 3. Whole-slide IHC validation of deep-learning predictions. (A) Slide predicted with the highest p_WSI_luminal, (B) slide predicted with the highest p_WSI_basal, and (C) candidate heterogeneous slide. For all three slides, from left to right: digitized whole slide with annotated tumor areas (red); tile-level prediction map (red: luminal; blue: basal; intensity dependent on prediction score); selected areas; and corresponding areas of the whole-slide IHC validation using CK20 as a representative luminal marker and CK14 as a representative basal marker. Whole-slide IHC validation with all six luminal/basal markers is provided in supplementary material, Figure S5.
Figure 4. Characterization of the high-confidence predicted luminal and basal slides. (A) Boxplot distributions of the mean expression values for the basal (CK14, CK5, and CD44) and luminal (CK20, FOXA1, and GATA3) markers in the high-confidence predicted luminal (left) and basal (right) slides. Shape represents the true slide label (triangle: basal; square: luminal) and color represents the prediction accuracy (gray: correct predictions; violet: incorrect predictions). p values from one-tailed Wilcoxon signed-rank test. (B) Characterization of the high-confidence predicted slides in terms of tumor type (left), histological subtypes (middle), and infiltration type (right). An overview of the morphological features for the wrongly predicted slides is provided through the colored symbols above the bars. n: number of samples; p values from two-tailed Fisher's exact test. (C) Characterization of the high-confidence predicted slides in terms of clinically relevant biomarkers: PD-L1 status (measured as CPS = combined positive score; IC score = immune cells score) and FGFR3 mutational status. An overview of PD-L1 and FGFR3 status of the wrongly predicted slides is provided through the colored symbols above the bars. n: number of samples; p values from one-tailed Fisher's exact test.

Table 1. Clinicopathological variables characterizing the German and the Dutch cohorts
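Figure 1A is built from marker H-scores standardized per marker (expressed in standard-deviation units around each marker's mean) and then hierarchically clustered. A minimal sketch of that kind of preprocessing and clustering is shown below; the synthetic data, linkage choice, and number of clusters are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

markers = ["CD44", "CK5", "CK14", "FOXA1", "GATA3", "CK20"]
rng = np.random.default_rng(1)
# Hypothetical H-scores (0-300 scale) for 163 samples, for illustration only
hscores = pd.DataFrame(rng.uniform(0, 300, size=(163, len(markers))), columns=markers)

# Standardize each marker: deviation from its mean in standard-deviation units
z = (hscores - hscores.mean()) / hscores.std(ddof=0)

# Hierarchical clustering of samples on the standardized scores
Z = linkage(z.values, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")   # e.g. three groups, as in Figure 1A
print(pd.Series(labels).value_counts())
```

With real data, the resulting sample ordering and the standardized scores are exactly what a heatmap such as Figure 1A displays, with the cluster labels shown as the color ribbon.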
6,892.2
2024-03-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Alternating direction method of multiplier for the unilateral contact problem with an automatic penalty parameter selection

We propose an alternating direction method of multipliers (ADMM) for the unilateral (frictionless) contact problem with an optimal parameter selection. We first introduce an auxiliary unknown to separate the linear elasticity subproblem from the unilateral contact condition. Then an alternating direction method is applied to the corresponding augmented Lagrangian. By eliminating the primal and auxiliary unknowns at the discrete level, we derive a pure dual algorithm, which is the starting point for the convergence analysis and the optimal parameter approximation. Numerical experiments are presented to illustrate the efficiency of the proposed (optimal) penalty parameter selection method.

Introduction

The unilateral (frictionless) contact problem is a very common problem in engineering and poses serious challenges. It differs from classical linear elasticity only by the presence of a linear constraint (the non-penetration condition). Various numerical methods have been developed; see, e.g., [8,6,7,9,13,14] and references therein. The method proposed in [9] is an alternating direction method of multipliers (ADMM) and is related to augmented Lagrangian operator-splitting methods [2,3]. The main idea is to separate the differentiable part of the problem (i.e. linear elasticity) from the non-differentiable part (i.e. the non-penetration condition) by introducing an auxiliary unknown. Applying an alternating direction method to the corresponding augmented Lagrangian leads to a simple two-step iterative method in which the matrix of the linear elasticity problem is constant. This property makes the ADMM algorithm outperform (see [9]) another optimization-based method, the popular active-set semi-smooth Newton method [6,7,13].

The alternating direction method of multipliers is a powerful operator-splitting algorithm for solving structured optimization problems encountered in a wide range of areas such as compressed sensing [15], image restoration [12], machine learning [1], etc. The method was initiated by Glowinski et al. [4,2,3], who systematically developed ADMM algorithms (also known as Uzawa block relaxation methods) for solving nonlinear partial differential equations. But ADMM-based algorithms have a severe drawback: the choice of the penalty parameter.

The model problem and ADMM algorithm

We consider an elastic body occupying in its initial (undeformed) configuration a bounded domain Ω of R^d (d = 2, 3) with a boundary Γ = Γ_D ∪ Γ_C. We assume that the elastic body is fixed along Γ_D with meas(Γ_D) > 0. Γ_C denotes a portion of Γ which is a candidate contact surface between Ω and a rigid foundation. The normalized gap between Γ_C and the rigid foundation is denoted by g. In this paper, we adopt the small strains hypothesis, so that the strain tensor is E(u) = (∇u + ∇u^t)/2, where u = (u_1(x), . . ., u_d(x)) is the displacement field. Hooke's law is assumed, i.e.
the stress tensor is linked to the displacement through the linear relation where C = (C ijkl ) is the (fourth order) elastic moduli tensor, assumed to be symmetric positive definite.Let n be the outward unit normal to Ω on Γ.We consider the normal component of the displacement field and the stress tensor given by The unilateral contact problem consists, for a given volume force f , of finding the displacement field u satisfying (i) the equilibrium equations (ii) and the contact (i.e.non-penetration) conditions Let us introduce the functions space •) be the symmetric, continuous and coercive bilinear form which corresponds to the virtual work in the elastic body We denote by f (•) the linear form of external forces Using the potential energy functional we can formulate the unilateral contact problem as the constrained minimization problem To achieve a solution of (2.4) by an ADMM algorithm, we need additional steps.Following Glowinski and Le Tallec [3], we introduce the set and its characteristic functional (2.5) With (2.5)-(2.6)we associate the augmented Lagrangian functional where r > 0 is the penalty parameter.ADMM algorithm for (2.4) is based on (2.7).The method performs successive minimizations in u and p followed by the multiplier λ update as follows ) (2.10) Minimization (2.8) leads to an equilibrium problem which always has a unique solution even though the original problem allows rigid body motions.Minimization (2.9) is solved explicitly.The corresponding ADMM algoririthm is presented in Algorithm 1 (see [9] for detailed derivation).We iterate until the relative error on u k and p k becomes "sufficiently" small. Algorithm 1 ADMM algorithm for the unilateral contact problem Initialization.φ 0 and λ 0 are given. Finite dimensional algorithms We assume that Ω ⊂ R d is a polyhedral domain and therefore can be exactly triangulated.We consider a (continuous piecewise linear) finite element triangulation T h of Ω consistent with the decomposition of its boundary Γ into Γ D and Γ C .If n is the number of nodes of the triangulation, the dimension of the finite element subsapce V h ⊂ V is dimV h = dn.Let x j (j = 1, . . ., m) be a contact node (i.e.x j lies on Γ C ).The displacement vector and the unit outward normal at x j are denoted by u j and n j , respectively.We introduce the linear mappings N : R dn → R m , such that Nu is the vector of the normal components of u at contact nodes, that is, (Nu) j = u j n j , j = 1, . . ., m. The finite element discretization leads to the following matrices and vectors • A, (dn) × (dn) stiffness matrix (symmetric positive definite), i.e. from the bilinear form a(u, v); • M normal mass matrices (m × m symmetric positive definite); • f ∈ R dn the (discrete) external forces; • g ∈ R m the (discrete) normalized gap; • p ∈ R m the (discrete) auxiliary unknown; • λ ∈ R m the (discrete) lagrange multiplier. With the notations above, the augmented Lagrangian (2.7) now reads in the discrete setting where Discrete ADMM algorithm The derivation of the discrete version of Algorithm 1 from the augmented Lagrangian function (3.1) is straigthforward, Algorithm 2. Algorithm 2 Algebraic ADMM algorithm for the unilateral contact problem Initialization k = 0 r > 0, p 0 and λ 0 are given. Step 1 Compute u k+1 ∈ V h such that Step 2 Compute p k+1 ∈ P h Step 3 Update Lagrange multiplier ADMM algorithm without the auxiliary unknown Algorithm 2 can be simplified by eliminating the auxiliary unknown p. 
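Before eliminating p, it is worth writing out the three steps of Algorithm 2 explicitly. The following NumPy sketch implements them (elasticity solve with the constant matrix A + rNᵀMN, explicit projection for the auxiliary unknown, multiplier update). It is a minimal illustration under stated assumptions, a diagonal normal mass matrix and one particular scaling of the multiplier update, and not the authors' MATLAB implementation.

```python
import numpy as np

def admm_contact(A, N, M, f, g, r, tol=1e-8, max_iter=500):
    """Sketch of the discrete ADMM iteration for frictionless unilateral contact.

    A : (dn, dn) SPD stiffness matrix, N : (m, dn) normal-trace map,
    M : (m, m) normal mass matrix (assumed diagonal, so the p-step is a
    componentwise projection), f : load vector, g : gap vector, r : penalty.
    The multiplier scaling follows the generic ADMM pattern; the paper's exact
    update formula may differ.
    """
    dn, m = A.shape[0], g.size
    u, p, lam = np.zeros(dn), np.zeros(m), np.zeros(m)
    A_r = A + r * N.T @ M @ N                        # constant matrix, assembled once
    for k in range(max_iter):
        rhs = f + N.T @ M @ (r * (p + g) - lam)      # Step 1: elasticity solve
        u_new = np.linalg.solve(A_r, rhs)
        p_new = np.minimum(0.0, N @ u_new - g + lam / r)   # Step 2: projection onto p <= 0
        lam = lam + r * (N @ u_new - g - p_new)      # Step 3: multiplier update
        converged = (np.linalg.norm(u_new - u) <= tol * max(1.0, np.linalg.norm(u_new))
                     and np.linalg.norm(p_new - p) <= tol * max(1.0, np.linalg.norm(p_new)))
        u, p = u_new, p_new
        if converged:
            break
    return u, p, lam
```

In practice the constant matrix A_r would be factorized once (for instance by a Cholesky decomposition) and reused at every iteration, which is precisely the property that makes each step cheap. The derivation that follows eliminates the auxiliary unknown p from exactly this iteration.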
Indeed, using the update formula (3.3) we have , the right-hand side of (3.4) becomes e., we have eliminated the auxiliary unknown p from the right-hand side of (3.2).It remains to remove p from the multiplier update formula.From (3.3) we deduce that (3.5) Substituting (3.5) into the Lagrange multiplier update formula, we obtain the new multiplier update formula λ k+1 Gathering the results above, the ADMM algorithm without the auxiliary unknown p is described in Algorithm 3. In Algorithm 3, λ 0 + and λ 0 − must be consistent.Indeed, if, e.g., u 0 = 0, then we can set λ 0 + = r(Nu 0 − g) Algorithm 3 ADMM without auxiliary unknown for the unilateral contact problem Initialization k = 0 r > 0, λ 0 + and λ 0 − are given. Step 1 Compute u k+1 such that Step 2 Update the Lagrange multipliers Pure dual version The pure dual version of the ADMM algorithm for the unilateral contact problem is obtained by eliminating the displacements vector u.From (3.6) we have where we have set Substituting u k+1 into the Lagrange multiplier update formulas, we obtain Algorithm 4. Algorithm 4 is unpraticable since in A −1 r can be a full matrix of large size.However, the recursive relation (3.7)-(3.8) is important because it gives an easy way to prove the convergence of the ADMM algorithm.Indeed, Algorithm 4 is useful as a theoretical basis for the study of the influence of r. Convergence Since min(0, x) = − max(0, −x), the recursive relation (3.7)-(3.8)can be rewritten in the useful form Since for any x ∈ R n x + ≤ x we have so that where we have set The convergence of Algorithm 4 therefore depends on the spectral radius of A. We have the following proposition for the non-zeros eigenvalues of A. Proof. Let us set We have to show that the non-zero eigenvalues of A are eigenvalues of r N M and conversely.Let α be an eigenvalue of A and (X 1 , X 2 ) the corresponding eigenvector.We have that is, α(X 1 + X 2 ) = 0.For non-zero eigenvalues we deduce that the eigenvectors are such that X 1 = −X 2 .Substituting in (4.5)-(4.6)we obtain We deduce that α is an eigenvalue of A 1 − A 2 .From the relations above, it is obvious that non-zero eigenvalues of A 1 − A 2 are also eigenvalues of A. 2 To study the eigenvalues of I − 2rNA −1 r N M, we consider the following sequence We follow the same procedure as [3, p. 46] for a quadratic programming problem with equality constraints.From (4.7), we deduce that By introducing the auxiliary unknown η k = A −1 N Mµ k , the sequence (4.9) becomes The convergence of the sequence {η k } will gives us informations about the convergence of the sequence {µ k } and then as polynomial.Then the eigenvalues of B are polynomial functions of the eigenvalues of A −1 N MN and thus λ i (B) = p(λ i (A −1 N MN), that is The eigenvalues (and eigenvectors) of A −1 N MN are solution to the generalized symmetric eigenvalue problem N MNw = λAw Since A is symmetric positive definite and N MN is symmetric positive semi-definite, the eigenvalues of A −1 N MN are non-negative, and the eigenvectors corresponding to two distinct eigenvalues are A-otthogonal.It follows that the eigenvalues of B are such that 0 ≤ λ i (B) ≤ 1, ∀i. 
(4.12) Note that A and M are square and non singular matrices.Then if 0 is an eigenvalue of A −1 N MN , then the corresponding eigen-subspace is ker(N).Thus R(A −1 N M) is spanned by the eigenvectors of A −1 N MN associated with strictly positive eigenvalues (using the property R(N) = (ker(N)) ⊥ ).Since {η k } ⊂ R(A −1 N M), we can write (4.10) in a basis of R(A −1 N M) formed with eigenvectors w i of A −1 N MN associated with strictly positive eigenvalues λ i .We get We deduce the following convergence theorem.Corollary 4.3 (optimal step-size and convergence rate) The optimal penalty parameter for r is where λ m and λ M are, respectively, the smallest nonzero eigenvalue and the largest eigenvalue of A −1 N MN.With the choice (4.13), the convergence of Algorithm 4 is linear with an asymptotic constant θ satisfying We obtain the following corollary. Corollary 4.4 If N MN is nonsingular, then the eigenvalues of A −1 N MN are strictly positive and Algorithm 4 with the optimal choice (4.13) converges linearly with an asymptotic constant θ satisfying Penalty parameter approximation We now study how to compute λ m and λ M , the smallest nonzero eigenvalue and the largest eigenvalue of A −1 N MN.For this purpose, it is convenient to arrange the indices in such a way that the contact indices C and the interior indices I occur in a consecutive order.This leads to the block matrix representation of the stiffness matrix as where O II , O IC and O CI are matrices of zeros; and M CC is the contact boundary (diagonal) mass matrix.Consequently, if we set then the nonzero eigenvalues of A −1 N MN are eigenvalues of ÃCC M CC .To compute ÃCC , we can apply block-Gaussian elimination to A with A II as block-pivot It follows that ÃCC is the Schur complement of A II in A. We now detail the practical steps for the approximation of the penalty parameter.Since computing A −1 II is unpracticable even for medium size problem, we compute ÃCC by solving equivalent linear systems.We obtain Algorithm 5. Algorithm 5 Algorithm for computing the approximate penalty parameter r * Step 1. Solve for X the system A II X = A IC (5.1) Step 2. Compute the schur complement of Step 4. Compute λ m and λ M the exterme eigenvalues of X and the optimal penalty parameter approximation (4.13). Solving (5.1) or (5.2) is equivalent to solving several linear systems that have the same matrix but different right-hand sides.Since the different right-hand sides are available, all the systems of equations can be solved at the same time using a single Gaussian elimination (e.g. using MATLAB \ operator : X = S\M CC for (5.2)). Numerical experiments We have implemented the algorithms described in the previous sections in MATLAB, using piece-wise linear finite element, vectorized assembling functions and the mesh generator provided in [10,11], on a computer equipped running Linux (Ubuntu 16.04) with 3.00GHz clock frequency and 32GB RAM.The ADMM solver used in the experiments is based on Algorithm 1.The test problems used are designed in order to illustrate the behavior of the algorithms more than to model contact actual phenomena.Since our approximation procedure combines matrix inversion and computation of extreme eigenvalues, we restrict our penalty parameter estimation to small size problems. We use the following notations • r * approximate optimal penalty parameter obtained by (4.13). • r standard numerical optimal penalty parameter obtained using the uniform sampling of interval (.1, αE), where α > 1 and E is the Young modulus. 
• m c the size of matrix ÃCC M CC used for computing r * .To reduce the computational cost m c ≤ 100 in the numerical experiments. • Iter.The number of iterations required for convergence in Algorithm 2. • CPU Time.CPUT time in Seconds. Example 1: 2D problem We consider a rectangular elastic body Ω = (0, 2) × (0, 1) with the boundary partition We first consider the meshes of size h=1/8 (Figure 1), 1/16 and 1/32.We compare the approximate penalty parameter r * obtained with (4.13) with the numerical penalty parameter r obtained by sampling the interval (0.1, 10) * E with 200 points uniformly spaced.The evolution of the number of iterations with respect to the penalty parameter is shown in Figure 2. The normal stress distribution on Γ C is shown in Figure 3 We summarize in Table 1 the (optimal) penalty parameters and the behavior of Al-gorithm 2 in terms of CPU time and number of iterations.Note that the CPU time for r * includes the time spent in both Algorithm 2 and Algorithm 5.The CPU time for r is the computational time of the whole sampling procedure.We can notice that the sampling process gives better results in terms of the number of iterations required by the ADMM solver.But sampling is a time-consuming process.For the largest problem (h = 1/32), obtaining the optimal penalty parameter (4.13) with Algorithm 5 is about 80 times faster than the standard sampling procedure.Table 2 shows that, for r * = 3134.1289and r = 3510 (obtained using a mesh of size 1/32), the performances of Algorithm 2 are almost comparable for large size meshes. . Example 2: Hertz problem The Hertz problem is a classical test problem in the numerical simulation of unilateral contact problem.It consists of an infinitely long cylinder resting in a rigid foundation, and subjected to a uniform load along its top of intensity f = −(0, 1600).The radius of the cylinder is R = 8.The material constants for the cylinder are E = 2000 and ν = 0.3.For symmetry reason, only quarter of the cylinder is consider (Figure 4) and we set u 1 = 0 on Γ D = {0} × (0, 8).The contact surface is Γ C = {x ∈ (0, 8) 2 | x 2 1 + x 2 2 = 16}.We first consider meshes with 442 and 1692 with 35 and 69 nodes on Γ C , respectively.Note that the problem allows a rigid body motion in the vertical direction.In ADMM Algorithm 2, the mass terms provided by the normal integral prevent infinite displacements at the initial step.But for Algorithm 5, A II is only positive semidefinite and then non invertible.We simply add a small positive constant on the vertical components of Γ D .As in Example 1, we compute r by sampling (.1, 10) * E with 200 points, uniformly spaced.Figure 5 shows the variation of the number of iterations with respect to the penalty parameter.The deformed configuration and the stress distribution on the contact Γ C are shown in Figure 6 and 7, respectively.We summarize in Table 3 the (optimal) penalty parameters and the behavior of Algorithm 2 in terms of CPU time and number of iterations.For m c = 35, the sampling procedure gives better results in terms of the number of iterations for the ADMM solver but the computational time is much greater.For m c = 69 the number of iterations is almost equivalent for both r and r * but the computational time is again much greater for r.We report in Table 4 the performances of Algorithm 2 using r * = 201.7825and r = 200.We can notice that the performances of Algorithm 2 using both penalty parameters are almost equivalent. 
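The reference value r used for comparison in these tables comes from a brute-force sweep: run the ADMM solver for each candidate penalty on a uniform grid and keep the value giving the fewest iterations. A hedged sketch of that sampling loop is given below; `run_admm` stands for any routine returning the iteration count (for instance the iteration sketch shown after Algorithm 2 above, extended to report k) and is a hypothetical interface, while the grid bounds follow the text (from 0.1 up to a multiple of the Young modulus E, 200 points).

```python
import numpy as np

def sample_penalty(run_admm, E, n_points=200, alpha=10.0):
    """Brute-force search for the numerically best penalty parameter.

    run_admm : callable r -> number of ADMM iterations until convergence
               (assumed interface, for illustration only).
    E        : Young modulus; candidates are sampled uniformly in (0.1, alpha*E).
    """
    candidates = np.linspace(0.1, alpha * E, n_points)
    iterations = np.array([run_admm(r) for r in candidates])
    best = int(np.argmin(iterations))
    return candidates[best], iterations[best]

# Usage sketch with a dummy iteration-count model, just to make it runnable
dummy = lambda r: int(50 + abs(np.log10(r / 3000.0)) * 40)
r_best, iters = sample_penalty(dummy, E=1000.0)
print(r_best, iters)
```

As the tables illustrate, the sweep can find a slightly better r than the analytic estimate, but at the cost of running the solver a couple of hundred times, which is what makes Algorithm 5 attractive.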
Example 3: 3D problem We now study the behavior of a 3D rectangular elastic body pressed onto a solid hemisphere, Figure 8 ( [13,9]).The elastic body occupies the domain Ω = (−0.5,0.5)×(−1, 1)× (−0.5, 0.5) and the obstacle is a half-ball with radius r = 0.5 and center (−0.3, 0, −1).The We first consider a non uniform mesh with 198 nodes, 773 tetrahedrons and 45 nodes on Γ C .The deformed configuration obtained using ADMM Algorithm 2 is shown in Figure 9 and the normal (contact) stress distribution in Figure 10.For r, we sample (0.1 10 * E) with 200 points, uniformly spaced.We summarize in Table 5 the computed penalty parameters r * and r and the behavior of Algorithm 2 in terms of number of iterations and CPU time.Even though r * is 20% larger than r, the number of iterations required by the ADMM solver with both values is almost the same.We notice again that computing the penalty parameter with Algorithm 5 is about 100 times faster.reported in Table 6.As in the 2D case, the number of iterations is virtually independent of the mesh size, for both penalty parameters.We can notice that, for both penalty parameter, the difference of computational time for the largest problem is less than 5%.The performances of Algorithm 2 with r * and r are therefore comparable. Proposition 4 . 1 The non-zero eigenvalues of A are eigenvalues of I − 2rNA −1 r N M and conversely. Figure 2 : Figure 2: Number of iterations versus the penalty parameter for Example 1. Figure 3 : Figure 3: Normal Stress distribution on Γ C for Example 1. Figure 4 : Figure 4: Mesh sample for the Hertz problem. Figure 5 : Figure 5: Number of iterations versus penalty parameter for the hertz problem. Figure 6 : Figure 6: Deformed configuration and Von Mises effective stress for the Hertz problem Figure 8 : Figure 8: Geometry of the 3D obstacle problem Figure 9 : Figure 9: Deformed configuration and Von Mises effective stress for Example 3. Table 1 : r obtained by sampling Versus r * obtained by (4.13). Table 3 : r obtained by sampling Versus r * obtained by (4.13), for the Hertz problem. Table 4 : Performances of Algorithm 2 with r * = 201.7825and r = 200 for the Hertz problem . Table 5 : r obtained by sampling Versus r * obtained by (4.13) for the 3D obstacle problem To study the beahivior of Algorithm 2, the initial mesh of 198 nodes and 773 tetrahedrons is successively refined to produce meshes with 1289, 9245 and 69897 nodes and 6144, 49472 and 395776 tetrahedrons, respectively.The performances of Algorithm 2 areFigure 10: Normal stress distribution on Γ C Number of nodes on Ω/Γ C 198/45 1289/153 9245/603 69897/2341 Table 6 : Performances of Algorithm 2 with r and r * for the 3D obstacle problem.
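Referring back to Algorithm 5, the penalty parameter approximation can be reproduced with dense linear algebra as sketched below: a block solve builds the Schur complement of A_II, then the extreme nonzero eigenvalues of S⁻¹M_CC are extracted. The index sets and the final combination r* = 2/(λ_m + λ_M), the classical optimum for a Richardson-type contraction, are assumptions for illustration; the paper's expression (4.13) should be consulted for the exact form.

```python
import numpy as np

def approximate_penalty(A, M_CC, contact_idx):
    """Sketch of Algorithm 5 with dense NumPy linear algebra.

    A           : full stiffness matrix (SPD),
    M_CC        : contact-boundary (diagonal) mass matrix,
    contact_idx : indices of the contact degrees of freedom (assumed given).
    Returns the extreme eigenvalues and an approximate optimal penalty.
    """
    n = A.shape[0]
    interior_idx = np.setdiff1d(np.arange(n), contact_idx)
    A_II = A[np.ix_(interior_idx, interior_idx)]
    A_IC = A[np.ix_(interior_idx, contact_idx)]
    A_CI = A[np.ix_(contact_idx, interior_idx)]
    A_CC = A[np.ix_(contact_idx, contact_idx)]

    X = np.linalg.solve(A_II, A_IC)            # Step 1: one solve, many right-hand sides
    S = A_CC - A_CI @ X                        # Step 2: Schur complement of A_II in A
    T = np.linalg.solve(S, M_CC)               # Step 3: S \ M_CC (cf. MATLAB backslash)

    eigs = np.linalg.eigvals(T).real
    eigs = eigs[eigs > 1e-12 * eigs.max()]     # keep strictly positive eigenvalues
    lam_m, lam_M = eigs.min(), eigs.max()
    r_star = 2.0 / (lam_m + lam_M)             # assumed Richardson-type optimum
    return lam_m, lam_M, r_star
```

For larger meshes one would restrict the computation to a moderate number of contact degrees of freedom, as done in the experiments (m_c ≤ 100), since both the block solve and the eigenvalue extraction are performed on matrices of that size.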
4,691.4
2020-02-01T00:00:00.000
[ "Engineering", "Mathematics" ]
Human Monocyte-Derived Dendritic Cells Are the Pharmacological Target of the Immunosuppressant Flavonoid Silibinin Silibinin, a natural polyphenolic flavonoid, is known to possess anti-inflammatory, anticancer, antioxidant, and immunomodulatory properties. However, the effects of Silibinin on the maturation and immunostimulatory functions of human dendritic cells (DC) remain to be elucidated. In this study, we have attempted to ascertain whether Silibinin influences the maturation, cytokine production, and antigen-presenting capacity of human monocyte-derived DC. We show that Silibinin significantly suppresses the upregulation of costimulatory and MHC molecules in LPS-stimulated mature DC and inhibits lipopolysaccharide (LPS)-induced interleukin (IL)-12, IL-23, and TNF-α production. Furthermore, Silibinin impairs the proliferation response of the allogenic memory CD4 T lymphocytes elicited by LPS-matured DC and their Th1/Th17 profile. These findings demonstrate that Silibinin displays immunosuppressive activity by inhibiting the maturation and activation of human DC and support its potential application of adjuvant therapy in the treatment of autoimmune diseases. Introduction Silibinin, a natural polyphenolic flavonoid, is the major biologically active component of Silymarin isolated from the plant milk thistle Silybum marianum. Silibinin has been shown to have multiple protective biological activities due to its ability to modulate various cellular signaling pathways and to possess strong antioxidant, hepatoprotective, neuroprotective, anticancer, and antiviral activities [1]. It has been traditionally used for the treatment of various liver disorders [2] as well as of neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, and depression, due to its ability to increase the intracellular antioxidant defense and restore the redox status by changing the level of antioxidant enzymes [3]. The induction of apoptosis and its other abilities including cell growth inhibition, cell cycle arrest, an antiproliferative effect, and regulation of aberrant miRNA expression have supported the use of Silibinin as an anticancer and chemo preventive agent in a variety of cancers [4]. The biological actions of Silibinin include a broad spectrum of anti-inflammatory and immunosuppressive effects in a dose and time-dependent manner [5]. The potent anti-inflammatory effects of Silibinin have been shown to be related to the inhibition of several transcriptional factors such as the Nuclear Factor kappa B (NF-κB) and the Signal Transducer and Activator of Transcription 3 (STAT-3) which regulate several gene products involved in the immune response, inflammation, and cytokine production [6,7]. Silibinin is known to inhibit CD4 + T-cell activation and proliferation by inhibiting the Mitogen-activated protein (MAP) kinase pathway through T-cell receptor engagement [8] and to suppress the production of Th1-related cytokines (IL-2, IFN-γ, and TNF-α) by activated-PBMC from HCV-infected and uninfected subjects [9]. In agreement with these studies, we previously showed that Silibinin upregulates ERβ expression, induces apoptosis, inhibits proliferation, and reduces expression of the pro-inflammatory cytokines through ERβ binding in T lymphocytes from female and male healthy donors and patients with active Rheumatoid Arthritis [10]. 
Moreover, Silibinin has been also shown immunosuppressive effects in experimental autoimmune encephalomyelitis (EAE), the animal model of multiple sclerosis via inhibition of dendritic cell (DC) activation, and Th17 cell differentiation [11]. Silibinin has also been demonstrated to inhibit phenotypic and functional maturation of murine bone marrow-derived DC and Th1 polarization. In particular, it has been shown that Silibinin suppresses the expression of MHC class I and II and costimulatory molecules (CD80, CD86) and IL-12 production by murine bone marrow-derived DC [12]. DC play a key role in both the pathogenesis and treatment of several diseases. Therefore, DC are attractive targets for therapeutic manipulations of the immune response. However, the effect of Silibinin on human DC has never been investigated. Here, we show that Silibinin affects the LPS-induced maturation of human monocyte-derived DC by inhibiting the upregulation of costimulatory and MHC molecules and inflammatory cytokine production. Moreover, Silibinin treated DC fail to present alloantigens to memory CD4 + T lymphocytes which barely proliferate and the percentage of IFN-γ + and IL-17 + CD4 + T-cells is significantly lower. Given the critical role of DC in the initiation and regulation of immune responses, these findings provide new insight into the immunosuppressive activity of Silibinin and support its potential application in the treatment of autoimmune diseases. Silibinin Inhibits the Phenotypical Maturation of DC Induced by LPS Stimulation DC play a pivotal role in the control of the innate and adaptive immune response. They reside in peripheral tissues and undergo a maturation process in response to diverse stimuli. Mature DC express high levels of major histocompatibility complex (MHC) molecules and co-stimulatory molecules that are important for antigen presentation to T lymphocytes. To investigate the effect of Silibinin on DC, phenotype cells were cultured for five days in IL-4 and GM-CSF-enriched medium and stimulated overnight with LPS in the presence or absence of Silibinin at different concentrations (50 or 100 µM). Figure 1A shows that Silibinin added on the fifth day of culture did not alter the DC phenotype at either concentration. When added together with LPS at 50 µM, it inhibited only the upregulation of CD86 costimulatory molecule, whereas at 100 µM, it significantly suppressed the upregulation of co-stimulatory molecules CD80 and CD86 and of MHC class I and II induced by LPS stimulation. Furthermore, Silibinin decreased the novo-expression of CD83, a maturation marker for DC. These data indicate that Silibinin alters the DC maturation program which is needed for acquiring a fully professional antigen-presenting cell profile. Silibinin Inhibits Proinflammatory Cytokine Production by LPS-Stimulated DC The maturation process of DC induced by several stimuli leads to the production of a cascade of pro-inflammatory cytokines, which in turn control the induction and skewing of T-cell responses. LPS-stimulated DC produce huge amounts of IL-12 and IL-23 which are involved in Th1 and Th17 polarization, respectively, and of TNF-α cytokine which has pleiotropic effects on various cell types and is a major regulator of inflammatory responses. To investigate the effects of Silibinin on the pro-inflammatory cytokine profile of mature DC, we measured IL-12, TNF-α, and IL-23 secretion by DC stimulated with LPS in the presence of the flavonoid. 
Figure 1B shows a significant decrease in all three cytokines induced by the Silibinin treatment at 100 µM, thereby indicating an anti-inflammatory effect of Silibinin.

Mature DC Treated with Silibinin Do Not Present Alloantigens to Memory CD4+ T Lymphocytes

To investigate the effects of Silibinin on the antigen-presenting capacity of DC, we measured the CD4+ T-cell response to alloantigens in a mixed leukocyte reaction. As shown in Figure 2, LPS-matured DC, as expected, induced a strong proliferation of allogeneic CD4+ T-cells and an increase in the percentage of IFN-γ and IL-17 producing CD4+ T-cells as compared to immature or Silibinin-treated DC. On the contrary, the same population of T-cells barely proliferated and the percentage of IFN-γ+ and IL-17+ CD4+ T-cells was significantly lower in response to LPS-matured DC treated with Silibinin. These results demonstrate the immunosuppressive action of Silibinin on the most powerful antigen-presenting cells, which play a crucial role in shaping the immune response in several diseases.

Monocyte Isolation and DC Generation

Peripheral blood mononuclear cells (PBMC) were purified from heparinized blood obtained from healthy donors on a density gradient (Ficoll-Hypaque, Lympholyte-H; Cedarlane, Paletta Court, ON, Canada). Silibinin (Sigma, St. Louis, MO, USA) was dissolved in dimethyl sulfoxide, diluted in the culture medium, and used at 50 or 100 µM.

Flow Cytometric Analysis

Immature DC or LPS-stimulated DC in the presence or absence of Silibinin at 50 or 100 µM were plated into 96-well plates and washed once in phosphate-buffered saline (PBS) containing 1% FBS. The cells were incubated with Mabs at 4 °C for 30 min. Cells were then analyzed on a Beckman Coulter Gallios™ flow cytometer and data were analyzed with Kaluza Analysis Software v2.1 (Beckman Coulter, Brea, CA, USA).

Cytokine Production Analysis

After 5 days, the supernatant of the DC cultures was removed, and the cells were washed and cultured in the presence or absence of 100 ng/mL LPS or 100 µM Silibinin for an additional 24 h. The supernatants from immature DC or LPS-stimulated DC cultures in the presence or absence of 100 µM Silibinin were then harvested, filtered (0.2-µm filters), and stored at −80 °C. IL-23, IL-12p70, and TNF-α were measured by specific enzyme-linked immunosorbent assays (ELISA) using commercially available kits (R&D Systems Europe, Ltd., Abingdon, UK) according to the manufacturer's instructions.
Mixed Leukocyte Reaction (MLR) Immature DC or LPS-stimulated DC were or were not treated with Silibinin (100 µM), washed with PBS, and then co-cultured with CD4 + lymphocytes purified by indirect magnetic sorting with a memory CD4 + T-cell isolation kit (Miltenyi) at 1:10 ratio. To track proliferating cells, CD4 + T-cells were intracellularly labeled with the fluorescence dye carboxyfluorescein succinimidile estere (CFSE) (Thermo Fisher Scientific, Waltham, MA, USA) which decreases after each cell division. The proliferative response was evaluated after 7 days by measuring the CFSE dilution by flow cytometry and identified by the percentage of CFSE low + cells. CFSE-unlabeled CD4 + T-cells were also analyzed by flow cytometry for their intracellular cytokine production. Analysis of Intracellular Cytokine Production CD4 + T-cells after stimulation with 50 ng/mL phorbol myristate acetate (PMA) and 1µg/mL ionomycin (both from Sigma) for 5 h and in the presence of Golgi Plug (BD Bioscience, San Jose, CA, USA) for the last 2 h of culture, were stained with FITC conjugated anti-human IL-17A Mab and APC/Cyanine7 conjugated anti-human IFN-γ Mab (BD Bioscience PharMingen) after fixation and permeabilization using Cytofix/Cytoperm (BD Bioscience PharMingen), according to the manufacturer's instructions. Stained cells were analyzed on a Beckman Coulter Gallios™ flow cytometer and data were analyzed with Kaluza Analysis Software v2.1 (Beckman Coulter). Statistical Analysis Statistical analysis was performed by the Mann-Whitney U test using GraphPad Prism (version 5.0, GraphPad Software, San Diego, CA, USA). A p-value < 0.05 was considered statistically significant. Conclusions Dendritic cells are specialized antigen-presenting cells that maintain immune tolerance to self-antigens by deleting or monitoring auto-reactive T-cells. Several immunosuppressive and anti-inflammatory agents interfere with phenotypic and functional DC maturation, a classical example of which is represented by corticosteroids [13]. Our data demonstrate that the flavonoid Silibinin has immunosuppressive and anti-inflammatory effects by hampering the maturation, inflammatory cytokine production, and antigen-presenting capacity of human monocyte-derived DC. Such evidence encourages the use of Silibinin as a therapeutic tool or a dietary nutritional supplement for the treatment of inflammatory/autoimmune diseases. However, further investigations in vivo are needed to support Silibinin application and therapeutic use in regard to its biological activities. Author Contributions: M.T.P. and K.F. performed the experiments and analyzed data, M.P. and E.O. analyzed data and revised the manuscript, and D.P. provided overall supervision, designed the experiments, and wrote the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki. The project and the patients' informed consent were approved by the Review Board of the Italian National Institute of Health, Rome, Italy. The experimental protocol was performed in accordance with the guidelines and regulations provided by the local Ethical Committee (protocol no.AOOI-SS-4/01/2022 0000197). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data supporting this study's findings are available upon request.
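For readers reproducing the statistical analysis described above (Mann-Whitney U tests with a 0.05 significance threshold, originally run in GraphPad Prism), an equivalent open-source computation is sketched below. The sample values are made up purely for illustration and are not data from the study.

```python
from scipy.stats import mannwhitneyu

# Hypothetical cytokine measurements (pg/mL), for illustration only
lps_only = [820, 940, 1010, 760, 890, 970]
lps_silibinin = [310, 420, 280, 390, 450, 360]

stat, p_value = mannwhitneyu(lps_only, lps_silibinin, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```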
2,787.4
2022-09-01T00:00:00.000
[ "Medicine", "Biology" ]
EVOLUTION OF THE NATURAL GROWTH AND THE ANALYSIS OF THE SEASONALITY OF LIVE BIRTHS AND DEATHS FROM ROMANIA AND BACAU COUNTY IN THE LAST FOUR YEARS Since 1992, Romania's population declined every year naturally, the number of deaths being higher than live births. If in Bacau until 2002 we had to deal with natural growth of the population, even if increasingly less, starting that year it emerged more and more the process of decline of the population. In both cases, the phenomenon is more pronounced in rural areas than in urban areas due to more pronounced aging of the rural population. Both in the case of live births and deaths it is observed the seasonal oscillation by the months of the year. If in the case of births they are more numerous in the summer months and in decrease in the winter months, the situation is exactly the opposite in the case of deaths. Thus from here we have the natural decrease of the population, which increases in the winter months and starts to decline or become slightly positive in July-September. Introduction Knowing the evolution in time of the demographic phenomena and the factors that influence it underline its great importance in estimating the population and age structure.If we consider the population as a closed system, without taking into account the migration, its evolution depends on the two demographic phenomena: birth and death rates.The evolution of the birth rate in a country is influenced by existing family policies and it depends on the intention of the couples in terms of the number of children they wish to have.In parallel, mortality depends on the age structure of the population, the state health system, and the general living conditions.In the last years, Romania has experienced a birth rate becoming smaller and a higher mortality.The latter was higher due to a poor health system, which could not cope with an aging population and insufficient livelihood means.The opposite evolutions of birth and death rates have led to the worsening decline of the population naturally, from -0,9 ‰ in 2000 at national level and +0,9 ‰, reaching at the end of the period -2,6 ‰ nationally and -1,9 ‰ at county level.These negative evolutions, coupled with a negative migratory growth, led and still lead to a decrease in the population of our country, but also to an unbalanced age structure in favor of the elderly. Evolution of the live births, deaths and growth rate The monthly evolution of the live births and deaths from the last years can be observed for Bacau county compared to Romania in figures 1 and 2. It can be noticed that the evolution of the two phenomenon follow a somewhat similar evolution from year to year.Thus, the live births one is higher during summer months and lower as we approach the cold months.This phenomenon has been observed for many years in most of the countries with the same climate.One explanation could be that in the winter months couples retire from the field work, especially in rural areas, focusing more on life of the couple.On the other hand, families where family planning works, they want their children to start their life in the spring-summer months in order to grow and develop a little until the cold is coming. In the case of Romania, the peaks of the births are July and August and the months of December and February are recorded with a minimum.In Bacau county, peak months are from July to September, while the lows are especially February and March. 
In the case of deaths, the evolution is opposite to the one of births.Thus, in the winter months due to unfavorable weather conditions, persons at risk of death (especially children and the elderly) get sick easier and have a higher probability of dying.For this reason we have more naturally decrease during the winter months.Both in the case of Romania and Bacau county, the maximum deaths were observed during the months of December and January, but as winter was extended in 2012, deaths peaks were observed also in February-March or even April.Following the evolution of the two demographic phenomena, the natural growth rate was negative for most of the year.From Figure 3 it can be noticed that the only months in which the country has had a slightly natural increase of the population was in the following In the case of Bacau County, there are outlined in the chart three periods with a higher natural growth of the population during July-September 2011-2013.The maximum negative figure was observed in March 2012, followed by February 2012 and December 2013. Number Since both births and deaths manifest a clearly seasonal oscillation, we moved on to the statistical analysis of this phenomenon. Seasonality indices Seasonality indices measure the deviation of the phenomenon from the monthly yearly average.Because the repeating graphics of the two demographic phenomena is observed every 12 months, there were calculated mobile averages of 12 terms, and finally these were centered in the right of an initial term, by averaging the two terms.The influence of the trend component was eliminated by dividing the real terms with the centered mobile averages, and in the end, using the arithmetic mean method to calculate the monthly indices of pure seasonality.The calculated indices of seasonality were presented comparatively in Figures 5 and 6. in Romania, the average number of births per month deviates from the monthly average annually with +14,3% in July, +13,9% in August, +9,2% in September, +7% in October, and in January with +4,5%.Practically January contradicts the theory mentioned before.The months recording minimum figures are February, March and April, when the average monthly number of live births was with -12% lower than the annual average.In Bacau county, every year, we have with + 20,4% born in August compared to the annual average, with +11,4% in July, +6,8% in September, +6,4% in October, +5,3% in January and with +0,5% in November.The seasonality indices calculated indicate an average number of live births each year less than the annual average with -18,5% in March, -14,6% in April and -4,3% in February. Comparing the evolution of the seasonality indices from the two graphs, it can be observed that there are significant differences between the indices calculated for the national level and those calculated for Bacau County. 
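The seasonality indices described above follow the classical ratio-to-moving-average procedure: a centered 12-term moving average removes the trend, the observed values are divided by it, and the ratios are averaged month by month. A minimal pandas sketch of that computation is given below, assuming a monthly series named `births`; it illustrates the method on made-up data and is not the authors' own calculation.

```python
import numpy as np
import pandas as pd

def seasonal_indices(series: pd.Series) -> pd.Series:
    """Monthly seasonality indices by the ratio-to-centered-moving-average method."""
    # 12-term moving average, centered by averaging two adjacent 12-term means
    ma12 = series.rolling(window=12).mean()
    centered = ((ma12 + ma12.shift(-1)) / 2.0).shift(-5)   # align with its month
    ratios = series / centered                              # de-trended seasonal ratios
    by_month = ratios.groupby(series.index.month).mean()    # arithmetic-mean method
    return 100.0 * by_month          # 100 = equal to the annual monthly average

# Usage with a made-up four-year monthly birth series
rng = np.random.default_rng(3)
idx = pd.date_range("2010-01-01", periods=48, freq="MS")
births = pd.Series(400 + 40 * np.sin(2 * np.pi * (idx.month - 4) / 12)
                   + rng.normal(0, 8, 48), index=idx)
print(seasonal_indices(births).round(1))
```

Indices above 100 correspond to months above the annual average (for example the July-September birth peak), and indices below 100 to months below it, matching the percentage deviations reported in the text.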
Conclusions After the 90's, the number of live births has been declining continuously, without even assuring the simple reproduction of the population.Instability of a steady income, women's desire to achieve economic independence from the partner, the migration of young people abroad, are just some of the factors that led to the delay of the moment of bringing a child to life and the decision of couples to have a number small children.On the other hand, the increasing of life duration of the population, on the grounds of an aging population left within the country and poor health system, turned into a slight increase in the number of deaths.The correlated evolution of the two demographic phenomena has led to a continuous negative increase, more pronounced in the country compared to Bacau County.It turned out from the monthly evolutions of the past four years that both demographic phenomena have a seasonal component with higher number of births especially in the summer months and a higher number of deaths in the cold months.These evolutions should be taken into account in estimating the population monthly trends when they are produced, if not taken into account economic and demographic policy measures.These measures should stimulate the birth rate on the one hand; on the other hand lead to the improvement of health care and greater attention to prevention and health. If governments that succeed do not take actions in this direction, young, active and in power work force, within the Romanian state's expense, will continue to go abroad, and the country will remain with the elderly, sick and helpless, without a real support.This is unfortunately the trend from the last few years in Romania! Figure 1 Figure 1 Evolution of the number of live births and deaths in Romania, for the months of the years 2010-2013 Source: I.N.S., Tempo online Figure 3 Figure 3 Evolution of the growth rate of the population from Romania during 2010-2013 Source: own calculations based on data from the Institute of Statistics from Bacau County Figure 4 Figure 4 Evolution of the growth rate of the population from Bacau County during 2010-2013 Source: own calculations based on data from the Institute of Statistics from Bacau County Figure 5 Figure 6 Figure 5 Monthly seasonality indices calculated by the mobile average method for live births during 2010-2013 Source: own calculations based on data from the Institute of Statistics from Bacau County
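The per-mille figures quoted in the introduction of this analysis (for example -2.6‰ nationally) are crude rates of natural increase: the birth-death balance divided by the population. The short sketch below shows the arithmetic on made-up monthly data; all numbers, including the mid-year population, are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical monthly vital statistics for one county (illustrative numbers only)
data = pd.DataFrame({
    "births": [380, 352, 340, 365, 410, 432, 455, 470, 448, 420, 395, 370],
    "deaths": [520, 505, 498, 470, 455, 440, 430, 428, 445, 470, 495, 515],
}, index=pd.period_range("2013-01", periods=12, freq="M"))

mid_year_population = 600_000            # assumed mid-year population

data["natural_increase"] = data["births"] - data["deaths"]
# Annualized crude rate of natural increase, in per mille, as quoted in the text
annual_rate = data["natural_increase"].sum() / mid_year_population * 1000
print(data["natural_increase"])
print(f"natural growth rate: {annual_rate:.1f} per mille")
```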
2,024.4
2014-07-30T00:00:00.000
[ "Economics" ]
Microfluidics assisted optics manufacturing technique The conventional micro/nano-manufacturing techniques can hardly process interior microstructures. The entire fabrication process is complex and requires large-footprint and high-cost equipment. The presented microfluidics assisted optics manufacturing technique is feasible to create the curved surface inside microstructure using various modified materials. The fabrication process is simple. Only small, low-cost devices are needed. In this paper, microfluidics assisted optics manufacturing technique is introduced in detail and compared with the current manufacturing techniques. A diversity of interesting micro-optics, including microlens array and compound eye, are demonstrated. These optical components are all fabricated by the microfluidics assisted manufacturing technique and possess their own outstanding features. Introduction Fabricating complex-structured optics including microlens array (MLA) and compound eye has a great challenge due to the requirements of miniature size, high resolution and 3D profile [1][2][3].The traditional optical lens manufacturing process is a meticulous procedure that involves cutting, grinding and polishing to shape and refine the optics [4,5].The primary advantages lie in the manufacturing precision, versatility across various materials, and the ability to tailor optical properties to specific requirements [6,7].During the manufacturing, the profile of the optics could be meticulously controlled, ensuring a high level of accuracy in the final product.Furthermore, this process can be applied to a wide range of materials, including glass, plastic, and crystalline substances, making it adaptable to various industries.It also allows for customization, enabling optical components to meet specific design and performance criteria.However, the traditional optics manufacturing has several disadvantages.The most notable issue is the substantial amount of material waste generated during grinding and polishing, leading to inefficiency and increased costs.The process is time and labor-intensive, requiring skilled operators and a significant investment in time.Moreover, it is not well-suited for complex shapes or intricate optical components, as achieving precision becomes increasingly challenging.In such cases, alternative manufacturing techniques, e.g.laser direct writing and gray-scale lithography, may be more efficient and cost-effective [6][7][8][9][10]. 
Harnessing surface tension for shaping optics is a widely used technique [11][12][13].It was employed as early as the 1660s to make small lenses for the use as the magnifying lens with molten glass [14].Shaping liquid surface requires the cooperation of intermolecular force of the liquid and environmental factors.Once manipulation of liquid surface tension can be precisely realized, the surface modulation depth, which is related to the optical power, can be well controlled.Besides, mastering liquid curing process to avoid deformation and surface wrinkle is also important.Typically, UV curable resin and thermally curable elastomer are used in the manufacturing.Once a desired profile is realized, the resin is solidified under UV exposure or heating.In this paper, we thoroughly study the advanced micro/nano manufacturing techniques for optics fabrication.We summarize the manufacturing procedure of these techniques.The outstanding features and the weakness are investigated.We focus on the discussion on microfluidics assisted optics Nevertheless, laser direct writing is not as easily scalable for mass production as other techniques like moulding.It can be time-consuming and expensive for large quantities.Voxel-modulation laser scanning technique was demonstrated to speed up the fabrication of compound eye.The inner spherical based was produced by rapid low-precision scanning with large voxels, while the outer ommatidia were fabricated by high-precision small voxel [18].Initial setup costs for laser direct writing equipment and the process can be relatively high, making it less economical for large-scale production.Therefore, the choice between laser direct writing and other lens manufacturing methods depends on factors like the required customization, production volume, material selection, and cost considerations.Laser direct writing is particularly advantageous for applications that demand highly customized and complex microlens designs, rapid prototyping, and small batch production.However, for high-volume production, cost-efficiency, and specific material requirements, alternative methods such as molding or lithography may be more appropriate. Gray-scale lithography Grayscale lithography is one kind of photolithographic process that allows for the creation of threedimensional microstructures with varying heights or depths in a single exposure [19][20][21].This technique is used in microfabrication and micro-optics to produce complex structures.The fabrication procedure is illustrated in Fig. 2. 
In the fabrication, a layer of photoresist is spin-coated onto the substrate. A grayscale mask is required, which is a special photomask that consists of varying levels of transparency. The mask contains grayscale patterns that define the desired heights or depths of the microstructures. The grayscale mask is placed between a UV light source and the photoresist-coated substrate. During exposure, the light passes through the mask, and the varying levels of transparency result in different light intensities reaching the photoresist. The photoresist undergoes a photochemical reaction based on the intensity of light it receives. The more intense the light, the more the photoresist is exposed and the greater the change in its solubility. After exposure, the coated substrate is subjected to a development process. The developer solution selectively removes the exposed or unexposed portions of the photoresist. Depending on the specific application, etching or other processing steps may be used to transfer the grayscale pattern from the photoresist to the substrate. This step can create microstructures with varying heights or depths, based on the grayscale pattern. Grayscale lithography allows for the creation of highly customized and complex lens shapes, making it suitable for applications with unique optical requirements. Unlike some other techniques that involve multiple steps, grayscale lithography can produce three-dimensional microstructures, including lenses, in a single exposure, simplifying the manufacturing process. This method offers precise control over the heights and shapes of microstructures, resulting in lenses with accurate and well-defined optical properties. Grayscale lithography is well suited for small-batch or prototyping production, making it cost-effective for niche or research applications.

Microdroplet jetting/dripping

The manufacturing process is shown in Fig. 3. Micro-scale droplets are generated from a piezo-actuated ink-jet nozzle and dripped onto a substrate. When the microdroplets sit stably on the substrate, a balance of the forces applied to them, i.e. gravity and surface tension, is achieved. Under the action of these forces, dominated by surface tension, the microdroplets take the shape of spherical caps. The shape can be described by the contact angle between the microdroplets and the substrate. The contact angle can be tuned by modifying the wettability of the substrate. As a result, the profile of the microdroplets, which determines the focal length of the microlenses, can be controlled. In addition, the jetted volume determines the size of the droplets, which is proportional to the aperture of the microlenses. It is worth noting that when the diameter of the droplets is larger than the capillary length, i.e. d > 2√(γ_ls/(ρg)), where γ_ls is the surface tension of the liquid-substrate interface, ρ is the density of the liquid and g is the gravitational acceleration, the contribution of gravity cannot be neglected and the profile of the droplets is better fitted by an elliptical function [27]. Subsequently, the shape of the microdroplets can be fixed by UV curing. The microdroplet jetting/dripping technique is a simple way to produce MLAs with controllable focal length and diameter. It requires a drop-on-demand jet nozzle and a high-precision 2D translation stage to generate and position the micro-scale droplets one by one. Nevertheless, it is not very efficient for manufacturing a large-area MLA.
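The criterion quoted above (droplet diameter versus 2√(γ_ls/(ρg))) and the spherical-cap geometry of a sessile droplet translate directly into a back-of-the-envelope design calculation. The sketch below is illustrative only: the material constants are typical order-of-magnitude values rather than data from the cited work, and the focal-length formula assumes an ideal spherical cap in the gravity-negligible regime.

```python
import numpy as np

def droplet_lens(volume_uL, contact_angle_deg, n_resin=1.5,
                 gamma=0.03, rho=1100.0, g=9.81):
    """Spherical-cap estimates for a dispensed microlens droplet.

    volume_uL : dispensed volume in microlitres; contact angle in degrees.
    gamma (N/m) and rho (kg/m^3) are assumed, typical values for a UV resin.
    Returns base diameter, radius of curvature, focal length and the
    capillary-length criterion 2*sqrt(gamma/(rho*g)) quoted in the text.
    """
    V = volume_uL * 1e-9                        # m^3
    theta = np.radians(contact_angle_deg)
    shape = (2 - 3 * np.cos(theta) + np.cos(theta) ** 3) / np.sin(theta) ** 3
    a = (3 * V / (np.pi * shape)) ** (1 / 3)    # base radius of the cap
    R = a / np.sin(theta)                       # radius of curvature
    f = R / (n_resin - 1)                       # plano-convex thin-lens focal length
    gravity_limit = 2 * np.sqrt(gamma / (rho * g))
    return {"diameter_mm": 2 * a * 1e3, "R_mm": R * 1e3,
            "f_mm": f * 1e3, "gravity_limit_mm": gravity_limit * 1e3}

print(droplet_lens(volume_uL=0.05, contact_angle_deg=60))
```

For the sample values above the droplet diameter comes out well below the gravity limit, which is the regime in which the spherical-cap description used here is reasonable.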
Thermal reflow Thermal reflow technique is a sophisticated way to produce MLAs and has already been employed in the MLA manufacturing industry [28,29].In the MLA fabrication, an array of cylinders is produced by photolithography technique, as shown in Fig. 4.Then, the cylinder array is heated to melt down into spherical caps.After cooling, MLA is formed.The aspect ratio, i.e. the ratio between the height and the diameter, and the spacing of the cylinders are key parameters to affect the curvature, the aperture and the filling factor of the microlenses.High numerical aperture and high filling-factor MLAs of a large area could be produced by using thermal reflow method in a time-saving manner.Since the modulation of the diameter and the curvature of the microlenses is intrinsically related in the fabrication, it is not easy to control the optical parameters individually. Microfluidics assisted manufacturing Recently, various cost-effective methods for fabrication MLAs are demonstrated by curving liquid surface with the assistance of the microstructure mold.MLAs with controllable curvature can be realized by driving UV-curable liquid to flow through an air gap under a hydrophobic microhole array and on a hydrophilic substrate [30].Surface tension overcomes viscosity of the liquid to curve the liquid surface under the microholes, as depicted in Fig. 5(a).The surface curvature could be controlled by adjusting the gap between the substrate and the microhole array. Besides that, when the microhole array mold is gently pressed on the liquid, the liquid surface under the microholes would be curved due to surface tension [31].The schematic diagram is shown in Fig. 5(b).If the liquid is heated, the pressure difference across the liquid-air interface would be changed according to Eotvos Ramsay-Shield relation.The variation of the pressure difference could be harnessed to control the surface curvature.The curved surface with the thermally controlled curvature could be UV cured, forming concave MLAs.MLAs with controllable focal length and aperture can be directly produced in a microhole array mold [32].The fabrication procedure is simple, as illustrated in Fig. 6. Step 1: The liquid is filled into the microholes. Step 2: The mold is spun at a high rotation speed. Step 3: The liquid is solidified under the UV exposure, forming MLAs.During the spin of the mould, partial liquid is spun out of the microholes and the surface of the remaining liquid in the microholes becomes curved due to the thermodynamic equilibrium between three phases, i.e. solid phase (the wall of the microholes), liquid phase (the UVcurable liquid), and the gas phase (air in the microholes).The curvature of the microlenses could be controlled by modifying the wettability of the microhole walls or adjusting the inclined angle of the microhole walls [33].By setting the microhole with different inclined walls, the focal length of the microlenses can be individually modified.As a result, the depth of view of the MLA could be extended because the microlenses on the MLA have a large range of the focal length.Biomimetic compound eyes can also be produced by microfluidics assisted manufacturing technique.The complex structure of the compound eye, specifically the outer MLA on the hemisphere and the internal waveguide structure, could be precisely realized [34].Fig. 
8 shows the fabrication procedure.A hemispherical pit mold with an array of microholes and a hemispherical substrate with an array of hollow channels are 3D printed using a projection micro-stereo-lithography 3D printer.The pit mold is then filled with photoresist.After spinning the pit mold, only a portion of photoresist remains in the microholes.Then, the photoresist is UV cured.The hemispherical substrate is placed in the mold and elastomer is used to fill the mold and the substrate.After curing, the hemispherical substrate is taken out.The MLA is formed on the substrate and the internal channels are all filled with the elastomer, forming an apposition compound eye. Conclusion A diversity of micro/nano manufacturing techniques have been presented for optics fabrication.These techniques have their own features in aspects of the ability of 3D construction, manufacturing precision, low cost/rapid processing and massive production.Among these advanced techniques, microfluidics assisted manufacturing technique provides a unique way for fabricating optics.High-cost and largefootprint equipment is not utilized.The fabrication procedure is simple.The surface tension plays an important role in the liquid surface shaping, and the wettability could be easily modified.Thus, the optical parameters, e.g.focal length and aperture, of the microlenses could be flexibly controlled. Although the recently proposed microfluidics assisted optics manufacturing techniques attract lots of interests due to its extraordinary features, it still meets great challenge.For example, since the liquid surface is formed by surface tension in the capillary, the surface is always spherical and in the scale below capillary length.Fabricating lenses with large, free-form profiles is a difficult topic.We believe the microfluidics assisted manufacturing techniques could be well developed and applied to fabricate more interesting optics. Figure1. Figure1.Fabrication procedure of laser direct writing.(a) Fabrication of the MLA.(b) Fabrication of the compound eye. Figure 5 . Figure 5. Fabrication procedure of the microfluidics assisted manufacturing techniques.(a) Fabrication of the MLA based on lateral flow.(b) Fabrication of the MLA by thermally curving photosensitive gel beneath microholes. Figure 6 . Figure 6.Fabrication of MLA with controllable focal length by modifying surface wettability.(a) Fabrication procedure of the concave mold and MLA.(b) SEM images of the concave MLAs fabricated with the surface treatment of 20 s, 60 s and 120 s [32]. Figure 7 . Figure 7. Fabrication of uniform-aperture multi-focus MLA using microholes with inclined walls.(a) Fabrication procedure of the multi-focus MLA.(b) 3D printed microholes with different inclined walls, residual photoresist after spin coating, and lenslets obtained by replicating the mold [33]. Figure 8 . Figure 8. Fabrication procedure of the apposition compound eye.
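Referring back to the thermal reflow step described earlier, the standard first-order design rule is volume conservation: the photoresist cylinder melts into a spherical cap of the same volume and the same base diameter, which fixes the cap height and hence the lens curvature. The sketch below solves that relation numerically; it is a generic estimate under the stated idealization (no volume loss, pinned contact line) rather than a result from the cited works.

```python
import numpy as np

def reflow_lens(diameter_um, resist_height_um, n_lens=1.56):
    """Spherical-cap lens parameters after thermal reflow, by volume conservation.

    A resist cylinder (diameter, height) is assumed to melt into a cap with the
    same base diameter and the same volume (pinned contact line, no loss).
    """
    a = diameter_um / 2.0
    # Cap height h solves (pi*h/6)*(3a^2 + h^2) = pi*a^2*h_cyl
    coeffs = [1.0, 0.0, 3.0 * a ** 2, -6.0 * a ** 2 * resist_height_um]
    roots = np.roots(coeffs)
    h = float(roots[np.isreal(roots) & (roots.real > 0)][0].real)  # physical root
    R = (a ** 2 + h ** 2) / (2.0 * h)            # radius of curvature
    f = R / (n_lens - 1.0)                       # plano-convex focal length
    return {"cap_height_um": h, "R_um": R, "f_um": f}

print(reflow_lens(diameter_um=100.0, resist_height_um=10.0))
```

This coupling between the cylinder aspect ratio and the resulting curvature is exactly why the aperture and the focal length cannot be tuned independently in reflow, in contrast with the microfluidics-assisted approaches discussed above.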
2,909.2
2024-01-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Mechanical Recycling and Its Effects on the Physical and Mechanical Properties of Polyamides The aim of this study is to investigate the impact of mechanical recycling on the physical and mechanical properties of recycled polyamide 6 (PA6) and polyamide 66 (PA66) in relation to their microstructures. Both PA6 and PA66 raw materials were reprocessed six times, and the changes in their properties were investigated as a function of the number of recycling cycles. Up to the sixth round of recycling, only slight changes in the mechanical properties were detected, except for the percentage elongation. Both flexural strength and Young's modulus followed a decreasing trend, while elongation showed an increasing trend. Microscopic analysis was performed on virgin and recycled specimens, showing that imperfections in the crystalline regions of polyamide 6 increased as the number of cycles increased.

Introduction Polymer recycling is the process of recovering and reusing different types of plastics that would otherwise end up in landfills or in the oceans, polluting the environment. Polymer recycling involves the use of advanced technologies to turn plastic waste into new raw materials that can be used to manufacture new products [1,2]. Polymer recycling is essential for maintaining environmental balance, reducing waste and pollution, and protecting natural resources. It also helps reduce the carbon footprint and energy consumption associated with the production of new plastics [3].

Recycling processes involve collecting plastic waste from homes, offices, and industrial factories, sorting and cleaning it to remove contaminants, grinding it into small pellets, and then melting it down to make new products [4,5]. There are different types of polymer recycling processes, including chemical recycling, biodegradable polymer recycling, and mechanical recycling [5][6][7]. Chemical recycling uses techniques such as pyrolysis and gasification to break down plastic waste into its component parts and then convert them into new materials [8]. The recycling of biodegradable polymers uses biodegradable plastics made from renewable raw materials such as corn or potato starch, which, under certain conditions, can break down into organic substances, water, and carbon dioxide [6][7][8][9]. Mechanical recycling is the processing of plastic waste by crushing and grinding it into small granules that can then be melted down and formed into new products [10].

Polyamide, commonly known as nylon, is a synthetic polymer with a wide range of applications in different industries. It is widely used in several industrial applications due to its exceptional durability, strength, and compatibility with various manufacturing techniques, making it an essential material for many industrial processes. Some of the major industrial applications of polyamide are in the automobile, textile, and packaging industries [11,12]. The automobile industry uses polyamide extensively for making various parts such as engine covers, air intake manifolds, and door handles, because it has excellent heat and chemical resistance, can withstand high loads and forces, and is lightweight. Similarly, the textile industry commonly uses nylon for making sportswear, hosiery, and swimwear due to its moisture-wicking capacity, high tensile strength, and abrasion resistance [11][12][13][14].
Moreover, polyamide can be either mechanically recycled or chemically recycled through thermal depolymerization [15]. Mechanical recycling involves grinding down the used polyamide products and then melting them to form new products, while chemical recycling involves breaking down the used polyamide into its constituent molecules and then reassembling them into new products.

PA's recyclability makes it a sustainable and environmentally friendly material by reducing the amount of waste in landfills, reducing the carbon footprint of manufacturing, and reducing the cost of production, since reprocessing old materials is cheaper than producing new ones [1,2]. Despite these promising benefits, reprocessing polyamide can lead to a reduction in its molecular weight [16], which can affect its mechanical properties and reduce its thermal stability, making it more susceptible to melting or deformation and to color alteration due to contamination, which can reduce the aesthetic quality of the end product.

Polyamides find applications in diverse environmental conditions, experiencing fluctuations in temperature and humidity over time. Given its hygroscopic nature, PA6 has the capacity to absorb between 9.5 and 10% of its weight in moisture, and PA66 is capable of absorbing up to an average of 8-10% of water upon saturation [17][18][19]. The uptake of water into polyamide is influenced by factors such as temperature, loading conditions, and time. The introduction of moisture can enhance the flexibility of the PA6 chains, leading to changes in its mechanical properties. One consequence of this moisture absorption is the observed reduction in the material's effective stiffness, as evidenced by experimental findings [20,21]. Thirumalai et al. conducted research on the use of PA6 as a polymeric matrix in fiber composite wind turbine blades [22]. They reported that PA6 absorbed approximately 3 wt.% of water at 296 K and 50% relative humidity. The time required for PA6 to reach moisture equilibrium was found to be contingent on the thickness of the specimen.

Gnanamoorthy et al. [23] conducted a study in which they fabricated a melt extrusion nanocomposite comprising PA6 and organically modified hectorite. They observed that heightened ambient humidity and water uptake led to a reduction in both hardness and modulus, while simultaneously enhancing the flexural fatigue life. In a separate investigation by Krzyzak et al. [24], the influence of varying amounts of glass fiber in a PA6-based composite was examined. This study specifically delved into moisture absorption from the atmosphere at 278 K and 70-80% relative humidity. The findings indicated that the inclusion of 20% and 30% of glass fiber resulted in approximately a two-fold and three-fold decrease, respectively, in water absorption for pellets immersed in water. In the present study, a meticulous drying process was undertaken in accordance with the supplier's specifications to prevent excessive moisture in PA6 and PA66, especially during recycling.
The present paper focuses on assessing the impact of multiple cycles of recycling on the physical-mechanical properties and microstructure of PA6 and PA66 when subjected to injection molding. The objective is to gain insight into the number of times PA6 and PA66 can undergo recycling without compromising their various properties, thereby producing a sustainable and useful material. The materials studied are mechanically recycled, which is one of the most common methods used to recycle PA, as it enables the material to be transformed into a range of products, such as automotive parts, electronic components, and household appliances [25]. However, it is crucial to ascertain the effect of multiple recycling cycles on the quality and durability of the material before widespread use. Therefore, this paper presents the experimental results obtained after subjecting polyamide 6 and polyamide 66 to six cycles of injection molding and provides insight into the effect of multiple recycling steps on the physical-mechanical properties and morphology of materials dedicated to the manufacturing of assembly components in a finished product. Hence, their UV degradation has not been investigated.

Materials The materials under investigation in this study consist of two types of polyamide granules: polyamide 6 (Teknor Apex, Rothenburg ob der Tauber, Germany, Chemlon MD82) and polyamide 66 (Technyl Solvay, Leuna, Germany, A205F BLACK 21N). The black color of both types of granules is achieved by incorporating 2% of dye pellets, also known as "color masterbatch", into a batch of white-colored base material granules. This process enables the uniform dispersion of the dye throughout the granules, ensuring consistent color quality and reproducibility, resulting in higher sample quality and ensuring that opacity is preserved throughout all the process steps. The use of masterbatch in the plastics industry is a well-established technique for producing high-quality granules with optimal physical and chemical properties for specific applications [26]. Table 1 presents the properties and essential attributes for the injection of the PA6 and PA66 materials used in this study.
Sample Preparation The experimental procedure entailed six cycles of mechanical recycling. Each cycle within the series involved a sequence of grinding, followed by a drying process, and concluded with injection molding. At various intervals within each series, samples were extracted for subsequent analysis. The rectangular specimens used in this study were produced using a KraussMaffei 50-ton injection molding machine. The injection conditions provided by the technical documents of the producers are summarized in Table 2. A specifically designed mold was used to create the specimens illustrated in Figure 1a, with final dimensions of 125 × 13 × 3 mm³. Polyamides are known for their hygroscopic nature. This property can significantly impact the quality and properties of the final product, particularly in melting processes. Additionally, it interacts synergistically with thermomechanical degradation, further accentuating the loss of the polyamides' properties. Prior to injection, the PA6 material was carefully dried for 2 h at 80 °C and the PA66 was dried for 4 h at the same temperature to remove any moisture that may have accumulated during storage or handling. This step is critical to ensure that the material properties are consistent and reproducible throughout the testing process. The dimensions of each specimen were precisely controlled. Some of the specimens of the first batch were then granulated in a grinding mill to obtain pellets of a uniform size (2.8 mm in mean diameter). These pellets were then carefully dried and re-injected under the same conditions as the virgin PA. The resulting specimens were tested and analyzed again. This recycling process, starting with the collection of sprues, milling, drying, injecting, and assessment, was repeated six times for both PA6 and PA66. Once all the rectangular plates were molded (Figure 1b), the standard tensile samples (Figure 1c) were cut. The tooling was conducted using a CIF Techno-drill 3 XL milling machine with a 2.5 mm diameter bur, an advancing speed of 5 mm/s, a rotation frequency of 22,000 rpm, and a cutting depth of 1 mm.
Material Characterization Quasi-static tensile and 3-point bending tests were performed on the studied materials using a Zwick/Roell Z100 universal testing machine. The tensile tests were conducted at room temperature using a 100 kN load cell and a clip-on extensometer strain gage following the ISO 527-4 standard [28] (Figure 2a), with a test rate of 2 mm/min. Flexural tests (Figure 2b) were conducted on samples with dimensions of 125 × 13 × 3 mm³ in accordance with the ASTM D-790 standard [29], on the Zwick/Roell Z100 machine with a 100 kN load cell. The tests were run at room temperature with a test rate of 1 mm/min. Each measurement was repeated on 5 samples.

Microscopic Analysis Scanning electron microscopy (SEM) observations were conducted using a JEOL JSM6010 PLUS/LA microscope. Transversally cut surfaces of cracked specimens resulting from the tensile tests were not metallized and were analyzed under low pressure to reduce the interference caused by the atmosphere (Figure 3). In order to obtain a comprehensive understanding of the damage mechanics, SEM was performed on virgin polyamides and on polyamides recycled six times to accurately compare the degradation in material quality at different levels of use.

Fluidity Test The comprehensive fluidity test was carried out in accordance with the ISO 1133-1 standard [30]. In this test, PA6 and PA66 melt flow rate measurements were performed, namely the material flow index measured by mass (MFR) and the material flow index measured by volume (MVR). For MFR, the mass of material extruded in a given time is measured (expressed in g/10 min). For MVR, the volume of material extruded in a given time is measured (expressed in cm³/10 min). MVR is perhaps the preferred measurement of flow behavior due to its independence of density, especially for plastics with rheological properties that are sensitive to their moisture content, such as polyamide [16]. A standard die was employed, with a nominal length of 8 mm and a nominal bore diameter of 2 mm. The specific conditions used for each material are summarized in Table 3.
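Since ISO 1133-1 reports flow in both mass and volume terms, a short sketch of the underlying arithmetic may help. The 600 s factor simply expresses the result per 10 min, and the cut mass, cut interval, and melt density in the example are invented illustrative numbers, not measurements from this study.

```python
def mfr_g_per_10min(cut_mass_g, cut_interval_s):
    """ISO 1133 procedure A: MFR = 600 * m / t, with m the average mass (g)
    of an extrudate cut and t the cut time interval (s); 600 s = 10 min."""
    return 600.0 * cut_mass_g / cut_interval_s

def melt_density_g_per_cm3(mfr, mvr):
    """MFR (g/10 min) and MVR (cm^3/10 min) measured under the same
    temperature and load are linked by the melt density: rho = MFR / MVR."""
    return mfr / mvr

# Hypothetical example values, for illustration only
mfr = mfr_g_per_10min(cut_mass_g=0.35, cut_interval_s=10.0)   # -> 21.0 g/10 min
print(mfr, melt_density_g_per_cm3(mfr, mvr=21.8))             # rho ~ 0.96 g/cm^3
```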
Tensile Test Moisture plays a crucial role in influencing the mechanical properties of polyamides. To mitigate this effect, the materials were carefully maintained at a moisture content under 0.2%. The accurate measurement of moisture levels was achieved using the MS-70 Moisture Analyzer from A&D. In this study, PA6 underwent six reprocessing cycles. Interestingly, no significant color change was observed in either PA6 or PA66 between the virgin material and the material subjected to six cycles. Figure 4 illustrates the correlation between the number of processing cycles and the mechanical characteristics of both PA6 and PA66. In our investigation, we conducted tests on virgin PA6 and PA66 to assess their tensile modulus and tensile strength. The results were then compared with the data provided by the supplier in Table 1 of the technical datasheet. For virgin PA66, our test results closely align with the values specified by the supplier. The measured parameters exhibit a high level of similarity, indicating a strong consistency between our experimental findings and the manufacturer's data. However, a notable difference was observed for PA6, particularly in the tensile strength parameter, where our measured value was 51 MPa, while the supplier's datasheet indicated a value of 70 MPa. Several factors could contribute to this observed difference. Firstly, the raw material used to manufacture polyamide 6 may exhibit variations between batches. These variations can arise due to differences in the properties of the raw materials, even if they comply with the supplier's specifications. Secondly, the manufacturing process, including factors such as temperature, pressure, and cooling times, can influence the mechanical properties of the material. Deviations from the specified conditions during the production process may lead to variations in the material's performance. Finally, the effects of storage conditions on the material cannot be discounted. Changes in environmental conditions, such as humidity or temperature during storage, may impact the material's properties over time.
Young's modulus decreased by up to 20% from the virgin state to the first cycle for PA6, from 2490.16 to 1996.16 MPa. It then decreased slowly, by about 5%, as the number of processing cycles increased, reaching 1595.76 MPa at the sixth cycle (Figure 4a). As with Young's modulus, a decreasing trend was also observed in tensile strength. Figure 4b shows that it was reduced by 14% after the sixth reprocessing cycle. Moreover, alterations in elongation (%) were noticed with each process. A considerable increase in elongation (%) of about 21% occurred between the virgin PA6 and its first recycling cycle, followed by successive increases whose intensity depended on the number of recycling cycles. Cycles 2 and 3 exhibited similar elongations (%). After the third cycle, the improvement was more pronounced and reached 22% at the sixth cycle. In brief, a total increase of 49% in elongation (%) was observed (Figure 4c).

The variation in the mechanical properties of polyamide 6 has already been investigated by Maspoch et al. [31] and Crespo et al. [32] using PA6 scrap obtained from injection molding. They demonstrated that with an increase in the number of processing operations, Young's modulus and the maximum tensile stress decreased for PA6. Additionally, for more than three processing cycles, a reduction of 28% in tensile stress and 11% in Young's modulus was observed.
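As a quick sanity check on the percentages quoted above for PA6, the relative changes follow directly from the reported modulus values. The snippet below is a minimal illustration rather than part of the original analysis; it reproduces the roughly 20% drop after the first cycle and shows the cumulative loss by the sixth cycle.

```python
# Reported PA6 Young's modulus values (MPa): virgin, cycle 1, cycle 6
modulus = {"virgin": 2490.16, "cycle 1": 1996.16, "cycle 6": 1595.76}

virgin = modulus["virgin"]
for label, value in modulus.items():
    change = 100.0 * (value - virgin) / virgin
    print(f"{label:>8}: {value:8.2f} MPa ({change:+.1f}% vs. virgin)")
# cycle 1 comes out at about -19.8% and cycle 6 at about -35.9% relative to virgin
```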
Meanwhile, in the case of PA66, there was an 18% decrease in Young's modulus from the virgin state to the first cycle, followed by a slow and continued reduction up to the sixth cycle. In total, a 34% reduction relative to the virgin material was recorded at the sixth cycle (Figure 4a). As with the tensile modulus, tensile strength also showed a decreasing trend, being reduced by 64% after the sixth reprocessing cycle. Additionally, changes in elongation (%) were observed throughout each process. Similar to PA6, between virgin PA66 and the first recycling cycle there was an improvement in elongation (%) of 16%, while a subsequent increase of about 20% was identified between the virgin material and the second recycling cycle. Overall, an increase of 52% in elongation (%) was noted. These results align with the findings of Djeddi and Mohellebi [33], who demonstrated that polyamide PA66 exhibits strong mechanical resistance with a low elongation at break. Recycling this material leads to a notable improvement in ductility, of around 233%, accompanied by a reduction in maximum stress of 11%.

Microscopic Analysis SEM images were utilized to examine the inclusion size alterations in both virgin and recycled PA6 and PA66. Using 500× magnification, microphotographs were captured to investigate the qualitative changes in inclusion size. Figure 5 shows the microstructure of the virgin PA6 at the cracked section for dried and non-dried material before injection. Three elements can be distinguished: cupules, inclusions, and the matrix. In the case of undried PA6, most cupules contain inclusions (Figure 5a). For the dried PA6, in contrast, some cupules become empty and the inclusions disappear, or their sizes are reduced (Figure 6a-g, Table 4), potentially due to the formation of non-melted granules arising from polymer degradation that contributes to the development of smaller crystals [34], leading to a higher crystallization temperature. Consequently, with a higher number of recycling cycles, inclusion sizes become smaller, displaying changes in the crystalline regions [35]. The impact of recycling on the microstructure is shown in Figure 6. Nevertheless, the distinction between the images seems insufficient based on simple observation. This is why we used the image analysis tool ImageJ v1.53k from Wayne Rasband and the US National Institutes of Health [36]. This program transforms the particle fractions into a binary image and analyzes its pixels; particle area fractions were obtained by counting pixels [37]. The results are given in Table 4. It was found that for PA6, the mean inclusion size diminished as the number of recycling cycles increased. The total area of inclusions on the treated SEM images also decreased continuously from the virgin state to the fifth recycling cycle. Also, the percentage of porosity increased as the number of recycling cycles increased. This modification of the microstructure's composition is linked to the decrease in viscosity and mechanical properties observed previously. The degeneration of inclusions can be explained by chain breakage or hydrolysis. Consequently, with a higher number of recycling cycles, the inclusion sizes became smaller and the porosity of the material became higher. As shown in Figure 7, SEM analysis of polyamide 66 was carried out to determine the mode or cause of failure. Polyamide 66 exhibited a brittle fracture morphology from the first cycle onward, due to contamination particles that produce defective sites.
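The particle statistics in Table 4 were obtained with ImageJ's pixel counting. As a rough sketch of the same idea in code, the snippet below thresholds an SEM image and reports the area fraction and mean equivalent diameter of the segmented particles; the file name, the Otsu threshold, and the assumption that inclusions and pores appear darker than the matrix are illustrative choices, not details of the original ImageJ workflow.

```python
import numpy as np
from skimage import io, filters, measure

def particle_stats(path, min_pixels=10):
    """Segment particles in a grayscale SEM image and summarize them."""
    img = io.imread(path, as_gray=True)
    thr = filters.threshold_otsu(img)           # global Otsu threshold
    mask = img < thr                            # assume inclusions/pores are dark
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels) if r.area >= min_pixels]
    area_fraction = sum(r.area for r in regions) / mask.size
    mean_diameter_px = np.mean([r.equivalent_diameter for r in regions])
    return area_fraction, mean_diameter_px

# frac, diam = particle_stats("pa6_cycle3_500x.tif")   # hypothetical file name
# print(f"area fraction = {frac:.3%}, mean equivalent diameter = {diam:.1f} px")
```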
Flexural Test The flexural properties are summarized in Figure 8. For virgin PA6 and PA66, a slight difference was observed between the measured flexural modulus and flexural strength and the values presented in Table 1. This variance is attributed to the several factors mentioned previously and to potential effects of injection molding, where the direction of material injection is perpendicular to the direction of the applied flexural force. This perpendicularity creates a particular stress distribution within the material, impacting its response to bending. Consequently, the mechanical properties measured in flexural testing may exhibit variations compared to the specifications provided by the supplier.
Fluidity Test Figures 9 and 10 illustrate the MFR and MVR of recycled PA6 and PA66. For PA6, both indices increase with the number of processing cycles up to the fifth recycling cycle, signifying a decrease in viscosity due to chain breakage or hydrolysis. Starting from the sixth processing cycle, a notable reduction in the MFR and MVR values is observed, which could be attributed to degradation caused by an increase in molecular weight due to crosslinking or condensation processes [16]. For PA66, during the first recycling cycle the variation in the melt index reveals interesting dynamics. At this stage, the extruded volume per unit time increased while the extruded mass decreased, implying a potential interaction between viscosity and porosity effects. It is noteworthy that the material exhibited increased porosity, demonstrated by a reduction in the melt flow rate (MFR), along with a decrease in viscosity, resulting in improved fluidity, as evidenced by a higher melt volume rate (MVR). As a result, the degradation caused by chain breakage became apparent in the first recycling cycle, affecting both the MVR and the viscosity.
In the following recycling cycles (cycles 2 through 5), the MVR decreased again, falling below the original level of the material. This corresponds to an increase in viscosity caused by degradation through increased molecular weight and crosslinking. An intriguing change took place in the sixth recycling cycle, when the MVR trend reversed and displayed an average value higher than that of the fifth cycle. This reversal suggests that degradation through chain breakage re-emerged as a contributing factor.

The degradation of the material, characterized by breaks in the polymer chains leading to shorter chains, results in a decrease in viscosity. This phenomenon occurs when the material undergoes repeated processing, such as injection, extrusion, and pelletization. This trend is consistent with findings reported by other researchers [31,32,38] who studied the reprocessing of PA6 and PA66: the viscosity of the materials diminishes with repeated cycles. However, comparing the final data is challenging due to various factors, including differences in the materials used, variations in process parameters, and potential contamination during the recycling process. These variables could influence the final results, making an exact comparison difficult.

Figure 8. Flexural properties of virgin and recycled PA6 and PA66: (a) flexural strength; (b) elongation at break; (c) flexural modulus. The flexural strengths of virgin polyamide 6 and polyamide 66 were 156.06 MPa and 192.80 MPa, respectively, which were reduced to 99.72 MPa and 113.80 MPa after the sixth reprocessing cycle. The changes in flexural strength and flexural modulus both showed a decreasing trend, while the change in elongation showed an increasing trend. The trends of the results are similar to those of the tensile properties discussed above. This conclusion aligns with the research conducted by Maspoch et al. [31].

Table 4. SEM image analysis vs. number of recycling cycles of PA6.
6,994.4
2023-11-28T00:00:00.000
[ "Materials Science", "Engineering" ]
Stereotactic body-radiotherapy boost dose of 18 Gy vs 21 Gy in combination with androgen-deprivation therapy and whole-pelvic radiotherapy for intermediate- or high-risk prostate cancer: a study protocol for a randomized controlled, pilot trial Background Combination therapy using external-beam radiotherapy (EBRT) with a brachytherapy boost has demonstrated superior biochemical control to dose-escalated EBRT alone. Whereas brachytherapy has the disadvantage of being an invasive procedure, stereotactic body-radiotherapy (SBRT) using CyberKnife can emulate the dose distribution of brachytherapy and is a non-invasive and safe modality that controls intra-fractional movement. We therefore adopted SBRT using CyberKnife as a boost therapy after whole-pelvic radiotherapy (WPRT). Methods/design In this prospective, randomized, single-center, pilot study for intermediate- and high-risk prostate cancer without nodal or distant metastasis, after androgen-deprivation therapy and WPRT, patients will be randomized to one of two SBRT boost regimens, i.e., 18 or 21 Gy administered in three fractions every other day. Discussion The aim of this trial is to evaluate acute toxicities, using both physician- and patient-reported outcomes, and short-term biochemical control with an SBRT boost following WPRT. Additionally, chronic toxicities and long-term biochemical control will be evaluated as secondary endpoints in this trial. Based on the generated results, we will plan the full-scale phase II study for selecting the SBRT boost dose. Trial registration ClinicalTrials.gov, ID: NCT03322020. Retrospectively registered on 26 October 2017. Electronic supplementary material The online version of this article (10.1186/s13063-018-2574-y) contains supplementary material, which is available to authorized users.

Background High-dose external-beam radiotherapy (EBRT) with a prescription dose of 76-80 Gy is the current standard treatment for intermediate- and high-risk prostate cancer [1][2][3]. Recently, some studies have reported that combination therapy using EBRT with a brachytherapy boost demonstrated superior biochemical control to dose-escalated EBRT alone [4][5][6]. However, brachytherapy has several disadvantages, including invasiveness, the need for anesthesia, insertion of multiple catheters, pain control, and hospitalization. One of several approaches to replace the brachytherapy boost with a noninvasive EBRT boost technique is an SBRT boost using the CyberKnife robotic radiosurgery system (Accuray Incorporated, Sunnyvale, CA, USA). A previous trial reported that the dose distribution achievable with CyberKnife was comparable to that of brachytherapy [7]. Another advantage of CyberKnife over other SBRT delivery systems, such as volumetric modulated arc therapy, is intra-fraction motion control. With this rationale in mind, WPRT followed by an SBRT boost using CyberKnife is anticipated to provide biochemical control superior to conventional high-dose EBRT. In addition, medical expenses will be lower because of a shorter total treatment time. The traditional fraction size has been 1.8-2 Gy; therefore, the duration of EBRT is 8-9 weeks. A longer EBRT course is a heavy burden for both patients and the health care system. Because prostate cancer seems to have greater sensitivity to higher doses per fraction than nearby organs [8], increasing the fraction size, called hypofractionation, has been adopted for prostate cancer [9,10].
However, the implementation of hypofractionation together with WPRT is challenging because of concerns about toxicity. Administration of SBRT as a boost therapy after WPRT, on the other hand, can shorten the total treatment period to approximately 5 weeks. Several retrospective studies have assessed the clinical outcomes of WPRT with 45 Gy administered in 25 fractions, followed by a CyberKnife boost of 18-21 Gy in three fractions, and demonstrated the feasibility of an SBRT boost after WPRT [11,12]. However, the boost dose and the other modalities used in combination, such as androgen-deprivation therapy (ADT), varied among the patient populations. This prospective, randomized, single-center trial evaluates the acute toxicities and short-term control of prostate-specific antigen (PSA) of a combination treatment including ADT, WPRT, and an SBRT boost. This trial uses two SBRT boost dose regimens, 18 and 21 Gy, both administered in three fractions. Based on the results of the study, we will design the full-scale phase II study for selecting the appropriate SBRT boost dose.

Recruitment and study design This prospective, randomized, single-center, pilot study on intermediate- and high-risk prostate cancer patients is ongoing at Asan Medical Center. Patients are randomized into two groups with a 1:1 allocation using blocked randomization, and the random block size is 4. The randomization sequence is generated by the clinical research nurse using Excel software. The allocation sequence is concealed from the primary investigator who enrolls the patients, using a password-locked Excel file. Also, the file is saved on a computer located in a place that the primary investigator cannot access. One group will receive a CyberKnife SBRT boost dose of 18 Gy administered in three fractions (18-Gy group), and the other group will receive a CyberKnife SBRT boost dose of 21 Gy administered in three fractions (21-Gy group). As shown in Fig. 1, all patients will first receive ADT. Several studies strongly support the use of ADT in addition to EBRT: long-term (i.e., 2-3 years) ADT for high-risk patients [13,14] and short-term ADT (4-6 months) for the intermediate-risk group [15,16]. The risk group definition is based on the current National Comprehensive Cancer Network guidelines (https://www.nccn.org/professionals/physician_gls/pdf/prostate.pdf). Informed consent of patients is mandatory before recruitment and is obtained by the radiation oncologists. After ADT, WPRT with intensity-modulated radiotherapy (IMRT) will be initiated. Next, the CyberKnife boost will be implemented. Since the primary investigator has to confirm the plan for the CyberKnife boost, blinding is not possible afterward. The dose schedules are designed to achieve a biologically effective dose (BED) comparable to that of the conventional regimen of 78 Gy administered in 39 fractions. BED is widely used in radiation oncology to compare diverse radiation dose regimens using different fraction sizes, fraction numbers, and total doses [17]. For calculation of the BED, an α/β ratio constant, i.e., the dose at which the cellular death rates from the linear and quadratic components are equal, is needed. A usual α/β ratio for normal tissue is 3, whereas that for prostate cancer is 1.5. As summarized in Table 1, the normal tissue BED for the conventional regimen is 130.0 Gy, which is similar to that for the 18-Gy boost regimen (130.7 Gy) and lower than that for the 21-Gy boost regimen (146.7 Gy). The BED values for prostate cancer are 182.0, 197.3, and 226.3 Gy with the conventional, 18-Gy boost, and 21-Gy boost therapies, respectively. As a result, in the 18-Gy group the adjacent organs will receive approximately the same BED as with conventional EBRT while the tumor will receive about 110% of the conventional BED, whereas in the 21-Gy group the normal organs and the prostate cancer will be irradiated with approximately 110% and 120% of the conventional BED, respectively.
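For readers who want to reproduce these figures, the linear-quadratic BED is simply n·d·(1 + d/(α/β)), summed over the treatment phases. The sketch below recomputes the values under the stated α/β ratios and the WPRT schedule given later in the protocol (44 Gy in 2.2-Gy fractions); the recomputed numbers land within roughly 1 Gy of the quoted normal-tissue values and within about 1.5 Gy of the quoted tumor values, presumably a matter of rounding, so treat the script as an illustration rather than the protocol's own calculation.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

schedules = {
    "conventional 78 Gy / 39 fx": [(39, 2.0)],
    "WPRT 44 Gy + 18 Gy boost":  [(20, 2.2), (3, 6.0)],
    "WPRT 44 Gy + 21 Gy boost":  [(20, 2.2), (3, 7.0)],
}

for name, phases in schedules.items():
    normal = sum(bed(n, d, 3.0) for n, d in phases)   # normal tissue, alpha/beta = 3
    tumor  = sum(bed(n, d, 1.5) for n, d in phases)   # prostate cancer, alpha/beta = 1.5
    print(f"{name}: normal-tissue BED = {normal:.1f} Gy, tumor BED = {tumor:.1f} Gy")
```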
Inclusion criteria Patients fulfilling the following criteria are eligible for this trial: age 20 years or older; diagnosis of pathologically confirmed intermediate- or high-risk prostate cancer within 6 months of enrollment; and an Eastern Cooperative Oncology Group performance status of 0-1. Patients should also fulfill the following biochemical and laboratory criteria based on tests performed within 6 months before enrollment: absolute neutrophil count ≥ 1500 cells/mm³; platelet count ≥ 50,000 cells/mm³; hemoglobin ≥ 8.0 g/dl; normal kidney function as determined by creatinine < 2.0 mg/dl; and normal liver function as determined by total bilirubin < 1.5 times the maximum normal value and alanine aminotransferase or aspartate aminotransferase < 2.5 times the maximum normal value.

Exclusion criteria Patients who fulfill any of the following criteria are excluded from this trial: distant metastasis; pelvic lymph node metastasis; history of ADT within 6 months before enrollment; history of definitive treatment for prostate cancer, such as radical prostatectomy; history of pelvic irradiation; or diagnosis of a double primary cancer other than skin or thyroid cancer.

Dropout criteria Patients who wish to exit the trial or who withdraw consent will be withdrawn from the study. Additionally, any medical issues or unexpected serious complications occurring during treatment or the follow-up period that hinder completion of the study protocol will result in patient dropout.

Treatment implementation ADT will be initiated 3 months before WPRT (Fig. 1). In the intermediate-risk group, ADT will be administered for 6 months, whereas in the high-risk group, long-term ADT for 2-3 years will be administered. Administration of antiandrogens and/or luteinizing hormone-releasing hormone agonists is allowed for patients in this study. Before computed tomography (CT) for treatment planning, an experienced radiologist will insert three gold intra-prostatic fiducial markers under endorectal sonography guidance for CyberKnife tracking. One week later, enhanced CT scans with a slice thickness of 2.5 mm will be obtained in all patients, who will have empty bladders and rectal balloons in place. Participants should be in a relaxed supine position facilitated by an ankle pillow. Afterwards, a radiation oncologist will delineate the organs-at-risk (OAR) and target volumes. If available, fusion with diagnostic magnetic resonance imaging (MRI) is recommended. The gross target volume (GTV) includes the whole prostate gland, involved extraprostatic tissue, and any seminal vesicles suspected of involvement. If seminal vesicle invasion is definitely excluded, the seminal vesicles will not be included in the GTV. The clinical target volume (CTV) includes the regional nodal areas, i.e., the obturator, external/internal iliac, and presacral lymphatic areas. The planning target volume (PTV) is an expansion of the CTV by 5 mm, except for a 3-mm posterior margin for rectal sparing. The PTV must be irradiated with ≥ 95% of the prescription dose.
The OAR include the rectum, bladder, and femoral heads, all contoured according to the Radiation Therapy Oncology Group guidelines [18]. IMRT will be used, and the dose regimen will be 44 Gy in 2.2-Gy fractions every weekday. Dose constraints are as follows: (1) bowel (small and large), 30% of the entire bowel volume must not receive more than 40 Gy; (2) rectum, 60% of the volume must receive ≤ 40 Gy; (3) bladder, 35% of the bladder volume must receive ≤ 45 Gy; (4) femoral heads, 15% of the femoral head volume must receive < 35 Gy. WPRT will be initiated within 10 days after the day of CT for treatment planning. For image guidance, daily cone-beam CT will be used. Patients will be asked to empty their bladder before each fraction, while there will be no routine bowel preparation. For SBRT boost planning, another non-enhanced CT with 1.25-mm slices will be obtained 1 week before the start of the boost therapy. Patients will be in a supine position with a vacuum cushion and an ankle pillow, with an empty bladder and a Foley catheter inserted for delineation of the urethra. The rectal balloon will not be used for the boost therapy. The boost volume is identical to the GTV of the WPRT delineation, with a PTV margin of 3 mm in all directions. The urethra as well as the rectum and bladder are also included in the OAR. The CyberKnife robotic radiosurgery system (Accuray Incorporated, Sunnyvale, CA, USA) will be utilized for the SBRT boost, and the PTV must receive ≥ 80% and < 120% of the prescribed dose. OAR dose constraints are as follows: (1) rectum, < 85% of the prescription dose; (2) bladder and urethra, < 100% of the prescription dose. The prescription doses are 18 Gy and 21 Gy in three fractions, delivered every other day. Fiducial marker tracking will be used for image-guided radiotherapy. An enema will be performed for bowel preparation before every treatment. The whole radiotherapy period will not exceed 10 weeks.

Data collection and follow-up period Patient data will be collected before the start of radiotherapy, every week during WPRT, at the end of WPRT, at the end of the SBRT boost, and at the 1-month and 4-month follow-up visits after radiotherapy (Fig. 2). Patient data on urologic/gastrointestinal toxicities will be documented in Case Report Forms using the Common Toxicity Criteria for Adverse Events (CTCAE v 4.03), patient-reported outcomes (PRO), and PSA levels. After confirmation of the CyberKnife boost plan (in the fourth week of radiotherapy), the evaluation of toxicities and efficacy will no longer be blinded. From ADT initiation, patients will be evaluated by the Department of Urology during follow-up visits every 3 months. After the radiotherapy sessions, patients will also be evaluated by the Department of Radiation Oncology every 3 months for the first 2 years, every 6 months until 5 years, and annually thereafter. Follow-up evaluations will assess chronic toxicities and PSA levels. If clinical recurrence is suspected, imaging studies, such as MRI and/or bone scans, will be considered. For patient data protection, a separate identification code will be given to each patient. Also, the data will be protected in password-accessible files, and only the investigators can access the files. All data will be deleted 3 years after the end of the study.

Assessment of primary and secondary endpoints The primary endpoints of this trial are acute toxicities and short-term biochemical control.
Acute toxicities are defined as events occurring within 3 months after treatment and are evaluated according to the CTCAE v 4.03 and PRO, using the Overactive Bladder Symptom Score and the International Prostate Symptom Score. The secondary endpoints are chronic toxicities and long-term biochemical control. Chronic toxicities are evaluated using the Late Effects of Normal Tissues scoring system. Biochemical recurrence (BCR) is defined according to the Phoenix consensus definition as a rise in PSA level of 2 ng/ml or more above the nadir value [19]. The trial design and protocol adhere to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) criteria. The SPIRIT Checklist and Figure can be found in Additional file 1: Table S1 and Fig. 2.

Statistical analysis The present study is being performed as a purely exploratory pilot study; therefore, a total of 15 patients in each treatment group is considered an appropriate sample size for statistical analysis. Comparison of the two treatment groups will be conducted using the chi-square test or Fisher's exact test. Biochemical recurrence-free survival (BCRFS) will be calculated from the ADT start date using the Kaplan-Meier method. Descriptive statistics of all analyzed parameters will be provided, whenever appropriate.
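To make the endpoint definition concrete, the fragment below sketches how a PSA series could be screened for Phoenix-defined recurrence and converted into the time-to-event data that the Kaplan-Meier analysis would consume. The PSA values and visit times are invented for illustration, and the running-minimum nadir is a simplifying assumption rather than a rule taken from the protocol.

```python
def phoenix_bcr(psa_series):
    """psa_series: time-ordered (months_from_ADT_start, PSA in ng/ml) pairs.
    Returns (event_observed, time): the BCR time if an event occurred,
    otherwise the last follow-up time (censoring)."""
    nadir = float("inf")
    for months, psa in psa_series:
        nadir = min(nadir, psa)
        if psa >= nadir + 2.0:          # Phoenix: rise of >= 2 ng/ml above nadir
            return True, months
    return False, psa_series[-1][0]

# Invented example follow-up data for two patients
patients = {
    "patient A": [(3, 1.8), (9, 0.4), (15, 0.5), (24, 2.7)],   # rise above 0.4 + 2.0 -> BCR at 24 months
    "patient B": [(3, 2.1), (9, 0.6), (15, 0.5), (24, 0.7)],   # no qualifying rise -> censored at 24 months
}
results = [phoenix_bcr(series) for series in patients.values()]
for name, (event, months) in zip(patients, results):
    print(name, "BCR at" if event else "censored at", months, "months")
# The resulting (duration, event) pairs are exactly what a Kaplan-Meier estimator needs.
```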
Discussion A large comprehensive review involving over 52,000 patients reported that combination therapy including EBRT and brachytherapy appears superior to localized treatment alone in intermediate- and high-risk prostate cancer patients [6]. Additionally, Spratt et al. reported that, compared with high-dose IMRT alone, brachytherapy in combination with IMRT was associated with better 7-year BCRFS rates (81.4% vs 92.0%, P < .001) and better distant metastasis-free survival (93.0% vs 97.2%, P = .04) for patients with intermediate-risk prostate cancer [4]. The BCRFS curve decreased gradually until 10 years after IMRT alone, whereas there was no BCR incidence beyond approximately 7 years after the combination therapy. A recent Canadian phase III study demonstrated that patients receiving a low-dose-rate brachytherapy boost were twice as likely to be free of BCR compared with those receiving a dose-escalated EBRT boost [5]. They also reported that the 5-, 7-, and 9-year BCRFS rates were 89%, 86%, and 83% with the brachytherapy boost and 84%, 75%, and 62% with the EBRT boost, respectively (P < .001). Similar to Spratt et al., the BCRFS curves for the treatment arms in the Canadian study diverged sharply after 4 years, and PSA control with the brachytherapy boost reached a plateau after approximately 8 years, in contrast to the continuous BCR events in the EBRT boost arm. Therefore, the advantage of combination therapy should increase with longer follow-up. Despite better PSA control, one disadvantage of brachytherapy is the invasive procedure, which requires anesthesia, pain control, and hospitalization. A dosimetric study found that CyberKnife could achieve the dose distribution of brachytherapy [7]. The difference in the PTV volume receiving 100% of the prescribed dose between the CyberKnife and brachytherapy plans was only 0.5% (96.5% with CyberKnife vs 96.0% with brachytherapy). Furthermore, the CyberKnife plans showed lower urethral doses and a rapid rectal dose fall-off, suggesting a potential reduction in toxicities. More importantly, CyberKnife is non-invasive and controls intra-fractional movement through real-time fiducial tracking. Through the introduction of SBRT for the boost therapy, the overall treatment time can also be shortened to 5 weeks.

SBRT itself has the potential to increase the therapeutic ratio as well, given that prostate cancer has been reported to have a low α/β ratio with high sensitivity to large fraction sizes [8]. This trial uses PRO as well as the CTCAE toxicity criteria for accurate assessment of toxicities. Although the CTCAE criteria are the "gold standard" for evaluating adverse events, attempts to capture the patient perspective via PRO are increasing. A previous study found that agreement between the CTCAE criteria and PRO was poor to moderate [20]. Another study found that the number of patients reporting severe neuropathy (30%) was higher than that identified by the CTCAE criteria (10%), which suggested that toxicities are often undetected or underestimated by clinicians [21]. Because this is a pilot trial, toxicity assessment is an essential endpoint, and this trial is anticipated to evaluate complications more precisely using PRO. The goal of this prospective, single-center, pilot trial is to evaluate acute toxicities and short-term biochemical control after ADT and WPRT followed by an SBRT boost. We test two boost dose schemes (18 Gy/3 fractions vs 21 Gy/3 fractions). We anticipate acceptable acute toxicities and favorable short-term control of PSA in intermediate- and high-risk prostate cancer patients who receive the combination treatment of ADT, WPRT, and an SBRT boost. We will design the full-scale phase II study for selecting the boost dose based on the data from the present pilot study.

Trial status Patient recruitment has not yet been completed.

Additional file Additional file 1: Table S1.
3,977.4
2018-04-02T00:00:00.000
[ "Medicine", "Physics" ]
Representing Text for Joint Embedding of Text and Knowledge Bases Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al., 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure.

Introduction Representing information about real-world entities and their relations in structured knowledge base (KB) form enables numerous applications. Large, collaboratively created knowledge bases have recently become available, e.g., Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), and DBPedia (Auer et al., 2007), but even though they are impressively large, their coverage is far from complete. This has motivated research in automatically deriving new facts to extend a manually built knowledge base, by using information from the existing knowledge base, textual mentions of entities, and semi-structured data such as tables and web forms (Nickel et al., 2015). In this paper we build upon the work of Riedel et al. (2013), which jointly learns continuous representations for knowledge base and textual relations. This common representation in the same vector space can serve as a kind of "universal schema" which admits joint inferences among KBs and text. The textual relations represent the relationships between entities expressed in individual sentences (see Figure 1 for an example). Riedel et al. (2013) represented each textual mention of an entity pair by the lexicalized dependency path between the two entities (see Figure 2). Each such path is treated as a separate relation in a combined knowledge graph including both KB and textual relations. Following prior work in latent feature models for knowledge base completion, every textual relation receives its own continuous representation, learned from the pattern of its co-occurrences in the knowledge graph. However, largely synonymous textual relations often share common sub-structure, and are composed of similar words and dependency arcs. For example, Table 1 shows a collection of dependency paths co-occurring with the person/organizations founded relation. In this paper we model this sub-structure and share parameters among related dependency paths, using a unified loss function to learn entity and relation representations that maximize performance on the knowledge base link prediction task.

We evaluate our approach on the FB15k-237 dataset, a knowledge base derived from the Freebase subset FB15k and filtered to remove highly redundant relations (Toutanova and Chen, 2015). The knowledge base is paired with textual mentions for all entity pairs, derived from ClueWeb12 (http://lemurproject.org/clueweb12/FACC1/) with Freebase entity mention annotations (Gabrilovich et al., 2013). We show that using a convolutional neural network to derive continuous representations for textual relations boosts the overall performance on link prediction, with larger improvements on entity pairs that have textual mentions.

Related Work There has been a growing body of work on learning to predict relations between entities without requiring sentence-level annotations of textual mentions at training time.
We group such related work into three categories based on whether KB, text, or both sources of information are used. Additionally, we discuss related work in the area of supervised relation extraction using continuous representations of text, even though we do not use supervision at the level of textual mentions.

Nickel et al. (2015) provide a broad overview of machine learning models for knowledge graphs, including models based on observed graph features such as the path ranking algorithm (Lao et al., 2011), models based on continuous representations (latent features), and model combinations (Dong et al., 2014). These models predict new facts in a given knowledge base, based on information from existing entities and relations. From this line of work, most relevant to our study is prior work evaluating continuous representation models on the FB15k dataset. Yang et al. (2015) showed that a simple variant of a bilinear model, DISTMULT, outperformed TRANSE and more richly parameterized models on this dataset. We therefore build upon the best performing prior model DISTMULT from this line of work, as well as the additional models E and F developed in the context of text-augmented knowledge graphs (Riedel et al., 2013), and extend them to incorporate compositional representations of textual relations.

Relation extraction using distant supervision A number of works have focused on extracting new instances of relations using information from textual mentions, without sophisticated modeling of prior knowledge from the knowledge base. Mintz et al. (2009) demonstrated that both surface context and dependency path context were helpful for the task, but did not model the compositional sub-structure of this context. Other work proposed more sophisticated models that reason about sentence-level hidden variables (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) or model the noise arising from the incompleteness of knowledge bases and text collections (Ritter et al., 2013), inter alia. Our work focuses on representing the compositional structure of sentential context for learning joint continuous representations of text and knowledge bases.

Combining knowledge base and text information A combination of knowledge base and textual information was first shown to outperform either source alone in the framework of path-ranking algorithms over a combined knowledge base and text graph (Lao et al., 2012). To alleviate the sparsity of textual relations arising in such a combined graph, Gardner et al. (2013) and Gardner et al. (2014) showed how to incorporate clusters or continuous representations of textual relations. Note that these vector representations are based on the co-occurrence patterns of the textual relations and not on their compositional structure. Co-occurrence-based textual relation representations were also learned in (Neelakantan et al., 2015). Wang et al. (2014a) combined knowledge base and text information by embedding knowledge base entities and the words in their names in the same vector space, but did not model the textual co-occurrences of entity pairs and the expressed textual relations. Other prior work combined continuous representations from a knowledge base and textual mentions for prediction of new relations; the two representations were trained independently of each other, using different loss functions, and were only combined at inference time. Additionally, the employed representations of text were non-compositional.
In this work we train continuous representations of knowledge base and textual relations jointly, which allows for deeper interactions between the sources of information. We directly build on the universal schema approach of Riedel et al. (2013) as well as the universal schema extension of the DISTMULT model mentioned previously, to improve the representations of textual relations by capturing their compositional structure. Additionally, we evaluate the approach on a dataset that contains rich prior information from the training knowledge base, as well as a wealth of textual information from a large document collection. Continuous representations for supervised relation extraction In contrast to the work reviewed so far, work on sentence-level relation extraction using direct supervision has focused heavily on representing sentence context. Models using hand-crafted features have evolved for more than a decade, and recently, models using continuous representations have been found to achieve new state-of-the-art performance (Zeng et al., 2014;Gormley et al., 2015). Compared to work on representation learning for sentence-level context, such as this recent work using LSTM models on constituency or dependency trees (Tai et al., 2015), our approach using a one-hidden-layer convolutional neural network is relatively simple. However, even such a simple approach has been shown to be very competitive (Kim, 2014). Models for knowledge base completion We begin by introducing notation to define the task, largely following the terminology in Nickel et al. (2015). We assume knowledge bases are represented using RDF triples, in the form (subject, predicate, object), where the subject and object are entities and the predicate is the type of relation. For example, the KB fragment shown in Figure 1 is shown as a knowledge graph, where the entities are the nodes, and the relations are shown as directed labeled edges: we see three entities participating in three relation instances indicated by the edges. For brevity, we will denote triples by (e s , r, e o ), where e s and e o denote the subject and object entities, respectively. The task is, given a training KB consisting of entities with some relations between them, to predict new relations (links) that do not appear in the training KB. More specifically, we will build models that rank candidate entities for given queries (e s , r, ?) or (?, r, e o ), which ask about the object Barack Obama is the 44th and currrent President of United States . or subject of a given relation. This task setting has been used in models for KB completion previously, e.g. (Dong et al., 2014;Gardner et al., 2014), even though it has not been standard in evaluations of distant supervision for relation extraction (Mintz et al., 2009;Riedel et al., 2013). The advantage of this evaluation setting is that it enables automatic evaluation without requiring humans to label candidate extractions, while making only a local closed world assumption for the completeness of the knowledge base -i.e., if one object e o for a certain subject / relation pair (e s , r) is present in the knowledge base, it is assumed likely that all other objects (e s , r, e o ) will be present. Such an assumption is particularly justified for nearly functional relations. To incorporate textual information, we follow prior work (Lao et al., 2012;Riedel et al., 2013) and represent both textual and knowledge base relations in a single graph of "universal" relations. 
The textual relations are represented as full lexicalized dependency paths, as illustrated in Figure 2. An instance of the textual relation SUBJECT →OBJECT connecting the entities BARACK OBAMA and UNITED STATES is added to the knowledge graph based on this sentential occurrence. To present the models for knowledge base completion based on such combined knowledge graphs, we first introduce some notation. Let E denote the set of entities in the knowledge graph and let R denote the set of relation types. We denote each possible triple as T = (e_s, r, e_o), where e_s, e_o ∈ E, r ∈ R, and model its presence with a binary random variable y_T ∈ {0, 1} which indicates whether the triple exists. The models we build score possible triples (e_s, r, e_o) using continuous representations (latent features) of the three elements of the triple. The models use a scoring function f(e_s, r, e_o) to represent the model's confidence in the existence of the triple. We present the models and then the loss function used to train them. Basic Models We begin by presenting the three models from prior work that this research builds upon. They all learn latent continuous representations of relations and entities or entity pairs, and score possible triples based on the learned continuous representations. Each of the models can be defined on a knowledge graph containing entities and KB relations only, or on a knowledge graph additionally containing textual relations. We use models F and E from (Riedel et al., 2013), where they were used for a combined KB+text graph, and model DISTMULT from (Yang et al., 2015), which was originally used for a knowledge graph containing only KB relations. As shown in Figure 3, model F learns a K-dimensional latent feature vector for each candidate entity pair (e_s, e_o), as well as a same-dimensional vector for each relation r, and the scoring function is simply defined as their inner product: f(e_s, r, e_o) = v(r) · v(e_s, e_o). Therefore, different pairs sharing the same entity would not share parameters in this model. Model E does not have parameters for entity pairs, and instead has parameters for individual entities. It aims to capture the compatibility between entities and the subject and object positions of relations. For each relation type r, the model learns two latent feature vectors v(r_s) and v(r_o) of dimension K. For each entity (node) e_i, the model also learns a latent feature vector of the same dimensionality. The score of a candidate triple (e_s, r, e_o) is defined as f(e_s, r, e_o) = v(r_s) · v(e_s) + v(r_o) · v(e_o). It can be seen that when a subject entity is fixed in a query (e_s, r, ?), the ranking of candidate object entity fillers according to f does not depend on the subject entity but only on the relation type r. The third model, DISTMULT, is a special form of a bilinear model like RESCAL (Nickel et al., 2011), where the non-diagonal entries in the relation matrices are assumed to be zero. This model was proposed in Yang et al. (2015) and was shown to outperform prior work on the FB15k dataset. In this model, each entity e_i and each relation r is assigned a latent feature vector of dimension K. The score of a candidate triple (e_s, r, e_o) is defined as f(e_s, r, e_o) = (v(e_s) • v(r)) · v(e_o), where • denotes the element-wise vector product. In this model, entity pairs which share an entity also share parameters, and the ranking of candidate objects for queries (e_s, r, ?) depends on the subject entity. Denoting N_e = |E|, N_r = |R|, and K the dimension of the latent feature vectors, model E has KN_e + 2KN_r parameters and model DISTMULT has KN_e + KN_r parameters.
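To make the three basic scoring functions concrete, the following is a minimal NumPy sketch; the lookup-table layout, the lazy random initialization and the dimension K = 10 are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

K = 10  # latent dimension (tuned on the validation set in the paper)

rng = np.random.default_rng(0)
v_rel   = {}  # relation vectors, used by models F and DISTMULT
v_rel_s = {}  # per-relation subject-position vectors, used by model E
v_rel_o = {}  # per-relation object-position vectors, used by model E
v_ent   = {}  # entity vectors, used by models E and DISTMULT
v_pair  = {}  # entity-pair vectors, used by model F

def vec(table, key):
    """Lazily create a random K-dimensional vector for an unseen key."""
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=K)
    return table[key]

def score_F(e_s, r, e_o):
    # model F: inner product of the relation vector and the entity-pair vector
    return vec(v_rel, r) @ vec(v_pair, (e_s, e_o))

def score_E(e_s, r, e_o):
    # model E: compatibility of each argument with its position in the relation
    return vec(v_rel_s, r) @ vec(v_ent, e_s) + vec(v_rel_o, r) @ vec(v_ent, e_o)

def score_DISTMULT(e_s, r, e_o):
    # DISTMULT: bilinear score with a diagonal relation matrix,
    # i.e. sum_k v(e_s)_k * v(r)_k * v(e_o)_k
    return np.sum(vec(v_ent, e_s) * vec(v_rel, r) * vec(v_ent, e_o))
```

The parameter counts discussed next simply count the entries of these lookup tables.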
Model F has KN 2 e + KN r parameters, although most entity pairs will not co-occur in the knowledge base or text. In the basic models, knowledge base and textual relations are treated uniformly, and each textual relation receives its own latent representation of dimensionality K. When textual relations are added to the training knowledge graph, the total number of relations |R| grows substantially (it increases from 237 to more than 2.7 million for the dataset in this study), resulting in a substantial increase in the total number of independent parameters. Note that in all of these models queries about the arguments of knowledge base relations (e s , r, ?) are answered by scoring functions looking only at the entity and KB relation representations, without using representations of textual mentions. The textual mention information and representations are only used at training time to improve the learned representations of KB relations and entities. CONV: Compositional Representations of Textual Relations In the standard latent feature models discussed above, each textual relation is treated as an atomic unit receiving its own set of latent features. However, many textual relations differ only slightly in the words or dependency arcs used to express the relation. For example, Table 1 shows several textual patterns that co-occurr with the relation person/organizations founded in the training KB. While some dependency paths occur frequently, many very closely related ones have been observed only once. The statistical strength of the model could be improved if similar dependency paths have a shared parameterization. We build on work using similar intuitions for other tasks and learn compositional representations of textual relations based on their internal structure, so that the derived representations are accurate for the task of predicting knowledge base relations. We use a convolutional neural network applied to the lexicalized dependency paths treated as a sequence of words and dependency arcs with direction. Figure 4 depicts the neural network architecture. In the first layer, each word or directed labeled arc is mapped to a continuous representation using an embedding matrix V. In the hidden layer, every window of three elements is mapped to a hidden vector using position-specific maps W, a bias vector b, and a tanh activation function. A max-pooling operation over the sequence is applied to derive the final continuous representation for the dependency path. The CONV representation of textual relations can be used to augment any of the three basic models. The difference between a basic model and its CONV-augmented variant is in the parameterization of textual mentions. The basic models learn distinct latent feature vectors of dimensionality K for all textual relation types, whereas the CONV models derive the K-dimensional latent feature vectors for textual relation types as the activation at the top layer of the convolutional network in Figure 4, given the corresponding lexicalized dependency path as input. Training loss function All basic and CONV-augmented models use the same training loss function. Our loss function is motivated by the link prediction task and the performance measures used. As previously men-tioned, the task is to predict the subject or object entity for given held-out triples (e s , r, e o ), i.e., to rank all entities with respect to their likelihood of filling the respective position in the triple 2 . 
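A minimal sketch of the convolutional encoder just described is given below; the array shapes, the zero-padding of short paths and the function name are our assumptions, since the text only specifies an embedding layer, position-specific maps over windows of three elements with a tanh activation, and max-pooling.

```python
import numpy as np

def conv_path_representation(path_tokens, V, W, b):
    """Sketch of the convolutional encoder for a lexicalized dependency path,
    treated as a sequence of words and directed labeled dependency arcs.

    path_tokens : list of token ids (words and arcs with direction)
    V : (vocab_size, d) embedding matrix
    W : (3, K, d) position-specific maps for a window of three elements
    b : (K,) bias vector
    Returns a K-dimensional vector (max-pooled hidden activations).
    """
    emb = V[path_tokens]                          # (n, d) embedded sequence
    if len(emb) < 3:                              # pad short paths with zeros (our choice)
        emb = np.vstack([emb, np.zeros((3 - len(emb), V.shape[1]))])
    hidden = []
    for i in range(len(emb) - 2):                 # every window of three elements
        window = emb[i:i + 3]                     # (3, d)
        h = np.tanh(sum(W[p] @ window[p] for p in range(3)) + b)
        hidden.append(h)
    return np.max(np.stack(hidden), axis=0)       # max-pooling over the sequence
```

The resulting K-dimensional path representation replaces the atomic textual-relation vector in the scoring functions above and is trained with the loss described next.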
We would thus like the model to score correct triples (e s , r, e o ) higher than incorrect triples (e , r, e o ) and (e s , r, e ) which differ from the correct triple by one entity. Several approaches (Nickel et al., 2015) use a margin-based loss function. We use an approximation to the negative loglikelihood of the correct entity filler instead 3 . We define the conditional probabilities p(e o |e s , r) and p(e s |r, e o ) for object and subject entities given the relation and the other argument as follows: Conditional probabilities for subject entities p(e s |e o , r; Θ) are defined analogously. Here Θ denotes all the parameters of latent features. The denominator is defined using a set of entities that do not fill the object position in any relation triple (e s , r, ?) in the training knowledge graph. Since the number of such entities is impractically large, we sample negative triples from the full set. We also limit the candidate entities to ones that have types consistent with the position in the relation triple (Chang et al., 2014;Yang et al., 2015), where the types are approximated following Toutanova and Chen (2015). Additionally, since the task of predicting textual relations is auxiliary to the main task, we use a weighting factor τ for the loss on predicting the arguments of textual relations (Toutanova and Chen, 2015). Denote T as a set of triples, we define the loss L(T ; Θ) as: fined as: where λ is the regularization parameter, and τ is the weighing factor of the textual relations. The parameters of all models are trained using a batch training algorithm. The gradients of the basic models are straightforward to compute, and the gradients of the convolutional network parameters for the CONV-augmented models are also not hard to derive using back-propagation. Experiments Dataset and Evaluation Protocol We use the FB15k-237 4 dataset, which is a subset of FB15k ) that excludes redundant relations and direct training links for held-out triples, with the goal of making the task more realistic (Toutanova and Chen, 2015). The FB15k dataset has been used in multiple studies on knowledge base completion (Wang et al., 2014b;Yang et al., 2015). Textual relations for FB15k-237 are extracted from 200 million sentences in the ClueWeb12 corpus coupled with Freebase mention annotations (Gabrilovich et al., 2013), and include textual links of all co-occurring entities from the KB set. After pruning 5 , there are 2.7 million unique textual relations that are added to the knowledge graph. The set of textual relations is larger than the set used in Toutanova and Chen (2015) (25,000 versus 2.7 million), leading to improved performance. The number of relations and triples in the training, validation and test portions of the data are given in Table 2. The two rows list statistics for the KB and text portions of the data separately. The 2.7 million textual relations occur in 3.9 million text triples. Almost all entities occur in textual relations (13,937 out of 14,541). The numbers of triples for textual relations are shown as zero for the validation and test sets because we don't evaluate on prediction of textual relations (all text triples are used in training). The percentage of KB triples that have textual relations for their pair of entities is 40.5% for the training, 26.6% for the validation, and 28.1% for the test set. While 26.6% of the validation set triples have textual mentions, the percentage with textual relations that have been seen in the training set is 18.4%. 
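As a rough sketch, the sampled negative log-likelihood and the τ-weighted combination over knowledge base and textual triples could be written as follows; the function names are ours, and type-based filtering of candidates, the subject-side terms and the exact form of the L2 term are omitted.

```python
import numpy as np

def neg_log_likelihood(triple, neg_objects, score_fn):
    """Approximate -log p(e_o | e_s, r) with sampled negative objects
    (200 negatives per triple are used in the experiments)."""
    e_s, r, e_o = triple
    scores = np.array([score_fn(e_s, r, e_o)] +
                      [score_fn(e_s, r, e_neg) for e_neg in neg_objects])
    # log-softmax with the correct object at index 0
    return -(scores[0] - np.log(np.sum(np.exp(scores))))

def total_loss(kb_triples, text_triples, score_fn, sample_negatives,
               tau=0.25, l2_term=0.0):
    """Overall objective: full-weight loss on KB triples, tau-weighted loss on
    textual triples, plus a precomputed lambda * ||Theta||^2 regularizer."""
    loss = sum(neg_log_likelihood(t, sample_negatives(t), score_fn)
               for t in kb_triples)
    loss += tau * sum(neg_log_likelihood(t, sample_negatives(t), score_fn)
                      for t in text_triples)
    return loss + l2_term
```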
Having a mention increases the chance that a random entity pair has a relation from 0.1% to 5.0% -a fifty-fold increase. Given a set of triples in a set disjoint from a training knowledge graph, we test models on predicting the object of each triple, given the subject and relation type. We rank all entities in the training knowledge base in order of their likelihood of filling the argument position. We report the mean reciprocal rank (MRR) of the correct entity, as well as HITS@10 -the percentage of test triples for which the correct entity is ranked in the top 10. We use filtered measures following the protocol proposed in -that is, when we rank entities for a given position, we remove all other entities that are known to be part of an existing triple in the training, validation, or test set. This avoids penalizing the model for ranking other correct fillers higher than the tested entity. Implementation details We used a value of λ = 1 for the weight of the L 2 penalty for the main results in Table 3, and present some results on the impact of λ at the end of this section. We used batch optimization after initial experiments with AdaGrad showed inferior performance. L-BFGS (Liu and Nocedal, 1989) and RProp (Riedmiller and Braun, 1993) were found to converge to similar function values, with RProp converging significantly faster. We thus used RProp for optimization. We initialized the KB+text models from the KB-only models and also from random initial values (sampled from a Gaussian distribution), and stopped optimization when the overall MRR on the validation set decreased. For each model type, we chose the better of random and KB-only initialization. The word embeddings in the CONV models were initialized using the 50-dimensional vectors from Turian et al. (2010) in the main experiments, with a slight positive impact. The effect of initialization is discussed at the end of the section. The number of negative examples for each triple was set to 200. Performance improved substantially when the number of negative examples was increased and reached a plateau around 200. We chose the optimal number of latent feature dimensions via a grid search to optimize MRR on the validation set, testing the values 5, 10, 15, 35, 50, 100, 200 and 500. We also performed a grid search over the values of the parameter τ , testing values in the set {0.01, 0.1, 0.25, 0.5, 1}. The best dimension for latent feature vectors was 10 for most KBonly models (not including model F), and 5 for the two model configurations including F. We used K = 10 for all KB+text models, as higher dimension was also not helpful for them. Experimental results In Table 3 we show the performance of different models and their combinations 6 , both when using textual mentions (KB+text), and when using only knowledge base relations (KB only). In the KB+text setting, we evaluate the contribution of the CONV representations of the textual relations. The upper portion of the Table) were chosen to maximize MRR on the validation set. The reported numbers were obtained for the test set. base relations, and are not using any information from textual mentions. The lower portion of the Table shows the performance when textual relations are added to the training knowledge graph and the corresponding training loss function. Note that all models predict based on the learned knowledge base relation and entity representations, and the textual relations are only used at training time when they can impact these representations. 
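The filtered ranking protocol described above can be summarized in a short sketch; the data layout and names are our assumptions.

```python
def filtered_rank(query, correct_obj, candidates, known_objects, score_fn):
    """Rank `correct_obj` among `candidates` for a query (e_s, r, ?), removing
    ('filtering') every other entity known to complete the triple in the
    training, validation or test set."""
    e_s, r = query
    scores = {e: score_fn(e_s, r, e) for e in candidates
              if e == correct_obj or e not in known_objects}
    target = scores[correct_obj]
    # rank = 1 + number of remaining candidates scored strictly higher
    return 1 + sum(1 for s in scores.values() if s > target)

def mrr_and_hits_at_10(test_queries, score_fn):
    """test_queries: iterable of ((e_s, r), correct_obj, candidates, known_objects)."""
    ranks = [filtered_rank(q, o, c, k, score_fn) for q, o, c, k in test_queries]
    mrr = sum(1.0 / r for r in ranks) / len(ranks)      # reported scaled by 100 in the tables
    hits10 = sum(r <= 10 for r in ranks) / len(ranks)
    return mrr, hits10
```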
The performance of all models is shown as an overall MRR (scaled by 100) and HITS@10, as well as performance on the subset of triples that have textual mentions (column With mentions), and ones that do not (column Without mentions). Around 28% of the test triples have mentions and contribute toward the measures in the With mentions column, and the other 72% of the test triples contribute to the Without mentions column. For the KB-only models, we see the performance of each individual model F, E, and DIST-MULT. Model F was the best performing single model from (Riedel et al., 2013), but it does not perform well when textual mentions are not used. In our implementation of model F, we created entity pair parameters only for entity pairs that cooccur in the text data (Riedel et al. (2013) also trained pairwise vectors for co-occuring entities only, but all of the training and test tuples in their study were co-occurring) 7 . Without textual information, model F is performing essentially randomly, because entity pairs in the test sets do not occur in training set relations (by construction of the dataset). Model E is able to do surprisingly well, given that it is making predictions for each object position of a relation without considering the given subject of the relation. DISTMULT is the best performing single model. Unlike model F, it is able to share parameters among entity pairs with common subject or object entities, and, unlike model E, it captures some dependencies between the subject and object entities of a relation. The combination of models E+DISTMULT improves performance, but combining model F with the other two is not helpful. The lower portion of Table 3 shows results when textual relations are added to the training knowledge graph. The basic models treat the textual relations as atomic and learn a separate latent feature vector for each textual relation. The CONV-models use the compositional representations of tex-7 Learning entity pair parameters for all entity pairs would result in 2.2 billion parameters for vectors with dimensionality 10 for our dataset. This was infeasible and was also not found useful based on experiments with vectors of lower dimensionality. tual relations learned using the convolutional neural network architecture shown in Figure 4. We show the performance of each individual model and its corresponding variant with a CONV parameterization. For each model, we also show the optimal value of τ , the weight of the textual relations loss. Model F is able to benefit from textual relations and its performance increases by 2.5 points in MRR, with the gain in performance being particularly large on test triples with textual mentions. Model F is essentially limiting its space of considered argument fillers to ones that have cooccurred with the given subject entity. This gives it an advantage on test triples with textual mentions, but model F still does relatively very poorly overall when taking into account the much more numerous test triples without textual mentions. The CONV parameterization performs slightly worse in MRR, but slightly better in HITS@10, compared to the atomic parameterization. For model E and its CONV variant, we see that text does not help as its performance using text is the same as that when not using text and the optimal weight of the text is zero. Model DISTMULT benefits from text, and its convolutional text variant CONV-DISTMULT outperforms the basic model, with the gain being larger on test triples with mentions. 
The best model overall, as in the KB-only case, is E+DISTMULT. The basic model benefits from text slightly and the model with compositional representations of textual patterns CONV-E+CONV-DISTMULT, improves the performance further, by 2.4 MRR overall, and by 5 MRR on triples with textual mentions. It is interesting that the text and the compositional representations helped most for this combined model. One hypothesis is that model E, which provides a prior over relation arguments, is needed in combination with DISTMULT to prevent the prediction of unlikely arguments based on noisy inference from textual patterns and their individual words and dependency links. Hyperparameter Sensitivity To gain insight into the sensitivity of the model to hyper-parameters and initialization, we report on experiments starting with the best model CONV-E + CONV-DISTMULT from Table 3 and varying one parameter at a time. This model has weight of the textual relations loss τ = 0.25, weight of the L 2 penalty λ = 1, convolution window size of three, and is initialized randomly for the entity and KB relation vectors, and from pre-trained embeddings for word vectors (Turian et al., 2010). The overall MRR of the model is 40.4 on the validation set (test results are shown in the Table). When the weight of τ is changed to 1 (i.e., equal contribution of textual and KB relations), the overall MRR goes down to 39.6 from 40.4, indicating the usefulness of weighting the two kinds of relations non-uniformly. When λ is reduced to 0.04, MRR is 40.0 and when λ is increased to 25, MRR goes down to 38.9. This indicates the L 2 penalty hyper-parameter has a large impact on performance. When we initialize the word embeddings randomly instead of using pre-trained word vectors, performance drops only slightly to 40.3. If we initialize from a model trained using KBonly information, performance goes down substantially to 38.7. This indicates that initialization is important and there is a small gain from using pre-trained word embeddings. There was a drop in performance to MRR 40.2 when using a window size of one for the convolutional architecture in Figure 4, and an increase to 40.6 when using a window size of five. Conclusion and Future Work Here we explored an alternative representation of textual relations for latent feature models that learn to represent knowledge base and textual relations in the same vector space. We showed that given the large degree of sharing of sub-structure in the textual relations, it was beneficial to compose their continuous representations out of the representations of their component words and dependency arc links. We applied a convolutional neural network model and trained it jointly with a model mapping entities and knowledge base relations to the same vector space, obtaining substantial improvements over an approach that treats the textual relations as atomic units having independent parameterization.
7,042
2015-01-01T00:00:00.000
[ "Computer Science" ]
Two-local qubit Hamiltonians: when are they stoquastic? We examine the problem of determining if a 2-local Hamiltonian is stoquastic by local basis changes. We analyze this problem for two-qubit Hamiltonians, presenting some basic tools and giving a concrete example where using unitaries beyond Clifford rotations is required in order to decide stoquasticity. We report on simple results for $n$-qubit Hamiltonians with identical 2-local terms on bipartite graphs. Our most significant result is that we give an efficient algorithm to determine whether an arbitrary $n$-qubit XYZ Heisenberg Hamiltonian is stoquastic by local basis changes. Introduction and Motivation The notion of stoquastic Hamiltonians was introduced in [1] in the context of quantum complexity theory. The definition of stoquastic Hamiltonians aims to capture Hamiltonians which do not suffer from the sign problem: the definition ensures that the groundstate of the Hamiltonian has nonnegative amplitudes in some local basis while the matrix exp(−H/kT ) is an entrywise nonnegative matrix for any temperature T in this basis. Stoquastic Hamiltonians are quite ubiquitous. Any (mechanical) Hamiltonian defined with conjugate variables [x i ,p j ] = i δ ij of the form with invertible (mass) matrix C > 0, is stoquastic, since the kinetic term is off-diagonal in the position x-basis. The paradigm of circuit-QED produces such Hamiltonians by the canonical quantization of an electric circuit represented by its Lagrangian density. For the subclass of stoquastic Hamiltonians the problem of determining the lowest eigenvalue might be easier than for general Hamiltonians. In complexity terms the stoquastic lowest-eigenvalue problem is StoqMAcomplete while it is QMA-complete for general Hamiltonians. This difference in complexity goes hand-in-hand with the existence of heuristic computational methods for determining the lowest eigenvalue of stoquastic Hamiltonians using Monte Carlo techniques [2,3]. These methods can be viewed as stochastic realizations of the power method, namely a stochastic efficient implementation of the repeated application of the nonnegative matrix exp(−H). However, there is no proof that these heuristic methods are as efficient as quantum phase estimation for estimating ground-state energies (some results are obtained in [4]). Beyond the lowest eigenvalue problem, one can also consider the complexity of evaluating the partition function Z = Tr [exp(−H/kT )] of a stoquastic Hamiltonian. The nonnegativity of exp(−H/kT ) directly ensures that one can rewrite Z as the partition function of a classical model in one dimension higher, which can then be sampled using classical stochastic techniques. Again, rigorous results on this well-known path-integral quantum Monte Carlo method (see e.g. [5]) are sparse: [6] shows how to estimate the partition function with kT = O(log n) for 1D stoquastic Hamiltonians using a rigorous analysis of the path-integral Monte Carlo method. Stoquastic Hamiltonians play an important role in adiabatic computation [7]. It has been proven that there is no intrinsic quantum speed-up in stoquastic adiabatic computation which uses frustration-free stoquastic Hamiltonians: a classical polynomial-time algorithm for simulating such adiabatic computation was described in [8]. 
The quantum power of the well-known quantum annealing method which uses the transverse field Ising model is still an intriguing open question, given the road-block cases that have been thrown up for the path-integral Quantum Monte Carlo method in [9] as well as the diffusion Monte Carlo method in [10]. However, given that stoquastic adiabatic computation such as quantum annealing is amenable to heuristic Monte Carlo methods, research has also moved into the direction of finding ways of engineering non-stoquastic Hamiltonians (see e.g. [11], [12], [13]). If it is true that questions pertaining to stoquastic Hamiltonians are easier to answer than those pertaining to general Hamiltonians, then it is clearly important to be to able to recognize which Hamiltonians are stoquastic. Work aimed at recognizing where and when a sign problem does not exist, is not new, but few systematic approaches to this problem exist. In this paper we report on our first results in this direction (see e.g. [14]), summarized in Section 1.2. The definition of being stoquastic is basis-dependent, but it is motivated by computational efficiency. The easiest manner in which one can use the basisdependence is to consider a change of basis implemented by local single-qudit or qubit rotations. This basis change can be efficiently executed in that the description of the k-local Hamiltonian in this new basis can be easily obtained. This paper will focus largely on single qubit rotations. It is however clear that the subclass of single-qubit unitary transformations are only the simplest example of a mapping or encoding of one Hamiltonian H to another Hamiltonian H sim as has been formalized in [15]. Such a more general mapping could allow: constant-depth circuits; ancilla qubits; perturbative gadgets and approximations such that only λ(H) ≈ λ(H sim ) or Z H ≈ cZ Hsim for some known constant c. The goal of such an encoding is to map the original Hamiltonian H onto a simulator Hamiltonian H sim where H sim is stoquastic and its low-energy dynamics are effectively captured by H. The upshot is that the encoded Hamiltonian H sim is directly computationally useful. If for a class of Hamiltonians H, simulator Hamiltonians H sim can be found, then the groundstate energy problem or the adiabatic computation of H is amenable to quantum Monte Carlo techniques and its complexity is in StoqMA. Additionally, the definition of encoding employed in [15] ensures that the partition function of the simulator Hamiltonian can be used to estimate that of the original Hamiltonian. An example is the mapping of any k-local Hamiltonian onto a real k + 1-local Hamiltonian [15]. Surprisingly, there are no known no-go's for the use of perturbative gadgets to map seemingly non-stoquastic Hamiltonians onto stoquastic simulator Hamiltonians. Even though mapping all Hamiltonians onto stoquastic Hamiltonians seems to be excluded from a computational complexity perspective, it is possible that there are Hamiltonians which are not stoquastic by a local basis change but for which a stoquastic simulator Hamiltonian exists. Definitions To make contact with some of the mathematics literature, we will use some standard terminology: Definition 1 (Symmetric Z-matrix). A matrix is a symmetric Z-matrix if all of its matrix elements are real, it is symmetric, and its off-diagonal elements are non-positive. Note that for a symmetric Z-matrix H the matrix exponential exp(−H/kT ) is entrywise non-negative for all kT ≥ 0 1 . 
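Definition 1 is straightforward to test numerically; the following sketch (with a tolerance parameter of our own choosing) checks the three conditions for a dense matrix.

```python
import numpy as np

def is_symmetric_z_matrix(H, tol=1e-9):
    """Check Definition 1: real, symmetric, non-positive off-diagonal entries."""
    if not np.allclose(np.asarray(H).imag, 0.0, atol=tol):
        return False
    H = np.asarray(H).real
    if not np.allclose(H, H.T, atol=tol):
        return False
    off_diag = H - np.diag(np.diag(H))
    return bool(np.all(off_diag <= tol))

# Example: the transverse-field term -X is a symmetric Z-matrix,
# so exp(X/kT) is entrywise non-negative for any kT >= 0.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
assert is_symmetric_z_matrix(-X)
```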
We reproduce the definition of a Hamiltonian being termwise stoquastic from [1] 2 . We will refer to it as stoquastic for simplicity. Given the discussion in the introduction and following [16], we also introduce the following definition (loosely stated): Another aspect of stoquasticity is the complexity of determining whether H is (computationally) stoquastic. The question is only clearly formulated when we restrict ourselves to specific mappings. For example, one can ask: is there an efficient algorithm for determining whether a 2-local Hamiltonian on n qubits is stoquastic in a basis obtained by single-qubit Clifford rotations. In this light, it was shown in [16] that for 3-local qubit Hamiltonians determining whether singlequbit Clifford rotations can indeed "cure the sign problem" is NP-complete. For two-local Hamiltonians the question how hard it is to decide whether a Hamiltonian is stoquastic by local basis changes is entirely open. The next section reviews our main results on this matter. Summary of Results In Section 2 we consider, as a warm-up, the two-qubit Hamiltonian problem and analyze under what conditions such a Hamiltonian is stoquastic by a local basis change. This turns out to be surprisingly complex. In Theorem 7 we show that one can determine whether a 2-qubit Hamiltonian is real under local basis changes by inspecting a subset of non-local invariants. In Section 2.1 we show that basis changes beyond single-qubit Clifford rotations are strictly stronger than Clifford rotations. For the general two-qubit case we show that a two-qubit Hamiltonian H is stoquastic iff one can find a solution to a set of degree-4 polynomials in two variables (the polynomials are quadratic in each variable), with details in Appendix C. In Section 3 we move to the problem of determining stoquasticity by local basis changes for n-qubit 2local Hamiltonians. In general, the problem of deciding whether a multi-qubit Hamiltonian is stoquastic by a local basis change can be hard, since the local rotations bringing the two-qubit terms to symmetric Z-matrix form have to be mutually consistent. In addition, there is a subtlety involving the interplay of single and twoqubit terms. Definition 2 requires one to find a decomposition into 2-qubit terms D i each of which is a symmetric Z-matrix, i.e. 1-local terms can be allocated to different 2-qubit terms, leading to different decompositions D i . For a fixed basis, we show at the beginning of Section 3 that an efficient "parsimonious" strategy can find the optimal decomposition D i given a fixed basis (a similar idea is discussed in the Suppl. Material of [16]). If we allow basis changes, rotating the single-terms, the problem becomes potentially more complex. Proposition 8 shows that for an n-qubit 2-local Hamiltonian on a bipartite graph with identical terms everywhere, stoquasticity of the two-qubit case gives a sufficient condition for stoquasticity of the full Hamiltonian, and a necessary condition when restricted to local basis changes which are identical on each bipartition. Such classes of Hamiltonians are well motivated physically. Our most important result is that for a subclass of nqubit Hamiltonians without 1-local terms, namely XYZ Heisenberg models on arbitrary graphs, we can find an efficient algorithm to determine stoquasticity by local basis changes. The Hamiltonian of the XYZ Heisenberg model for n qubits is Here the coefficients a uv P P , which can be different for each uv, are specified with some k bits. 
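For concreteness, an XYZ Heisenberg instance can be stored as a graph whose edges carry diagonal 3 × 3 coupling matrices; the container layout and the example coefficients below are our own choices, with the (Z, X, Y) index ordering matching the convention adopted in Section 4.

```python
import numpy as np

def xyz_edge_beta(a_xx, a_yy, a_zz):
    """Diagonal beta matrix of one XYZ coupling, in the (Z, X, Y) ordering."""
    return np.diag([a_zz, a_xx, a_yy])

# A small example on a triangle graph; the coefficients are arbitrary test values.
edges = {
    (0, 1): xyz_edge_beta(a_xx=1.0, a_yy=-0.5, a_zz=0.3),
    (1, 2): xyz_edge_beta(a_xx=0.0, a_yy=0.0, a_zz=2.0),   # a rank-1 edge
    (0, 2): xyz_edge_beta(a_xx=-1.0, a_yy=1.0, a_zz=0.0),
}

def edge_rank(beta, tol=1e-12):
    """Rank of a diagonal coupling matrix (rank >= 2 edges are treated as rigid later)."""
    return int(np.sum(np.abs(np.diag(beta)) > tol))
```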
The theorem that we prove in Section 4 is: There is an efficient algorithm that runs in time O(n 3 ) to decide whether an n-qubit XYZ Heisenberg model Hamiltonian is stoquastic. The algorithm finds the local basis change such that the resulting H is a symmetric Z-matrix or decides that it does not exist. This result relies partially on Lemma 9 (proven in Appendix D) which shows that when an XYZ Heisenberg model is stoquastic, it is so by a basis change induced by single-qubit Clifford rotations. Going beyond the XYZ Heisenberg Hamiltonian, one can ask about the complexity of deciding stoquasticity by local Clifford rotations. Even though this is not the most general class of local basis changes, see for example Equation (7) in Section 2, our algorithm for the XYZ Hamiltonian suggests progress could most readily be made in this direction. We are able to prove that determining whether a Hamiltonian is real by local Clifford rotations is NP-complete: Note that this result does not imply that deciding stoquasticity by single qubit Cliffords or even general local basis changes for 2-local Hamiltonians is NP-complete. We give the proof of Theorem 5 in Appendix E since it is not central to the paper. Warm-Up: Two Qubit Hamiltonians Consider a general two qubit Hamiltonian where we may take a II = 0 without loss of generality. It is simple to show that Visually, the positive off-diagonal contributions of XX and YY terms need to be removed leading to a XX ≤ −|a Y Y |. Similarly, the positive off-diagonal contributions of IX and ZX terms need to be removed, leading to a IX ≤ −|a ZX | (and similarly a XI ≤ −|a XZ |). The set of traceless symmetric Z-matrices forms a polyhedral cone C 2 , that is, a cone supported by a finite number of hyperplanes. In the case of 4 × 4 matrices, each matrix is determined by 9 parameters (three of them real and 6 of them nonnegative). Viewed as a polyhedral cone, C 2 has 12 extremal vectors [17] Appendix A discusses some of this structure and its generalization to two qudits. Two-qubit Hamiltonians are conveniently parametrized by a 3 × 3 real matrix and two threedimensional real vectors: This implies that for 2-local qubit Hamiltonians it suffices to consider pairs of local SO ( To characterize the set of stoquastic 2-qubit Hamiltonians one needs to consider the orbit of C 2 under the local rotations. For a trace-zero Hermitian matrix acting on two qubits, there is a complete set of 18 polynomial invariants {I l } specifying the Hermitian matrix up to local unitary rotations, which we will refer to as the Makhlin invariants [18]. Note that the fact that there are 18 polynomial invariants can be consistent with the existence of only d inv = 15 − 6 = 9 independent nonlocal parameters in a two-qubit traceless Hermitian matrix [19,20]. One can see these 18 Makhlin invariants simply as a convenient set of invariants. Stoquastic two-qubit Hamiltonians thus form a volume in the d inv = 9-dimensional non-local parameter space: the question is how the invariants are constrained in this stoquastic subspace. We have not been able to express the stoquasticity of a two-qubit Hamiltonian in terms of inequalities involving the Makhlin or other invariants. However, we are able to capture the condition that H can be made real by local rotations in terms of the Makhlin invariants, see Theorem 7. The convenience of the Makhlin invariants is that they can be expressed in terms of inner products and triple products constructed from β, S and P which are invariant under SO(3) rotations. 
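In a fixed basis these conditions are easy to test: one extracts the Pauli coefficients of a 4 × 4 Hamiltonian and checks realness together with the three inequalities quoted above. The sketch below (tolerances and names are ours) does exactly this; deciding stoquasticity under local basis changes, by contrast, requires the rotation-invariant machinery discussed next.

```python
import numpy as np
from itertools import product

PAULI = {'I': np.eye(2, dtype=complex),
         'X': np.array([[0, 1], [1, 0]], dtype=complex),
         'Y': np.array([[0, -1j], [1j, 0]]),
         'Z': np.diag([1.0, -1.0]).astype(complex)}

def pauli_coefficients(H):
    """Decompose a 4x4 Hermitian matrix as H = sum_{PQ} a_PQ P (x) Q."""
    return {p + q: float(np.real(np.trace(np.kron(PAULI[p], PAULI[q]) @ H)) / 4)
            for p, q in product('IXYZ', repeat=2)}

def is_two_qubit_z_matrix(H, tol=1e-9):
    """Fixed-basis check: H must be real (coefficients with an odd number of Y's
    vanish) and satisfy a_XX <= -|a_YY|, a_IX <= -|a_ZX|, a_XI <= -|a_XZ|."""
    a = pauli_coefficients(H)
    real_ok = all(abs(a[k]) < tol for k in ('IY', 'YI', 'XY', 'YX', 'ZY', 'YZ'))
    return (real_ok
            and a['XX'] <= -abs(a['YY']) + tol
            and a['IX'] <= -abs(a['ZX']) + tol
            and a['XI'] <= -abs(a['XZ']) + tol)
```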
Of particular interest are the triple product Makhlin invariants I_10, I_11 and I_15 − I_18. These take the form of scalar triple products constructed from S, P and their images under β and β^T (using the notation that (a, b, c) stands for the scalar triple product a · (b × c)); the explicit expressions are given in Table 6. One can prove: Theorem 7. A two-qubit Hamiltonian H is real under local unitary rotations if and only if all of the triple product invariants given in Table 6 are equal to zero. The proof of this Theorem is simple in one direction, namely when H is real, then the triple product invariants of the corresponding β, S, P are zero. This can be seen by observing that realness implies that the vectors S and P lie in a common two-dimensional subspace and that repeated multiplication by β or β^T keeps them in this subspace. Any triple of vectors all lying in a two-dimensional subspace has zero triple product. Proving the other direction, namely that zero triple product invariants imply realness under local rotations, is more work since β can be degenerate and have rank less than 3, so the proof has to incorporate the various cases. We have included a full proof of Theorem 7 in Appendix B. Stoquastic Characterization: Beyond Clifford Rotations The goal of this section is to better understand what freedoms to exploit when transforming a two-qubit Hamiltonian into a symmetric Z-matrix by local basis changes. The single-qubit Clifford rotations play a special role as local basis changes. The action of single-qubit Clifford rotations on the Pauli matrices forms a discrete subgroup of SO(3) representing the symmetries of the cube [21]. The Clifford rotations realize any permutation of the Paulis (the group S_3) as well as sign-flips σ_i → ±σ_i (with determinant 1). However, when deciding if a 2-qubit Hamiltonian is stoquastic, it does not suffice to consider only the local Cliffords. We can consider a concrete example of such a Hamiltonian, with an XX coupling of strength a_XX, a ZZ coupling, and single-qubit X and Z terms of strengths a_X and a_Z. It is easy to argue that no single-qubit Clifford basis change can make this Hamiltonian into a symmetric Z-matrix. Since H has to be real, no permutations to Y-components are allowed. A sign-flip which makes the XX term negative also messes up the sign of one of the single-qubit X terms. Interchanging X and Z for both qubits leads to the same problem for the ZZ and Z terms. Applying a sign flip on the Z of one of the qubits, followed by interchanging X and Z on that same qubit, leads to the requirement that −a_Z ≤ −1 and −a_X ≤ −|a_XX|. This is not satisfiable as long as 0 < a_Z < 1. However, we could create off-diagonal XZ and ZX terms by single-qubit non-Clifford rotations. For example, on both qubits one can apply a π-rotation around the Z-axis followed by a π/4-rotation around the Y-axis, which mixes the X and Z components. For small a_XX ≪ 1 and sufficiently large |a_X − a_Z|, the resulting inequalities can certainly be satisfied. In Figure 1 we show in what region of the parameter space (a_X, a_Z, a_XX) H is stoquastic using the general set of inequalities given in Appendix C. Since local Cliffords are insufficient, we need a more general strategy. A general analytical strategy to determine if a two-qubit Hamiltonian is stoquastic, and in what basis, is as follows.
We assume that one first applies local rotations to the two-qubit Hamiltonian H to bring it to a standard form where β is diagonal: A trivial consequence of the previous section is that H is real under local unitary rotations if and only if the diagonalizing transformation lead to the 1-local terms being real: Note that when β has a degeneracy, for example when |a XX | = |a Y Y |, an additional rotation may need to be performed on the degenerate space to retrieve the above form. Without loss of generality we can assume that a ZZ ≥ a XX ≥ 0. This is because we can always perform sign flips on the eigenvalues of β by Pauli rotations, and dump any extra sign into a Y Y so as to preserve the determinant. As a simple special case, when S = P = 0, H can always be made into a symmetric Z-matrix by Clifford rotations, i.e. choosing an appropriate choice of permutations and sign-flips on β such that a XX ≤ −|a Y Y |. So let us assume that at least one of the vectors S and P is not zero. Furthermore, if β = 0 then H is stoquastic, since S and P may be freely rotated. As a second special case, if a ZZ = a XX = 0 and a Y Y = 0 then any transformation of our Hamiltonian into a symmetric Zmatrix will have to make a Y Y = 0. Therefore either the Hamiltonian is not stoquastic or a permutation can be applied preserving realness and setting a Y Y = 0 and a ZZ = 0. Thus we can assume a ZZ > 0 and normalize our Hamiltonian such that a ZZ = 1. We wish to know then if there exists a pair of SO(3) where β , S , P are associated with a symmetric Z-matrix Hamiltonian. Any such transformation must preserve the realness of the Hamiltonian. If at least one of the vectors S and P is rank-2 (if this is not the case, see the extra step in Appendix C) then the only realness-preserving transformations which may be performed are SO(3) rotations in the X-Z subspace combined with reflections in the X-or Z-axis, given by This gives a β , S , P . Recalling Proposition 6, three inequalities must be satisfied in order for the rotated H with β , S , P to be a symmetric Z-matrix. As we show in Appendix C these inequalities can be re-expressed as systems of two-variable polynomials which are at most quadratic in either variable, so analytic solutions to their roots can be constructed and solutions can be found using graphical methods. However, the complexity of these inequalities makes it unclear whether their interest goes beyond that of numerically finding a set of local basis changes. The Stoquastic Problem for 2-local Hamiltonians Consider a general n-qubit 2-local Hamiltonian where u and v are the vertices of the interaction graph corresponds to a qubit so that H v are the 1-local terms (corresponding to S and P in the two-qubit case). Each edge e = uv ∈ E is represented as H uv and can be associated with the 3 × 3 matrix β uv for that edge. We can define the set of 2-local symmetric Z-matrices as the polyhedral cone C n which is generated by the extremal vectors (viewed in a n-qubit Hilbert-Schmidt space, with Is appended) of each of the 2-qubit cones C 2 . The cone C n is thus the set of n-qubit 2-local Hamiltonians which are stoquastic with respect to Definition 2 without further basis changes. If we do not allow for basis changes, it is simple to determine whether a n-qubit Hamiltonian is real by checking the realness conditon. So for the next step we assume that H is real and want to decide whether H ∈ C n . 
It is important to note that for any v, C uv 2 and C vu 2 intersect: for any u, u = v, the extremal vectors obey This non-empty intersection shows the freedom in the decomposition D i for a fixed basis. The parsimonious strategy below shows how to find a decomposition D i of a n-qubit Hamiltonian H in terms of 2-local terms each of which is a 4 × 4 symmetric Z-matrix or decide that it does not exist. Let H X u be the 1-local term proportional to X u and similarly H Z u . First, note that a real H can only be in C n when for all u, H X u ≤ 0. This follows directly from Proposition 6 demanding that a IX , a XI ≤ 0. Hence we assume this to be the case (otherwise decide not stoquastic in the given basis). Efficient Parsimonious Strategy: Repeat the following for all edges uv in H. Given the current Hamiltonian H, pick a pair of vertices u and v and consider h(α, β) which includes all current singlequbit terms which act on vertices u and v and wlog In the latter case, decide that H ∈ C n and exit. Note that when h(α, β) / ∈ C 2 then ∀α ≤ α, β ≤ β, h(α , β ) / ∈ C 2 since a IX ≤ −|a ZX | and a XI ≤ −|a XZ |, aka it is easier to satisfy the stoquastic condition for large α, β, more negative 1-local Xterms. Hence the goal is to use as little α and β as possible, be parsimonious, so that a lot of −X u and −X v will be left over for another edge. This is then clearly an optimal strategy. Let us next consider the problem of admitting local basis changes. A particular simple example for which a resolution of the two-qubit problem analyzed in the previous section suffices, is the following: Furthermore, H = uv∈E h uv where h uv acts with both one and two local terms on sites u and v, and If the two-qubit Hamiltonian h is stoquastic, then H is stoquastic. If h is not stoquastic then H will not be stoquastic under any local basis change which acts identically on all qubits in a partition. Bipartite graphs include linear arrays, square lattices, cubic lattices and hexagonal lattices, all of which are very natural structures to consider. Suppose h were not stoquastic, but there existed a pair of unitaries U A and U B such that a decompo- cannot be a symmetric Zmatrix, and so D uv = h uv . However both D uv and h uv must share the same purely two local parts. As such, D uv must differ from h uv by its 1-local terms, and in particular one or both of the −X u and −X v terms. Since h uv is not stoquastic, D uv must have more support on either −X u or −X v than h uv . Suppose wlog that D uv has more support on −X u , then, by the parsimonious reasoning outlined earlier, there must exist some D ux which has less support on −X u than h ux does, and consequently D ux is not a symmetric Zmatrix, leading to a contradiction. Efficient Algorithm for the XYZ Heisenberg Models In this section we will outline and detail the proof of our main Theorem, Theorem 4. Consider an XYZ Heisenberg model Hamiltonian on n qubits and arbitrary interaction graph G: with For each H uv , S = P = 0 and The aim is to show an efficient algorithm (which we will show has O(n 3 ) steps) which either decides that H is not stoquastic by local basis changes or finds it is stoquastic (and also gives you the local basis change). As a result of S and P being zero for every term H uv , it follows from Proposition 6 that H uv will not be a symmetric Z-matrix unless β uv is in diagonal form. 
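Returning briefly to the parsimonious strategy above, the following simplified sketch illustrates it for the special case in which the single-qubit terms are pure −X fields and each edge term is purely 2-local; the dictionary layout and names are ours.

```python
def parsimonious_decomposition(edge_coeffs, x_budget, tol=1e-12):
    """Greedy fixed-basis decomposition under simplifying assumptions (ours):
    the Hamiltonian is real, its 1-local terms are -x_u X_u with x_u >= 0
    (Z terms only shift diagonal entries and are ignored), and each edge term
    is purely 2-local with coefficients edge_coeffs[(u, v)] containing the
    keys 'XX', 'YY', 'XZ', 'ZX'.

    Each edge needs at least |a_XZ| of the -X_u budget and |a_ZX| of the -X_v
    budget to satisfy a_XI <= -|a_XZ| and a_IX <= -|a_ZX|; allocating more
    than this minimum can only hurt other edges, so the minimal allocation
    is optimal."""
    remaining = dict(x_budget)                 # leftover -X field at each vertex
    for (u, v), a in edge_coeffs.items():
        if a.get('XX', 0.0) > -abs(a.get('YY', 0.0)) + tol:
            return None                        # XX/YY condition cannot be fixed by 1-local X terms
        need_u = abs(a.get('XZ', 0.0))
        need_v = abs(a.get('ZX', 0.0))
        if remaining[u] + tol < need_u or remaining[v] + tol < need_v:
            return None                        # H is not in the cone C_n in this basis
        remaining[u] -= need_u
        remaining[v] -= need_v
    return remaining                           # H is stoquastic in this basis
```

For the XYZ model just introduced the single-qubit terms are absent altogether, so no such bookkeeping is needed and the question reduces entirely to rotating the β_uv matrices.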
Thus we can say that H is stoquastic if and only if there exists a set of SO (3) which is equivalent to We emphasize that we order the indices Z → 1, X → 2, Y → 3 to denote the entries ofβ. The proof of Theorem 4 now has the following outline. First, we prove a Lemma which establishes that it suffices to consider single-qubit unitary Clifford rotations in order to determine whether H is stoquastic. The action of these single-qubit Clifford unitaries on a matrix β uv is simply to permute the diagonal elements and add ± signs to the diagonal elements, which is the only freedom when β is to remain diagonal. Since we know that in order for H to be stoquastic, β should remain diagonal, it seems natural that the set of Clifford rotations indeed suffices, but the proof which can be found in Appendix D requires some actual work. Lemma 9 (Single-Qubit Clifford Rotations Suffice). The XYZ Heisenberg Hamiltonian is stoquastic by local basis changes if and only if it is stoquastic by single-qubit Clifford rotations. To be precise: given a set of diagonal for all u and v. Before we start with proving the main theorem, we observe the fact that a solution set with Π u ∈ O(3) can be always be converted into a solution set for which all Π u ∈ SO(3). More precisely, for any solution set ) and Π u ∈ S 3 , which obeys conditions (12) and (13), where some 11 is irrelevant. By Lemma 9, in order to determine whether H is stoquastic, one needs to consider whether there is a set of permutations {Π u } such thatβ uv = Π u β uv Π T v obeys Equation (14). If such set of permutations cannot be found, one decides that H is not stoquastic. If such sets of permutations can be found, one needs to determine whether for any of these solution sets of permutations one can find sign flips {R u } such that Equation (15) is also obeyed. In order to determine the permutation solution set {Π u }, we first establish in a simple lemma (Lemma 11) that for any edge β uv which has rank 2 or 3, -we can call these rigid edges-, any solution set needs to have Π u = Π v . Hence for each rigid connected component (see Definition 10) one has only 6 possible permutations Π ∈ S 3 to choose from and we can iterate over these choices to determine a candidate permutation solution set, see Proposition 12. For the remaining rank-1 edges, which can be labelled with colors 1, 2 or 3 based on which diagonal element of β uv is non-zero, Condition (14) implies that none of these edges can haveβ uv 33 = 0, hence the permutations have to eliminate edges of color 3 while keeping β diagonal and being members of the candidate permutation subset. By eliminating the freedom residing in clusters of internal vertices which only connect to vertices by single-color edges (see Definition 13), the problem of determining the permutation solution set is then shown to map onto the problem of finding the satisfying solutions of some (disjoint) Ising problems (Lemma 14). In Section 4.1 we then go back to the determining {R u } on the basis of these permutation solutions and argue that the additional freedom in internal vertices does not cause any complexity blow-up. First, we define rigid connected components and the flexible quotient multi-graph: Definition 10 (Rigid Connected Components and Flexible Quotient Multi-Graph of G). Take the graph G = (V, E) associated with a Hamiltonian H and remove all edges associated with β uv of rank 1. The resulting graph G − is empty or has connected components Γ − i , i = 1, . . . , K. 
To each connected component Γ − i we now add the rank-1 edges in G connecting vertices in Γ − i : these graphs will be called Γ i , i = 1, . . . , K and they are the rigid connected components of G. We can define the flexible quotient multi-graph G Q = (V Q , E Q ), see Fig. 2, with a vertex i ∈ V Q for each rigid connected component Γ i (thus |V Q | = K). For each rank-1 β uv edge uv ∈ E with u ∈ Γ i and v ∈ Γ j with j = i, the graph G Q has an edge between i ∈ V Q and j ∈ V Q . Note that G Q can have multi-edges. Lemma 11 (Permutations are identical on rigid connected components). Given a rigid connected component Γ and a solution to that graph {Π u = R u Π u }, there exists a permutation Π such that Π u = Π for every vertex u in Γ. Figure 2: An illustration of the mapping of a graph G to its flexible quotient multi-graph GQ, with red edges corresponding to rigid edges and black edges associated with rank-1 β uv . . Since β uv mm = 0 for two of the three values of m, Π u = Π v . This holds for every pair of connected vertices in the connected components, hence for all u ∈ Γ, Π u = Π for some Π. We now work with the flexible quotient multi-graph G Q . If this graph G Q has multiple connected components, we treat each component separately, so assume it has a single connected component. Definition 12 (Candidate Permutation Set). For each rigid connected component Γ i (associated with a vertex of G Q ), we consider all 6 permutations Π ∈ S 3 on its vertices and determine the subset S i ∈ S 3 for which all edges β uv satisfy Condition (14). If any S i is the empty set, we decide that H is not stoquastic. This iteration over at most 6n 2 permutations is clearly efficient. We call {S i } i=K the candidate permutation solution set. If there are no rigid connected components in G, the candidate solution set is the set of all permutations S ×n 3 . Let G Q thus be a multi-graph with an associated candidate permutation set {S i } K i=1 such that each simple edge or loop has one of three different indices or colors x = 1, 2, 3. This coloring is determined by which diagonal element in β is non-zero. The problem of determining whether there exists a set of permutations Π i , one at each vertex i, such that for each simple edge (i, j) of the multi-graph, one has is equivalent to determining whether there exists a set of permutations Π u which obey Condition (14) in the original graph G. Conditions (16) and (17) ensure that every edge will be permuted into an edge not labelled by 3 which is required due to each edge being of rank 1 and Condition (14). Condition (18) ensures that every edge is mapped to an edge which is still in diagonal form (Condition (12)) and the last Condition (19) verifies that the permutations are consistent with the internal constraints in a connected rigid subgraph. The next step is to alter the multi-graph G Q to an effective multigraph G eff Q that will be analyzed in Lemma 14. Definition 13 (From G Q to G eff Q : Removing Single-Color Clusters). Given the flexible multi-graph G Q associated with a graph G and a Hamiltonian H. If a vertex in G Q is incident to edges of three different colors, 1,2, 3, then H is not stoquastic due to constraints (16) and (17). If not, construct the effective multi-graph G eff Q as follows. For each color x = 1, 2, 3 define the subset of internal vertices V x ⊆ V as those which are only incident to edges of color x. 
V x can be composed of dis- Note that each permutation solution set of G eff Q uniquely determines the map x → x for each cluster V c x due to Condition (18). This also means that the permutations on these internal vertices are not necessarily fixed, for each internal vertex j there are two permutations Π 1 j (x) = x and Π 2 j (x) = x which work as possible permutations as long as Π 1 j , Π 2 j ∈ S j . Since these permutations have the same permutational action, it is irrelevant which one to take, but when we expand internal vertices in G Q to rigid connected components again, we will discuss the effect of these freedoms in efficiently solving for {R u } in section 4.1. Proof. Since every vertex is incident to edges with only two colors, there are three types of vertices, namely 12, 23 and 13 vertices, see Figure 3. Then, for each type of vertex, only two possible permutations obeying conditions (16)-(17) exist and we can label these choices as Ising spins ±1, as illustrated by each pair of antipodal points in Figure 3. The last condition, Condition (19), which expresses the candidate solution set, imposes a restriction on these Ising spins, i.e. they can fix the Ising spin to be +1 or −1, modeled as some h i S i = −1. Furthermore, note that if we choose one of the two permutations for one vertex i which is connected by an edge x to a vertex j, then this uniquely fixes the permutation at vertex j due to Equation (18). So for each edge, one can define a ferromagnetic (J ij = 1) and anti-ferromagnetic Ising interaction (J ij = −1) as follows: Lemma 14 (Ising Problem). Let G eff Q be a multi-graph on n vertices such that each vertex is incident to edges of precisely two colors and a candidate permutation solution set for each vertex. The problem of determining whether there are permutations of the colors on the vertices of G eff Q which obey conditions (16)-(19) is equivalent to the question whether there is a {S • Any color-3 edge with one incident 13-vertex and one incident 23-vertex will act as an antiferromagnetic bond J ij = −1 on the {+1, −1} labels of the permutations in order satisfy conditions (16)- (18). This can be seen easily in Figure 3 by noting the labelling of the vertices connected to color-3 edges. • Any color-3 edge where both incident vertices are either 13 or 23 will act as a ferromagnetic bond J ij = +1. • Any color-2 or color-1 edge, regardless of its incident vertices will act as a ferromagnetic bond J ij = +1. Once a single permutation is fixed, the Ising interactions determine all other permutations, and one can verify whether these permutations are consistent with the restriction due to the candidate solution set. Since a single vertex only admits at most two possible assignments of permutations, this means that one only needs to make at most two of these checks. Determining The Sign Flips {R u } If there is a stoquastic basis for H, then Lemma 14 will give either one or two set of permutations {Π j } for G eff Q . These solution sets can then be extended to possible permutations on G Q which include the internal vertices in clusters. It may be the case, depending on the candidate permutation sets, that for some of the internal vertices j in a cluster V c x , two possible permutations Π j (x) = x and Π j (x) = x are valid permutations (16), (17) and (18) are satisfied. The schematic shows that a permutation on a vertex which is incident to two different edges x = y can only be one of two choices (antipodal points in the figure) which can be labelled as ±1. 
In addition, once we fix a permutation for a vertex, only one choice of permutation is allowed for its adjacent vertex: for example if we take a 12-vertex and fix its permutation to be (1)(2)(3), any neighboring 12-vertex must also be assigned (1)(2)(3) while any neighboring 23-vertex must be assigned (13)(2). to make H stoquastic. We call these permutations solutions on internal vertices internal permutations. In addition, the action of one of these permutations on a vertex will extend to the rigid connected component associated with that vertex, leading to some possible permutation solution sets {Π u } |V | u=1 for the whole graph G. Let's first establish that given a fixed permutation solution set {Π u }, it is simple to verify whether one can satisfy Condition (13), i.e. do there exist sign-flips {R u } such that (R u Π u β uv Π T u R T u ) 22 ≤ 0. For a given set {Π u }, we construct a new graph G 2 from G. G 2 has the same vertices as G but we keep only the edges for which (Π u β uv Π T u ) 22 = 0. With each edge we associate the sign of [Π u β uv Π T v ] 22 as a ±1 sign. Note that G 2 is not necessarily connected. 22 > 0 then in order to satisfy Condition (15) we want to apply a sign flip R = diag(1, −1, 1) to either vertex u or vertex v. If [Π u β uv Π T v ] 22 < 0 we want to either apply the sign flip to both vertices u and v, or not apply a sign flip to either. Again, this is precisely a question of satisfying all Ising interactions in a classical Ising problem. Here negative edges are mapped onto an anti-ferromagnetic bond, and positive edges are mapped onto a ferromagnetic bond. If G 2 has several disconnected components, then each component will have two possible independent sign solutions which can be efficiently enumerated. If our graph G Q has clusters with internal vertices, on which there can be more than one internal permutation solution, one has to be careful to argue that one does not end up with an exponential number of such Ising problems due to the multiplicative number of choices of permutation assignment for the whole set of internal vertices. In order to argue that this is not the case, we need the following observations: • If a rigid connected component Γ i ∈ G is represented by an internal vertex i in a cluster V c x where x is mapped to 1 by the permutation solution of G eff Q , then in the graph G 2 all the vertices of Γ i are disconnected from the other vertices in the G 2 graph. This implies that each Γ i in G representing an internal vertex i in V c x corresponds to a disconnected component in G 2 . For such a disconnected component one has to consider whether one can find a set of sign-flips for at most two internal permutations, so this requires checking at most two Ising problems, and then disregarding that disconnected component while checking the rest of the internal vertices. • A rigid connected component cannot be a vertex in a cluster V c x where x is mapped to 3 by the permutation solution of G eff Q since the permutation solution has to eliminate simple edges of color 3 by conditions (16) and (17). Hence this case is excluded. Thus, despite the large number of possible permutation assignments to the full set of internal vertices, the impact of a given choice of permutation for any individual internal vertex i on the subsequent problem of assigning sign-flips is either non-existent, or isolated to only those vertices in G associated with the rigid subgraph Γ i , and can be considered independently of the other choices. 
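The sign-flip step admits the same propagation treatment. The sketch below builds the coupling list of G_2 from a fixed permutation solution {Π_u} and reuses ising_satisfiable from the previous sketch; representing the β_uv and Π_u as 3x3 numpy arrays is an assumed convention, not the paper's.

```python
# Hedged sketch of the sign-flip step: given a fixed permutation solution
# {Pi_u}, build G_2 and decide whether sign flips R_u exist such that
# (R_u Pi_u beta_uv Pi_v^T R_v)_{22} <= 0 on every edge.
# Requires ising_satisfiable() from the previous sketch.
import numpy as np

def sign_flips_exist(n, beta_edges, Pi):
    """beta_edges: list of (u, v, beta_uv) with 3x3 arrays; Pi[u] is the
    permutation matrix chosen at vertex u. Spin -1 at u means applying the
    flip R = diag(1, -1, 1) at that vertex."""
    couplings = []
    for u, v, beta in beta_edges:
        m22 = (Pi[u] @ beta @ Pi[v].T)[1, 1]   # the (2,2) entry, 0-indexed
        if m22 > 0:
            couplings.append((u, v, -1))       # flip exactly one endpoint
        elif m22 < 0:
            couplings.append((u, v, +1))       # flip both endpoints, or neither
        # m22 == 0: edge absent from G_2
    return ising_satisfiable(n, couplings) is not None
```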
Therefore the number of Ising problems one will need to consider is bounded. To summarize, the algorithm requires the following steps, with the following cost, where n is the number of vertices in the graph G. (1) First the rigid connected components are identified. A candidate permutation set is decided for each rigid connected component The flexible quotient graph is constructed by modding out the rigid connected components. Each edge is coloured according to what entries are non-zero in their β matrix. (5) Solving the Ising model gives two possible assignments of signs. Some of these assignments may be incompatible with candidate permutation sets of a given rigid connected component. (6) With a sign assignment, we may use figure 3 to assign permutations. This fixes all permutations in G Q except for isolated vertices. • Identifying rigid connected components: O(n). • Constructing candidate permutation sets for the rigid connected components: O(n 2 ). • Constructing G eff Q by first identifying and removing internal vertices, which takes O(n 2 ) time, then linking up boundary vertices, which also takes O(n 2 ) time. • Solving the Ising problem for G eff Q : O(n 2 ). • The above Ising problem produces at most two possible sets of permutation assignments on G eff Q , each of which correspond to multiple possible sets of distinct permutation assignments on G. However this number is at most 2 times the number of internal vertices in V x for which x is mapped to 1. So the number of possible permutation assignments on G is O(n). Hence we conclude that the problem of determining whether there is a basis change {D u } such that H is stoquastic can be done efficiently in worst case O(n 3 ) time. It should be noted that a more careful analysis would probably find a tighter worst-case of O(n 2 ), since the worst-case of O(n 2 ) for deciding {R u } assumes all vertices in G 2 are connected, while the worst-case of O(n) for the number of permutation assignments would imply that almost none of the vertices in G 2 are connected. For the sake of clarity, we have included an illustrated overview of the algorithm in tables 1 and 2. Discussion There remain many open questions with regards to stoquasticity. It would be interesting to capture the twoqubit (or two-qudit) stoquastic space in terms of inequalities involving invariants. It would be desirable if the problem of deciding whether a Hamiltonian is computationally stoquastic is easier when allowing more transformations than just local basis changes. Such simplification would also be welcome for the two-qubit case. Various of the proofs in this paper are tedious and involved since the reasoning is different for different subcase Hamiltonians. We did not see a way of simplifying these arguments. It is an interesting question whether one could supply such proofs using a proof assistant, automatically ensuring their validity. The presence of single-qubit terms causes our algorithm for deciding the stoquasticity of the XYZ Heisenberg model to fail. Furthermore, the existence of singlequbit terms makes the class of single-qubit Clifford rotations strictly weaker than that of single-qubit rotations. Nevertheless, it seems likely that a generalization of the algorithm described here could be constructed to decide stoquasticity under single-qubit Clifford rotations of an XYZ model with single qubit terms included. 
It remains an open question whether it is computationally hard to decide whether such a Hamiltonian is stoquastic by local basis changes beyond single-qubit Clifford rotations. We expect that the problem of deciding whether a Hamiltonian is stoquastic by local basis changes is easy for n-qubit Hamiltonians with an underlying line or tree interaction graph G by employing a combination of an efficient dynamic programming strategy with the parsimonious strategy (i.e. discretize the set of local basis changes, show how the optimization of a partial problem giving a basis change at the boundary can be used to solve a next-larger partial problem etc.). An intriguing idea is that results showing hardness of deciding whether a Hamiltonian is stoquastic could be used for quantum computer verification. A general adiabatic computation with frustration-free Hamiltonians requires a quantum computer to run it, but let's assume that Alice, the verifier, picks a stoquastic frustrationfree Hamiltonian, but then hides this stoquasticity by local basis changes or, say, the application of a constant depth circuit. Bob with a quantum computer can run the adiabatic computation and provide Alice with samples from its output which she can efficiently verify to be statistically correct using the algorithm in [8]. On the other hand, a QC-pretender Charlie may have to either find the constant-depth hiding circuit or classically simulate the quantum adiabatic algorithm, and either task could be too daunting. Funding We acknowledge support through the EU via the ERC GRANT EQEC No. 682726. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development & Innovation. Acknowledgements BMT would like to thank David DiVincenzo for interesting discussions. BMT and JDK would also like to thank Sergey Bravyi as well as Christophe Vuillot for helpful discussions. It is understood that the set of symmetric Z-matrices forms a polyhedral cone. For convenience and concreteness we present the structure of this cone for operators acting on C d 2 in terms of generalizations of the Pauli matrices. Any Hermitian operator acting on C d ⊗ C d with basis elements {|i | i ∈ 0, ..d − 1}, can be written as : with a xy ij;mn ∈ R. Here the λ D,A,S ij matrices with labels S (symmetric), A (anti-symmetric) and D (diagonal) are a modified set of generalized Gell-Mann matrices [22] given by: with e i = |i i| − 1 d I. Note that Tr[λ x ij λ y mn ] = δ im δ jn δ xy d, so that the λ matrices can be thought of as spanning vectors of the Hilbert-Schmidt vector space. We can thus employ the vector inner product of two bipartite matrices A and B acting on C d ⊗ C d as: The necessary and sufficient conditions for H to be a symmetric Z-matrix (see Definition 1) are (1) ∀i, j, m, n im|H|jn ∈ R and (2) im|H|jn ≤ 0 when either i = j or m = n (or both). The λ-basis is convenient because of the following properties. The diagonal part of H only has support on vectors of the form λ D ⊗ λ D while the off-diagonal part has no contribution from λ D ⊗ λ D . Furthermore, the imaginary part of H only has support on vectors of the form λ A ⊗λ S , λ A ⊗λ D , λ S ⊗λ A and λ D ⊗λ A , while both the real and the diagonal part of H have no contribution from these vectors. 
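The closing statement can be illustrated numerically: a real Hermitian (real symmetric) H on C^d ⊗ C^d has vanishing Hilbert-Schmidt overlap with every "imaginary-type" basis element λ^A ⊗ λ^{S,D} and λ^{S,D} ⊗ λ^A. The sketch below mimics the modified Gell-Mann construction defined above, but the normalization convention is ours and may differ from the paper's.

```python
# Hedged numerical illustration: real symmetric H has zero Hilbert-Schmidt
# overlap with all lambda^A (x) lambda^{S,D} and lambda^{S,D} (x) lambda^A
# elements. Normalization here is a convenient choice, not the paper's.
import numpy as np

def lam_basis(d):
    E = lambda i, j: np.outer(np.eye(d)[i], np.eye(d)[j])          # |i><j|
    S = [E(i, j) + E(j, i) for i in range(d) for j in range(i + 1, d)]
    A = [-1j * (E(i, j) - E(j, i)) for i in range(d) for j in range(i + 1, d)]
    D = [E(i, i) - np.eye(d) / d for i in range(d)]                 # e_i
    return S, A, D

d = 3
S, A, D = lam_basis(d)
rng = np.random.default_rng(0)
M = rng.standard_normal((d * d, d * d))
H = M + M.T                                                         # real symmetric H
imag_type = [np.kron(a, b) for a in A for b in S + D] + \
            [np.kron(b, a) for a in A for b in S + D]
print(max(abs(np.trace(V @ H)) for V in imag_type))                 # ~1e-15, i.e. zero
```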
Thus the vector H has support on three orthogonal subspaces, the diagonal subspace D spanned by vectors λ D ⊗ λ D , the imaginary subspace I, spanned by vectors λ A ⊗ λ S , λ A ⊗ λ D , λ S ⊗ λ A and λ D ⊗ λ A , and the off-diagonal real subspace R spanned by the remaining vectors. We may write H = H D +H R +H I where H D , H R , H I are the vectors in these three subspaces. The first ("realness") condition thus corresponds to demanding that H I = 0. By the linear independence of the λ-basis it implies that ∀i, j, m, n, a AS ij;mn = a SA ij;mn = a AD ij;mn = a DA ij;mn = 0. (23) The second ("negativity") condition then says that for all i = j or m = n (or both): This condition thus specifies some of the facets of the symmetric Z-matrix cone. When a vector |im jn| is interpreted as the normal vector of an oriented hyperplane in Hilbert-Schmidt space, the negativity condition can be thought of as the requirement that H R lies within the union of half spaces defined by these oriented hyperplanes. To write down the inequalities describing these facets, we can use that One can then write down three types of inequalities: • case 1: i = j and m = n There is an interesting geometric fact here. Consider for instance Inequality (25). This is illustrated in figure 4 for d = 2, 3. In the case of two qubits λ S 01 = X, λ A 01 = Y , λ D 01 = Z, e 0 = Z/2 and e 1 = −Z/2. For the realness condition we require that a IY = a Y I = a Y Z = a ZY = a Y X = a XY = 0. For the negativity condition we retrieve the inequalities: Together these (in)equalities are given in condensed form in Proposition 6. B Proof of Theorem 7 Recall the definitions of When the triple product invariants are zero, Γ L and Γ R each contain coplanar vectors. By Proposition 15 (see below) we know that S is in the span of at most two left singular vectors of β and P is in the span of at most two right singular vectors of β. If S, ββ T S and [ββ T ] 2 S are linearly independent, then βP must be in the same span as S. If they are not linearly independent then S is a left singular vector of β, with unit vectorê S . It follows that βP ∈ span(ê S ,ê S ⊥ ), and ββ T βP ∈ span(ê S ,ê S ⊥ ). This implies that ββ Tê S ⊥ ∈ span(ê S ,ê S ⊥ ). Sinceê T S ββ Tê S ⊥ = 0 it follows that ββ Tê S ⊥ ∈ span(ê S ⊥ ) andê S ⊥ is a left singular vector of β. Therefore there always exists a pair of left singular vectors of β such that S and βP are in their span. By Proposition 16 (see below) this implies that S is spanned by at most two left singular vectors of β and P is spanned by the corresponding right singular vectors. By Proposition 17 (see below) we see that H is real under local unitary rotations. Proof. Let S ∈ span(ê L x ,ê L y ). Assume βP is in the same span as S, then β T βP, [β T β] 2 P ∈ span(ê R x ,ê R y ). If β T βP and [β T β] 2 P are not proportional to one another then P ∈ span(β TêL x , β TêL y ). If β T βP and [β T β] 2 P are proportional to one another then they are proportional to a right singular vector of β,ê R P . If β is full rank then β T β is invertible and P ∝ê R P ∈ span(ê R x ,ê R y ). If β is not full rank and P ∝ê R P then P ∈ span(ê R P ,ê R 0 ) with βê R 0 = 0. Consider the left singular vector expansion of S into an orthonormal basis: We require the following three vectors to be coplanar It is easy to see that S and P are spanned by at most two left singular vectors of β That concludes the proof for one direction of the biconditional. 
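The span condition at the heart of this argument can also be probed numerically. The sketch below tests whether a vector S lies in the span of (at most) two left singular vectors of β while P lies in the span of the corresponding right singular vectors; it is a heuristic companion to the proof only, and with degenerate singular values the pairing of singular vectors is not unique, so a negative answer does not contradict the proposition.

```python
# Hedged numerical check of the "spanned by two singular vectors" condition.
import numpy as np
from itertools import combinations

def spanned_by_singular_pair(beta, S, P, tol=1e-9):
    U, _, Vt = np.linalg.svd(beta)             # beta = U diag(s) Vt
    for idx in combinations(range(3), 2):
        cols = list(idx)
        PU = U[:, cols] @ U[:, cols].T          # projector onto two left singular vectors
        PV = Vt[cols, :].T @ Vt[cols, :]        # projector onto the matching right vectors
        if (np.linalg.norm(PU @ S - S) < tol and
                np.linalg.norm(PV @ P - P) < tol):
            return True
    return False

# toy usage: S along the top left singular direction, P along the second right one
beta = np.diag([3.0, 2.0, 1.0])
print(spanned_by_singular_pair(beta, np.array([1.0, 0, 0]), np.array([0, 1.0, 0])))
```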
Suppose S and P are spanned by two or fewer left and right (respectively) singular vectors of β which share the same singular values. Then S and P are expressible as C Polynomial Inequalities for Stoquasticity of a 2-qubit Hamiltonian The rotated two-qubit Hamiltonian H is described by Three inequalities must be satisfied in order for H to be a symmetric Z-matrix: Note that the parameters γ 1 and γ 2 have fallen out of these inequalities. If the above inequalities can be satisfied, then the Hamiltonian is stoquastic. If they cannot be satisfied then the Hamiltonian is not stoquastic, barring a single case as follows. If either {a XI , a IX } = {0, 0} or {a ZI , a IZ } = {0, 0}, then an additional test of the same form must be applied after first performing a permutation such that the pair of zero coefficients are now in the Y position. For example, if {a XI , a IX } = {0, 0} then one must also perform the above test on the diagonal form: The inequalities (30), (31) and (32) can be broken up into six inequalities in order to ignore the absolute values: We wish to map these trigonometric inequalities into polynomial inequalities of two variables by defining x 1 = tan(θ L ) and x 2 = tan(θ R ). However the mapping will depend on the value of cos(θ 1 ) and cos(θ 2 ), and so breaks up into several cases. Case 1: cos(θ L ), cos(θ R ) = 0 Let δ L = Sign(cos(θ L )) and δ R = Sign(cos(θ R )) then Case 2: cos(θ L ) = 0, cos(θ R ) = 0 Let δ L = Sign(sin(θ L )) and δ R = Sign(cos(θ R )) then 2 )a 2 ZI ≥ 1 Case 3: cos(θ L ) = 0, cos(θ R ) = 0 Let δ L = Sign(cos(θ L )) and δ R = Sign(sin(θ R )) then Case 4: cos(θ L ) = 0, cos(θ R ) = 0 Let δ L = Sign(sin(θ L )) and δ R = Sign(sin(θ R )) then All of these inequalities are at most quadratic in any single variable x 1 or x 2 , so analytic solutions to their roots can be constructed and solutions can be found using graphical methods. If there exist values of x 1 and x 2 such that for some choice of δ L , δ R ∈ {−1, 1} at least one of the above cases is satisfied, then H is stoquastic. If not, then we may say that H is not stoquastic save the exception described above. This concludes the complete characterization of stoquasticity of a two-qubit Hamiltonian H. D Proof of Lemma 9 Consider a set of 3 × 3 matrices {β uv } indexed by the pair uv which are diagonal in some fixed basis {ê 1 ,ê 2 ,ê 3 }. Here we establish that whenever there exists a set of SO (3) where by signed permutations we mean matrices of the form Π u = Π u R u , with Π u a permutation and R u a diagonal matrix with diagonal elements ±1. Therefore whenever considering the class of sets of SO(3) transformations which preserve the diagonality of the matrices {β uv }, it suffices to consider only the signed permutations. Of course this is trivially true when considering a single diagonal matrix, but is less obvious when considering a set of matrices. The proof is achieved by introducing a consistent mapping from a given SO(3) matrix O to a signed permutation Π(O), which we call the Π-reduction. This mapping will have the property that for a diagonal We can associate each of the monomials in the right hand side of the above equation with a matrix which is zero for all terms in the matrix which do not contribute to that monomial, and sign(x) for all terms x which do contribute to that monomial. So for example: We may also impose a lexicographical ordering on the matrices, so that the matrix associated with bdi is lower in the ordering than the matrix associated with bf g. 
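Independently of the exact polynomial conditions above (whose full expressions are not reproduced in this excerpt), two-qubit stoquasticity can be cross-checked numerically by scanning local unitaries and testing the symmetric Z-matrix condition directly. The Euler-angle parametrization, the grid resolution and the example Hamiltonian below are our choices; a coarse grid can miss valid bases, so only a positive answer is informative, and the analytic inequalities remain the exact test.

```python
# Hedged brute-force cross-check: search a coarse grid of local unitaries
# U1 (x) U2 and test whether the rotated two-qubit Hamiltonian becomes a
# symmetric Z-matrix (real entries, non-positive off-diagonal elements).
import numpy as np
from itertools import product

def ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                            [np.sin(t / 2),  np.cos(t / 2)]])
def rz(t): return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
def su2(a, b, c): return rz(a) @ ry(b) @ rz(c)

def grid_search_stoquastic(H, steps=6, tol=1e-9):
    angles = np.linspace(0, 2 * np.pi, steps, endpoint=False)
    for a1, b1, c1, a2, b2, c2 in product(angles, repeat=6):
        U = np.kron(su2(a1, b1, c1), su2(a2, b2, c2))
        Hr = U @ H @ U.conj().T
        off = Hr - np.diag(np.diag(Hr))
        if np.max(np.abs(off.imag)) < tol and np.max(off.real) < tol:
            return True                      # found a stoquastic local basis
    return False                             # inconclusive on a coarse grid

X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]])
H = np.kron(X, X) + np.kron(Y, Y)            # stoquastic after conjugation by Z on one qubit
print(grid_search_stoquastic(H))             # expected: True
```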
This is to ensure that the D-reduction has a consistent definition, and the particular choice of ordering is not important. We may then define the Π-reduction as follows: Note that the determinant of O is +1, so there always exists a non-zero monomial. Finally, note that the Π-reduction is always a signed permutation, but not always an element of SO(3). Clearly, the realness-under-Clifford-rotations problem is in NP, since a prover can give the verifier the single-qubit Clifford rotations which make the terms in H real. To prove the problem to be NP-hard, we show that if one can efficiently solve the realness-under-Cliffords problem, then one can also efficiently solve an NP-complete subclass of the exact cover problem, called restricted exact cover by three-sets.
13,430.2
2018-06-14T00:00:00.000
[ "Physics" ]
Generation of photon vortex by synchrotron radiation from electrons in Landau states under astrophysical magnetic fields We explore photon vortex generation in synchrotron radiations from a spiral moving electron under a uniform magnetic field along z-axis using Landau quantization. The obtained wave-function of the photon vortecies is the eigen-state of the z-component of the total angular momentum (zTAM). In m-th harmonic radiations, individual photons are the eigen-state of zTAM of m~. This is consistent with previous studies. Using the presently obtained wave-functions we calculate the decay widths and the energy spectra under extremely strong magnetic fields of 1012−1013 G, which are observed in astrophysical objects such as magnetized neutron stars and jets and accretion disks around black holes. The result suggests that photon vortices are predominantly generated in such objects. Although they have no coherency it is expected that photon vortices from the universe are measured using a detector based upon a quantum effect in future. This effect also affects to stellar nucleosynthesis in strong magnetic fields. Introduction In the universe strong magnetic fields play important roles in various phenomena. Strong magnetic fields in rotating massive stars contribute to magnetohydrodynamic (MHD) driven core-collapse supernova explosions and formation of magnetized neutron stars or black holes associated with an accretion disk and jets (collapsar) [1,2,3]. Neutron stars may be associated with strong magnetic fields of 10 12 −10 15 G and, in particular, extremely strongly magnetized neutron stars, so-called magnetars, are considered to be the sources for soft γ repeaters and anomalous X-ray pulsars [4] and to be the central engines of short Gamma-Ray Bursts (GRBs) [5,6,7]. Collapsars with a magnetized accretion disk and jets are also a candidate for the origin of long GRBs. Recent progress in γ/X ray astronomy enabled to measure linear (circular) polarization of gamma-rays from these astrophysical objects. A high linear polarization of 80% ± 20% measured by the RHESSI satellite [8] was reported, although it decreases to approximately 41 +59 −44 % by re-analysis [9]. The linear polarization as high as 98% ± 33% in the prompt emission of GRB 041219A was measured by the SPI telescope onboard the INTEGRAL satellite [10]. The GRB polarimeter onboard the IKAROS solar power sail measured the polarization degrees of 70% ± 22% for GRB 110301A and the polarization of 84 +16 −28 % for GRB 110721A [11]. As the generation mechanism for these high linear polarized γ rays, some scenarios have been proposed; they are synchrotron radiations from relativistic electrons under strong magnetic fields [12] and inverse Compton scattering on low energy photons with relativistic electrons [13]. The origin of these highly polarized photons has been unresolved but synchrotron radiation from electrons in strong magnetic fields is one the candidates for the mechanism [14]. The synchrotron radiation in quantum theory was first derived by a pioneering work [15,16] and the quantum synchrotron radiation under various conditions were studied [17,18,19,20,21,22,23,24]. The electron orbitals under magnetic fields are in Landau levels. An electron in a Landau level transits to a lower lying Landau state through an emission of a photon, where the wave-function of the radiated photon depends on the initial and final electron wave-functions. 
As a magnetic field strength becomes weaker, an average level spacing decreases so that the Landau levels become quasi-continues and classical calculation becomes a good approximation. However, in strong magnetic fields as high as 10 12 −10 15 G each level is separated in the energy space and thus calculated results based upon Landau quantization are largely different from those without that [20,21,22]. Allen et al. [26] pointed out that a single photon could have a vortex wave-function such as Laguerre-Gaussian in quantum level. Katoh et al. [44] suggested that an l-th harmonic photon in synchrotron radiation from spiral moving electrons under uniform magnetic fields is a photon vortex carrying l total angular momentum and that such photon vortices are naturally generated in astrophysical objects with strong magnetic fields. However, the wave-function of photon vortices radiated from an electron under a uniform magnetic field has not been calculated by taking Landau quantization into account. In extremely strong magnetic fields as high as 10 12 −10 15 G, which are observed in astrophysical objects, the difference between calculated results of synchrotron radiations with/without Landau quantization becomes large [20,21,22,25]. The purpose of this paper is to present the wave-functions of photons radiated from electrons under strong magnetic fields calculated using Landau quantization and the fraction of photon vortices in synchrotron radiations in extremely strong magnetic fields of up to 10 13 G. We also discuss the possibility of detection of photon vortices from the universe and its effects to stellar nucleosynthesis. Calculation and Result In the present study, we consider a uniform dipole magnetic field along z-direction, B = (0, 0, B), and that an electron trajectory draws a circle in a plane perpendicular to the z-direction under this magnetic field (see Fig. 1). We use the natural unit = c = 1. The electron wave-function ψ(r) in this system is obtained from the following Dirac equation: where α and β are the Dirac matrices, A is an electro-magnetic vector potential, E is the electron energy, e is the elementary charge, and m e is the electron mass. We choose the symmetry gauge with the vector potential being A = (−y, x, 0)B/2. We make the scale transformation for x and y coordinates as X(Y ) = ( eB/2)x(y) , and define the operators as As well known, Hamiltonian of the two-dimensional harmonic oscillator and the operator of the z-component of orbital angular momentum L are written aŝ We write following equations in the cylindrical coordinate of r = (r T , z) = (r T cos φ, r T sin φ, z). A function G L n is the eigen-state of the operators of (3) aŝ where L |L| n is the associated Laguerre function, L is the z-component of the orbital angular momentum, p z is the z-component of the momentum, and n is the number of nodes. L satisfy a relationship of L = J + 1/2 where J is the z-component of the total angular momentum (zTAM). A solution of Eq. (1) in this system is known as [20] where n ′ = n when L ≥ 0 and n ′ = n − 1 when L ≤ −1, N is the normalization factor and U is a 4-dimensional vector. The wave-function is the eigen-state of zTAM. The electron energy in a Landau level is given by E = 2eBN L + p 2 z + m 2 e where N L indicates the Landau level number defined as N L = (L + |L|)/2 + n. To obtain the Dirac spinor U we rewrite Eq. (1) with Eq. (6) as By solving the above characteristic equation, we can obtain the electron energy as where E is the Landau energy. 
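To make the separation of levels at the field strengths of interest explicit, the following sketch evaluates the relativistic Landau-level energies E = sqrt(m_e^2 + p_z^2 + 2eBN_L) at p_z = 0. Rewriting the field in units of the critical field B_crit = m_e^2 c^3/(eħ) ≈ 4.41 × 10^13 G is a standard identity and is our addition; the numerical constants are textbook values, not taken from this paper.

```python
# Landau-level energies E_N = m_e * sqrt(1 + 2 N B / B_crit) at p_z = 0,
# showing the level spacing is far from quasi-continuous at 1e12-1e13 G.
import numpy as np

M_E_KEV = 510.999            # electron rest energy [keV]
B_CRIT_G = 4.414e13          # critical (Schwinger) field [gauss]

def landau_energy_kev(N, B_gauss, pz_kev=0.0):
    return np.sqrt(M_E_KEV**2 + pz_kev**2 + 2 * N * (B_gauss / B_CRIT_G) * M_E_KEV**2)

for B in (1e12, 1e13):
    levels = np.array([landau_energy_kev(N, B) for N in range(4)])
    print(f"B = {B:.0e} G:  E_N [keV] =", np.round(levels, 1),
          " spacings [keV] =", np.round(np.diff(levels), 1))
```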
We finally obtain the wave-function as where R z is the size of the system along the . Note that n ′ = n when L ≥ 0 and n ′ = n − 1 when L ≤ −1. The form of χ h is arbitrary when N L ≥ 1 because there are two degenerate states at fixed L and n, whereas the state with N L = 0, which is so called the lowest Landau state, is not degenerate and its spinor is taken to be only t χ −1 = (0, 1). The wave-function for n = 0 corresponds to the helical motion along the z-axis when L ≥ 0, whereas the wave-function for n ≥ 1 indicates the helical motion along an axis that is different from the initial axis [47]. Thus, we consider various node numbers for the final state and take only n = 0 for the initial state. Using the Coulofcolormb gauge ∇ · A = 0 with the photon as A 0 = 0, we obtain the wave-function propagating along the z-direction for the emitted photon as a solution of the Klein-Gordon equation in the cylindrical coordinate as stated previously. We obtain A as where m is the zTAM, s is the helicity, q z is the z-component of the photon momentum, e q is the photon energy, q T = e 2 q − q 2 z , ǫ s = (1, is, 0)/ √ 2, and J M is the Bessel function. This A does not generally satisfy the gauge condition ∇ · A = 0 except for e 2 q − |q z | 2 ≪ 1 in the para-axial limit. Then, we add a z-component of A z and rewrite A as which is consistent to the wave function given in Ref. [45] when q T ≪ |q z |. Because A(s = +1) and A(s = −1) do not satisfy the orthogonal relation, we take the eigen-state wave-functions of the transverse magnetic (TM) state with A(s = +1) − A(s = −1) and those of the transverse electric (TE) state with A(s = +1) + A(s = −1) at fixed m [48]. Finally, we obtain the wave-function of the radiated photons as This photon wave-function is the eigen-state of zTAM, where the zTAM of an emitted photon m satisfies a relationship of m = J i -J f , where J i and J f are the zTAM of the initial and final electron states, respectively. The decay width does not in general depend on the set of the wave-function of the radiated photon when the electron and photon eigen-states for a system are given. For further study we calculate the decay width using the presently obtained photon wavefunction. A decay width of an electron can be calculated from the initial and final wavefunctions of the electron in Eq. (9) by the imaginary part of the electron self-energy. The electron self-energy with an energy of E is given by where S and D are the electron and photon propagators in the magnetic field: S(r 1 , r 2 , p 0 ) = L,n,h dp z 2π ψ(r 1 ; L, n, h, p z )ψ(r 2 ; L, n, h, p z ) where we omit the contribution from negative energy electrons in the electron propagator. The decay width of the electron at an initial state i is obtained as where f indicates the final electron state. We rewrite the electron wave-function in Eq. (9) and the photon field in Eq. (13) as respectively. Using Eqs. (17) and (18), the decay width at a fixed final state is written as where with where p i(f )T = 2eBN i(f ) . Because the electron spin is not a good quantum number in the relativistic framework, we make the average of the strength in Eq. (17) for the initial spin and the summation for the final spin. We finally obtain the decay width of The possible momentum of an emitted photon with m is limited by The maximum (minimum) energy for a radiation with m is proportional to m, and the average energy at m increases as m increases. 
To investigate radiations from magnetized neutron stars we numerically calculate the decay widths of electrons under strong magnetic fields of 10 12 G and 10 13 G. Figure 3 shows the decay widths of electrons as a function of m using Eq. (24). The results are multiplied with the gamma factor along the z-direction, γ z = E i / E 2 i − p 2 iz , so that the results are independent of the z-component of the initial electron momentum of p iz when |p iz | ≫ m e . To obtain the total energy spectrum and the fraction of photon vortices of radiated photons from an electron in an initial state, we calculate the decay widths to individual final states for various m values. In Fig. 4(a) we present the total energy spectrum and energy spectra of individual modes of photon vortices with m in synchrotron radiations from electrons with an energy of 50 MeV, which is integrated over all the radiation angle. Discussion and conclusion We have finally obtained the wave-function of the radiated photon as Eq. (13). The m-th harmonic radiation is the eigen-state of the zTAM m. This is consistent with the previous results [44,45,46]. Though in Ref. [45,46] the photon wave-functions were chosen to be the eigen-states of the helicity in the limit of the photon transverse momentum over the photon energy q T /e q = 0, we have chosen them to be the eigen-state of TM and TE [see Eq. (13)] [48]. To examine the choice of the basis we show the contributions from TM and TE states and those from the helicity s = +1 and s = −1 states as the function of q T /e q in Fig. 6. When q T 0.01e q the state with s = +1 dominantly contributes to the decay width. However, as q T increases the contribution from s = −1 becomes larger. In contrast, the TM state in Eq. (13) dominates when q T 0.02e q . Although the basis of the helicity is useful for calculating in the condition of q T ≪ e q , the wave-function as the eigen-state of TM and TE can give more precise results for wide region of q T under extremely strong magnetic fields. Furthermore, the previous studies [41,45,46] calculated the synchrotron radiation from helical undulators, which consists of different direction magnets. If it is calculated using Landau quantization, its result is expected be different from the present result for uniform magnetic fields. The fraction of photon vortices with m ≥ 2 in the total flux depends on the magnetic field strength and the initial angular momentum. As the magnetic strength becomes stronger, the fraction of photon vortices increases. Furthermore, as shown in Fig. 3 the fraction of photon vortices increases with increasing the initial electron angular momentum, which is proportional to the square of the diameter of the electron spiral motion. The highly linear polarization of γ-rays from GRBs indicate a magnetized baryonic jet with large-scale uniform strong magnetic fields [49]. However, the present calculation does not depend on the large-scale structure of a magnetic field. It is considered that magnetic fields in astrophysical environments are locally homogeneous for photon vortex generation compered with electron spiral motion diameters. The diameter of a spiral motion of an electron is estimated to be d ∼ 1.6 × 10 13 /B [G]L 1/2 i [pm]. In the present assumed conditions, the diameters are shorter than 10 −8 cm. Thus, the uniform magnetic field is a good approximation for calculation of synchrotron radiation in astrophysical objects. 
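A back-of-the-envelope evaluation of the quoted diameter estimate is sketched below; the values of L_i used are purely illustrative and are our assumption.

```python
# Evaluate d ~ 1.6e13 / B[G] * sqrt(L_i) picometres for the field strengths
# considered in the text and a few illustrative initial angular momenta L_i.
for B in (1e12, 1e13):                  # magnetic field strength [G]
    for L in (1, 10, 100):              # illustrative initial angular momentum
        d_pm = 1.6e13 / B * L ** 0.5
        print(f"B = {B:.0e} G, L_i = {L:3d}:  d ~ {d_pm:7.1f} pm = {d_pm * 1e-10:.1e} cm")
```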
The possible energies of the fundamental radiation and m-th harmonic radiations are restricted by the region expressed by Eq. (25). As shown in Fig. 4(a) the upper limit of the energy of the fundamental radiations is 4.1 MeV when the initial electron energy is approximately 50 MeV. Therefore, below 4.1 MeV both the fundamental radiation and the high-order harmonic radiations are emitted, whereas in the energy region of 4.1-−50 MeV only photon vortices are radiated. This trend is seen in spectra in the wide energy region as shown in Figs. 4(b) and 5(a, b). In astrophysical environments, the energy distribution of thermal electrons follows Fermi-Dirac distribution. High energy γ-rays observed in afterglow phases of GRBs suggest non-thermal electrons whose energy distribution is described by power law [51]. In both cases the spectral density of electrons decreases with increasing electron energy in high energy region. With the present result that the only photon vortices are generated in the energy region higher than the upper limit of the fundamental radiation, it is expected that the fraction of photon vortices increases with increasing energy and photon vortices are predominantly produced in the high energy region. The observation of polarized X/γ-rays with detectors onboard satellites and interplanetary space explores [10,11,49,50] suggests that photon vortices could be also observed in the space near the earth. A method to measure Laguerre-Gaussian light at optical wavelengths from astrophysical objects has been proposed [29]. The present calculation predicts photon vortex generation but does not predict any coherency. This means that the observed light seems to be white light and photon vortices should be identified using a method based upon a quantum phenomenon. At present, linearly polarized γ/X-rays are measured using detectors based upon Compton scattering. To measure γ-rays with a wave-function of Laguerre-Gaussian [38] or Hermite-Gaussian [52] we have proposed the use of Compton scattering. Because the presently obtained wave-function in Eq. (13) has the feature similar to that of the Laguerre-Gaussian photon [38], the measurements of photon vortices generated by the synchrotron radiations are probably possible in similar manners. Photon vortices may affect stellar nucleosyntheses. It is known that neutron-deficient isotopes of heavy elements are synthesized by photodisintegration reactions in supernova explosions (γ-process) [53,54]. Giant dipole resonance (GDR) is the dominant reaction between photons and nuclei in the energy region of 10−30 MeV. However, it was pointed out that, if a photon vortex with large total angular momentum of J ≥ 2 incidents on an even-even nucleus of a spin and parity of J π = 0 + , the excitation to states with J π = 1 − through GDR is forbidden because of the conservation law of angular momentum [36]. Thus, when nucleosyntheses associated with photodisintegration reactions occurs in a strong magnetic field, the isotopic abundance distribution of a synthesized element is expected to be different form the solar abundances. The r-process paths in the magnetohydrodynamical supernovae [55] and in jets in collapsars [56] shifts toward more neutron rich regions by photon vortices. The isotopic abundances affected by photon vortices may be observed in presolar grains [57], which record individual nucleosyntheses before the solar system formation. 
Summary The difference between calculations with and without Landau quantization becomes larger as the magnetic field strength increases. In the present study we have calculated the wave-functions of photon vortices radiated from electrons in Landau levels. The wave-function is the eigen-state of the z-component of the total angular momentum when the electron moves in a spiral along the z-axis. This is consistent with the previous results. Photon vortices are, in principle, predominantly produced in astrophysical objects with extremely strong magnetic fields as high as 10 12 −10 13 G. These photons have no coherent structure, so they appear as white light. However, they are expected to be measurable in the future by a new detector based upon Compton scattering onboard satellites. Photonuclear reactions with photon vortices are hindered by the conservation law of angular momentum, so photon vortices change the isotopic abundances of synthesized nuclides in astrophysical environments with strong magnetic fields. Figure 1: Coordinate system used in the present study. We assume a uniform dipole magnetic field along the z-axis; the electron moves in a spiral along the z-axis.
4,424
2021-11-01T00:00:00.000
[ "Physics" ]
GTP binding controls complex formation by the human ROCO protein MASL1 The human ROCO proteins are a family of multi-domain proteins sharing a conserved ROC-COR supra-domain. The family has four members: leucine-rich repeat kinase 1 (LRRK1), leucine-rich repeat kinase 2 (LRRK2), death-associated protein kinase 1 (DAPK1) and malignant fibrous histiocytoma amplified sequences with leucine-rich tandem repeats 1 (MASL1). Previous studies of LRRK1/2 and DAPK1 have shown that the ROC (Ras of complex proteins) domain can bind and hydrolyse GTP, but the cellular consequences of this activity are still unclear. Here, the first biochemical characterization of MASL1 and the impact of GTP binding on MASL1 complex formation are reported. The results demonstrate that MASL1, similar to other ROCO proteins, can bind guanosine nucleotides via its ROC domain. Furthermore, MASL1 exists in two distinct cellular complexes associated with heat shock protein 60, and the formation of a low molecular weight pool of MASL1 is modulated by GTP binding. Finally, loss of GTP enhances MASL1 toxicity in cells. Taken together, these data point to a central role for the ROC/GTPase domain of MASL1 in the regulation of its cellular function. Structured digital abstract HSP60 and MASL1 physically interact by molecular sieving (View interaction) MASL1 physically interacts with HSP70 and HSP60 by anti tag coimmunoprecipitation (View interaction) MASL1 physically interacts with HSP60 by anti tag coimmunoprecipitation (View interaction) Introduction Malignant fibrous histiocytoma amplified sequences with leucine-rich tandem repeats 1 (MASL1) is the smallest member of the human ROCO protein family, members of which are characterized by a conserved supra-domain containing a Ras of complex proteins (ROC) domain and a C-terminal of ROC (COR) domain [1]. In addition to MASL1, this protein family includes the leucine-rich repeat kinases 1 and 2 (LRRK1/2) and death-associated protein kinase 1 (DAPK1) (Fig. 1) [2]. All four of the human ROCO proteins have been linked to human disease, with LRRK2 in particular being the focus of an intense research effort due to its importance in genetic Parkinson's disease as well as inflammatory bowel disease and leprosy [3]. MASL1 was originally identified in malignant fibrous histiocytomas and has been linked to other forms of cancer [4][5][6][7]. It is clear that understanding the cellular and molecular functions of these proteins is an important challenge in several areas of biomedical research. Based upon bioinformatic analyses, the ROC domain that defines this protein family is predicted to bind guanosine triphosphate (GTP), a property that has been demonstrated for LRRK1, LRRK2 and DAPK1 [8][9][10]. Functional and structural investigations support a role for GTP binding by the ROC domains of these proteins as well as an involvement in complex formation and in the control of protein function. In the case of DAPK1 and LRRK2, GTP binding has been implicated in the regulation of the kinase activities of these proteins, although the precise mechanism whereby this occurs is unclear [9,11,12]. Structural information obtained from the bacterial Chlorobium tepidum ROCO protein suggests that the COR domain functions as a dimerization device to activate ROC GTPase activity [13]. Interestingly, the C. tepidum ROCO protein and human MASL1 display conserved domain organization, with N-terminal leucine-rich repeats and a C-terminal ROC-COR bidomain [13]. 
MASL1 is unique among the human ROCO proteins in not possessing a clear effector domain, either a kinase (as is the case for LRRK1, LRRK2 and DAPK1) or a death domain (DAPK1). No information regarding the function or structure of this protein is available to date. The aim of this study was therefore to investigate typical ROCO protein features such as GTP binding, GTPase activity and complex formation of MASL1 as well as to identify possible binding partners. These data demonstrate for the first time that MASL1 is a functional GTP binding protein and that complex formation by MASL1 is regulated in a manner dependent on guanosine nucleotide binding. MASL1 is a GTP binding protein To investigate the function of the ROC domain of MASL1, an artificial mutation (K442A), designed to disrupt nucleotide binding, was introduced into the open reading of hemagglutinin (HA) tagged MASL1. This mutation was chosen based on homology to small GTPases and LRRK2, where it has previously been shown to impact upon GTP binding properties [14]. The K422A mutant had no impact on steady state expression of MASL1 ( Fig. 2A). Wild type HA-tagged MASL1 was able to bind GTP immobilized on sepharose beads, an interaction that could be disrupted by a molar excess of GTP (10 mM) (Fig. 2B). In contrast, the predicted nucleotide binding deficient form of the protein (K422A MASL1) was incapable of binding GTP, confirming the importance of this residue for GTP binding. To assess whether MASL1 is able to hydrolyse as well as bind GTP, a series of in vitro radiometric assays was carried out. As a positive control the small GTPase RAC1 [15] was used. MASL1 (wild type and K422A) and wild type RAC1 could be immunoprecipitated and eluted at similar levels ( Fig. 2C). Under the assay conditions used in this experiment, RAC1 displayed GTPase activity whereas neither wild type nor K442A MASL1 was able to hydrolyse GTP above background levels (Fig. 2D,E). Based upon these in vitro data, it is impossible to exclude the possibility that MASL1 is capable of hydrolysing GTP in a cellular context with the aid of cofactors. Guanosine nucleotides regulate complex formation by MASL1 Data from a number of studies investigating complex formation by DAPK1 and LRRK2 suggest that the human ROCO proteins form multi-component complexes in a cellular context [10,[16][17][18]. To investigate complex formation by MASL1, size exclusion chromatography (SEC) and blue native (BN) gel electrophoresis were applied to clarified cell lysates of HEK293T cells overexpressing wild type and K422A HA-tagged MASL1. These analyses revealed that wild type MASL1 fractionates as two complexes, a low molecular weight (LMW,~200 kDa) and a high molecular weight (HMW,~600 kDa) (Fig. 3A). To verify that the LMW peak was not a degraded product of the HMW complex, SDS/PAGE followed by immunoblotting was carried out on the two chromatographic fractions. As shown in Fig. 3B, both peaks represent full-length proteins. Additionally, wild type and K422A MASL1 were analysed by SEC in the presence of EDTA (5 mM in lysis buffer and SEC running buffer to equilibrate the mobile phase), resulting in a loss of the LMW peak (Fig. S1). These data suggest that the observed shift in MW is due to a disruption of nucleotide binding rather than protein misfolding or degradation. It is notable that loss of nucleotide binding by K422A MASL1 disrupts the LMW peak specifically (Fig. 3A), while it does not impact upon immunoreactivity at~600 kDa. 
To examine complex formation using an alternative approach, BN gel electrophoresis was used. In agreement with the SEC data, wild type MASL1 displayed a biphasic distribution with a distinct band at~200 kDa and a higher molecular mass smear consistent with the~600 kDa complex observed using SEC. Disruption of nucleotide binding again resulted in a change in apparent molecular mass, with a dramatic reduction in the intensity of the~200 kDa band (Fig. 3D). To assess whether the loss of guanosine nucleotide binding has an impact on the folding of MASL1, fluorescence spectra were recorded for purified wild type and K422A variants of the protein in the presence and absence of the denaturant guanidine hydrochloride (Fig. S2). Spectra for wild type and K422A forms of MASL1 were comparable, and both underwent a clear shift in spectra upon exposure to 6 M GdHCl suggesting that, under the conditions used in the experiments described above, both forms of MASL1 adopt a native fold. Based upon the importance of nucleotide binding properties on complex formation by MASL1, the impact of GTP or GDP addition to cell lysates upon the predilection of MASL1 to form LMW or HMW forms of the protein were tested for binding to GTP immobilized on sepharose beads with or without a molar excess of GTP. This analysis revealed that wild type MASL1 is a GTP binding protein and that a molar excess of GTP as well as a targeted mutation of the lysine residue at 422 disrupt GTP binding. (C) In order to investigate the ability of MASL1 to hydrolyse GTP both forms of MASL1 and a known GTPase RAC1 were expressed in HEK293T cells. The FLAG-tagged proteins expressed equally (a-FLAG) and could also be eluted in a very pure fashion from FLAG-agarose beads with 150 ngÁlL À1 FLAG peptide (Coomassie). (D), (E) GTPase activity was observed over a period of 0-120 min displaying activity only by RAC1. No GTPase activity was detected in either form of MASL1 under these experimental conditions. Untransfected HEK293T cells, treated with transfection reagent polyethylenimine (PEI) only, were used as control samples (CON) for all analyses. complexes was examined (Fig. 3C). Cell lysates, as well as the mobile phase of the chromatography column, were pre-loaded with 200 lM GTP or GDP. As shown in Fig. 3C, the formation of the LMW complex was modulated by the presence of guanosine nucleotides. Specifically, GDP shifted the equilibrium between A C D B Fig. 3. GTP binding properties of MASL1 regulate complex formation. (A) SEC was used to analyse whole cell lysates of MASL1 transfected HEK293T cells. Wild type MASL1 was found in two complexes, an HMW complex and an LMW complex of~600 and 200 kDA respectively. In contrast, K422A MASL1 was only found as an HMW complex; representative dot blots are shown. (B) Fractions containing the HMW or LMW complexes were probed for HA showing that full-length MASL1 can be found in both. (C) To investigate the effect of GTP/GDP upon complex formation of MASL1, cell lysates and the SEC column were incubated with 200 lM GTP or GDP prior to analysis. No effect was seen in the K422A mutant. In contrast, wild type MASL1 complexes shifted between HMW and LMW upon GDP or GTP treatment respectively as shown in the graphs. Ratios between HMW and LMW peaks further emphasize this shift: in untreated conditions the ratio between HMW and LMW is 1 : 0.94; upon GDP treatment the equilibrium shifts towards HMW resulting in a ratio of 1 : 0.34; GTP treatment results in a shift towards LMW as seen by a ratio of 1 : 1.28. 
(D) As an alternative approach to SEC, BN PAGE also showed that wild type MASL1 and the K422A mutant form different complexes. Data are presented as the mean of three individual experiments AE SEM. LMW and HMW complex towards the latter, whereas incubation with GTP favoured formation of the LMW form of MASL1. Neither GTP nor GDP had any impact on the complex formed by the nucleotide binding dead form of MASL1. As the ability of MASL1 to hydrolyse GTP is still unclear, a SEC run was carried out in the presence of a non-hydrolysable guanosine trinucleotide (GMppCp, Fig. S3). This reveals a similar pattern to that observed for wild type MASL1 in the presence of GTP. Taken together, these data suggest that the guanosine nucleotide binding status of MASL1 has a major impact on complex formation by this protein. Heat shock protein 60 (HSP60) interacts with MASL1 The presence of MASL1 in two cellular complexes of 600 kDa and~200 kDa suggests that MASL1 may form an active complex with other proteins in the cell. To identify such potential binding partners, HA-tagged MASL1 protein overexpressed in HEK293T cells was immunoprecipitated. Co-purifying proteins were isolated by SDS/PAGE (Fig. 4A) and the corresponding bands were subjected to mass spectrometry analysis (Fig. 4B). A representative image (Fig. 4A) shows that MASL1 (band a) was successfully immunoprecipitated as well as additional bands representing possible interactors/binding partners, which were not present in the control condition (untransfected). Two major bands of 60 and~40 kDa were analysed and identified as HSP60 and HSP70 respectively (Fig. 4B). The band at 60 kDa (band b) was identified as HSP60 and displayed a high correlation score (30%). The band at 40 kDa (band c) was identified as HSP70 and displayed a low correlation score. Due to the divergence in the relative molecular mass of this band and the predicted mass of HSP70, in combination with the low correlation score, subsequent experiments focused on HSP60. To validate these mass spectrometry findings, the SEC fractions of HEK293T cell lysates expressing MASL1 (either wild type or K422A) were assessed for the presence of HSP60. These data reveal HSP60 as being present in the same fractions as MASL1 (Fig. 5A). In co-immunoprecipitation experiments followed by immunoblot for HSP60, MASL1 (both wild type and mutant) was found to co-purify with HSP60 ( Fig. 5B). Cellular localization of MASL1 Given the impact of guanosine nucleotide occupancy on complex formation by MASL1, the cellular localization of wild type and K422A MASL1 was assessed by immunocytochemistry. To achieve this, HEK293T cells transfected with HA-tagged wild type and K422A forms of MASL1 were analysed by immunocytochemistry and confocal microscopy. Both wild type and K422A MASL1 were expressed at similar levels and displayed broad cytoplasmic localization (Fig. 6). ROCO proteins have been suggested to participate in vesicle cycling; in particular LRRK2 regulates synaptic vesicle trafficking and LRRK1 epidermal growth factor receptor trafficking [19][20][21]. To determine whether MASL1 preferentially associates with any specific organelles/structures within the cell, co-localization studies were performed with the lysosomal marker LAMP1, the cytoskeletal marker phalloidin and complex V as a marker of mitochondria. Although both forms of MASL1 occasionally co-localized with the lysosomal marker LAMP1, no consistent co-localization with the markers used in this study was observed (Fig. 6). 
Cytotoxicity of MASL1 in HEK293T cells Overexpression of the ROCO proteins DAPK1 and LRRK2 is associated with cytotoxicity in cellular models [22][23][24]. In common with both of these proteins, MASL1 has been implicated in cancer suggesting a possible role in the control of cell death or proliferation. To examine cellular toxicity of MASL1 upon transfection in HEK293T cells, two approaches were used: a 3-(4,5dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and fluorescence-activated cell sorting (FACS) to allow identification of viable, necrotic and apoptotic cells. Expression of wild type MASL1 resulted in a significant decrease in viable cells compared with controls, an effect that was exacerbated by the expression of the nucleotide binding dead form of the protein (Fig. 7). To confirm these observations with an independent technique, FACS analysis of cells transfected with MASL1 was carried out. Approximately 5000 transfected cells were analysed per sample (n = 3 per genotype). Untransfected control cells were used to set up gating of transfected cells (p4) (Fig. S4). Analysis of MASL1 positive cells showed a similar trend towards increased cell death in K422A MASL1 transfected cells (p4) as observed with the MTT assay (Fig. 8). This increase predominantly consisted of necrotic cells (Fig. 8B,C). No difference of either apoptosis or necrosis could be observed in the untransfected cell population (p3) or with control cells (Fig. S4). Discussion This study provides an insight into the biology of MASL1, the smallest member of the human ROCO protein family. The data presented here demonstrate that MASL1 possesses an ROC domain that is capable of binding GTP, and that guanosine binding properties of MASL1 are important for complex formation and for the cytotoxic properties of this protein. Two approaches were used to determine the GTP binding properties of MASL1. First, a competitive GTP binding assay in which MASL1 was incubated with or without GTP free in solution along with immobilized GTP on sepharose beads showed that the wild type protein is able to bind GTP and that this interaction is competed away by incubation with a molar excess of GTP. Second, a targeted genetic disruption of a conserved lysine (K422) important for nucleotide binding identified by sequence alignment with DAPK1 and LRRK2 resulted in complete loss of GTP binding. These data are consistent with results obtained for the other human ROCO proteins. Protein binding to GTP immobilized on sepharose beads is a useful technique to assess the ability of MASL1 to bind GTP; however, it does not provide any information about the ability of this protein to hydrolyse GTP. MASL1 GTPase activity was therefore assessed using previously published protocols [14]. Using the small GTPase RAC1 as a positive control, FLAG-tagged MASL1 free in solution displayed no detectable GTPase activity above background. While these findings suggest that MASL1 has very low endogenous GTPase activity, it is possible that cofactors such as GTPase activating proteins are required for MASL1 to hydrolyse GTP at detectable rates in vitro. The loss of GTP binding has the potential to disrupt the stability of MASL1; however, analysis of steady state expression levels of the wild type and K422A forms of the protein demonstrated that both forms have equivalent expression levels, suggesting that loss of nucleotide binding does not result in a major alteration in folding. This is further supported by SEC Fig. 7. 
Increased cellular toxicity in the GTP dead form of MASL1. Expression of wild type and K422A MASL1 resulted in significantly increased cellular toxicity compared with untransfected control cells. In addition, cellular toxicity was more severe in the GTP dead form of the protein compared with wild type MASL1. Data are presented as the mean of three individual experiments AE SEM. One-way ANOVA followed by the Bonferroni post hoc test was used to determine potential statistical significance, with P values ≤ 0.05 considered significant. analysis in the presence of EDTA (Fig. S1), which demonstrates that disruption of nucleotide binding results in similar changes in complex formation of MASL1 as observed for the K422A mutant. Fluorescence data from the wild type and K422A forms of the protein indicate that both adopt a folded conformation when purified and react in an identical fashion upon denaturation. It is also notable that the K422A mutant protein displayed a similar pattern of cellular localization compared with wild type MASL1, suggesting that the nucleotide binding dead form of the protein is not destabilized to the point where it is targeted for degradation. One possible consequence of amino acid substitutions impacting on active site binding such as the K422A mutation is an increased co-expression of chaperones such as HSP60 in order to aid protein folding. However, when co-immunoprecipitation of HSP60 was examined no significant difference was observed between wild type and K422A MASL1, suggesting that the lysine to alanine mutation at 422 does not destabilize the protein. The paucity of data relating to MASL1 precludes direct comparisons with other MASL1 studies. Such comparisons are possible, however, with the other members of the ROCO protein familyproviding an opportunity to examine whether MASL1 shares features with its close paralogues. A number of studies have demonstrated that there is a close link between the enzymatic function or nucleotide binding of the ROCO proteins and the complexes that they form, and that a disruption of guanosine nucleotide binding can impair dimerization as is the case for LRRK2 [17,25]. In contrast to LRRK2, loss of guanosine nucleotide binding does not appear to disrupt dimerization of DAPK1 but more probably controls the equilibrium between dimeric or multimeric forms of DAPK1 or controls recruitment of cofactors [10,12]. The SEC data from the experiments described above reveal that MASL1 can be found in two distinct complexes, one of~600 kDa and a smaller one with an apparent molecular mass of~200 kDa (Fig. 3). While both wild type and mutant forms of MASL1 eluted at 600 kDa, only the wild type form of MASL1 (i.e. able to bind guanosine nucleotides) eluted at 200 kDa. Although the precise nature of the two complexes remains to be determined, these data support a model where the LMW complex represents monomeric MASL1 associated with a binding partner/ partners of approximately 50-80 kDa, as a dimeric MASL1 complex should migrate at a higher apparent molecular mass of > 220 kDa. These data appear to conflict with a recent model for ROCO protein function [26]; however, an important caveat to the data reported above is the low resolution of both SEC and BN analyses of cellular complexes and so it is impossible to exclude the possibility that this lower peak consists in part of dimeric MASL1. This highlights the need to derive more detailed structural information for the ROCO proteins, including MASL1. 
An affinity purification coupled with mass spectrometry analysis revealed that MASL1 strongly interacts with HSP60. Although the interaction could be influenced by the overexpression of MASL1, this result suggests that MASL1 probably requires the aid of chaperones to be correctly folded. It is noteworthy that LRRK2 has been shown to interact with a chaperone, HSP90, and disruption of this interaction results in substantial loss of LRRK2 stability [27]. Further studies are required to determine whether additional interacting proteins are responsible for the observed complex formation between wild type and the nucleotide binding dead form of MASL1. Very little is known about the cellular role of MASL1. Two recent publications have highlighted a potential role in regulating inflammatory response, and data from both DAPK1 and LRRK2 suggest that these proteins may be involved in pathways regulating cell death/proliferation. In light of the link between MASL1 and cancer, the ability of this protein to bind guanosine nucleotides (and the impact of this upon its biology and the complexes that MASL1 forms) may be important for processes such as proliferation, cell death or inflammatory signalling. Based upon previous experiments linking ROC domain biology to the toxicity of LRRK2 and DAPK1, the impact of overexpressing MASL1 and the K422A mutant of MASL1 was evaluated. Wild type MASL1 displayed a cytotoxic impact in HEK293T cells as measured by MTT assay a toxicity that was accentuated to a small but significant extent by the loss of guanosine nucleotide binding. These data are in contrast to results for LRRK2 and DAPK1, where loss of guanosine nucleotide binding decreases the cytotoxic phenotype observed upon overexpression of these proteins. This result was further supported by data from FACS analysis of transfected cells. Following on from these data, it is noteworthy that the cell death induced by MASL1 expression is almost exclusively necrotic, providing a potential insight into the pathways that MASL1 is involved in. Although expression of wild type MASL1 alone results in cytotoxicity (as is observed for both LRRK2 and DAPK1), these data support a role for the switch from GTP to GDP active site occupancy in modulating this toxic impact, with loss of guanine nucleotide binding (and shift to the higher molecular mass complex) accentuating toxicity. Taken together, the data from this study are consistent with a model for MASL1 function where guanosine nucleotide binding/occupancy controls complex formation and the downstream consequences of MASL1 activity (Fig. 9). Based upon the conserved role of guanosine nucleotide binding proteins as molecular switches within the cell, MASL1 may act to regulate cell death/ proliferation based upon its GTP binding. One potential scenario is that the GTP-bound state represents the 'on', or active, state with MASL1 competent to bind its cellular effector(s), while losing the affinity for its effector in the GDP-bound 'off' state results in accumulation within the HMW pool. In this scenario, the HMW complex could represent an inactive, readily releasable pool of oligomeric MASL1. Future information on the identity of effector proteins/binding partners could provide important clues to the physiological function of MASL1, as well as presenting opportunities to test whether the model in Fig. 9 is correct. 
For example, the identification of candidate guanine nucleotide exchange factors or GTPase activating proteins that interact with MASL1 may shed light on how the transition from one complex to another is managed within the cell. It should be noted, however, that basal toxicity is observed independent of alterations in the ROC domain of MASL1, and it is possible that cytoxicity linked to MASL1 does not rely on the GTP/GDP-bound status of the holoprotein and is independent of the ROC domain. Further investigations into MASL1 function will reveal which scenario is correct. With regard to the cellular role of MASL1, it is of interest that a subset of human leucine-rich repeat proteins function as pathogen responsive genes in primary macrophages, and MASL1 in particular was shown to regulate Toll-like receptor signalling and the Raf/MEK/ERK signalling pathway in CD34(+) cells [28,29]. As LRRK2 has also been robustly linked to immune function [30][31][32], one intriguing possibility is that immune response regulation represents a converging function of human ROCO proteins and that alteration of their immune-related function (i.e. mutations) leads to disease [33]. In summary, the data presented in this study represent the first report of the biochemical and cellular properties of MASL1. The results demonstrate that MASL1 is a GTP binding protein and that its GTP binding properties act to regulate complex formation and cytotoxicity ex vivo. Further studies are required to determine the exact pathways that this protein is involved in, and to understand its basic cellular function and possible contribution towards pathology. Importantly, a more detailed understanding of this so far little studied protein has the potential to contribute to a greater comprehension of how the ROCO proteins function in a cellular context. Materials and methods Constructs HA and FLAG epitope tagged wild type MASL1 constructs were obtained from Genecopoeia (http://www.genecopoe ia.com). To generate mutant forms of MASL1 site directed mutagenesis was carried out using the QuikChange II mutagenesis kit (Agilent, Santa Clara, CA, USA) according to the manufacturer's instructions. Constructs were sequenced prior to use and primer sequences used for the mutagenesis are available from the authors upon request. Cell culture and MASL1 expression HEK293T cells were obtained from ATCC Ò (line identifier CRL-1573) and grown under standard tissue culture conditions at 37°C in 5% CO 2 in DMEM (Life Technologies, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (Life Technologies). To transfect cells 20 lg of plasmid DNA was dissolved in 500 lL DMEM before 40 lL polyethylenimine (PEI, Polysciences, Warrington, PA, USA) was added and the solution was mixed and left to incubate for a minimum of 15 min at room temperature. The DNA/ PEI solution was then added to cells in 10-15 cm Petri dishes with experimental procedures being carried out 48 h thereafter. PEI solution treated HEK293T cells were used throughout this study as control samples. All incubations, washes and other treatments were kept consistent between MASL1 and control samples. GTP pulldown assay Forty-eight hours after transfection HEK293T cells were washed twice with ice-cold phosphate buffered saline (NaCl/ P i ) prior to lysis. This was achieved by harvesting cells in cell lysis buffer (Cell Signaling Technology, Danvers, MA, USA) containing 1 9 complete protease inhibitor cocktail (Roche, Indianapolis, IN, USA). 
Subsequently cell lysates were rotated for 30 min at 4°C and then spun at 10 000 g for 10 min to remove cell debris. Next cell lysates were incubated with 35 lL sepharose beads for 30 min at 4°C and rotation. Finally, cell lysates were again centrifuged at 10 000 g, 4°C for 10 min to remove sepharose beads, with the clarified lysates used in subsequent experiments. GTP-conjugated sepharose beads (Sigma, St Louis, MO, USA) were washed with lysis buffer and incubated for 1 h at 4°C and rotation with 0.1 mgÁmL À1 BSA. GTP beads were then added to cell lysates for 2 h at 4°C and rotation AE 10 mM GTP (Sigma) before beads were washed three times with lysis buffer and denatured in 4 9 SDS loading buffer (NuPage SDS sample buffer, Life Technologies) with 5% b-mercaptoethanol at 100°C for 10 min. Denatured samples were loaded onto 4-12% Bis Tris gels (Life Technologies). Simultaneously, to verify MASL1 expression 10 lg per sample as determined by bicinchoninic acid assay (ThermoFisher, Rockland, IL, USA) according to the manufacturer's instructions were also loaded onto Bis Tris gels (Life Technologies). Proteins were then transferred to poly (vinylidene difluoride) membrane (Millipore, Billerica, MA, USA), blocked with 5% milk in NaCl/ P i -Tween (NaCl/P i + 1% Tween-20) and probed with primary antibodies at 4°C overnight (1 : 2000 rat anti-HA, Roche; 1 : 5000 mouse anti-b-actin, Sigma). The next day membranes were washed three times in NaCl/P i -Tween and incubated for 1 h with secondary antibodies in 5% milk in NaCl/P i -Tween at room temperature followed by another three washes and incubation with Pierce ECL substrate. Finally membranes were exposed to Kodak biomax films (Kodak, Rochester, NY, USA) and developed on a Kodak developer according to the manufacturer's instructions. GTPase assay Cell lysis was carried out as described above with cell lysis buffer (Cell Signaling Technology). Wild type MASL1, K422A MASL1 and RAC1 supernatants were incubated with anti-FLAG beads (Sigma) overnight at 4°C. The next day beads were washed 2 9 in each of five wash buffers (20 mM Tris, 150-500 mM NaCl, 0.02-1% Triton X-100) before being eluted in elution buffer (20 mM Tris, 150 mM NaCl, 0.02% Triton X-100) with 150 ngÁlL À1 FLAG peptide (Sigma) for 45 min at 4°C. Eluted protein was then incubated with assay buffer (20 mM Hepes, pH 7.2, 2 mM MgCl 2 , 1 mM dithiothreitol and 0.05% BSA) and [a 32 P] GTP (5 lCi; Perkin Elmer, Waltham, MA, USA) was added to each reaction. Samples were incubated at 37°C with vigorous shaking and 2 lL aliquots were removed at time points from 0 to 120 min and spotted onto TLC plates (Sigma). Samples were then subjected to rising TLC under 1 M formic acid and 1.2 M LiCl. After drying the plates they were exposed to Kodak biomax films overnight and developed the following day using a Kodak developer. The remaining samples were denatured, run on SDS/ PAGE, transferred onto poly(vinylidene difluoride) membranes and immunoblotted as described above. Intrinsic fluorescence Fluorescence emission spectra were recorded on a Cary Eclipse fluorescence spectrophotometer (Agilent Technologies) using the Cary Eclipse program. Sample measurements were carried out using an optical path length of 10 mm. Fluorescence spectra were obtained using an excitation wavelength of 288 nm, with an excitation bandwidth of 5 nm. Emission spectra were recorded between 300 and 400 nm at a scan rate of 30 nmÁs À1 . 
Spectra were acquired using 100 nM proteins in 20 mM Tris/HCl buffer (pH 7.5), 150 mM NaCl and 0.02% Tween-20. Immunocytochemistry To visualize expression of MASL1, transfected HEK293T cells were grown on glass coverslips (VWR, Radnor, PA, USA) for 48 h after transfection before being fixed with 4% PFA at room temperature for 10 min. Fixed cells were then blocked for 30 min in 15% normal goat serum (NGS) in NaCl/P i + 0.1% Triton X-100 before incubation with primary antibodies in 10% NGS in NaCl/P i + 0.1% Triton X-100 (1 : 2000 rat anti-HA, Roche; 1 : 1000 mouse anti-LAMP1; 1 : 500 rabbit anti-complex V) at 4°C overnight. Next cells were incubated with fluorescently labelled secondary antibodies in 10% NGS in NaCl/P i + 0.1% Triton X-100 (goat anti-rat 488, goat anti-mouse IgG1 568; goat anti-rabbit 568) before being counterstained with DAPI at 1 : 2000 in NaCl/P i . To detect F-actin Alexa Fluor Ò 633 phalloidin was used at 1 : 300 (Life Technologies) before cells were mounted onto uncoated glass slides (VWR) using Fluoromount G mounting medium (Southern Biotech, Birmingham, AL, USA). Images were obtained on an LSM 710 confocal microscope and compiled using Adobe Photoshop. Size exclusion chromatography MASL1 transfected HEK293T cell lysates obtained as described above were injected and separated on a Superose 6 10/300 column (GE Healthcare, Little Chalfont, Buckinghamshire, UK). The column was pre-equilibrated with buffer (20 mM Tris/HCL, pH 7.5, 150 mM NaCl and 0.07% Tween-20 or with the same buffer containing 200 lM GTP, GDP or GMppCp) and used at a flow rate of 0.5 mLÁmin À1 . Elution volumes of standards were 12 mL for thyreoglobin (669 kDA), 14 mL for ferritin (440 kDA), 15.5 mL for catalase (232 kDA) and 17 mL for aldolase (158 kDA). Elution fractions were collected and analysed using dot blots. For the analysis in the presence of guanosine nucleotides the Superose 6 10/300 column was pre-equilibrated with buffer (as mentioned above) containing 200 lM GTP, GDP or GMppCp for 60 min. Cell lysates were incubated with the same concentration of nucleotides prior to analysis by SEC. Dot blot 1 lL of each collected SEC fraction was spotted onto a nitrocellulose membrane. Once dried the membrane was blocked in NaCl/P i -Tween with 5% milk for 1 h before being incubated with 1 : 2000 rat anti-HA and secondary anti-rat (Sigma) both in 5% milk. Membranes were then developed as described above. Mass spectrometry analysis MS analysis was carried out by the Biological Mass Spectrometry Centre at the Institute of Child Health, University College London (http://www.ucl.ac.uk/ich/services/lab services/mass_spectrometry). For this analysis MASL1 transfected HEK293T cells were lysed, immunoprecipitated and washed as described above (GTPase assays) before denatured samples were loaded and run on 4-12% Bis Tris gels (Life Technologies), stained with Imperial Protein Stain (Thermo Scientific) and excised from gels (Fig. 4A) for MS analysis by MALDI-QTOF MS (Waters Corporation, Milford, MA, USA). Cell toxicity assay Forty-eight hours post-transfection of HEK293T cells with MASL1 (wild type and mutant forms) MTT (Sigma Aldrich) was added at a final concentration of 500 lgÁmL À1 for 3 h. The medium was then discarded before formazan crystals, accumulated within energetically active cells, were dissolved in pure dimethylsulfoxide. The assay was carried out in 96-well plates, which were then analysed in a multiwell plate reader measuring the absorbance of every single well at a wavelength of 570 nm. 
Results are shown as percentage cell viability after transfection in comparison with transfection reagent only treated cells. Fluorescence-activated cell sorting To establish cellular toxicity of wild type and K422A MASL1 in HEK293T cells an apoptotic/necrotic/healthy cells detection kit (Promokine, Heidelberg, Germany) was used, following the manufacturer's instructions. To detect MASL1 transfected cells a fluorescently labelled FLAG-tag antibody (DYKDDDDK-tag antibody, Alexa Fluor 647 conjugate; Cell Signaling) was used. Cells were stained for 30 min in NaCl/P i containing 0.1% Triton X-100 with FLAG-tag antibody at 1 : 500. Post staining cells were resuspended in cold NaCl/P i containing 0.1 mM EDTA to minimize clumping and passed through a cell strainer (70 lm) (BD Biosciences, San Jose, CA, USA). Cells were sorted on a BD Bioscience FACS LSRII running FACSDIVA6.3 software. Fluorescently labelled cells were excited using the red laser (560 nm), with emission analysed using the PE PMT via a 505 long pass and 525/50 band pass filter, the green laser (488 nm) and analysed using the FITC PMT via a 505 long pass and a 525/50 band pass filter, or a violet (405 nm) laser and the Pacific Blue PMT with a 450/50 band pass filter. Debris, doublets and dead cells were excluded from analysis (Fig. 8, p2 represents all cells analysed). Approximately 27 000 FLAG positive cells were analysed for both wild type and K422A MASL1. Statistical analysis SPSS software (IBM, Armonk, NY, USA) was used for statistical comparison. To assess potential significant differences one-way analysis of variance analysis (ANOVA) with Bonferroni post hoc correction was used. Significance was considered to be < 0.05. Supporting information Additional supporting information may be found in the online version of this article at the publisher's web site: Fig. S1. Nucleotide binding regulates complex formation. Fig. S2. Representative fluorescence spectra of purified wild type and K422A MASL1 in the presence and absence of 6 M GdHCl using an excitation wavelength of 280 nm. Fig. S3. SEC analysis of wild type MASL1 in the presence of 200 lm GTP or GMppCp. Fig. S4. FACS analysis quantification.
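For readers who want to reproduce the style of analysis described above (percentage viability relative to transfection-reagent-only controls, followed by one-way ANOVA with a Bonferroni post hoc comparison), a minimal sketch is given below. It assumes Python with NumPy/SciPy rather than the SPSS workflow actually used, and all absorbance values are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical MTT absorbances at 570 nm (n = 3 wells per condition).
a570 = {
    "control (PEI only)": np.array([0.82, 0.79, 0.84]),
    "WT MASL1":           np.array([0.65, 0.61, 0.68]),
    "K422A MASL1":        np.array([0.55, 0.52, 0.58]),
}

# Express each condition as % viability of the mean control signal.
control_mean = a570["control (PEI only)"].mean()
viability = {k: 100.0 * v / control_mean for k, v in a570.items()}

# One-way ANOVA across the three conditions.
f_stat, p_anova = stats.f_oneway(*viability.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni-corrected pairwise t-tests as a simple stand-in for the post hoc test.
pairs = [("control (PEI only)", "WT MASL1"),
         ("control (PEI only)", "K422A MASL1"),
         ("WT MASL1", "K422A MASL1")]
for a, b in pairs:
    t, p = stats.ttest_ind(viability[a], viability[b])
    p_corr = min(p * len(pairs), 1.0)
    verdict = "significant" if p_corr <= 0.05 else "n.s."
    print(f"{a} vs {b}: corrected p = {p_corr:.4f} ({verdict})")
```

Bonferroni-corrected pairwise t-tests are only an approximation to the ANOVA-based post hoc procedure reported by the authors; they are shown here because they are easy to follow, not because they are equivalent.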
8,244.6
2013-11-28T00:00:00.000
[ "Biology" ]
In Vitro Induction of Erythrocyte Phosphatidylserine Translocation by the Natural Naphthoquinone Shikonin Shikonin, the most important component of Lithospermum erythrorhizon, has previously been shown to exert antioxidant, anti-inflammatory, antithrombotic, antiviral, antimicrobial and anticancer effects. The anticancer effect has been attributed to the stimulation of suicidal cell death or apoptosis. Similar to the apoptosis of nucleated cells, erythrocytes may experience eryptosis, the suicidal erythrocyte death characterized by cell shrinkage and by phosphatidylserine translocation to the erythrocyte surface. Triggers of eryptosis include the increase of cytosolic Ca2+-activity ([Ca2+]i) and ceramide formation. The present study explored whether Shikonin stimulates eryptosis. To this end, Fluo 3 fluorescence was measured to quantify [Ca2+]i, forward scatter to estimate cell volume, annexin V binding to identify phosphatidylserine-exposing erythrocytes, hemoglobin release to determine hemolysis and antibodies to quantify ceramide abundance. As a result, a 48 h exposure of human erythrocytes to Shikonin (1 µM) significantly increased [Ca2+]i, increased ceramide abundance, decreased forward scatter and increased annexin V binding. The effect of Shikonin (1 µM) on annexin V binding was significantly blunted, but not abolished by the removal of extracellular Ca2+. In conclusion, Shikonin stimulates suicidal erythrocyte death or eryptosis, an effect at least partially due to the stimulation of Ca2+ entry and ceramide formation. Eryptosis is triggered by a myriad of xenobiotics [25, , and excessive eryptosis is observed in a wide variety of clinical conditions, such as diabetes, renal insufficiency, hemolytic uremic syndrome, sepsis, malaria, sickle cell disease, Wilson's disease, iron deficiency, malignancy, phosphate depletion and metabolic syndrome [25]. The present study explored, whether and, if so, how Shikonin stimulates eryptosis. To this end, [Ca 2+ ] i , cell volume and phosphatidylserine exposure were determined in the absence and presence of Shikonin. Results and Discussion The present study explored whether the naphthoquinone Shikonin triggers eryptosis, the suicidal erythrocyte death characterized by cell shrinkage and phosphatidylserine translocation. As both hallmarks of eryptosis are stimulated by the increase of cytosolic Ca 2+ activity ([Ca 2+ ] i ), the effect of Shikonin on [Ca 2+ ] i was tested in a first series of experiments. To this end, human erythrocytes were loaded with Fluo3-AM and the Fluo 3 fluorescence determined by flow cytometry. Prior to the determination of Fluo 3 fluorescence, the erythrocytes were incubated for 48 h in Ringer solution without or with Shikonin (0.1-1 µM). As illustrated in Figure 1, the exposure to Shikonin was followed by an increase of Fluo 3 fluorescence, an effect reaching statistical significance at a 1-µM Shikonin concentration. Accordingly, Shikonin increased cytosolic Ca 2+ concentration. (B) Arithmetic means ± SEM (n = 8) of the Fluo 3 fluorescence (arbitrary units) in erythrocytes exposed for 48 h to Ringer solution without (white bar) or with (black bars) Shikonin (0.1-1 µM). 
*** (p < 0.001) indicates significant difference from the absence of Shikonin (ANOVA); (C) Original dot blots of the Fluo 3 fluorescence as a function of forward scatter following exposure for 48 h to Ringer solution without and with the presence of 1 µM Shikonin; (D) Arithmetic means ± SEM (n = 4) of the Fluo 3 fluorescence (arbitrary units) in erythrocytes exposed for 1-48 h to Ringer solution without (white squares) or with 1 µM Shikonin (black squares). * (p < 0.05) indicates significant difference from the absence of Shikonin (ANOVA). ] i may trigger cell shrinkage due to the activation of Ca 2+ -sensitive K + channels and the subsequent exit of KCl and osmotically obliged water. Thus, cell volume was estimated from forward scatter in flow cytometry. As illustrated in Figure 2, a 48 h exposure to Shikonin was followed by a decrease of forward scatter, an effect reaching statistical significance at 1 µM Shikonin concentration. Accordingly, Shikonin treatment was followed by erythrocyte shrinkage. An increased [Ca 2+ ] i may further trigger cell membrane phospholipid scrambling with phosphatidylserine translocation to the erythrocyte surface. Phosphatidylserine exposing erythrocytes were identified utilizing annexin V binding as determined in flow cytometry. As illustrated in Figure 3, a 48 h exposure to Shikonin was followed by an increase of the percentage of annexin V binding erythrocytes, an effect reaching statistical significance at a 0.5 µM Shikonin concentration. Accordingly, Shikonin triggered erythrocyte phosphatidylserine translocation with the subsequent translocation of phosphatidylserine to the cell surface. Further experiments were performed to quantify the effect of Shikonin on hemolysis. The percentage of hemolyzed erythrocytes was calculated from the hemoglobin concentration in the supernatant. As illustrated in Figure 3, the percentage of hemolyzed erythrocytes did not significantly increase following Shikonin exposure. In order to test whether the Shikonin induced phosphatidylserine translocation required the entry of extracellular Ca 2+ , erythrocytes were treated for 48 h with 1 µM Shikonin in either the presence or nominal absence of extracellular Ca 2+ . As illustrated in Figure 4, the effect of Shikonin on annexin V binding was significantly blunted in the nominal absence of Ca 2+ . However, even in the nominal absence of Shikonin, the percentage of annexin V binding erythrocytes increased significantly. Thus, the effect of Shikonin was apparently not fully dependent on Ca 2+ entry. In the search for an additional mechanism, further experiments were performed in order to test whether Shikonin increased the formation of ceramide. Ceramide abundance at the erythrocyte surface was determined utilizing an anti-ceramide antibody. As illustrated in Figure 5, the exposure of erythrocytes to 1 µM Shikonin significantly increased the ceramide abundance at the erythrocyte surface. Additional experiments tested the effect of Shikonin on erythrocytic ATP concentration. As illustrated in Figure 6, a 48 h incubation of erythrocytes with 1 µM Shikonin was followed by a significant decrease of erythrocytic ATP concentration. For a comparison, the effect of glucose removal on erythrocytic ATP concentration is shown in Figure 6. 
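The hemolysis measurement mentioned above is, in essence, a normalisation of supernatant haemoglobin against a fully lysed control. A minimal sketch of that calculation is shown below; it assumes Python, an absorbance read-out for haemoglobin, and entirely hypothetical values (the exact wavelength, blank and lysis controls used in this study are not restated here).

```python
def percent_hemolysis(a_sample, a_blank, a_total_lysis):
    """Hemolysis (%) relative to a 100 % lysis control,
    estimated from haemoglobin absorbance in the supernatant."""
    return 100.0 * (a_sample - a_blank) / (a_total_lysis - a_blank)

# Hypothetical supernatant absorbances (arbitrary units).
a_blank = 0.02        # Ringer solution only
a_total = 1.50        # erythrocytes fully lysed in distilled water
readings = {
    "Ringer, 48 h":        0.08,
    "Shikonin 1 µM, 48 h": 0.10,
}

for label, a in readings.items():
    print(f"{label}: {percent_hemolysis(a, a_blank, a_total):.1f} % hemolysis")
```

With hypothetical numbers of this magnitude the difference between control and Shikonin-treated samples stays within a few percent, mirroring the finding above that Shikonin did not significantly increase hemolysis.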
The Shikonin-induced cell shrinkage is presumably due to the increase of [Ca 2+ ] i , which activates Ca 2+ -sensitive K + channels [25], leading to K + exit, cell membrane hyperpolarization, Cl − exit and, thus, cellular loss of KCl with osmotically obliged water [26]. The cellular loss of KCl counteracts erythrocyte swelling, which may jeopardize the integrity of the erythrocyte membrane. Excessive erythrocyte swelling leads to hemolysis with the subsequent release of hemoglobin, which subsequently undergoes glomerular filtration and precipitation in the acidic tubular lumina [72]. Stimulation of phosphatidylserine translocation following Shikonin treatment is similarly a result of Ca 2+ entry. Accordingly, the effect of Shikonin was significantly blunted in the nominal absence of Ca 2+ . Removal of Ca 2+ decreased phosphatidylserine translocation even in the absence of Shikonin, indicating that spontaneous eryptosis following the exposure of erythrocytes to Ringer without Shikonin was similarly, in part, caused by cytosolic Ca 2+ . However, significant phosphatidylserine exposure was observed in the presence of Shikonin even in the nominal absence of Ca 2+ , indicating that Shikonin was effective by an additional mechanism. As a matter of fact, Shikonin triggered the formation of ceramide, which, in turn, fosters phosphatidylserine translocation [25]. Ceramide is formed by a sphingomyelinase, which is, in turn, activated by a platelet activating factor [25]. Ceramide fosters suicidal death similarly in nucleated cells [73]. It is effective by potentiating proapoptotic signaling [73]. Shikonin further decreases erythrocytic ATP concentration, another known trigger of eryptosis [25]. The molecular mechanism underlying phosphatidylserine translocation has not yet been identified [25]. Recently, Anoctamin 6 (Ano6; TMEM16F gene) has been suggested to mediate cell membrane scrambling [74]. The molecule may function as a Cl − channel, a Ca 2+ -regulated nonselective Ca 2+ permeable cation channel, a scramblase mediating phosphatidylserine translocation upon the increase of cytosolic Ca 2+ and a regulator of cell membrane blebbing and microparticle shedding [74]. However, Ano6 is inhibited by tannic acid [75], which itself triggers phosphatidylserine translocation in erythrocytes [60]. Thus, the role of Ano6 in the regulation of phosphatidylserine translocation in the erythrocyte membrane remains elusive. Stimulation of eryptosis may be favorable, as it allows the elimination of defective erythrocytes prior to hemolysis [25]. The phosphatidylserine exposure at the cell surface may be particularly important during the course of malaria, as infected phosphatidylserine exposing cells may be recognized and, thus, rapidly removed from circulating blood. [76]. Infected erythrocytes undergo eryptosis, as the intraerythrocytic parasite activates several ion channels, including the Ca 2+ -permeable erythrocyte cation channels [77,78]. Activation of the channels in the host cell membrane is required for the intraerythrocytic survival of the pathogen [77,78], as the channels provide the pathogen with nutrients, Na + and Ca 2+ , and they mediate the disposal of intracellular waste products [78]. By the same token, the Ca 2+ entry through the Ca 2+ -permeable cation channels triggers eryptosis [76], which is, in turn, followed by the rapid clearance of the infected erythrocytes from circulating blood [25]. 
Ca 2+ entry and Ca 2+ -induced eryptosis thus lead to elimination, not only of the infected erythrocyte, but also of the pathogen [76]. Accordingly, genetic disorders predisposing to accelerated eryptosis, such as the sickle-cell trait, the beta-thalassemia trait, homozygous Hb-C and G6PD deficiency [25], lead to a relatively mild clinical course of malaria following infection with Plasmodia [79][80][81]. Moreover, several clinical conditions, such as iron deficiency [82], and eryptosis stimulating drugs, such as lead [83], chlorpromazine [84] or NO synthase inhibitors [85], have been shown to favorably influence the clinical course of malaria. It may be worth considering whether Shikonin similarly decreases parasitemia in malaria. At least in theory, Shikonin and further proeryptotic substances could be employed for the treatment of malaria. However, the applicability of Shikonin in the treatment of malaria has not been tested, and its clinical use may depend on further properties and side effects not studied here. Excessive stimulation of eryptosis may lead to anemia. Phosphatidylserine at the surface of eryptotic cells triggers the phagocytosis of the cells and, thus, leads to the rapid removal of the suicidal erythrocytes from circulating blood [25]. Anemia develops, if the accelerated clearance of erythrocytes during stimulated eryptosis outcasts the formation of new erythrocytes [25]. Phosphatidylserine exposing erythrocytes may further interfere with microcirculation [86][87][88][89][90][91]. The phosphatidylserine exposing cells adhere to endothelial CXCL16/SR-PSO [87] and may stimulate blood clotting and thrombosis [86,92,93]. Anemia and compromised microcirculation may, at least in theory, limit the use of Shikonin. The substance has been proposed to counteract oxidative stress [1], inflammation [1][2][3], thrombosis [1,2], infections [1,2] and cancer [2,[5][6][7][8]. Moreover, Shikonin has been used to support wound healing [1,2,5]. However, the toxicity of Shikonin is still ill-defined [1]. The present observations point to a novel potentially toxic effect of Shikonin. The local application of the substance, such as in creams and ointments [1], is not expected to trigger eryptosis. Systemic application of the substance may, however, jeopardize erythrocyte survival and microcirculation, thus potentially limiting the use of the substance. This may particularly be true in patients suffering from diseases with enhanced eryptosis [25] or receiving other eryptosis-inducing xenobiotics [25, . After oral and intramuscular administration, Shikonin is rapidly absorbed and has a half-life in plasma of 8.8 h and a distribution volume of 8.91 L/kg [1]. Several approaches have improved the stability and water solubility of Shikonin, thus providing the basis for the systemic application of the substance [1]. Future studies will be required to define the impact of Shikonin on erythrocyte survival and microcirculation following systemic application of Shikonin. Analysis of Annexin V Binding and Forward Scatter After incubation under the respective experimental condition, a 50 µL cell suspension was washed in Ringer solution containing 5-mM CaCl 2 and then stained with annexin V FITC (1:200 dilution; ImmunoTools, Friesoythe, Germany) in this solution at 37 °C for 20 min under protection from light. 
In the following, the forward scatter (FSC) of the cells was determined, and the annexin V fluorescence intensity was measured with an excitation wavelength of 488 nm and an emission wavelength of 530 nm on a FACS Calibur (BD, Heidelberg, Germany). Determination of Ceramide Formation For the determination of ceramide, a monoclonal antibody-based assay was used. After incubation, cells were stained for 1 hour at 37 °C with 1 µg/mL of anti-ceramide antibody (clone MID 15B4, Alexis, Grünberg, Germany) in PBS containing 0.1% bovine serum albumin (BSA) at a dilution of 1:5. The samples were washed twice with PBS-BSA. Subsequently, the cells were stained for 30 minutes with polyclonal fluorescein isothiocyanate (FITC) conjugated goat anti-mouse IgG and IgM-specific antibody (Pharmingen, Hamburg, Germany) diluted 1:50 in PBS-BSA. Unbound secondary antibody was removed by repeated washing with PBS-BSA. The samples were then analyzed by flow cytometric analysis with an excitation wavelength of 488 nm and an emission wavelength of 530 nm. ATP Content For the determination of the intracellular ATP concentration, erythrocytes were lysed in distilled water and proteins were precipitated by HClO 4 (5%). After centrifugation, an aliquot of the supernatant (400 µL) was adjusted to pH 7.7 by the addition of saturated KHCO 3 solution. All manipulations were performed at 4 °C to avoid ATP degradation. After dilution of the supernatant, the ATP concentration of the aliquots was determined utilizing the luciferin-luciferase assay kit (Roche Diagnostics, Mannheim, Germany) and a luminometer (Berthold Biolumat LB9500, Bad Wildbad, Germany) according to the manufacturer´s protocol. Statistics Data are expressed as arithmetic means ± SEM. As indicated in the figure legends, statistical analysis was made using ANOVA with Tukey's test as the post-test and the t-test as appropriate. n denotes the number of different erythrocyte specimens studied. Since different erythrocyte specimens used in distinct experiments are differently susceptible to the triggers of eryptosis, the same erythrocyte specimens have been used for control and experimental conditions. Conclusions Shikonin stimulates Ca 2+ entry and triggers ceramide formation, which, in turn, leads to shrinkage and phosphatidylserine translocation of erythrocytes. Accordingly, Shikonin triggers eryptosis, the suicidal erythrocyte death.
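Because the same erythrocyte specimens are used for the control and experimental conditions (as noted in the Statistics paragraph above), a paired analysis is a natural way to handle the donor-to-donor variability in eryptosis susceptibility. The sketch below illustrates this with a paired t-test in Python/SciPy on hypothetical annexin V-binding percentages; it is not the exact test combination (ANOVA with Tukey's post hoc, or t-test) applied by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical % annexin V-binding erythrocytes after 48 h, one pair of values
# per erythrocyte specimen (same donor measured with and without Shikonin).
control  = np.array([1.2, 1.8, 2.1, 1.5, 2.4, 1.9, 1.3, 2.0])
shikonin = np.array([4.5, 6.1, 5.8, 4.9, 7.2, 5.5, 4.1, 6.0])   # 1 µM, 48 h

def mean_sem(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

for label, data in (("Ringer control", control), ("Shikonin 1 µM", shikonin)):
    m, sem = mean_sem(data)
    print(f"{label}: {m:.2f} ± {sem:.2f} % (mean ± SEM, n = {data.size})")

t, p = stats.ttest_rel(shikonin, control)
print(f"paired t-test: t = {t:.2f}, p = {p:.1e}")
```

Pairing removes the between-specimen spread from the comparison, which is exactly why the same specimens are reused across conditions.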
3,201.8
2014-05-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Characterisation of Solid Mine Wastes Produced during Iron Ore Mining and Processing and their Potential Environmental Impacts Solid waste deposited on the surface acts as a potential source of environmental pollution. High concentration of toxic elements in solid mine wastes can pose serious environmental risks. This study aims at characterising solid mine wastes produced due to iron ore mining and processing for their geochemistry and mineralogy. Samples were collected using a stratified sampling strategy. X-ray Fluorescence (XRF), X-ray Diffraction (XRD), and Petrographic analysis techniques were used. XRD analysis revealed a high abundance of Hematite (29%), with low amounts of Quartz stockpile samples. Berlinite (33%) amounts were high in waste dump samples, where Quartz was in high concentrations (34%) in the overburden samples. XRF analysis revealed a high amount of iron in the stockpile and waste dump, while Silica was highest in the overburden. Petrography analysis revealed major minerals in the solid mine waste: magnetite, Hematite, and Quartz with traces of mica, olivine, feldspar, and biotite. The minerals were characterised by a lamellar structure with mutual grain boundaries. Sulfide minerals that may cause acid drainage and various heavy metals were in considerable amounts. These elements have the potential of causing adverse environmental impacts hence the need for such characterisation to devise mitigation strategies and rehabilitation. Introduction The characterisation of mineral and processing wastes is a key step during mining operations to mitigate risks associated with these wastes (Jamieson et al. 2015). This is made possible by providing appropriate remediation and waste management schemes. This characterisation help reveal the nature of the mine wastes and their potential interactions with the ecosystem (Hudson-Edwards and Dold 2015). In the recent past, iron ore has been extracted in sedimentary rocks in different parts of the world over a long period (Lascelles 2011). Iron ore occurs as magnetite (Fe 3 O 4 ), Goethite (FeO(OH)) and Hematite (Fe 2 O 3 ) (Shrimali et al. 2016). Iron ore has very vast usage across the globe that makes it an essential commodity in industrialisation that has been continuously sought through mining (Cheneket 2018). Mining of iron is associated with various mineral processing operations such as blasting, grinding, and magnetic separation, which results in the production of various solid mine wastes (Ferreira and Leite 2015). According to Bett et al. (2016), signi cant iron ore deposits have been found in Marimanti in Tharaka Nithi County, Kitui County, Migori County, and Taita Taveta County, among other many areas. Mine wastes constitute the largest percentage of any wastes produced by various industrial activities (Suleman and Baffoe 2017) & (Lottermoser and Lottermoser 2003). In mining and mineral processing operations, mining wastes are sub-economic materials that contain low cutoff grades or do not contain any ore mineral (Lottermoser and Lottermoser 2003). Mining operations, metallurgical extractions, and mineral processing produces various gaseous, liquid, and solid wastes. These are generated in different processes during mining, mineral processing, and metallurgical operations, as shown in Fig. 1. During open pit mining and development, various mine wastes such as waste rocks, spoils, overburden, atmospheric emissions, and mine water may be emitted (Ferreira and Leite 2015). 
Mineral processing activities produce various processing wastes such as tailings, stockpiles of lower-grade materials, sludges, milling water, and emissions (David Meehan 2012). The chemical and physical characteristics of mine wastes may vary in relation to the geochemistry and mineralogy of the resource, the size of the crushed mineral particles, the processing chemicals, the materials handling method, the method and type of blasting techniques utilised, and the processing technology used. During materials handling at any mine, about 70% of the material handled is waste whose geochemical properties may be equivalent to the ore mined (Jamieson 2011). According to Nordstrom (2011), drainage chemistry results from the oxidation of iron sulphide minerals such as pyrite, whose reaction generates acidity and various sulfates. Most of the environmental challenges are associated with various mine wastes and their capability to react chemically with water and air (Jamieson 2011). To understand the reactions involved, it is necessary to characterise the solid mine wastes in order to determine their geochemistry and mineralogy. This was done using XRD, XRF, and petrographic techniques. Study Area This study was conducted within the Samrudha Resources iron ore mining area in Taita. Materials and Equipment Equipment and materials used in this study include a laboratory jaw crusher, a pulveriser, an X-ray diffractometer (Roller 2011), an X-ray fluorescence (XRF) machine (Langhoff et al. 2006), a petrological microscope (Dias et al. 2020), and solid mine waste samples. Sample Collection A stratified sampling strategy was employed in sample collection to allow replication and representation, with the number of increments determined per ISO 3082 (ISO 2017). Sampling pits were dug, and the samples were collected and stored in khaki bags. Approximately 200 g of each sample was taken, stored in well-labelled, sealed khaki bags, and ferried to the laboratory for preparation and analyses. Sample Preparation Representative rock samples (each 200 g) were dried in an oven at 120 °C to a constant weight and then allowed to cool at room temperature. The samples were homogenised and then pulverised to 200 microns. Elemental Analysis Pulverised sample (50 g) for each designate was mixed with 5 g of starch flux, and the resultant mixture was blended with a mortar and pestle. The resulting mixture was made into pellets using a hydraulic press. The pellets were fed into the XRF machine for analysis. Mineralogical Analysis Mineralogy was determined using both X-ray diffraction (XRD) and petrological analysis. For the XRD, samples of 50 grams were pulverised to fine sizes of between 30 and 50 µm following procedures outlined by Knorr and Bornefeld (2005). For qualitative evaluation and homogenisation of the various mineral components, the samples were ground to pass through a 45 µm mesh and then placed on the sample holder for analysis using a Bruker AXS D2 Phaser SSD16, and the relevant plots were produced with DIFFRAC.EVA software. For petrological analysis, 30 mm samples were cut from the whole samples using a diamond cutter. The samples were mounted onto glass slides, ground using a grit series (400, 600 and 800), and then covered with a slide for analysis using a Fein Optic polarising light microscope. Results and Discussion Elemental Analysis of the Solid Mine Wastes Waste Dumps The results of the elemental analysis for the waste dumps are shown in Table I.
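Because the XRF results in the following sections are quoted as oxide concentrations (Fe2O3, SiO2 and so on), it is often convenient to convert them to elemental abundances with standard stoichiometric factors. The short sketch below does this in Python; the conversion factors follow directly from molar masses, but the sample concentrations are hypothetical placeholders rather than values from Tables I-III.

```python
# Molar masses (g/mol) used to convert XRF oxide wt% into element wt%.
M = {"Fe": 55.845, "Si": 28.086, "Al": 26.982, "O": 15.999}

def element_fraction(n_metal, metal, n_oxygen):
    """Mass fraction of the metal in its oxide, e.g. Fe2O3 -> 2*Fe / (2*Fe + 3*O)."""
    return n_metal * M[metal] / (n_metal * M[metal] + n_oxygen * M["O"])

factors = {
    "Fe2O3": element_fraction(2, "Fe", 3),   # ~0.699
    "SiO2":  element_fraction(1, "Si", 2),   # ~0.467
    "Al2O3": element_fraction(2, "Al", 3),   # ~0.529
}
element_of = {"Fe2O3": "Fe", "SiO2": "Si", "Al2O3": "Al"}

# Hypothetical oxide concentrations (wt%) for a single waste-dump sample.
sample = {"Fe2O3": 62.0, "SiO2": 20.0, "Al2O3": 4.0}

for oxide, wt in sample.items():
    print(f"{oxide} {wt:5.1f} wt%  ->  {element_of[oxide]} {wt * factors[oxide]:5.1f} wt%")
```

A sample reported at 62 wt% Fe2O3, for example, carries roughly 43 wt% elemental iron, which is the figure usually compared against ore-grade cut-offs.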
The results indicated that all samples had higher amounts of Fe2O3 than of other constituents, with WD1 recording the highest amount. Silica was also present in moderate amounts. Other elements such as thorium, chromium, and vanadium were identified in traces. Heavy metals were present in the waste dump samples. The high amounts of iron identified in the waste dump samples indicate high dilution during mining. Only high-grade iron ore is mined at the Samrudha Resources mine; therefore, ore that is highly diluted with gangue minerals (silica) is disregarded as waste. The waste dumps, which are generally the blasted material, contain chemical elements such as sulphur that may be potential environmental pollution agents. Stockpiles The stockpiled materials, whose results are shown in Table II, contain low-grade iron ore and iron ore fines from the processing operations at the mine. The SP1 and SP4 samples were iron ore fines; despite having high iron concentrations, the material is unwanted and is therefore waste. These fines are stockpiled near the processing plant for disposal. The other stockpiled materials (SP2, SP3) are low-grade iron ore and small-size (50 mm) float iron occurring alluvially in the area. These materials are currently not utilised by the mining company and contain potential environmental pollutants, as revealed by the XRF analysis in Table III. Overburden The overburden material, the most abundant in the area, generally consists of a mixture of topsoil and rocks either blasted or dozed off during access to the ore before the blasting operation. Quartz and iron were present in low amounts. This is because the ore has already been processed and therefore contains low amounts of gangue minerals, causing the magnetite mineral, which contains the iron ore, to be identified in low amounts. Petrographic Microscopy of Solid Mine Wastes Petrographic analysis was performed on the solid mine wastes to examine the transparent mineral constituents of these samples. Both cross-polarised light (XPL) and plane-polarised light (PPL) were applied to identify the minerals in the samples. The petrographic microscopy results are shown in Figs. 9, 10, 11, 12, and 13. The thin section of the stockpile (SP2) sample (Fig. 9) consists of magnetite, pyroxene, garnet, quartz, biotite, and hematite. The dark opaque colour visible in the thin sections is the magnetite. The whitish colour indicates the biotite and bands of quartz assuming different shades. The brownish-grey colour indicates martitised magnetite in the sample, hence occurring as hematite. A thin section of the waste dump sample (Fig. 10) shows a dominant opaque dark grey colour, indicating traces of magnetite, and the rust-like brown-red colours of iron hydroxide (hematite). The white-pink colour indicates the quartz. Feldspar is structured in a network with intrusions of quartz. Olivine, biotite, and zircon minerals appear in between the mineral grains of magnetite. In Fig. 11, magnetite is the dominant mineral, occurring as an opaque phase, whereas hematite occurs with a brownish colour. The solid waste is characterised by a specular and simple lamellar texture with mutual grain boundaries between the various individual mineral intrusions, suggesting that the minerals break along the grain boundaries. The overburden material sample generally shows needle-like crystals of feldspar with inclusions of quartz, mica, and shades of pyroxene, as viewed in Fig. 12. The stockpile (SP1) sample (Fig. 13) contains high amounts of magnetite and hematite minerals occurring as opaque and brown shades.
There are traces of gangue minerals such as quartz, pyroxene, garnet, and olivine in the sample. The well-formed individual grains exhibit resistance to weathering. Grain boundaries also separate the different minerals of varying sizes without the individual grains interlocking. This phenomenon indicates that the minerals may react at a higher rate when exposed to atmospheric conditions such as meteoric water and air. Therefore, toxic chemical elements pose a high risk to the environment in the event of a reaction. Conclusion Most of the environmental impacts of iron ore mining and processing are associated with the subsequent release of various toxic elements from the mine wastes produced. These mine wastes tend to pose serious problems not only because of the volumes produced but also because of their areal extent and coverage. Due to the continuous pollution and contamination, solid mine wastes from iron ore mining must be isolated and treated to reduce oxidation, erosion and toxicity, or disposed of in well-designed waste dumps. Uncontrolled disposal of solid mine wastes can be associated with increased turbidity in various water sources and the release of harmful chemical elements, which may also be acidic (Bernd G. Lottermoser 2011). These contaminants travel through the soil, water, biosphere, and atmosphere to cause environmental impacts. Most of the adverse environmental effects caused by solid mine wastes arise from their tendency to react chemically with available water or air, hence producing contaminants (Jamieson 2011). These contaminants may also be inhaled or ingested by humans and react with fluids in the body (Kirsimäe 1999). Airborne dust and winds may also transport the chemical elements present in the solid mine waste to soils and surface water. Minerals and chemical elements in solid mine wastes may occur as sulphide minerals, non-sulphide minerals such as carbonates, secondary minerals formed by chemical weathering such as goethite, and chemical compounds produced by ore processing, such as oxides produced through the leaching process (Jamieson 2011). Considerable amounts of sulphide minerals such as helvine (Be₃Mn²⁺₄(SiO₄)₃S) and sphalerite (ZnS) are present in the stockpile and waste dump samples. Exposure of these sulphides to an oxygenated environment, surface runoff, and groundwater will cause them to be oxidised and produce acid waters that can be sources of contaminants such as heavy metals and metalloids. When these sulphide minerals are exposed to the oxidising atmosphere, they become chemically unstable (Parbhakar-Fox et al. 2018). During this process, chemical weathering reactions are initiated due to a lack of equilibrium with the environment. Weathering therefore proceeds, aided by the microorganisms present, meteoric waters, and atmospheric gases (Bernd G. Lottermoser 2011). The iron ore fines stockpile is at the highest risk because its increased surface area increases its potential exposure to oxidation and weathering. Petrographic analysis also revealed poorly crystalline samples with structural defects and a distorted crystal lattice. This eventually leads to a build-up of stresses in the rock structure, making the minerals susceptible to chemical attack. The mine wastes also contain phosphates, silicates, oxides, hydroxides, carbonates, and halides that can pose an environmental problem.
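The acid-generation risk flagged above is usually screened quantitatively with static acid-base accounting (ABA), a test that was not part of this study but which illustrates how the sulphide contents would be used. In the sketch below (Python), the conventional factor of 31.25 converts sulphide sulphur (wt%) into an acid potential in kg CaCO3 equivalent per tonne; the sulphur and carbonate contents are hypothetical, since the paper does not report %S, and the NPR screening thresholds are the commonly quoted ones rather than values from this work.

```python
def acid_potential(sulfide_s_wt_pct):
    """Acid potential (kg CaCO3 eq. per tonne): 31.25 kg CaCO3 neutralises
    the acid generated by 1 wt% of pyritic sulphur in 1 tonne of waste."""
    return 31.25 * sulfide_s_wt_pct

def neutralisation_potential(caco3_wt_pct):
    """Neutralisation potential (kg CaCO3 eq. per tonne) from carbonate content."""
    return 10.0 * caco3_wt_pct

# Hypothetical composition of a stockpile sample.
s_pct, caco3_pct = 0.8, 1.2

ap = acid_potential(s_pct)
np_ = neutralisation_potential(caco3_pct)
nnp = np_ - ap          # net neutralisation potential
npr = np_ / ap          # neutralisation potential ratio

print(f"AP  = {ap:.1f} kg CaCO3 eq/t")
print(f"NP  = {np_:.1f} kg CaCO3 eq/t")
print(f"NNP = {nnp:.1f} kg CaCO3 eq/t,  NPR = {npr:.2f}")
print("potentially acid forming" if npr < 1.0 else
      "uncertain" if npr < 2.0 else "likely non-acid forming")
```

Values like these would place the hypothetical sample on the acid-forming side of the usual NPR < 1 screening line, which is the kind of result that would justify the isolation and treatment measures recommended below.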
Recommendations Due to the adverse impacts of solid mine wastes produced during iron ore mining, reducing their environmental and health impacts requires proper management, skills, and expertise. These include a characterisation of all solid wastes in terms of their geochemistry and mineralogy. Understanding the interactions of the solid mine wastes with the various environmental conditions can be of key benefit to the mining company. The following are practical ways to minimise and reduce the environmental impacts caused, or likely to be caused, by solid mine wastes: Beneficiation of Low-Grade Iron Ore Low-grade iron ore that is disregarded as waste could be beneficiated further to avoid causing environmental pollution if dumped as waste. The beneficiation method could include the use of bioflotation. Utilisation of Iron Ore Dust in Cement Companies Iron ore dust can also be sold to cement manufacturing companies. This provides a readily available alternative alumino-silicate raw material that is more effective and reactive than normal clinker (Luo et al. 2016). Phytoremediation Plants can be used to clean up contaminated environments. Restoration of a dense vegetation cover can physically stabilise the dumped solid mine wastes and help reduce the pollution caused by the chemical elements. Plant species that can adapt to the atmospheric conditions in the area, such as Leucaena leucocephala (Leucaena), can absorb chemical elements (Ssenku et al. 2017). This plant species accumulates heavy metals and produces large amounts of biomass, and is therefore a potential plant for phytoremediation (Ssenku et al. 2017). Conventional Remediation Conventional methods of remediation of areas where the mine wastes are located focus mainly on physical and chemical stabilisation. This entails covering the solid mine wastes with innocuous material, generally soil or rocks from other sites, to reduce wind and water erosion. However, this may only offer a temporary solution. Complete removal of the material and soil washing to remediate the soil can be used, though it is costly (Pilon-Smits 2005).
3,430
2021-01-01T00:00:00.000
[ "Geology" ]
Early dental caries detection using a fibre-optic coupled polarization-resolved Raman spectroscopic system A new fibre-optic coupled polarization-resolved Raman spectroscopic system was developed for simultaneous collection of orthogonally polarized Raman spectra in a single measurement. An application of detecting incipient dental caries based on changes observed in Raman polarization anisotropy was also demonstrated using the developed fibre-optic Raman spectroscopic system. The predicted reduction of polarization anisotropy in the Raman spectra of caries lesions was observed and the results were consistent with those reported previously with Raman microspectroscopy. The capability of simultaneous collection of parallel-and cross-polarized Raman spectra of tooth enamel in a single measurement and the improved laser excitation delivery through fibre-optics demonstrated in this new design illustrates its future clinical potential. Introduction Dental caries (i.e. dental decay) is a common chronic infectious oral disease affecting people worldwide. Despite recent research dedicated to developing better methods for detecting dental caries, current clinical practice is still largely limited to conventional visual and visuo-tactile tools such as sharp explorers and dental radiographs. However, these conventional methods which are aimed at detecting cavitated lesions were found to have low sensitivity in detecting early caries lesions [1,2]. For example, dental radiographs are useful in detecting larger, advanced and possibly cavitated dental caries, but limited image resolution and poor radiographic contrast of early carious lesions renders radiographs insensitive for detecting early stage dental caries. In the last decade, several new caries detection methodologies have emerged. Many of these new developments are spectroscopic or optical imaging based technologies that show promising clinical potential. Digital imaging fibre-optic trans-illumination (DIFOTI) [3], quantitative light-induced fluorescence (QLF) [4], laser-induced fluorescence [5], multiphoton imaging [6], infrared thermography [7], terahertz imaging [8], optical coherence tomography (OCT) [9], Raman spectroscopy [10,11] and NIR trans-illumination imaging [12,13] were all reported as useful in detecting caries lesions. Although some of these research efforts have led to commercial products, such as DIFOTI from Electro-Optical Sciences, Inc., QLF from Inspecktor Research Systems and DIAGNOdent from KaVo, it has also been found that there is a significant learning curve for operators to achieve necessary expertise for clinical use [14]. The sensitivity and specificity of each of these diagnostic tools, however, is less than optimal, leading to some false positives and false negatives. Raman spectroscopy, a form of vibrational spectroscopy once not considered a viable optical modality for biomedical applications due to its slower speed and the need for powerful excitation sources, is becoming increasingly important in biomedical research for its high biochemical specificity, low water sensitivity and capability to work in the near-infrared (NIR) region with fibre-optics. Recent technological improvements in NIR laser sources and the detector response in the NIR wavelength region also helped the advancement of biomedical Raman spectroscopy. In previous studies we have reported caries detection methodologies using non-polarized and polarized Raman microspectroscopy [11,15]. 
Using non-polarized Raman spectroscopy [11], we reported that caries lesions consistently exhibited stronger Raman bands at 431 cm -1 , 590 cm -1 and 1043 cm -1 compared with those in the Raman spectra of sound enamel. The Raman band intensity changes in the spectra of caries lesions were proposed to be due to induced structural changes during the caries formation process. Subsequently, further investigation on early caries lesions was carried out with polarized Raman microspectroscopy and it was found that early caries lesions can be differentiated from sound enamel by monitoring the change in polarization anisotropy or depolarization ratio of the most intense Raman band of hydroxyapatite at 959 cm -1 . Polarized Raman spectra of caries lesions exhibited a lower degree of Raman polarization anisotropy than those of sound enamel. Such decrease in the Raman polarization anisotropy detected in the Raman spectra of carious lesions is believed to be due to structural changes in the enamel rods and/or increased photon scattering resulting from the larger pores within the caries lesions. Based on the detected differences in the Raman polarization anisotropy between sound and carious enamel, ex vivo caries lesions were detected with 97% sensitivity and 100% specificity based on a Bayesian analysis model [15]. Although these previous studies provided an important methodological foundation for the proposed application, further development is still required to advance this lab-bench study to a portable chair-side dental unit. Considering practical clinical requirements, a fibre-optic coupled Raman probe design would likely be the most appropriate configuration to be used clinically. Therefore, in this study we investigate the possibility of acquiring polarizationresolved Raman spectra of tooth enamel through fibre-optics and discuss the characteristics of a new fibre-optics coupled polarization-resolved Raman spectroscopic system for early caries detection. In addition to the requirement of measuring polarized Raman spectra through optical fibres, another area of possible improvement on the current technology is to further reduce measurement time so that it becomes more patient-friendly. Previously with a 24 mW excitation power at 830 nm, ∼ 200 sec was required to acquire both orthogonally polarized spectra with good signal-to-noise ratio from a single sampling spot. This lengthy procedure is not desirable in a clinical setting. Theoretically, the measurement time can be reduced in several different ways. The most straightforward method is to increase the laser power because of a linear dependence of Raman signal intensity on the incident laser power. However high laser powers may damage biological tissues due to intense local heating thus limiting our maximum laser power to approximate 100 mW at the sample. The second option is to use a shorter laser wavelength (λ ex ) for excitation because of the inverse dependence of the Raman signal intensity on the fourth power of the excitation wavelength (i.e. signal ∝ 1/λ ex 4 ). Silicon based CCD detectors also respond more efficiently in the lower wavelength region thus enhancing the detection sensitivity. Although the Raman cross-sections of the detected vibrational modes are higher for shorter excitation wavelength (e.g. 785 nm or 633 nm instead of 830 nm), such wavelengths often generate stronger fluorescence background in the Raman spectra thus hindering the small Raman signals from being observed. 
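A short worked example of the scaling arguments above, assuming only the idealised proportionality S ∝ P/λ⁴ (detector quantum efficiency and sample heating are ignored, and the numbers refer to the excitation conditions quoted in this section):

```python
def relative_raman_signal(wavelength_nm, power_mw,
                          ref_wavelength_nm=830.0, ref_power_mw=24.0):
    """Raman signal relative to the 830 nm / 24 mW reference, assuming S ∝ P / λ^4."""
    return (power_mw / ref_power_mw) * (ref_wavelength_nm / wavelength_nm) ** 4

for wl in (830.0, 785.0, 633.0):
    print(f"{wl:.0f} nm at 24 mW: x{relative_raman_signal(wl, 24.0):.2f}")

# Raising the 830 nm power from 24 mW (previous work) to 90 mW (this study):
print(f"830 nm at 90 mW: x{relative_raman_signal(830.0, 90.0):.2f}")
```

The gain from moving to 785 nm is only about 25%, and to 633 nm roughly a factor of three, whereas the stronger fluorescence background excited at these shorter wavelengths can easily swamp the weak Raman bands.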
This problem is particularly severe for biological tissues due to the presence of many endogenous fluorophores. For tooth enamel, any food debris and extrinsic stains from tobacco usage or other sources also tend to generate strong fluorescence background upon illumination with a shorter-wavelength laser line therefore it is not a preferred alternative for the current application. A third option to improve signal to noise involves re-configuring a conventional single-channel fibre-optic coupled Raman spectroscopic system to collect the two orthogonal components of the polarized Raman spectra simultaneously by using two separate portions on a 2-dimensional (2D) detector to detect each polarization component. This approach was previously reported by Thomson et al [16] in a study to modify a commercial confocal Raman microscope in which a polarization beamsplitter is used to resolve the orthogonal components of Raman spectra. The two spectra were then recorded simultaneously on different locations of the detector. Based on a similar concept, we investigate possible optical configurations for efficient transmission of both laser radiation and polarized Raman signals with fibre-optics. In addition, a special optical coupling between the fibre-optics and a Raman spectrometer will also be presented for simultaneous projection of two orthogonally polarized Raman spectra on the CCD detector. Tooth samples Twenty extracted human premolar teeth were collected at the Oral Surgery Clinic, Faculty of Dentistry, University of Manitoba. These teeth were extracted from consenting patients who were undergoing extractions for orthodontic reasons and the teeth were examined by one dental clinical investigator for signs of early caries at the proximal sites prior to extraction. Remaining soft tissue on extracted teeth was removed by scaling and the samples were thoroughly rinsed with water. Teeth were preserved in sterile filtered de-ionized water until measurement. Each tooth sample was radiographed and independently re-assessed ex vivo by two dental clinical investigators at the University of Manitoba and Dalhousie University. Teeth were used for spectroscopic measurements without further treatment. Spectroscopic measurements on tooth samples were performed under the guidance of the clinical assessments. In total, there were 47 and 27 measurements on sound enamel and carious enamel, respectively. Fibre-optic coupled polarized Raman spectroscopic system A fibre-optic coupled polarization-resolved Raman spectroscopic system optimized for 830 nm excitation for simultaneous measurements of the parallel-and cross-polarized Raman spectra is illustrated in Fig. 1. The laser source is a 300 mW diode laser (Process Instruments Inc., Salt Lake City, UT, USA) operating at 830 nm and the detection system comprises of an Axial Raman spectrograph (HORIBA Jobin Yvon, Edison, NJ, USA) equipped with a CCD camera (Andor Technologies, South Windsor, CT, USA) operating under the LabSpec software (ver. 4.08). The excitation laser beam is delivered through a 2 m long low-OH multi-mode optical fibre (200 μm core dia. N.A. 0.22, Fiberguide Industries, Stirling, NJ, USA) and the emerging laser light is collimated by a fibre-optic collimator (Thorlabs, Newton, NJ, USA) with matching N.A. to that of the delivery fibre. The collimated light is filtered through a laser bandpass filter (Iridian Spectral Technologies, Ottawa, ON, Canada) to remove unwanted side bands and fibre luminescence. 
The laser light is then polarized with a broadband polarizing cube beamsplitter (Newport, Irvine, CA, USA) before it is reflected off an 830 nm notch filter (Iridian Spectral Technologies) to the back aperture of a x40 (N.A. 0.65) microscope objective lens (Newport), which re-focuses the laser beam onto the sample. Raman-scattered photons are collected with the same microscope objective lens. Orthogonally polarized Raman signals are separated with a broadband polarizing cube beamsplitter (Newport) placed behind the notch filter. Each polarized beam is transmitted through a depolarizer (CVI Melles Griot, Albuquerque, NM, USA) before entering the collection fibre in order to minimize the polarization dependence of the spectroscopic grating. The collection fibre is a custom-made bifurcated fibre bundle (FiberTech Optica, Kitchener, ON, Canada). Each arm of the bifurcated bundle is a 400 μm core diameter fibre (N.A. 0.13) positioned to collect polarized Raman signals through attached fibre-optic collimators (Thorlabs) with N.A. matching that of the collection fibres. The Raman signals are then transmitted to the Axial Raman spectrometer (N.A. 0.125) via the common arm of the bifurcated assembly. The spectrometer entrance slit width was set at 192 μm for the optimal combination of signal intensity and spectral resolution. The two fibres transmitting the orthogonally polarized Raman signals are positioned vertically at the distal end of the bifurcated fibre bundle, 1 mm apart from each other, in order to minimize spectral cross-contamination between the two orthogonally polarized Raman spectra projected on the 2D CCD detector (see inset in Fig. 1). All spectra were acquired using a 1 sec integration time with 10 acquisitions and 90 mW laser illumination power at the sample. Using microscopic examination, no visible damage was observed on the laser-illuminated enamel surfaces at this laser power level. Laser illumination power density should be carefully evaluated for future clinical applications. Parallel-polarized Raman spectra were acquired by binning pixel rows 60-90 on the CCD detector, whereas the cross-polarized Raman spectra were acquired by binning pixel rows 115-145 on the same CCD detector. Data analysis While the system's optics background was subtracted from each individual spectrum, no baseline correction was necessary as all the spectra acquired were relatively free of strong fluorescence interference. Instrument response correction was performed according to procedures outlined previously with a reference NIST-traceable tungsten halogen lamp (The Eppley Laboratory, Inc., Newport, RI, USA) of known temperature. The depolarization ratio ρ959 and polarization anisotropy A959 were calculated according to the definitions used previously [15]:

ρ959 = I959(⊥) / I959(∥)   (1)

A959 = (I959(∥) − I959(⊥)) / (I959(∥) + 2 I959(⊥))   (2)

where I959(⊥) and I959(∥) are the integrated peak intensities of the 959 cm⁻¹ peak detected in the cross-polarized and parallel-polarized Raman spectra, respectively. For statistical analyses, ρ959 values were calculated for 2 data groups, ρ959(C) for caries lesions and ρ959(S) for sound enamel. A simple t-test was carried out using the t-critical value at the 99.9% confidence level with N1+N2−2 degrees of freedom to determine if there is a significant difference between these two data groups, where N1=47 and N2=27 were the numbers of measurements for sound enamel and caries lesions, respectively, acquired from 20 teeth.
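The ratio calculations in Eqs. (1) and (2) and the pooled two-sample comparison are simple enough to sketch in code. The snippet below is a minimal illustration assuming the CCD row ranges given above (rows 60-90 and 115-145); the integration window around the 959 cm⁻¹ band and all variable names are hypothetical, and scipy's standard pooled t-test stands in for whatever statistics package was actually used.

```python
# Minimal sketch (not the authors' processing code): integrate the 959 cm^-1
# band in the two binned CCD regions, form the ratios of Eqs. (1)-(2), and run
# a pooled-variance two-sample t-test with N1 + N2 - 2 degrees of freedom.
import numpy as np
from scipy import stats

def band_intensities(ccd_image: np.ndarray, wavenumber_axis: np.ndarray,
                     window=(930.0, 990.0)):
    """Integrated 959 cm^-1 intensity in the parallel and cross channels.

    The integration window is an assumed value; rows 60-90 and 115-145 follow
    the binning described in the text.
    """
    in_band = (wavenumber_axis >= window[0]) & (wavenumber_axis <= window[1])
    i_parallel = ccd_image[60:91, :].sum(axis=0)[in_band].sum()
    i_cross = ccd_image[115:146, :].sum(axis=0)[in_band].sum()
    return i_parallel, i_cross

def depolarization_ratio(i_parallel: float, i_cross: float) -> float:
    return i_cross / i_parallel                                   # Eq. (1)

def polarization_anisotropy(i_parallel: float, i_cross: float) -> float:
    return (i_parallel - i_cross) / (i_parallel + 2.0 * i_cross)  # Eq. (2)

def compare_groups(values_sound, values_caries):
    """Pooled-variance t-test between the sound-enamel and caries groups."""
    return stats.ttest_ind(values_sound, values_caries, equal_var=True)
```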
A similar statistical analysis was also performed for the anisotropy, A959(C) for caries lesions and A959(S) for sound enamel. Estimates of the sensitivity and specificity of the current method for early caries detection were calculated based on a Bayesian analysis model. Threshold values of the depolarization ratio and polarization anisotropy to distinguish carious enamel from sound enamel were first estimated using normal distribution plots produced from the experimentally determined mean and standard deviation values for caries lesions and sound enamel. Once the threshold depolarization ratio and polarization anisotropy values were determined, each experimental ρ959 and A959 was then compared with these threshold values to determine the numbers of true-positives (TP), false-positives (FP), false-negatives (FN) and true-negatives (TN). Sensitivity and specificity were calculated as TP/(TP+FN) and TN/(TN+FP), respectively. Results Polarized Raman spectra of a silicon wafer and carbon tetrachloride were first recorded with the system to evaluate its performance. A strongly polarized Si-O vibration band at 520 cm⁻¹ was observed for the silicon wafer (Fig. 2(A)), indicating good accuracy and efficiency of polarization intensity measurements using the developed fibre-optic based system. Similarly, in Fig. 2(B), two orthogonally polarized Raman spectra of carbon tetrachloride showed a strongly polarized band at 459 cm⁻¹ and two depolarized bands at 313 cm⁻¹ and 216 cm⁻¹. For measuring the Raman spectra of carbon tetrachloride, it was found that the x40 objective lens used for acquiring the Raman spectra of tooth and silicon wafer was not practical due to the objective's limited working distance. Such a small working distance made it difficult to focus the laser beam past the sample cuvette wall. Therefore, all the polarized Raman spectra shown in Fig. 2(B) were acquired using a x10 objective lens (Newport) with a 16.5 mm focal length. Although multi-mode fibres are known to scramble the polarization while transmitting light, our experiments detected a residual polarization-dependent response after the Raman signal was transmitted via a 1.5 m long multi-mode fibre, as evidenced in Fig. 3(A). In this test, a λ/2 waveplate replaced the polarization scrambler in the setup depicted in Fig. 1, and spectra were then taken while rotating the λ/2 waveplate to change the polarization direction of the incoming Raman photons. Figure 3(A) shows how the intensity of the 459 cm⁻¹ band of carbon tetrachloride changes with varying polarization of the Raman signal. When the polarization scrambler was added back into the Raman collection path after the λ/2 waveplate, such polarization dependence disappeared even with varying polarization of the incoming Raman signal (Fig. 3(B)). Therefore, in order to effectively remove such polarization dependence of the spectrometer, two polarization scramblers were placed permanently in both polarization channels of the Raman collection optics (see Fig. 1). Representative parallel-polarized and cross-polarized Raman spectra of sound enamel and caries lesions acquired with the fibre-optic based system are shown in Fig. 4. Raman spectra of sound enamel acquired with the fibre-optic coupled system also show a high degree of polarization anisotropy (Fig. 4(A)), consistent with our previous observations using Raman microspectroscopy.
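Returning to the threshold-based classification outlined in the Data analysis section, the sketch below shows one plausible way to derive a threshold from the two fitted normal distributions and to turn the resulting TP/FP/FN/TN counts into sensitivity and specificity. The crossing-point rule, the function names and the call to scipy are illustrative assumptions; the text does not specify exactly how the thresholds were extracted from the normal distribution plots.

```python
# Hedged sketch of the threshold-based classification: the threshold is taken
# here as the crossing point of the two fitted normal densities, which is one
# plausible reading of "identified using normal distribution plots".
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def crossing_threshold(mu_sound, sd_sound, mu_caries, sd_caries):
    """Value of rho_959 where the sound and caries normal densities cross."""
    f = lambda x: norm.pdf(x, mu_sound, sd_sound) - norm.pdf(x, mu_caries, sd_caries)
    return brentq(f, mu_sound, mu_caries)

def sensitivity_specificity(rho_sound, rho_caries, threshold):
    """Caries is called whenever rho_959 exceeds the threshold."""
    tp = int(np.sum(np.asarray(rho_caries) > threshold))
    fn = len(rho_caries) - tp
    fp = int(np.sum(np.asarray(rho_sound) > threshold))
    tn = len(rho_sound) - fp
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    # group statistics reported in the Results: 0.31 +/- 0.10 (sound), 0.80 +/- 0.09 (caries)
    print(round(crossing_threshold(0.31, 0.10, 0.80, 0.09), 2))  # ~0.57, close to the 0.56 quoted
```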
In the spectra of sound enamel, Raman peaks at 431 cm⁻¹, 590 cm⁻¹, 609 cm⁻¹, 959 cm⁻¹ and 1070 cm⁻¹ all decrease in intensity in the cross-polarized Raman spectra compared to those measured in the parallel-polarized arrangement. Especially noticeable is the intensity variation of the symmetric P-O stretching vibration band at 959 cm⁻¹. On the contrary, the polarization anisotropy detected in the polarized Raman spectra of caries lesions is much weaker, as illustrated in Fig. 4(B). Representative CCD images from which the two orthogonally polarized Raman spectra were obtained are presented in Fig. 5, with Fig. 5(A) highlighting the 959 cm⁻¹ band for sound enamel and Fig. 5(B) the same Raman band for caries lesions. In order to evaluate the significance of the difference in ρ959 and A959 found in the polarized Raman spectra between sound and carious enamel, a t-test was performed based on 47 ρ959/A959 values obtained from the spectra of sound enamel and 27 ρ959/A959 values obtained from those of caries lesions. Figures 6(A) and 6(B) illustrate the mean ± standard deviation values of ρ959 and A959 for both sound and carious enamel in bar graphs. Statistical analyses resulted in a mean ρ959(S) of 0.31±0.10 and a mean A959(S) of 0.44±0.11 for sound enamel, along with a mean ρ959(C) of 0.80±0.09 and a mean A959(C) of 0.08±0.04 for caries lesions. A t-test performed at the 99.9% confidence level indicates that the differences in ρ959 and A959 between sound enamel and caries lesions are statistically significant, with p < 0.001 for both ρ959 and A959. For estimating the sensitivity and specificity of the proposed methodology, the experimental depolarization ratio data were compared with the threshold depolarization ratio value for distinguishing sound enamel from carious enamel. Assuming normal distributions, a threshold depolarization ratio of 0.56 and a threshold polarization anisotropy of 0.19 were identified using the normal distribution plots generated from the experimental data. Of the measurements obtained from sound enamel, 1 data point was identified as carious, i.e., 1 false-positive (FP) was identified. No false-negatives (FN) were identified for the carious enamel tests. This Bayesian analysis results in a sensitivity of 100% and a specificity of 98% for the methodology based on both the depolarization ratio and polarization anisotropy of the 959 cm⁻¹ band obtained by fibre-optic measurements. This result is consistent with our previous analysis obtained from the Raman microspectroscopic study on caries detection. Discussion Following up on our previous efforts in evaluating the strength of polarized Raman spectroscopy for dental caries detection, in this study we investigated the performance of a fibre-optic coupled Raman spectroscopic system for obtaining similar polarization information. Although this is a proof-of-concept study, its outcome is significant because demonstrating efficient collection of Raman polarization data through fibre-optics can greatly facilitate further development of this technology into a clinical device, which will largely rely on fibre-optics to transmit the polarized laser beam and polarization-resolved Raman signals. The first challenge to be overcome was to transmit the laser radiation through an optical fibre while maintaining adequate excitation power and its polarization. One possible solution to these requirements is to use a single-mode polarization-maintaining (PM) fibre to transmit the laser.
However, PM fibres are normally very small in diameter (e.g., less than 10 μm), which makes optical coupling between the free-space laser beam and the small fibre not particularly efficient, especially when trying to couple a rather large laser beam (∼0.5 mm) from our existing laser source. Another approach for transmitting a polarized laser beam to the sample via fibre-optics involves using a linear polarizing filter. This polarizing filter can be placed after the optical fibre to control the polarization of the emerging light, yet it also introduces dramatic power losses. An experiment was performed using this polarizing filter arrangement and the power loss after the filter was found to be around 75% of the incoming laser power. Instead, a design based on a polarizing beamsplitter (PBS) better preserved laser power by splitting the highly depolarized light emerging from the optical fibre into two orthogonally polarized rays. One ray exiting the PBS at a 90° angle to the incident light was used as the excitation beam for measurements and the other ray was discarded into a beam dump. The power at the sample was increased from ∼25 mW to 90 mW compared to the polarizing filter based approach. With this additional laser irradiation power, faster measurements became possible. In our previous Raman microspectroscopic studies, we reported that the Raman spectroscopic data acquired with an x10 objective lens gave the best contrast between sound and carious enamel due to its optimal sampling depth of ∼200 μm in tooth enamel, a depth comparable to the typical depth of a surface carious lesion. In the current work, however, such high spectral contrast between carious and sound enamel was not detected in the spectra acquired using the x10 objective lens with lower N.A. (N.A. 0.25). Objective lenses with higher N.A., such as the x40 (N.A. 0.65) and x50 (N.A. 0.75) lenses, on the other hand, gave better spectral contrast between sound and carious enamel. This difference between the microspectroscopic and the fibre-optic coupled systems can be attributed to the different laser optics in the two systems. For example, in the microspectroscopic system, the laser spot size under the x10 objective lens was measured to be ∼20 μm, whereas the x40 objective lens in the fibre-optic setup gave a much wider laser spot, close to 200 μm. Such a large increase in the laser spot size with the fibre-optic setup suggests that a much larger volume of the tooth enamel was being interrogated. Since the Raman photons originating from the deeper region of the interrogated volume travel a longer distance before emerging from the tooth surface, these photons are more likely to undergo additional scattering events before reaching the collection lens, thus losing more of their initial polarization. This polarization scrambling effect becomes increasingly dominant with increasing sampling depth, or decreasing N.A. of the objective lens, as shown in Fig. 7. Each panel in Fig. 7 shows a pair of orthogonally polarized Raman spectra of sound enamel collected with a x10 (N.A. 0.25), x20 (N.A. 0.50) and x50 (N.A. 0.75) objective lens. A trend of increasing polarization scrambling with decreasing magnification power (or decreasing N.A.) can be seen clearly. Although higher-N.A. objective lenses have been reported to show a depolarizing effect [17], this effect is probably small in the present case compared to the polarization scrambling caused by multiple scattering events within a highly scattering material such as tooth enamel.
Despite providing the greatest spectroscopic contrast between sound and carious enamel, the x50 objective lens was not chosen for the current study due to its extremely small working distance. Instead, a x40 objective lens was used because it provided a slightly larger working distance with little loss of discrimination between sound and carious enamel. The correlation between the laser optics and the sampling volume needs to be carefully evaluated in the further development of a Raman fibre-optic probe for the detection of dental caries. The data presented demonstrate that polarized Raman spectra of tooth enamel can be measured with high accuracy using a fibre-optic coupled Raman spectroscopic system. Both ρ959 and A959 obtained in this study can be used to differentiate early caries from sound enamel. It was also demonstrated that the current fibre-optic based system provides a similar level of spectroscopic contrast to that obtained with Raman microspectroscopy. Although both the fibre-optic coupled system and Raman microspectroscopy record higher ρ959 values for carious lesions compared to sound enamel, the ρ959 values of carious lesions and sound enamel obtained with the fibre-optic system were found to be higher than their respective values obtained previously with the microspectroscopic system. This difference can be attributed to the different properties of the lasers employed in both systems as well as the different optics used in the two systems. The microscopic system used previously better preserves the polarization properties of the beam compared to the fibre-optic setup due to the smaller laser beam size and smaller sampling volume. Although this difference in the optics does not undermine the utility of using polarized Raman measurements to detect early caries lesions, it underlines the importance of standardizing the optical platform of any future clinical device in order to maintain high inter-system consistency of the data. In summary, we report a fibre-optic coupled polarized Raman spectroscopic study of dental caries. Combining multi-mode optical fibres and polarizing beamsplitters, efficient transmission of polarized laser radiation to the sample and simultaneous collection of orthogonally polarized Raman spectra of tooth enamel using a single CCD array detector were achieved. Caries lesions can be detected based on a reduced Raman polarization anisotropy or a higher depolarization ratio of the 959 cm⁻¹ peak derived from polarized Raman spectra collected through fibre-optics. Bayesian analyses of the data result in 100% sensitivity and 98% specificity for the methodology based on changes in the polarization anisotropy of the 959 cm⁻¹ band obtained by fibre-optic measurements. Such high sensitivity and specificity are consistent with those reported in our previous microspectroscopic study, thus making the fibre-optic approach a viable option for conducting polarized Raman spectroscopic studies in vivo. A new probe development based on the proposed technology is currently underway to improve in vivo access to tooth surfaces, including the interproximal surfaces. Results from this investigation will be reported later. Fig. 1. Schematic overview of the developed fibre-optic coupled polarization-resolved Raman spectroscopic system. The inset shows a view of the distal end of the bifurcated fibre bundle; these two fibres transmit orthogonally polarized Raman signals simultaneously and separately. Spectra shown are parallel- and cross-polarized Raman spectra of sound enamel.
Fig. 2. Representative parallel- (upper trace) and cross-polarized (lower trace) Raman spectra of (A) Si wafer acquired with a x40 objective lens and (B) CCl4 acquired with a x10 objective lens. Spectra were slightly offset for clarity.
Fig. 3. Spectra acquired from the fibre-optic coupled Raman system as a function of the Raman signal's polarization state (A) before and (B) after a polarization scrambler was inserted into the Raman collection path between the collection PBS and the collection fibre. Spectra shown are parallel-polarized Raman spectra of CCl4. Variation of the Raman signal polarization state was achieved by rotating a λ/2 plate placed immediately after the PBS in the Raman collection path.
Fig. 4. Representative parallel- (upper trace) and cross- (lower trace) polarized Raman spectra of (A) sound enamel and (B) caries lesions acquired with the developed system. Spectra were offset for clarity.
Fig. 5. Representative CCD image read-outs of the Raman spectra of (A) sound enamel and (B) caries lesions containing both parallel- (upper channel) and cross- (lower channel) polarized components. Only the area on the CCD detector containing the Raman band at 959 cm⁻¹ is shown.
Fig. 6. Bar graphs of (A) depolarization ratio (ρ959) and (B) polarization anisotropy (A959) obtained from sound enamel versus carious lesions. Mean ± standard deviation values are shown. N=47 and N=27 for sound enamel and caries lesions, respectively.
Fig. 7. Orthogonally polarized Raman spectra of sound enamel acquired with (A) x10, N.A. = 0.25, (B) x20, N.A. = 0.50 and (C) x50, N.A. = 0.75 objective lenses. Only the spectral region between 700 cm⁻¹ and 1300 cm⁻¹ is shown. The spectral contrast between the parallel- and cross-polarized Raman spectra of sound enamel increases with increasing objective lens magnification power and N.A.
6,207.6
2008-04-28T00:00:00.000
[ "Materials Science", "Medicine" ]
Study on the Relationship between Electron Transfer and Electrical Properties of XLPE/Modification SR under Polarity Reversal The insulation of high-voltage direct-current (HVDC) cables experiences a short period of voltage polarity reversal when the power flow is adjusted, leading to severe field distortion in this situation. Consequently, improving the insulation performance of the composite insulation structure in these cables has become an urgent challenge. In this paper, SiC-SR (silicone rubber) and TiO2-SR nanocomposites were chosen for fabricating HVDC cable accessories. These nanocomposites were prepared using the solution blending method, and a pulsed electro-acoustic (PEA) space charge test platform was established to explore the electron transfer mechanism. The space charge characteristics and field strength distribution of a double-layer dielectric composed of cross-linked polyethylene (XLPE) and nano-composite SR at different concentrations were studied during voltage polarity reversal. Additionally, a self-built breakdown platform for flake samples was established to explore the effect of the nanoparticle doping concentration on the breakdown field strength of double-layer composite media under polarity reversal. Therefore, a correlation was established between the micro electron transfer process and the macro electrical properties of the polymers (XLPE/SR). The results show that optimal concentrations of nano-SiC and TiO2 particles introduce deep traps in the SR matrix, significantly inhibiting charge accumulation and electric field distortion at the interface, thereby effectively improving the dielectric strength of the double-layer polymers (XLPE/SR). Introduction High-voltage direct-current (HVDC) transmission enables long-distance, high-capacity power transmission, facilitating cross-regional interconnection of large power grids [1][2][3][4]. It is an important technology addressing the energy consumption challenge in long-distance transmission. Currently, the main insulation material used for HVDC cables is XLPE (cross-linked polyethylene). In real operation environments, high-voltage power transmission involves not only cables but also numerous cable accessories. These cable accessories are crucial for connecting different cable sections, ensuring both the extension of the cable length and the reliability of insulation at the connection. The insulation accessories are mainly made from rubber, such as silicone rubber (SR) and ethylene propylene diene monomer (EPDM). Currently, HVDC transmission experiences power flow reversal, which is managed using line-commutated converters (LCC-HVDC). During the reversal period, this transmission method generates local transient electric fields with extremely high field strength at cable and cable accessory connections, which may exceed the insulation limits of the materials and lead to destructive damage to the DC cables. Therefore, the insulation properties of a cable under polarity reversal conditions are more important than those under DC electric fields [5][6][7]. The accumulation of space charge during operation is commonly identified as a key factor contributing to insulation failure. Current research on space charge accumulation mainly focuses on the M-W (Maxwell-Wagner) polarization theory. Wang Xia et al. from Xi'an Jiaotong University [8] and Lan Li et al.
from Shanghai Jiaotong University [9] measured the distribution of space charge and interface charge in double-layer media under different field strengths. In their studies, double-layer structures composed of XLPE/SR and XLPE/ethylene propylene rubber (EPR) were selected, respectively. The experimental results indicate that the polarity of the interface charge is consistent with the polarity of the accumulated charge on the SR or EPR side under low field strength, but at high field strength (about 30 kV/mm or above), the polarity of the interface charge reverses and deviates significantly from the theoretically calculated value. At the same time, there have been many studies on the effect of nanoparticle-doping modification on the space charge distribution characteristics. Yang Zhuoran et al. from Tianjin University [10] studied the effect of the SiC doping concentration on the resistivity, space charge accumulation, and dissipation behaviour of SR composite insulation media. Their conclusions show that this method can indeed improve the nonlinear conductivity of silicone rubber composite materials, facilitate the dissipation of space charges, and mitigate the accumulation of space charge and the distortion of the electric field. Qin Jun and others from the Harbin Institute of Technology [11] tested the distribution of space charges in nano-TiO2/LSR composite materials, and the results showed that the introduction of nano-TiO2 into the composite material inhibited the accumulation of homopolar charges. However, there is relatively little research on the effect of nano-doping on the space charge of double-layer polymers under polarity reversal. Chen Xi et al. [12] analyzed the changes in the space charge characteristics and the maximum transient electric field in low-density polyethylene (LDPE) samples under different polarity reversal times by controlling the voltage polarity reversal time. The experiment found that the maximum transient electric field occurred near the anode during the polarity reversal process, and that the maximum transient electric field decreased with increasing reversal time. Dissado L.A. et al. from the University of Leicester in the UK analyzed the location of the maximum field strength in the cable body during the polarity reversal process and compared the space charge distribution before and after the polarity reversal under steady-state conditions. They found that the charge distribution waveform basically forms a charge peak with the same amplitude and opposite polarity at the same position, presenting a "mirror distribution". Nevertheless, the accumulation of space charges is closely related to the breakdown characteristics. Xiang Min from Chongqing University [13,14] also investigated the corresponding breakdown characteristics of oil-paper insulation structures under conditions of polarity reversal after DC preloading. The conclusion drawn was that during the polarity reversal process, multiple factors, such as the charge inside the insulation paper, the interface polarization charge, and the capacitively distributed electric field, have a certain impact on the breakdown characteristics of oil-paper insulation.
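For a quantitative feel for the Maxwell-Wagner picture invoked above, the sketch below evaluates the classical steady-state interface charge density for a series two-layer dielectric, sigma = U (eps2 gamma1 − eps1 gamma2)/(gamma1 d2 + gamma2 d1). The permittivity and conductivity values are rough, assumed orders of magnitude for an XLPE/SR pair, not parameters measured in this study; only the layer thicknesses and the 20 kV/mm average stress come from the text below.

```python
# Hedged sketch of the classical Maxwell-Wagner result for two dielectrics in
# series under a DC voltage U: current continuity gamma1*E1 = gamma2*E2 plus
# E1*d1 + E2*d2 = U gives the layer fields, and the free interface charge is
# sigma = eps2*E2 - eps1*E1. All material values below are assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mw_interface_charge(u_volts, d1, d2, eps_r1, eps_r2, gamma1, gamma2):
    """Steady-state interface charge density (C/m^2) and the two layer fields (V/m)."""
    denom = gamma1 * d2 + gamma2 * d1
    e1 = gamma2 * u_volts / denom            # field in layer 1 (e.g. XLPE)
    e2 = gamma1 * u_volts / denom            # field in layer 2 (e.g. SR)
    sigma = eps_r2 * EPS0 * e2 - eps_r1 * EPS0 * e1
    return sigma, e1, e2

if __name__ == "__main__":
    sigma, e1, e2 = mw_interface_charge(
        u_volts=10.4e3,              # ~20 kV/mm average over ~0.52 mm (assumed)
        d1=0.22e-3, d2=0.3e-3,       # XLPE and SR PEA-sample thicknesses from the text
        eps_r1=2.3, eps_r2=3.0,      # typical relative permittivities (assumed)
        gamma1=1e-15, gamma2=1e-14)  # assumed DC conductivities, S/m
    print(f"sigma ~ {sigma * 1e6:.1f} uC/m^2, "
          f"E_XLPE ~ {e1 / 1e6:.1f} kV/mm, E_SR ~ {e2 / 1e6:.1f} kV/mm")
```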
Unfortunately, previous research has only explored the space charge and breakdown characteristics of individual composite materials under polarity reversal. Research on the double-layer XLPE/SR composite insulation structure under polarity reversal conditions, which is reflective of actual operation, is still lacking, especially regarding the mechanism of insulation breakdown induced by electric field distortion caused by changes in interface charge accumulation. Such studies are urgently needed for the industrial manufacturing of cable accessories. This article establishes a PEA space charge measurement platform capable of applying a polarity reversal electric field and a breakdown test platform for thin film sheet insulation samples. SiC-SR and TiO2-SR nanocomposites with different doping concentrations were prepared. In this paper, we tested the distribution of space charges in XLPE/nano-doped SR bilayer media under polarity reversal. We built a correlation between the micro-level charge distribution characteristics and the macro-level breakdown insulation characteristics of the bilayer composite materials. Additionally, the mechanism of electric field distortion-induced insulation breakdown due to charge accumulation was systematically studied. Preparation of Nano-Doped SiC-SR and TiO2-SR Composite Materials We selected the solution blending method for this study. A mechanical stirring method was used to treat the n-hexane solution containing nano-SiC and TiO2, with an appropriate amount of silane coupling agent added. Next, the suspension of SiC and TiO2 was subjected to an ultrasonic dispersion method for further treatment. Finally, the obtained nano-SiC and TiO2 suspension was thoroughly mixed with the liquid silicone rubber matrix to produce nano-SiC-SR and nano-TiO2-SR composite materials. SiC-SR composite materials with mass fractions of 1 wt%, 3 wt%, and 5 wt%, TiO2-SR composite materials with mass fractions of 2 wt%, 4 wt%, and 8 wt%, and undoped pure SR were produced. The preparation flow chart is shown in Figure 1. The ultrasonic dispersion meter was made in Shanghai, China; the double-plate vulcanizer was made in Guangdong, China.
Preparation of XLPE Sheet-like Specimens Weigh the appropriate quantity of XLPE pellets and place them on a mold lined with polyethylene film. Set the temperature of the vulcanization side of the flat vulcanization machine (Figure 2) to 145 °C and maintain a pre-heating pressure of 10 MPa for 7 min. Once the solid particles have melted, increase the temperature to 180 °C and the pressure to 14 MPa, and maintain these conditions for 9 min. Then, transfer the mold to the cooling side and allow it to cool for 5 min. Considering that the SR insulation in actual cable joint structures is usually thicker than the main XLPE insulation, and taking into account the experimental conditions of the breakdown and PEA tests, the breakdown samples were prepared by adjusting the mass of XLPE particles and the pressure between the upper and lower plates. The required size for PEA samples is 50 mm × 50 mm × 0.22 mm sheets, and the required size for breakdown tests is 50 mm × 50 mm × 0.02 mm sheets. The Zeta potential/DSC/AFM results for the samples are shown in Table 1 and Figures 3 and 4.
We can see that the dispersion of the particles in the colloid is good, that the nano-doping decreases the melting peak of SR, and that the roughness of 5 wt% SiC/SR is 193 nm and that of 8 wt% TiO2/SR is 305 nm, a large increase compared with pure SR, which is 51.6 nm. Preparation of SiC-SR and TiO2-SR Sheet-like Specimens Evenly apply an appropriate amount of vacuum-dried nano-doped SR material onto the mold covered with a polyimide film. Firstly, maintain the temperature at 140 °C and the inter-plate pressure at 6 MPa, heating for 8 min. Then, continue heating for an additional 9-10 min at 145 °C and 10 MPa. For high-concentration doped SR samples, the temperature, pressure, and duration should be adjusted accordingly to ensure full vulcanization of the sheet-like samples. Afterwards, the samples should undergo the same cooling treatment as previously described. Two types of specimens were prepared, one with dimensions of 50 mm × 50 mm × 0.3 mm for PEA testing, and another with dimensions of 50 mm × 50 mm × 0.08 mm for breakdown testing. Before conducting the experiment, all prepared samples should be placed in a vacuum drying oven at 40 °C for 24 h.
Polarity Reversal PEA Testing Platform and Testing Methods A PEA measurement device capable of achieving polarity reversal was built, mainly consisting of four parts, as shown in Figure 5: a high-voltage DC power supply, a high-voltage pulse system, a space charge measurement unit, and a waveform automatic recording platform. On the basis of the original PEA measurement platform, control of the polarity reversal was achieved by adding a high-voltage DC power supply equipped with a polarity reversal function. For the space charge measurement of double-layer samples using the PEA method, the samples are laid flat in the following order: upper electrode, semi-conductive layer, XLPE, undoped pure SR, and the lower electrode. To ensure optimal contact between the layers, the media are coated with a thin layer of silicone oil. The average field strength of the applied polarization electric field is selected as 20 kV/mm, and the thickness of the XLPE and nano-doped SR thin film samples is measured with a micrometer, which allows for precise determination of the required polarization voltage U. The application of the DC voltage is divided into three stages: T1, T2, and T3. T1 is the polarization time before reversal, T2 is the polarity reversal process, T3 is the polarization time after reversal, and the applied voltage waveform is shown in Figure 6. During the T1 and T3 stages, the waveform signals on the oscilloscope were recorded at 1 min, 60 min, and 180 min after the completion of pressurization (0 s of T1) and after the end of the voltage reversal (0 s of T3), respectively. The selection of the polarity reversal time T2 follows GB/T 31489.1-2015 for rated voltages of 500 kV, according to which the reversal time should be controlled within 120 s. In this experiment, T2 is taken as 30 s. Breakdown Testing Platform and Testing Methods The breakdown platform in our experiments is designed in accordance with International Electrotechnical Commission standard IEC 60243, which specifies the electrical strength test methods for solid insulation materials. The platform mainly consists of four parts: the electrode system, the oil tank, the DC high-voltage source (capable of achieving the polarity reversal function), and the protection and measurement devices, as shown in Figure 7.
In Figure 7, R1 and R2 are protective resistors, and the equipment parameters are shown in Table 2. The DC breakdown test follows the "Fast Boost Test Standard" in GB/T 1408.1-2016. The test involves a voltage ramp rate of 500 V/s, starting from 0 and boosting until sample breakdown. The breakdown test process under polarity reversal voltage is shown in Figure 8. Firstly, a DC voltage was applied to the double-layer insulation sample, with a preloading time of 10 min to allow for a stable accumulation of space charges inside the sample. The voltage polarity reversal time is set to 30 s. If the sample does not experience breakdown after one complete polarity reversal, the voltage is increased by 1 kV for each subsequent test. This process continues until sample breakdown. During the experiment, room temperature conditions were maintained. To reduce errors, each type of sample underwent 10 breakdown tests, with the breakdown voltage recorded. The thickness at each breakdown point was measured using the micrometer. Finally, the breakdown field strength of the sample was calculated by dividing the applied voltage by the total thickness of the double-layer composite sample.
U1 is the set voltage for the first breakdown experiment, and U2 is the raised voltage set after the first breakdown experiment fails to break down the sample. Space Charge Distribution under Polarity Reversal The space charge distribution of nanocomposite SR/XLPE bilayer samples with different doping concentrations, both during and after voltage polarity reversal, is shown in Figure 9. By comparing the space charge distribution before and after polarity reversal in Figure 9, it was found that the nanocomposite particles did not alter the general trend of the space charge density curve. However, doping particles at different concentrations significantly affects the amplitude and rate of changes in the space charge distribution. Figure 9h shows the space charge density of the different samples at the SR ground electrode and at the interface before and after inversion. From the changes in space charge density at the SR side before polarity reversal, it can be seen that doping with SiC and TiO2 significantly reduces the space charge density at the ground electrode on the SR side, mitigating the phenomenon of charge accumulation. The space charge density at the ground electrode of undoped SR is 13.5 C/m³, while the space charge density at the SR electrode doped with 3 wt% SiC and 4 wt% TiO2 is 9.1 C/m³ and 10 C/m³, reducing the accumulation of space charge by 32.6% and 25.9%, respectively.
From Figure 9h, it can be observed that the space charge at the interface decreased with the doping of SiC and TiO2, both at the moment before and at the moment after the polarity reversal. The peak charge density at the undoped SR/XLPE interface before inversion was 6.3 C/m³. In contrast, the space charge density was reduced to 5.6 C/m³ for samples doped with 3 wt% SiC and 5.4 C/m³ for those doped with 4 wt% TiO2. After the 30 s reversal, the space charge density at the undoped SR interface decreased to 3.5 C/m³. In comparison, the space charge density at the interface of samples doped with 3 wt% SiC and 4 wt% TiO2 decreased to 3.3 C/m³ and 3.2 C/m³, respectively. The changes in the space charge density of the different samples before and after inversion at the interface are listed in Table 3. The change in peak charge density at the undoped SR/XLPE interface between before and after inversion is 2.8 C/m³, which is larger than that of all the doped SR samples. In these two doped cases (SiC-SR and TiO2-SR), the decrease in interface charge accumulation upon completing the polarity reversal is more constrained in comparison with undoped SR, indicating a more pronounced "hysteresis effect". This means that the time required for the space charge at the interface to transition from its original state to a new steady state increases as the doping concentration rises from 0 wt% towards the "optimal concentration". For samples doped with 3 wt% SiC and 4 wt% TiO2, the time required to reach a stable state is the longest. However, when the doping concentration exceeds these levels, the time needed for the accumulation of interface charges becomes shorter again. This is because the interface charges are formed through the migration of free charges, which then accumulate at the interface. However, these migrating charges are more readily trapped by internal traps within the sample. The charge transfer process must first overcome the trap constraints, which leads to a much lower rate of dissipation and migration compared to the polarity reversal itself. The introduction of nanoparticles reduces the accumulation of charges both at the electrode and at the interface under a steady-state electric field and during the reversal. Moreover, the presence of deep traps slows down the polarity change of the interface charges, further influencing the reversal process. The accumulation of space charges can cause local distortion of the electric field. The electric field produced by the superimposed space charge is governed by Poisson's equation [15]. Based on the measured space charge density, the electric field distribution E(x) in the sample can be calculated from

E(x) = (1/ε) ∫₀^x ρ(ξ) dξ   (1)

where E(x) is the electric field strength (kV/mm), ρ(x) is the space charge density (C/m³) at position x along the thickness direction, and ε is the dielectric constant of the insulation material.
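Since Eq. (1) is just a cumulative integral of the measured charge profile, the field reconstruction is straightforward to sketch numerically. In the snippet below the integration constant is additionally chosen so that the field integrates to the applied voltage, which is a common convention in PEA post-processing but is an assumption here; the grid, permittivity and synthetic charge profile are likewise illustrative, not measured data.

```python
# Hedged numerical sketch of Eq. (1): dE/dx = rho(x)/eps, so E(x) follows from a
# cumulative (trapezoidal) integral of the space-charge density. The constant of
# integration is fixed so that E integrates to the applied voltage U. All numbers
# below (thickness, permittivity, synthetic charge peak) are assumptions.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_from_charge(x_m, rho_c_per_m3, eps_r, applied_voltage_v):
    """Electric field profile E(x) in V/m reconstructed from a charge profile."""
    eps = eps_r * EPS0
    increments = 0.5 * (rho_c_per_m3[1:] + rho_c_per_m3[:-1]) * np.diff(x_m)
    e_charge = np.concatenate(([0.0], np.cumsum(increments))) / eps
    # shift by a constant so the integral of E over the thickness equals U
    e_area = (0.5 * (e_charge[1:] + e_charge[:-1]) * np.diff(x_m)).sum()
    offset = (applied_voltage_v - e_area) / (x_m[-1] - x_m[0])
    return e_charge + offset

if __name__ == "__main__":
    x = np.linspace(0.0, 0.52e-3, 400)                 # ~0.52 mm XLPE+SR stack (assumed)
    rho = 5.0 * np.exp(-((x - 0.3e-3) / 3.0e-5) ** 2)  # synthetic interface charge, C/m^3
    e = field_from_charge(x, rho, eps_r=3.0, applied_voltage_v=10.4e3)
    print(f"maximum local field ~ {e.max() / 1e6:.1f} kV/mm")
```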
Based on the space charge measurement results and the electric field calculation formula mentioned above, the electric field distribution of SiC and TiO2 nanocomposite SR/XLPE double-layer samples with different doping concentrations, both before and after voltage polarity reversal, is plotted in Figure 10. The accumulation of interface charges can cause serious distortion of the electric field distribution within the double-layer sample. However, the doping of nanoparticles effectively reduces the accumulation of these interface charges. As shown in Figure 10a,b, the introduction of 3 wt% SiC and 4 wt% TiO2 into SR can effectively reduce the distortion of the electric field within the double-layer medium. However, exceeding this concentration leads to an increase in the overall electric field distortion within the sample. Figure 10c shows the electric field distribution after polarity reversal. After the voltage polarity reversal is completed, the SR side of the sample experiences the maximum distorted electric field. The maximum field strength of the undoped SR sample reaches 24.56 kV/mm, whereas the maximum field strength of the 3 wt% SiC-SR sample is 22.08 kV/mm, and that of the 4 wt% TiO2-SR sample is 20.94 kV/mm, representing decreases of 10.01% and 14.74%, respectively. At doping concentrations of 3 wt% for SiC-SR and 4 wt% for TiO2-SR, both types of nanoparticles not only significantly inhibit the accumulation of space charges inside the double-layer medium, but also effectively reduce the maximum distorted electric field that may occur on the SR side. Macroscopic Electrical Properties under Polarity Reversal The conditions of polarity reversal and the doping of nanoparticles indeed influence the accumulation of space charge at the interface of the double-layer dielectric, which in turn affects the degree of electric field distortion inside the insulation. The direct manifestation of this phenomenon at the macro level is the breakdown field strength. Figure 11 shows the Weibull distribution of the DC breakdown field strength, and Tables 4 and 5 show the characteristic breakdown field strengths and shape parameters under DC.
From Figure 11, it can be observed that, compared with the DC breakdown field strength of undoped pure SR, the DC breakdown field strength of the nanocomposite SR samples at all concentrations shows a noticeable decrease. Moreover, the TiO2/SR nanocomposites showed a clear trend of "the higher the doping concentration of nanoparticles, the greater the decrease in breakdown field strength". This occurs because SiC and TiO2 are semiconductor materials, and higher concentrations reduce the average distance between the nanoparticles, increasing the likelihood of particle aggregation in the sample [16]. Furthermore, the presence of semiconductor materials can result in overlapping interaction zones between the interfaces of neighbouring nanoparticles, forming paths similar to conductive channels in the polymer. This increased connectivity accelerates the migration of charge carriers. Consequently, it can lead to the easier formation of discharge channels inside the nanocomposite SR material, thereby increasing the probability of local breakdown and reducing the overall voltage withstand performance [17][18][19][20]. However, for the nanocomposites with 3 wt% SiC/SR and 4 wt% TiO2/SR, which exhibit the better inhibitory effects on the accumulation of interfacial charges, the breakdown field strengths are 123.73 kV/mm and 111.14 kV/mm, respectively. Compared to the breakdown field strength of pure SR, which is 125 kV/mm, these nanocomposite SR materials exhibit only a slight decrease. Therefore, under the typical operating conditions of DC cables (field strength within 20 kV/mm), the nanocomposite SR insulation materials can still provide a sufficient insulation margin [21][22][23][24][25].
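The characteristic breakdown strengths and shape parameters quoted in Tables 4-7 are the scale and shape parameters of a two-parameter Weibull fit to the ten breakdown values recorded per sample type. As a rough sketch of that fitting step, the snippet below uses scipy's generic Weibull estimator on made-up breakdown data; it is not the authors' fitting procedure, and the sample values are purely illustrative.

```python
# Hedged sketch: fit a two-parameter Weibull distribution (location fixed at 0)
# to a set of breakdown field strengths and report the shape parameter beta and
# the characteristic strength alpha (the 63.2% scale parameter).
import numpy as np
from scipy.stats import weibull_min

def fit_weibull(breakdown_fields_kv_per_mm):
    """Return (shape beta, characteristic strength alpha) for the given data."""
    beta, _, alpha = weibull_min.fit(breakdown_fields_kv_per_mm, floc=0)
    return beta, alpha

if __name__ == "__main__":
    # ten hypothetical breakdown tests for one sample type (kV/mm)
    fields = np.array([168.0, 175.2, 181.5, 190.3, 172.8,
                       185.6, 178.9, 169.4, 188.1, 182.7])
    beta, alpha = fit_weibull(fields)
    print(f"shape beta ~ {beta:.1f}, characteristic strength ~ {alpha:.1f} kV/mm")
```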
Figure 12 shows the Weibull distribution of the breakdown field strength under polarity reversal, and Tables 6 and 7 show the corresponding characteristic breakdown field strengths and shape parameters. In Figure 12, the breakdown field strength of the double-layer sample composed of XLPE and SR doped with nano-SiC shows an initial increase followed by a decrease as the nanoparticle concentration increases under the polarity reversal condition. Notably, the breakdown field strength is significantly higher than that of the sample without doping [26][27][28]. When the concentration of SiC particles reaches 3 wt%, the breakdown field strength of the double-layer sample peaks at 181.07 kV/mm. This represents a significant improvement of 29.43% compared to the 139.89 kV/mm breakdown field strength of undoped SR/XLPE under the same conditions. For the double-layer composite insulation composed of XLPE and SR doped with TiO2 particles, the breakdown field strength increases with doping concentration from 2 wt% to 4 wt%, reaching 169.8 kV/mm, which is a 21.38% improvement compared to pure SR. However, when the concentration of nano-TiO2 particles reaches 8 wt%, the breakdown field strength decreases significantly, to 126.52 kV/mm. This behaviour corresponds to the maximum distortion of the electric field observed on the SR side during the polarity reversal process, as shown in Figure 10c. The maximum field distortion for 4 wt% TiO2 is the smallest. A smaller distortion of the electric field requires a higher applied voltage before the local breakdown field strength of the SR is reached, resulting in a higher breakdown field strength of the double-layer sample [29,30]. Conclusions This article establishes a correlation between composite nanoparticles and the electrical properties of XLPE/SR. It systematically studied the accumulation of space charge and the local electric field distortion in the composite polymer structure of high-voltage DC cables under polarity reversal using nano-SiC and nano-TiO2. At the same time, the polarity reversal breakdown field strength of the nanocomposite SR/XLPE double-layer dielectric was tested using a self-built breakdown test platform. Consequently, a correlation between the micro charge distribution characteristics and the macro insulation properties of the double-layer composite material was established. The main conclusions are as follows: (1) Doping nanoparticles does not alter the overall trend of the space charge distribution in the double-layer medium under polarity reversal conditions. (2) Doping nanoparticles reduces the variation in space charge at the interface of the double-layer SR/XLPE under polarity reversal, making the "hysteresis effect" more pronounced; that is, the time required for the interface charge to settle into its new steady state after polarity reversal is longer. (3) Within the doping concentration limits, increased doping concentration leads to a decrease in interface charge accumulation. SiC particles at 3 wt% exhibit a significant inhibitory effect on interface charge accumulation, while TiO2 at a 4 wt% concentration effectively reduces the maximum distorted electric field intensity, which correlates with enhanced macroscopic breakdown strength. (4) Doping nanoparticles improved the breakdown strength of the double-layer SR/XLPE, with notable improvements under polarity reversal conditions. Specifically, doping with 4 wt% TiO2 increased the breakdown strength of the double-layer medium under polarity reversal by 21.38%.
Figure 1.Preparation process diagram of nano-doped composite materials. Figure 1 . Figure 1.Preparation process diagram of nano-doped composite materials. Figure 5 . Figure 5. Schematic diagram of PEA space charge measurement platform. Figure 5 . Figure 5. Schematic diagram of PEA space charge measurement platform. Figure 6 . Figure 6.Polarity reversal voltage waveform applied during space charge testing. Figure 6 . Figure 6.Polarity reversal voltage waveform applied during space charge testing. Figure 6 . Figure 6.Polarity reversal voltage waveform applied during space charge testing. Figure 7 . Figure 7. Diagram of breakdown test device for sheet-like specimens. Figure 8 . Figure 8. Polarity reversal voltage waveform applied during breakdown test. Figure 8 . Figure 8. Polarity reversal voltage waveform applied during breakdown test. Figure 10 .Figure 10 . Figure 10.Electric field distribution of nanocomposite SR/XLPE bilayer samples with different doping concentrations.(a) Electric field distribution of SiC-SR/XLPE double-layer samples with different doping concentrations; (b) electric field distribution of TiO2-SR/XLPE double-layer samples with different doping concentrations; (c) comparison of 30 s reversal end electric field distribution between 3 wt% SiC-SR/XLPE double-layer samples and 4 wt% TiO2-SR/XLPE double-layer samples.The accumulation of interface charges can cause serious distortion in the electric field distribution within the double-layer sample.However, the doping of nanoparticles effectively reduces the accumulation of these interface charges.As shown in Figure10a,b, the introduction of 3 wt% SiC and 4 wt% TiO2 into SR can effectively reduce the distortion of Figure 11 . Figure 11.Weibull distribution of DC breakdown field strength in different nanocomposite materials.(a) Weibull distribution of DC breakdown field strength in nanocomposite SiC-SR.(b) Weibull distribution of DC breakdown field strength in nanocomposite TiO2-SR. Figure 11 . Figure 11.Weibull distribution of DC breakdown field strength in different nanocomposite materials.(a) Weibull distribution of DC breakdown field strength in nanocomposite SiC-SR.(b) Weibull distribution of DC breakdown field strength in nanocomposite TiO 2 -SR. Figure 12 . Figure 12.The 30 s polarity reversal breakdown of SR/XLPE double-layer dielectric with different doping: (a) 30 s polarity reversal breakdown characteristics of SiC-SR/XLPE double-layer dielectric; (b) 30 s polarity reversal breakdown characteristics of TiO 2 -SR/XLPE double-layer dielectric. Table 2 . Parameters of breakdown testing device. Table 3 . Changes in peak interface charge density of different SiC−SR/TiO 2 −SR samples over the same time period. Table 4 . Parameters and characteristic breakdown field strength of DC breakdown in nanocomposite SiC/SR materials. Table 4 . Parameters and characteristic breakdown field strength of DC breakdown in nanocomposite SiC/SR materials. Table 6 . Parameters and characteristic breakdown field strength of 30 s polarity reversal breakdown in SiC−SR/XLPE double-layer dielectric. Table 7 . Parameters and characteristic breakdown field strength of 30 s polarity reversal breakdown in TiO2−SR/XLPE double-layer dielectric. Table 6 . Parameters and characteristic breakdown field strength of 30 s polarity reversal breakdown in SiC−SR/XLPE double-layer dielectric. Table 7 . Parameters and characteristic breakdown field strength of 30 s polarity reversal breakdown in TiO 2 −SR/XLPE double-layer dielectric.
8,481.2
2024-08-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Intonation processing deficits of emotional words among Mandarin Chinese speakers with congenital amusia: an ERP study Background: Congenital amusia is a disorder that is known to affect the processing of musical pitch. Although individuals with amusia rarely show language deficits in daily life, a number of findings point to possible impairments in speech prosody that amusic individuals may compensate for by drawing on linguistic information. Using EEG, we investigated (1) whether the processing of speech prosody is impaired in amusia and (2) whether emotional linguistic information can compensate for this impairment. Method: Twenty Chinese amusics and 22 matched controls were presented with pairs of emotional words spoken with either statement or question intonation while their EEG was recorded. Their task was to judge whether the intonations were the same. Results: Amusics exhibited impaired performance on the intonation-matching task for emotional linguistic information, as their performance was significantly worse than that of controls. EEG results showed a reduced N2 response to incongruent intonation pairs in amusics compared with controls, which likely reflects impaired conflict processing in amusia. However, our EEG results also indicated that early sensory auditory processing is intact in amusics, as revealed by a comparable N1 modulation in both groups. Conclusion: We propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This, in turn, could reflect a disconnection between low-level and high-level processing. Introduction Congenital amusia is a disorder that impacts individuals' ability to discriminate musical pitch. This impairment cannot be explained by hearing or neurological problems, low intelligence, or lack of exposure to music. Instead, it has been linked to a neurodevelopmental failure that renders amusic individuals unable to form stable mental representations of pitch (Patel, 2003, 2008). An important question is whether the pitch deficit accompanying congenital amusia is specific to music or extends to speech perception. Though individuals with amusia rarely report language problems in everyday life (Jiang et al., 2010; Liu et al., 2010) and show normal intonation processing when pitch contrasts are large (Peretz et al., 2002), evidence suggests that amusia does affect individuals' language abilities to some degree. For example, studies have shown that amusics exhibit deficits in the processing of lexical tone (Nan et al., 2010; Liu et al., 2012). Additionally, they are reported to have difficulties processing linguistic and emotional prosody in speech (Jiang et al., 2010; Liu et al., 2010; Thompson et al., 2012). Speech prosody refers to the meaningful and sometimes paralinguistic acoustic attributes of speech, including pitch, timing, timbre, and intensity. Intonation, the pitch contour of a spoken utterance or "tone of voice", is one aspect of speech prosody (Selkirk, 1995). When intonation is used to make linguistic distinctions, such as the distinction between a question and a statement, it is also referred to as linguistic pitch. The finding that amusic individuals are impaired at processing linguistic pitch suggests that pitch processing is a domain-general function engaged when perceiving both music and speech.
This possibility aligns with results from studies showing that musical training can lead to enhanced performance on speech perception tasks, including phonological processing (Anvari et al., 2002), speech prosody perception (Thompson et al., 2004; see also Musacchia et al., 2007), linguistic pitch encoding (Wong et al., 2007), and lexical tone identification (Lee and Hung, 2008). It has been argued that such positive "transfer effects" are possible because the brain networks involved in speech and music processing overlap (Patel, 2011). The ability to process speech prosody is important in daily human communication. Not only does prosody convey linguistic information, it also enables listeners to infer a speaker's emotional state. Thompson et al. (2012) found that individuals with amusia exhibit reduced sensitivity to emotional prosody (e.g., happy, sad, and irritated). Nonetheless, such deficits in intonation processing and emotional prosody recognition may not pose a significant problem for amusic individuals when contextual, facial, and linguistic cues are available. As such, impairments to speech perception exhibited by amusic individuals under laboratory conditions may disappear in naturalistic settings. Indeed, Ayotte et al. (2002) observed that amusic participants were able to discriminate spoken sentences with statement and question intonation, yet showed difficulties processing non-speech analogs in which all linguistic information was filtered out (see also Patel et al., 2005; Hutchins et al., 2010). One interpretation of this finding is that without linguistic information, prosodic information is processed via the (compromised) music mode, resulting in reduced sensitivity; in contrast, the presence of linguistic information might encourage processing via an intact speech mode, preserving sensitivity to speech prosody. It is unclear, however, whether the content of that linguistic information is relevant to this effect. In view of these findings, we examined whether explicit emotional (semantic) cues influence the ability of individuals with amusia to detect subtle pitch changes in speech. Emotional linguistic information has been shown to facilitate stimulus processing. For example, in the so-called "emotional Stroop" task, in which perceivers are required to name the color of an emotional versus a non-emotional printed word, the former usually gives rise to faster reaction times than the latter (for a review see, e.g., Williams et al., 1996). These results align with a number of findings showing that affective stimuli, such as facial expressions and dangerous animals (e.g., snakes and spiders), speed up reaction times in visual search tasks (e.g., Fox et al., 2000). Emotional information is generally thought to "grab" perceivers' attention, leading to greater allocation of resources to the stimulus, which, in turn, leads to deeper stimulus processing (for reviews see Compton, 2003; Vuilleumier, 2005). Although some evidence suggests that negative emotional information leads to greater behavioral facilitation than positive emotional information (e.g., Hansen and Hansen, 1988; Öhman et al., 2001; for a review on the "negativity bias" see Rozin and Royzman, 2001), other evidence indicates that positive stimuli (e.g., "kiss") can improve performance as effectively as negative stimuli (e.g., "terror") in tasks such as the "flanker" and "Simon" tasks (e.g., Kanske and Kotz, 2010, 2011a,b,c).
The Stroop, Simon, and flanker tasks all induce a response conflict that typically elicits a negative-going ERP component, the N2, which peaks between 200 and 350 ms after stimulus onset (for a review see Folstein and Van Petten, 2008). This component has also been shown to be elicited by conflicts between stimulus representations (Yeung et al., 2004). Source localization of the N2 points to neural generators within the anterior cingulate cortex (ACC; Van Veen and Carter, 2002), an area that has been implicated in "conflict monitoring" (Carter, 1998; Botvinick et al., 1999, 2004). In addition to faster reaction times, Kanske and Kotz (2011a,b) observed a conflict-related negativity peaking around 230 ms after stimulus onset that was enhanced for both positive and negative words when compared with neutral words. The time window and characteristics of this conflict-related negativity closely resemble those of the N2. Findings by Peretz et al. (2005) indicate that brain activity within the N2 time window appears to be impaired in amusia. More specifically, amusics showed a normal N2 response to unexpected small pitch changes (e.g., 25 cents), but they "overreacted" to large pitch changes (e.g., 200 cents) by eliciting an abnormally enlarged N2 when compared with control participants. Nonetheless, Peretz et al. (2005) interpreted amusics' ability to track the quarter-tone pitch difference as indicative of functional neural circuitry underlying implicit perception of fine-grained pitch differences. According to Peretz et al. (2005, 2009), the observed pitch impairment in amusics arises at a later, explicit stage of processing, as suggested by a larger P3 (Peretz et al., 2005) and the absence of a P600 in response to pitch changes in amusics in comparison with controls. This view has received further support from studies showing normal auditory N1 responses to pitch changes in amusics (Peretz et al., 2005; Moreau et al., 2009). The N1 is a negative-going ERP component that arises between 50 and 150 ms after stimulus onset (e.g., Näätänen and Picton, 1987; Giard et al., 1994; Woods, 1995). Its neural generators have been localized within the auditory cortex (Näätänen and Picton, 1987), suggesting that this component reflects relatively early auditory processing. In contrast to the earlier findings on N1 responses, recent results by Jiang et al. (2012) and Albouy et al. (2013) indicate that pitch processing in amusics may indeed be impaired at early stages of processing, in that the N1 amplitude was significantly smaller in amusics than in controls during intonation comprehension (Jiang et al., 2012) and melodic processing (Albouy et al., 2013). Impairments at such an early stage may have consequences for subsequent processes. It is unclear, however, whether the pitch deficit exhibited by amusics can be compensated for with linguistic (semantic) cues, whose processing takes place relatively late (i.e., ~300-400 ms; for reviews see Pylkkänen and Marantz, 2003; Kutas and Federmeier, 2011). At the same time, findings from ERP research suggest that the emotional content of a (visually presented) word is accessed very early, within 100-200 ms after stimulus onset (e.g., Ortigue et al., 2004; Scott et al., 2009; Palazova et al., 2011; Kissler and Herbert, 2013). Such early processing is thought to be possible via a fast subcortical (thalamo-amygdala) pathway (Morris et al., 1999).
Therefore, early access to emotional semantic information, and its facilitative effect on conflict processing, could help amusic perceivers overcome difficulties in discriminating linguistic pitch. To address this question, we presented emotional words spoken with intonation indicating either a statement or a question and recorded EEG responses in individuals with and without amusia. The linguistic content of the words had either a positive valence, such as "joy", or a negative valence, such as "ugly". The task was to judge whether two successively presented words had the same intonation. If amusics make use of linguistic information to compensate for any impairment in intonation processing, they should perform as well as control participants on the intonation-matching task. However, emotional semantic cues may be insufficient to facilitate subsequent processing in amusic individuals. In this case, we would expect to see differences in brain activity between amusic and control participants within an early time window, such as that of the N1 component. Alternatively, early, implicit auditory processes may be intact in amusics and the observed pitch impairment may arise only at a later, explicit processing stage (e.g., the N2). In this case, amusic participants should show brain activity comparable to that of normal controls within the early but not the late time window. Participants Twenty Chinese amusics and 22 matched controls were tested (years of education: M = 14.32 years, SD = 1.25 years). All participants were Mandarin native speakers and right-handed. None reported any auditory, neurological, or psychiatric disorder. No one had taken private music lessons or other extracurricular music training beyond basic music education at school. All participants gave written informed consent prior to the study. The Ethics Committee of the Second Xiangya Hospital approved the experimental protocol. Participants with a mean global percentage correct lower than 71.7% on the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003) were classified as amusic, corresponding to 2 SD below the mean score of the Chinese norms (Nan et al., 2010). The MBEA consists of three melodic pitch-based tests (Scale, Contour, and Interval), two time-based tests (Rhythm and Meter), and one memory test (Memory). For the first four subtests, listeners are presented with pairs of melodies and asked to judge whether they are the "same" or "different". For the last two subtests, listeners are presented with a single melody on each trial. For the Meter subtest, participants are required to judge whether the presented melody is a "march" or a "waltz". In the Memory subtest, participants are required to judge whether they have heard the presented melody in the preceding subtests. The results of the MBEA and its subtests for both groups are shown in Table 1; individuals with amusia scored significantly lower than control participants on all subtests of the MBEA (ps < 0.01). Stimuli The stimulus material consisted of a set of 40 disyllabic words from the Chinese Affective Words Categorize System (CAWCS; Xu et al., 2008), which comprises 230 positive (e.g., "joy", "happy", and "excited") and negative (e.g., "ugly", "depressed", and "poor") words. All words from the CAWCS were recorded by an adult male Mandarin native speaker, who spoke each word as a statement and as a question. Seven Mandarin native speakers (5 females) were asked to rate on a five-point scale how well the intonations were recognized as a statement or a question (1 = definitely a statement, 5 = definitely a question).
Twenty positive and twenty negative words whose rating scores were equal to or lower than 2 for statement intonation and equal to or higher than 3.5 for question intonation were selected. This corresponds approximately to the 30th and 70th percentiles of the ratings, respectively. Independent-samples t-tests confirmed that the selected negative and positive words yielded similar mean rating scores in both the statement and question conditions (ps > 0.35, see Table 2). Additional one-sample t-tests indicated that the mean valence, arousal, and familiarity scores of the 40 selected words were not significantly different from those of the 230 words of the CAWCS (ps > 0.1). However, a comparison of the selected positive and negative words revealed that the former were rated as more arousing and more familiar than the latter (ps < 0.01, see Table 3). Using a cross-splicing technique (for more details, see Patel et al., 1998), we ensured that the first syllables were acoustically identical and that the durations of the second syllables were roughly equal. Figure 1A shows the spectrogram and pitch contours of a negative word spoken with statement intonation and with question intonation. As in Jiang et al. (2010), each word was set to 850 ms; that is, each syllable lasted 400 ms, with a 50 ms silence between the two syllables. Procedure Participants were seated in an electrically shielded and sound-attenuated room with dimmed light. They were asked to fixate on a white cross on a black CRT monitor screen. As illustrated in Figure 1B, each trial began with a 500 ms warning tone (2000 Hz sinusoidal). Subsequently, a comparison word was presented, followed by an inter-stimulus interval (ISI) of 300 ms. Thereafter, participants heard the probe word. They were asked to judge whether the intonation of the probe word was the same as that of the comparison word by pressing one of two response keys. The auditory stimuli were presented binaurally at a comfortable listening level via earphones. Experimental Design The experiment consisted of 2 blocks separated by a break. All trials were presented in a pseudo-randomized order. Each block consisted of 80 congruent or incongruent trials (20 statement-statement pairs, 20 statement-question pairs, 20 question-question pairs, and 20 question-statement pairs). Prior to testing, participants completed 4 practice trials to familiarize themselves with the stimuli and task. Feedback was provided in the practice trials but not in the experimental trials. For stimulus presentation and data collection, we employed the Stim2 software (Compumedics Neuroscan, USA). EEG Recording and Pre-Processing The EEG was recorded from a 32-electrode Quick-Cap (standard 10-10 electrode system) with a SynAmps RT amplifier and the SCAN software from Compumedics Neuroscan (USA). The average of the left and right mastoids served as the reference during recording. Vertical and horizontal eye movements and blinks were monitored with 4 additional electrodes. All electrode impedances were kept below 5 kΩ during the experiment. An online bandpass filter of 0.05-50 Hz was applied during recording. The sampling rate was 500 Hz. The EEG was processed in MATLAB (Version R2013b; MathWorks, USA) using the EEGLAB toolbox (Delorme and Makeig, 2004).
The data were first highpass filtered with a windowed sinc FIR filter (Widmann and Schröger, 2012) from the EEGLAB plugin firfilt (Version 1.5.3). The cutoff frequency was 2 Hz (Blackman window; filter order: 2750). An independent component analysis (ICA) was performed using the runica algorithm. Subsequently, an ICA-based method for identifying ocular artifacts, such as eye movements and blinks, was used (Mognon et al., 2011). Artifactual components were rejected, and a lowpass windowed sinc FIR filter with a 20 Hz cutoff frequency (Blackman window; filter order: 138) was applied. Epochs of -500 to 1450 ms from the onset of the probe words were extracted and baseline corrected using the 500 ms pre-stimulus period. ERP Data Analyses Visual inspection of the grand averages revealed two pronounced negative ERP deflections in the following time windows: 120-180 ms and 250-320 ms after the onset of the second syllable of the probe word. These negativities likely reflect the N1 and N2 components, which typically peak within similar time windows (e.g., Pérez et al., 2008; Peretz et al., 2009; Astheimer and Sanders, 2011). For statistical analysis, all electrodes except four outer scalp electrodes (T7, T8, O1, O2) were grouped into four regions of interest (ROI): left-anterior (FP1, F3, FC3, F7, FT7), right-anterior (FP2, F4, FC4, F8, FT8), left-posterior (C3, CP3, P3, P7, TP7), and right-posterior (C4, CP4, P4, P8, TP8). The midline electrodes were analyzed separately and grouped into mid-anterior (FZ, FCZ, CZ) and mid-posterior (CPZ, PZ, OZ) electrodes. Mean amplitudes were computed for each region of interest and time window (Luck, 2005). Separate repeated-measures ANOVAs were conducted on the N1 and N2 time windows. The factors entered into the ANOVAs were: Group (control/amusic), Emotion (positive/negative), Congruence (congruent/incongruent intonation), LR (left/right), and AP (anterior/posterior). The factor LR was excluded from the analyses of the midline electrodes. The statistical results for the N1 and N2 time windows are summarized in the Supplementary Material. Partial eta squared and Cohen's d were used to evaluate effect sizes for the ANOVAs and t-tests, respectively. Below, we report in detail only the main effects and interactions of interest (see Supplementary Tables 1 and 2 for full results). Figure 1B. After the presentation of the comparison word (two 400 ms syllables with a 50 ms silence between them), a 300 ms silence was presented, followed by the probe word lasting 850 ms. During the task, participants were asked to fixate on a white cross on a black screen. At the end of each trial, they were required to make a non-speeded response indicating whether the intonation of the comparison and probe words was the same or different by pressing one of two response keys. Task Performance Participants' task performance was evaluated using d-prime (d'), a measure of discriminability or sensitivity (Macmillan and Creelman, 2005). D-prime scores were calculated by subtracting the z-score corresponding to the false-alarm rate from the z-score corresponding to the hit rate. A standard correction was applied to hit and false-alarm rates of 0 or 1 by replacing them with 0.5/n and (n - 0.5)/n, respectively, where n is the number of incongruent or congruent trials (Macmillan and Kaplan, 1985). A repeated-measures ANOVA was conducted on the d' scores with two factors: Group (control/amusic) and Emotion (positive/negative).
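The d' computation and the 0/1-rate correction just described are straightforward to implement; before turning to the results, a minimal sketch follows. The hit and false-alarm counts below are hypothetical, with 80 trials per congruence type as implied by the two-block design.

```python
from scipy.stats import norm

def d_prime(hits, n_signal, false_alarms, n_noise):
    """d' = z(hit rate) - z(false-alarm rate), with the standard
    correction for rates of 0 or 1 (Macmillan and Kaplan, 1985)."""
    def corrected_rate(count, n):
        rate = count / n
        if rate == 0.0:
            rate = 0.5 / n          # replace a rate of 0 with 0.5/n
        elif rate == 1.0:
            rate = (n - 0.5) / n    # replace a rate of 1 with (n-0.5)/n
        return rate

    hit_rate = corrected_rate(hits, n_signal)
    fa_rate = corrected_rate(false_alarms, n_noise)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: 70 hits out of 80 incongruent trials,
# 12 false alarms out of 80 congruent trials.
print(f"d' = {d_prime(70, 80, 12, 80):.2f}")
```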
The results revealed a significant main effect of Group, F(1, 40) = 11.05, p < 0.01, η² = 0.22, but no significant main effect of Emotion, F(1, 40) = 0.02, p > 0.90, η² < 0.01, and no significant interaction between Emotion and Group. In summary, our main findings showed that amusic participants made more errors than control participants in the intonation-matching task, despite the emotional content of the presented words. In terms of brain activity, both groups exhibited a similar N1 response to the conflicting intonations, as hypothesized (Peretz et al., 2005; Moreau et al., 2009). However, the N1 elicited by negative words was marginally larger in amusics than in controls at posterior electrode sites. Finally, compared with controls, amusics showed a significantly reduced N2 amplitude in response to incongruent intonation. Discussion The present study investigated three related questions. First, do individuals with congenital amusia show impairment in processing speech prosody? Second, can amusic participants make use of emotional information to compensate for any impairment in speech prosody processing? Third, does the pitch-processing impairment in amusia arise at an early or a late stage of processing? To address these questions, we measured the brain activity of participants with and without congenital amusia using EEG. Participants were presented with pairs of positive (e.g., "joy") or negative (e.g., "ugly") spoken words in succession. The pairs were congruent or incongruent in terms of speech intonation, which could indicate a statement or a question. Participants were asked to indicate whether the word pairs had the same or different intonation. As speakers of a tone language, Mandarin Chinese amusics might be expected to be sensitive to linguistic pitch owing to constant exposure to small pitch changes in daily communication (for a discussion, see Stewart and Walsh, 2002; Stewart, 2006). However, the present results indicate that amusic participants had difficulty discriminating between statements and questions. This finding is consistent with other evidence that Mandarin amusics exhibit mild deficits in intonation identification and discrimination in comparison with controls (Jiang et al., 2010). More generally, the failure of linguistic pitch discrimination among tone language speakers with amusia challenges the view that amusia is a disorder specific to musical pitch perception (Peretz et al., 2002), as the musical pitch impairment extended to the domain of language (see also Patel et al., 2008; Nguyen et al., 2009; Jiang et al., 2010; Liu et al., 2010; Nan et al., 2010; Tillmann et al., 2011). It should be emphasized, however, that there is considerable debate concerning the degree to which musical pitch impairment negatively impacts linguistic pitch perception. A number of studies have shown that linguistic pitch discrimination is significantly worse among amusics when semantic information is artificially removed (i.e., when only prosody is presented in non-speech analogs) than when natural speech is presented (e.g., Ayotte et al., 2002; Patel et al., 2005). This finding implies that amusic individuals can make use of semantic cues to compensate for their pitch deficit, as shown in Liu et al. (2012). In the present study, participants were provided with emotional semantic cues and were asked to match the intonation of negatively or positively valenced words.
To perform this task successfully, participants needed to be able to detect the conflict between the intonations of the comparison and probe words. Although it has been suggested that both positive and negative words can ease conflict processing (Kanske and Kotz, 2010, 2011a,b,c), thereby facilitating behavioral performance, our behavioral results revealed that the impairment of linguistic intonation discrimination among amusic individuals persisted even when intonation was applied to words with positive or negative emotional valence. This finding suggests that emotional valence failed to facilitate pitch processing in individuals with amusia. Correspondingly, we found that the N2 elicited in conflict trials was significantly reduced in amusics compared with controls. As the amplitude of the N2 is typically larger in conflict than in non-conflict trials (Nieuwenhuis et al., 2003), this finding further suggests that conflict processing was virtually absent in the amusic group. On the other hand, our ERP results revealed no impairment of emotion processing in amusic individuals. Both the amusic and control groups exhibited a larger N2 amplitude for positive than for negative words, which likely reflects the higher arousal level ascribed to the positive words employed in the experiment (see Table 3 and Supplementary Table 2). These findings suggest that amusics' failure to discriminate between question and statement intonation arises from an impairment related to conflict processing rather than from an inability to process emotional information. The abnormal N2 observed in the amusic group is in part consistent with the results of Peretz et al. (2005), who also reported abnormal brain activity within the N2 time window in amusic as compared with control participants. However, in contrast to the present study, Peretz et al. (2005) employed an oddball paradigm and found that the amusic brain "overreacted" to unexpected (infrequent) pitch changes by eliciting a larger N2 response than that of normal controls. Internally generated expectancy caused by stimulus probability has been shown to contribute to the N2 response (see Folstein and Van Petten, 2008, for a review). Therefore, the greater N2 amplitude in the amusic group observed by Peretz et al. (2005) may partially reflect processes related to expectancy. When, in a later study, the conflicting pitch (an out-of-key note) occurred more frequently and hence less unexpectedly, Peretz et al. (2009) observed, similar to the present findings, that controls but not amusics showed a large N2 response to the conflicting pitch. Contrary to our results for the N2 response, the reduction in the N1 in response to incongruent intonation was similar in amusic and control participants. These results corroborate earlier findings by Jiang et al. (2012), in which participants judged whether aurally presented discourse was semantically acceptable. The same pattern of N1 in the two groups suggests that the underlying process is normal in the amusic group (see also Peretz et al., 2005, 2009; Moreau et al., 2009). However, other studies have reported an abnormal N1 response in amusics during intonation comprehension and melodic processing (Albouy et al., 2013). To reconcile these contradictory findings, Albouy et al. (2013) proposed that whether amusic participants show a normal or abnormal N1 may depend on task difficulty.
Studies that reported a normal N1 used tasks that were relatively easy, such as a deviant-tone detection task (Peretz et al., 2005), or no task at all (Moreau et al., 2009). In contrast, Albouy et al. (2013) and Jiang et al. (2012) employed tasks in which participants had to match two melodies or judge whether a speech intonation was appropriate given a certain discourse, respectively. These authors found the N1 in individuals with amusia to be abnormal. Our behavioral results suggest that the task we used was difficult for the amusic participants (see the discussion above). Yet, we found a normal N1 in the amusic group. One explanation is that the emotional words used in the present study led to enhanced attention, which, in turn, improved pitch processing in amusic participants. This gave rise to a relatively normal N1 response despite the task difficulty observed in the amusic group. It should be noted that, as neutral words were not included in this study, it is not possible to assess whether emotional valence benefited performance behaviorally. Nonetheless, for reasons elucidated below, there may have been a small effect of emotional valence that was insufficient to boost amusic participants' task performance to the level of controls. As suggested by our ERP results, amusic participants were affected by negative words differently from normal controls at an early processing stage, i.e., the N1 time window. More specifically, we observed a larger N1 amplitude in the amusic group than in the control group; however, this difference was only marginally significant and restricted to posterior electrode sites. The auditory N1 has been shown to be modulated by selective attention and to increase in amplitude when perceivers direct their attention to the stimulus (e.g., Woldorff et al., 1993; Alho et al., 1994; for a review see, e.g., Schirmer and Kotz, 2006). Thus, the larger N1 response displayed by the amusic participants could reflect enhanced attention to the negative words. No significant group difference at either anterior or posterior electrode sites was found in the positive word condition. Negative stimuli have been shown to lead to better performance than positive stimuli (e.g., Hansen and Hansen, 1988; Öhman et al., 2001), which suggests that negative stimuli are more effective at capturing attention than positive stimuli. This has often served as an argument in support of the "negativity bias" hypothesis, according to which we may have developed adaptive mechanisms to deal with negative emotions (for a review see Rozin and Royzman, 2001). It should be noted that, when examining the N1 response at anterior and posterior electrode sites within each group, we found in the control group a significantly larger N1 response to negative words at posterior than at anterior electrodes. In contrast, the amusic group showed comparable N1 amplitudes at both electrode sites. The broad scalp distribution of the N1 response displayed by amusic participants could indicate additional activation of posterior brain areas that was not present in normal participants. Consistent with the notion of enhanced attention in the amusic group, these additional areas may be linked to attentional processes. In short, our results suggest that amusics may process emotional words (negatively valenced ones in the present study) in a manner that differs from individuals without this impairment, potentially compensating for their disorder.
However, this enhanced processing may not have been sufficient to improve the amusic participants' performance. Our failure to find a clear emotion effect in the behavioral and ERP data may be due to the low arousal level of the emotional words we used (e.g., "ugly"). In comparison, Kanske and Kotz (2011c), for instance, used words such as "terror", which elicited clear emotion effects. This may also explain why we did not observe a "negativity bias" in our control group, as the negative words were lower in arousal than the positive words. Interpreting the N1 and N2 results together, we propose that the impairment in discriminating speech intonation observed among amusic individuals may arise from an inability to access information extracted at early processing stages. This inability, in turn, could reflect some disconnection between low-level and high-level processing. Conflict detection is generally thought to play a pivotal role in cognitive control. Following the detection of a conflict, perceivers presumably increase their attention and make "strategic adjustments in cognitive control" (Botvinick et al., 2001), resulting in reduced interference in subsequent trials (Kerns et al., 2004). Therefore, a deficit in conflict detection can have severe consequences for behavior. Many of the cognitive and social deficits associated with schizophrenia are believed to arise from impairments in conflict detection and cognitive control (Carter, 1998). Typically, the activation of the ACC is affected only by conflicting stimuli that are perceived consciously, not subliminally, in normal perceivers, whereas individuals with schizophrenia exhibit impaired conscious but normal subliminal priming (Dehaene et al., 2003). The situation in amusia, however, is unlike that in schizophrenia, in which the ACC is considered dysfunctional and the conscious control network is affected (Alain et al., 2002; Kerns et al., 2005). If a conflict in pitch cannot even be detected, amusic perceivers have no opportunity to become aware of the conflict, even though pitch discrimination is intact at a lower processing level, as suggested by our N1 findings. A recent study reported a similar dissociation between lexical tone identification and brainstem encoding of pitch in speech (Liu et al., 2015), which suggests that high-level linguistic pitch processing deficits in amusia operate independently of low-level brainstem functioning. We can only speculate that access to this low-level information is limited in individuals with amusia. Dehaene et al. (2006) have usefully distinguished "accessibility" from "access": some attended stimuli have the potential to gain access to conscious awareness (accessibility) but are nonetheless not consciously accessed (access). Thus, it is possible that pitch information processed at an early stage is potentially accessible, but amusic individuals do not have conscious access to that information. In conclusion, the present investigation provides further evidence that the pitch deficit associated with congenital amusia extends to the domain of language, corroborating the hypothesis that music and language processing share common mechanisms. Speaking a tone language, such as Mandarin Chinese, does not compensate for this deficit. However, in daily life, amusic perceivers may make use of other cues, such as linguistic information, to compensate for their impairment.
Our results suggest that individuals with amusia are more sensitive to linguistic emotional information than normal participants and that this sensitivity has some influence on early stages of pitch processing (i.e., in the N1 time window). However, emotional modulation appears to be restricted to this early processing stage. At a later processing stage (i.e., the N2 time window), amusic participants still exhibited impairments in detecting conflicting intonation. We suggest that this impairment stems from an inability to access information extracted at earlier processing stages (e.g., the N1 time window), reflecting a disconnection between low-level and high-level processing in this population. It should be noted that the effect sizes of the present findings are small, owing to the nature of the linguistic stimuli and a low EEG signal-to-noise ratio (20 trials per condition). Future investigations of these questions may benefit from a larger number of trials per condition to increase the signal-to-noise ratio.
7,525
2015-04-09T00:00:00.000
[ "Linguistics", "Medicine" ]
Silage production technology using grass thickening vibratory method The article presents a silage production technology using the vibratory method. The objective of this research was to quantify the selected theoretical model by comparison with experimental results and to determine a more reasonable calculation method for practical issues. Based on the classical theory of mechanical oscillations, a numerical analysis of silage thickening by a vibrator was developed. Using the developed numerical model, the operational characteristics of the vibrator were obtained. Numerical and experimental investigations were performed using a direct-action vibrator to compact chopped maize. A comparison between the experimental and numerical results is presented. The results can be used to determine the technical characteristics of the vibrator for silage thickening dynamics. The obtained system parameters can be used to prepare technical tasks for the selection of vibratory silage thickening equipment. Introduction In agriculture, when preparing forage from grassy plants, we face the problem of how to thicken materials effectively and economically. When preparing forage, grass thickening strongly affects the quality of the prepared forage. When thickening grass mass in a trench with a tractor, the energy consumption of thickening is rather high, reaching 18 kJ/kg [1,2]. When pressing grass mass with various wheeled tractors, in the first pressing stage the grass mass deforms by up to about 275 mm [2]. It has also been determined that, when thickening with tractors, the vertically acting pressure force increases at certain engine rotation frequencies. In the frequency range of 20-26 Hz, the measured vibration velocity when pressing grass increases by 5 dB for a caterpillar tractor and by up to 13 dB for wheeled tractors, relative to the static gravity force of these tractors. Research results indicate that, over successive grass thickening stages, deformation decreases and the resonant frequency of the pressed grass increases [2]. In the case of vibratory thickening, the thickened mass is not polluted with petroleum products, and the danger of injuries is decreased. One grass thickening method is vibrating compaction using vibrating rollers. Rollers with vibratory rolling elements are used to achieve higher compaction rates when storing wilted grass or chopped maize in clamp silos; this method is therefore suitable only for compacting in trenches and clamps rather than containers [3,4]. Performed scientific research [1,5] shows that experimental direct and indirect vibrators with direct-current motors are a good alternative for grass thickening. These procedures effectively thicken only well-chopped, bulk-stem grassy plants and, partly, their mixtures. However, it is not recommended to thicken red clover and Galega orientalis with this equipment. When using vibratory thickeners, the frequency must be set correctly so that the plant mass is pressed better, and the dynamic requirements of the equipment and the silage area it can press must be determined [6].
One of the indexes that characterises the pressure of fibrous plant mass is its density. When studying the thickening process of fibrous plant mass, many authors adopt the following assumptions: (1) the magnitude of the static load force does not depend on the grass deformation velocity; (2) the pressure derivative with respect to mass density is a function of the applied pressure. These assumptions are used in the description of simplified grass mass pressure models [7]. Individual grass characteristics are described using idealized rheological elements of real materials and their various combinations [8]. In many cases, the deformation and relaxation of elements in a model are described by differential equations [5,6]. These equations are usually complicated, because the pressed-mass systems contain many degrees of freedom and many corresponding natural frequencies. Many models have been developed to analyse a viscoelastic material: Maxwell, Kelvin, Burgers, Herschel-Bulkley, Bingham, etc. The advantage of the Maxwell model is that the volumetric and tangential moduli can be assessed independently. Many authors have used equivalent mechanical components to model the stress-deformation relation of biological materials. The spring stiffness is shown to be non-linear, and a friction element is recommended to model fully plastic strain [9,10]. Good agreement between theoretical and experimental results can be achieved [6], but for practical purposes these models are very complicated. The compaction task is simplified if the vibrating system is modelled with a single degree of freedom according to the classical theory of mechanical oscillations [11,12]. The objective of this research is to evaluate silage preparation in a container storage by an inertial directional vibrator with the vibratory thickening approach, to determine the vibrator movement characteristics when thickening silage using a calculation model, and to verify them against experimental data. Peculiarities of oscillations excited by a directional vibrator Thickening the silage mass involves dealing with an elastically viscous and partly coherent material with relaxing characteristics. Elastic and viscous resistance forces act upon the vibrator thickening the silage mass. The analysis of the thickening process of fibrous plant mass is based on the following assumptions: the initial density of the silage is uniform over the bunker cross-section; stresses in all directions are zero in the absence of external load forces; the silage density increases continuously during the thickening process; static load forces do not affect the deformation velocity of the silage; and the pressure derivative with respect to mass density is a function of the applied pressure [5]. In this setting, the vibrator movement is difficult, and in many cases impossible, to describe mathematically. According to classic vibration theory, the system of plant thickening by the vibratory method can be analysed as a concentrated mass on a spring whose other end is fixed [13]. One of the most common dynamic inputs is excitation by the centrifugal force of unbalanced masses [11,12]. To orient the excitation force in a certain direction, an appropriate vibrator construction is needed. In the present study, a direct-action vibrator, in which the excitation force is directed along a vertical straight line as shown in Fig. 1, has been chosen.
The excitation force in the vertical vibration direction is [12,13]:

$$F(t) = m_0 r \Omega^2 \sin(\Omega t), \qquad (1)$$

where $m_0$ is the unbalanced system mass (kg), $r$ is the radius (m), and $\Omega$ is the frequency of the excitation force (rad/s). The amplitude of the excitation force is not constant, since it depends on the rotational frequency. Denoting the frequency ratio $\eta = \Omega/\omega_0$, the amplitude of the excitation force can be expressed as:

$$F = m_0 r \omega_0^2 \eta^2, \qquad (2)$$

where $\omega_0 = \sqrt{k/m}$ is the natural angular frequency (rad/s) of the vibrator. In this system, the elastic resistance of the mass and its internal friction are treated as separate elements. The governing differential equation of motion can then be written as [13]:

$$m\ddot{x} + c\dot{x} + kx = F(t), \qquad (3)$$

where $m$ is the vibrator mass (kg), $k$ is the stiffness of the spring (N/m), $x$ is the displacement of the spring (m), $\dot{x} = dx/dt$ is the vibration velocity (m/s), $\ddot{x} = d^2x/dt^2$ is the vibration acceleration (m/s²), $c$ is the constant of mechanical resistance (Ns/m), and $t$ is time (s). The total solution of this equation has two parts: the first corresponds to free vibrations of the system, which in this case decay quickly due to the internal friction of the system; the second is the response to the dynamic force action and, in this case, is the dominant vibration. Having expressed the vibration response in complex form, we can find the relation between the vibration velocity and the excitation force. Knowing the mechanical resistance of the grass mass and having measured the vibration velocity, it is possible to determine the values of the excitation force and then compare them with the calculated ones. Numerical analysis of grass thickening using the vibratory method In developing numerical models that do not seek to replicate a complex heterogeneous material, reasonable simplifications can be implemented, and the analysis task becomes much simpler. This approach gives a higher error, but it is sufficient to describe the silage compaction process by a vibrator for practical engineering applications. Although silage does not form a homogeneous mass, its internal stresses are relatively low, so a homogeneity assumption can be applied; a linear relationship between stress and deformation can then be obtained with sufficient accuracy [14,15]. For the numerical analysis, the initial phase of the excitation force, its magnitude and frequency, the stiffness coefficient of the surrounding silage, and the damping coefficient must be chosen. In evaluating the work of the vibrator, an initial silage stiffness coefficient (e.g., 300 N/m) was chosen and then increased every second (in real conditions, the silage stiffness coefficient changes continuously) [16]. The silage damping coefficient can be taken from the literature; in the present study, it is 7 s⁻¹ [16]. The vibration frequency was increased from 1 Hz to 30 Hz in increments of 5 Hz. The sinking depth of the vibrator into the silage mass is shown schematically in Fig. 2. A solid body (1) (the vibrator) is pressed between two slabs (2) (the silage mass). Springs (3) model the stiffness characteristics of the silage mass. The relation between the body (1) and the slabs (2) is described only by the cohesion and friction forces F′. The premise of the present study is that the lateral friction forces against the surrounding mass are primary, while the resistance force at the base of the vibrator is insignificant. Depending on the sinking depth of the vibrator, the reaction of the surrounding mass is shown graphically by the outline ABCDE (Fig. 3).
Vibrator sinking into the surrounding mass during one excitation force cycle is shown by segment AE. Here, the movement is generated by the centrifugal force. If the external excitation force F acting on body 1 does not exceed F′ (the cohesion and friction forces against the silage mass in the quiescent state), i.e., F < F′, the vibrator moves only within the thickening deformation of the silage mass imitated by springs 3 (Fig. 2). This is observed along sides AB and CD of the outline (Fig. 3). If the excitation force F exceeds the force F′ (F′ = const), the vibrator (1) moves with respect to the slabs (2) (Fig. 2), which corresponds to movement of the vibrator with respect to the silage mass along segments BC and DE (Fig. 3). The condition F′ = const (a constant friction force of the vibrator against the silage mass) signifies the temporary resistance of the silage mass [5]. At some instant, the velocity of the vibrator becomes zero (before reaching point B₁) on segment AB (Fig. 3). Then, depending on the vibrator's pressure on the grass, the grass reaction is represented by the outline AB₁AE. If the amplitude of vibration is less than the plastic deformation limit, the sinking of the vibrator into the grass stops. The movement is considered steady, since the temporary resistance decreases during the period of excitation force action (the external excitation force is vertical, along the y axis, as is the vibrator axis). The theoretical model of vibrator movement for the case shown in Fig. 3 was analysed in depth in ref. [5]. Eq. (4) is a more complete representation of vibrator movement during silage grass thickening that includes the silage stiffness and motion damping characteristics [5]:

$$\ddot{x} + 2h\dot{x} + p^2 x = q + f\sin(\omega t + \varphi_0), \qquad (4)$$

where $x$ is the displacement of the vibrator base (m), $\dot{x}$ is the velocity of the vibrator (m/s), $\ddot{x}$ is the acceleration of the vibrator (m/s²), $h$ is the damping coefficient (s⁻¹), $p$ is the system vibration frequency (s⁻¹), $g$ is the gravitational acceleration, $\omega$ is the frequency of the excitation force (s⁻¹), $\varphi_0$ is the initial phase of the excitation force (rad), $q = G/m$, where $G$ is the weight force (N), $f = F/m$, where $F$ is the excitation force (N), and $t$ is time (s). Satisfying the initial conditions at point A ($x = \dot{x} = 0$ at $t = 0$), the velocity expression, Eq. (5), was obtained. Using Eq. (5), the velocity of the vibrator sinking into the silage mass is plotted in Fig. 4. The graph was obtained with the Mathcad software package. The velocity amplitude of the vibrator sinking into the silage is 0.34 m/s (Fig. 4) when the vibration frequency is increased to 27 Hz. The performed theoretical study of the vibratory thickening process of silage and similar resilient materials allows the process to be described analytically, with approximate values of the system coefficients chosen and reasonable simplifications introduced. It is possible to determine the influence of the vibrator's technical parameters on the silage or grass thickening dynamics. The developed theoretical model can be used to prepare a technical task when designing a vibratory thickening installation. The theoretical solutions can be improved by selecting more accurate values of the various system parameters.
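A minimal numerical sketch of this model is given below: it integrates the reconstructed Eq. (4) with SciPy, using the vibrator mass, damping coefficient, initial stiffness, stiffness growth rate, and 27 Hz excitation frequency quoted in the text; the unbalance radius, and hence the force amplitude, is an assumption chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 125.0      # vibrator mass, kg (from the text)
m0 = 7.0       # unbalance mass, kg (weights fitted to the rollers)
r = 0.05       # unbalance radius, m -- hypothetical, not given in the text
h = 7.0        # damping coefficient, 1/s (from the text)
k0 = 300.0     # initial silage stiffness, N/m (from the text)
g = 9.81       # gravitational acceleration, m/s^2
omega = 2.0 * np.pi * 27.0  # excitation frequency of 27 Hz, rad/s

def rhs(t, y):
    x, v = y
    k = k0 * (1.0 + t)        # stiffness grows each second (assumed growth law)
    p2 = k / m                # squared natural frequency of the system
    f = m0 * r * omega**2 * np.sin(omega * t) / m  # excitation force per unit mass
    return [v, -2.0 * h * v - p2 * x + g + f]      # reconstructed Eq. (4)

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=1e-3)
print(f"peak sinking velocity: {np.max(np.abs(sol.y[1])):.2f} m/s")
```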
Experimental investigation of silage thickening using vibratory method For the experimental study, a centrifugal direct-action vibrator with two masses rotating in opposite directions was used for silage preparation. Direct-action vibrators have two weights rotating in different directions at the same angular frequency. The direct-action vibrator was designed in the Institute of Agricultural Engineering at Aleksandras Stulginskis University (Lithuania) and is suitable for silage compaction. The vibrator was rotated by an asynchronous motor of 1.1 kW, 1500 min⁻¹. Weights of 7.0 kg mass were fitted to rollers in the vibrator, and the total mass of the vibrator was 125 kg (Figs. 5-6). The container was filled with chopped maize mass as shown in Fig. 6. The dimensions of the container storage were: width 0.75 m, length 1.20 m, height 0.95 m. The plant mass had a moisture content of 70-80% and a chaff length of 15-45 mm. The vibrator was installed on top and operated continuously for 10-20 minutes. While the vibrator was working, the thickness of the mixture was recorded every 5 minutes by measuring instruments. The plant mass volume in the container was calculated from ruler measurements (accuracy ±0.001 m) at four selected points [1]. The plant mass in the container was determined by scales with a range of 0-100 kg and an accuracy of ±0.1 kg. The vibrator accelerations were measured on the vibration plane of the direct-action vibrator. Acceleration and velocity of the direct-action vibrator were measured according to a widely accepted methodology: PULSE software developed by Brüel & Kjær was used, along with a PULSE data acquisition module 3580 and type 4370 piezoelectric charge accelerometers, for recording vibration data (see Fig. 5). The characteristics of the type 4370 accelerometer are: frequency range 0.1-4800 Hz; sensitivity 100 pC/g. The location of the accelerometer is presented in Fig. 6. Each experiment was repeated at least 3 times in the 1-27 Hz range. The test results were analysed by statistical methods: arithmetic means and standard deviations were calculated, and the error was estimated by selecting the Student distribution coefficient with a probability of 0.95. It was established that better silage is obtained when the plant mass has a higher density. In previous studies, it was established that silage was compacted better by loading a thicker layer [1]. The influence of maize mass density variation on thickening time was determined (see Fig. 7).
The first 200 kg maize layer corresponds to a 10-minute thickening period, and the second, additional 200 kg layer corresponds to a repeated 10-minute thickening. After 20 minutes of thickening, a density of 510 kg/m³ was obtained for both layers, while thickening the first 200 kg mass layer alone for 10 minutes gave a density of 571 kg/m³. Dry matter (DM) densities correspond to 143 and 161 kg/m³, respectively. For this test, the resulting silage density was sufficient. From the analysis of the homogeneous differential equation of a linear stationary single-degree-of-freedom system, it is known that the resonant properties depend on the system mass and the spring stiffness, while the amplitude characteristics depend on the mechanical resistance coefficient (i.e., the internal friction of the silage mass) [10]. To confirm these observations, the vibration acceleration of the direct-action vibrator was measured on the vibration plane. Previous results showed that high acceleration is one of the most important settings for achieving the highest density in a reasonable time [1,6]. The vibration amplitude of a vibratory device depends on the device design and the position of the eccentric installation. After evaluating the specific characteristics of herbaceous plants, the vibratory device design was selected and the frequency range and amplitude were set. It was found that, for maize thickening by the direct-action vibrator, the greatest acceleration was obtained at a frequency of 15 Hz. Figures 8-9 present the vibration movement velocity and acceleration of the vibrator. With the increase of silage thickness, the vibration velocity and acceleration of the vibrator sinking into the grass do not decrease significantly (see Figs. 8-9). The velocity varied from 0.3 to 0.2 m/s. It was observed that, with the increase of silage density, the velocity decreases insignificantly. The acceleration was about 3 m/s² over the whole work cycle. At the end of the work cycle, the velocity of the vibrator movement decreased by a minimal value (about 0.1 m/s). This can be explained by the fact that the velocity of the vibrator movement remains quite high, since the vibrator is constantly under the excitation force. As can be seen from Fig. 8, the maximum value of the velocity amplitude is 0.32 m/s; the theoretical result was 0.34 m/s. The difference between the simulated and experimentally established results was less than 10%, so the percentage errors are within accepted ranges. The validity of the simulation data, obtained from the simplified dynamic analysis of silage thickening by a vibrator, has thus been confirmed. The evaluated simulation method can be recommended for developing the vibratory thickening process of herbaceous plant silage. Based on the presented results, it is possible to determine practical operational regimes of the direct-action vibrator: movement velocity, acceleration, and other operational characteristics.
The role of each co-author in the preparation of the paper is as follows. Eglė Jotautienė was one of the supervisors of this work; she performed part of the information sources review, analysed vibratory thickening equipment, prepared part of the theoretical models and the analytical test methodology of the vibratory thickening equipment, performed the experimental tests together with the co-authors, managed the research data in the graphs, and conducted the analysis of the results and the formulation of conclusions. Algirdas Jasinskas was the research supervisor of this work; he performed part of the information sources review, prepared the methodical part of the article, organised and, together with the co-authors, carried out the experimental and analytical studies with inertial vibrators, investigated the silage compaction process in a container storage, managed the research data and presented the results in graphs, contributed to their discussion, analysis, and the formulation of conclusions, and made a great contribution to the paper layout design, correction, etc. Vytautas Kučinskas investigated vibratory compaction equipment, prepared the work and silage compaction research methodology, performed the analytical and experimental studies together with the co-authors, analysed the working process of the vibratory compaction unit, and contributed to the analysis of results and the formulation of conclusions. Dainius Steponavičius analysed the information sources, presented the theoretical assumptions of the working process of vibrational compaction equipment, analysed vibratory thickening equipment, prepared part of the theoretical models and analytical test methodology, presented part of the survey data on vibratory compaction equipment, and contributed to their analysis and the formulation of conclusions. Conclusions A numerical and experimental analysis was performed to analyse the silage vibration thickening process. According to classic vibration theory, the system of silage thickening by the vibratory method can be modelled as a linear stationary system with one degree of freedom. The performed numerical research of silage thickening indicates that the velocity amplitude of the vibrator sinking into the silage is 0.34 m/s. The experimentally obtained vibrator movement velocity varied from 0.32 to 0.2 m/s, decreasing by a minimal value (about 0.1 m/s) by the end of the work cycle. This difference arises because the numerical analysis was performed with approximate values of some coefficients and with various simplifications. The difference between the numerical simulation and the experimentally obtained results was less than 10%. This reasonable calculation method for practical issues can be recommended for the development of the silage thickening process. Based on the given results, it is possible to determine practical operational regimes of the direct-action vibrator, such as movement velocity, acceleration, and other operational characteristics. Fig. 7. The influence of maize mass density and thickening time during compaction by the direct-action vibrator. Figs. 8-9. Vibration movement velocity and acceleration of the direct-action vibrator, measured by an accelerometer fitted on the vibration plate; all experimental data were recorded by the PULSE software. As the PULSE software works with the Excel program, the data were exported to and analysed in Excel. The duration of each experiment was 5 min, but the results in Figs. 8-9 are presented only for a 2 s range, because the results were very similar over the whole working period.
4,643.8
2017-08-15T00:00:00.000
[ "Engineering" ]
Measuring high-speed train delay severity: Static and dynamic analysis This paper focuses on optimizing the management of delayed trains in operational scenarios by scientifically categorizing train delay levels. It employs static and dynamic models grounded in real-world train delay data from high-speed railways. This classification aids dispatchers in swiftly identifying and predicting delay extents, thus enhancing mitigation strategies’ efficiency. Key indicators, encompassing initial delay duration, station impacts, average station delay, delayed trains’ cascading effects, and average delay per affected train, inform the classification. Applying the K-means clustering algorithm to standardized delay indicators yields an optimized categorization of delayed trains into four levels, reflecting varying risk levels. This static classification offers a comprehensive overview of delay dynamics. Furthermore, utilizing Markov chains, the study delves into sequential dynamic analyses, accounting for China’s railway context and specifically addressing fluctuations during the Spring Festival travel rush. This research, combining static and dynamic approaches, provides valuable insights for bolstering railway operational efficiency and resilience amidst diverse delay scenarios. Introduction Efficient and reliable train operations are fundamental for sustainable transportation systems.However, train delays can dramatically impact the performance and satisfaction of both passengers and railway operators [1].Precise measurement of train delay severity is essential to identify critical areas, devise mitigation strategies, and enhance overall railway performance.The insights provided will aid researchers and railway authorities in developing effective strategies to reduce delays and improve service quality [2,3]. 
Understanding and measuring train delay severity is crucial for maintaining operational efficiency in railway systems. By identifying areas with severe delays, railway authorities can implement targeted strategies to minimize disruptions, improve scheduling, and enhance overall service reliability. Train delays can lead to passenger dissatisfaction and decreased ridership. Accurate measurement of delay severity helps in addressing the root causes of delays and devising solutions to improve the passenger experience, ultimately increasing customer satisfaction. In addition, severe train delays can have safety implications, leading to overcrowding, increased risk of accidents, and potential security concerns. Proper measurement of delay severity enables early intervention and ensures a safer and more secure transportation environment. Efficient resource allocation is essential for maintaining optimal railway operations; by analyzing delay severity, authorities can allocate resources more effectively to address critical areas and mitigate the impact of severe delays. Studying train delay severity also aids in identifying areas with frequent or prolonged disruptions, and this knowledge can guide infrastructure planning and maintenance efforts to address vulnerabilities and enhance system resilience. Understanding delay severity trends helps in implementing predictive maintenance strategies, minimizing downtime, and reducing the occurrence of severe delays due to equipment failures. Dynamic measurement of train delay severity provides real-time insights into current disruptions, and these real-time data allow railway operators to make informed decisions and adjust operations promptly. In summary, the study of train delay severity measurement, in both static and dynamic ways, carries immense significance for the efficient operation, safety, and customer satisfaction of railway systems. Accurate and timely measurement of delay severity facilitates informed decision-making, resource allocation, infrastructure planning, and emergency response. Additionally, it contributes to the overall optimization of railway services, network resilience, and the improvement of passenger experiences.

This paper investigates the indicators used to gauge the severity of train delays in high-speed railways. A train delay classification model is developed employing the K-means clustering method. By identifying the optimal number of clusters, the model effectively visualizes the delay levels of high-speed railway trains and establishes the relevant classification rules. Additionally, a dynamic analysis model is proposed to assess the severity of train delays in high-speed railways, utilizing a Markov chain approach to examine the delays from the viewpoints of train stations and trains. The combination of the K-means clustering method with a Markov chain model enables a synergistic analysis of train delay severity, where K-means provides insights into static delay patterns and severity levels, while the Markov model offers a dynamic perspective on delay transitions and probabilistic forecasting. This integrated approach enhances the understanding of delay dynamics in high-speed railways and supports informed decision-making for efficient operation and service improvement.
Literature review (1) Factors influencing train delays Train delays result from a complex interplay of internal and external variables.In an insightful study [4], risk factors affecting disruptions in Shanghai's metro operation were explored, emphasizing the role of internal factors like equipment failures and operational issues in exacerbating delay severity.Similarly, in the context of benchmarking and evaluating railway operations performance, another study [5] delved into internal elements such as infrastructure conditions and station facilities, shedding light on their substantial impact on railway efficiency.Meanwhile, a notable contribution came from Ref. [6], which introduced a big data analysis approach to assess rail failure risk, with a particular focus on equipment-related factors affecting the reliability of the railway system.Ref. [7], on the other hand, employed predictive models based on high-speed train operation records to delve into the influence of primary delays, with a specific concentration on internal factors that contribute to delay severity. Furthermore, external factors, especially weather-related issues, have been thoroughly explored in various studies.For example, a study in China [8] concentrated on predicting weather-induced delays in high-speed rail and aviation, underscoring the significance of external factors such as weather conditions in disruption management.Additionally, the impact of weather conditions on rail delays was meticulously analyzed, particularly in the context of Dublin's metropolitan rail system, highlighting the pivotal role of weather as an external factor affecting rail operations [9]. These studies contribute significantly to understanding the multifaceted nature of train delays.Moreover, these delays and the disruptions they cause have been thoroughly studied in terms of their cost and potential mitigation strategies.A notable study [10] emphasized the financial consequences of equipment-related disruptions, delving into the economic impact of in-service failures of railroad rolling stock.In addition, challenges in railway disruption management and potential solutions have been explored, encompassing strategies for mitigating various factors contributing to disruptions in railway services based on the factors that lead to train delays [11]. This section explores the internal and external variables that contribute to train delays, such as equipment failures, operational issues, infrastructure conditions, and weather-related disruptions.Understanding these factors is essential for identifying the root causes of delays, which is crucial for accurate measurement of delay severity and effective delay management strategies in railway systems.By considering these influencing factors, the paper can establish a comprehensive framework for analyzing and categorizing delay severity levels based on the identified variables. (2) Delay prediction methods Efforts to predict train delays have witnessed a proliferation of diverse methodologies, each enhancing prediction accuracy and operational efficiency.For instance, Ref. 
[12] introduced a multi-output deep learning model based on Bayesian optimization for sequential train delay prediction, thereby improving the assessment of delay severity and prediction precision. In a similar vein, deep learning approaches were used to predict departure delays at origin train stations, combining route conflicts and rolling stock connections to achieve substantial improvements in prediction accuracy [13]. Machine learning techniques have also gained prominence in predicting train delays, with a study introducing network delay measurement using machine learning and successfully transitioning from controlled laboratory settings to real-world deployment in railway systems, thus highlighting the adaptability and practicality of machine learning techniques [14]. Furthermore, resilience planning took center stage in a study by Ref. [15], which developed a multistage decision optimization approach for train timetable rescheduling under uncertain disruptions in high-speed railway networks, holding promise for enhancing the resilience of railway operations and reducing the severity of delays during unexpected events.

The discussion of various methodologies for predicting train delays, including deep learning models, machine learning techniques, and data-driven approaches, highlights the importance of accurate and timely prediction of delays in railway operations. Effective delay prediction methods are essential for anticipating and mitigating delays, which are key components in measuring delay severity and implementing proactive strategies to minimize disruptions. By leveraging advanced prediction techniques, the paper can enhance the dynamic analysis model for assessing delay severity and improve the overall reliability of high-speed railway systems.

(3) Severity of delays

Understanding the severity of train delays is critical for effective management and mitigation strategies. Ref. [16] significantly contributed to this understanding by analyzing train delays caused by extreme weather events from 2001 to 2020; their research shed light on the influence of weather-related external factors on the severity of delays in the rail system. Furthermore, Ref. [17] introduced a Markov chain model for delay distribution in train schedules, assessing the effectiveness of time allowances in managing delays. This study aids in comprehending the potential ripple effect of delays throughout schedules. Additionally, the severity of delays can be quantified through delay indices, providing standardized metrics for evaluating train delays and their impact on rail operations [18]. Operations affected by delays in predictive train timetables were the focus of Ref. [19], aiding in schedule optimization and the mitigation of delay severity. The causes of delays in railway passenger transportation were analyzed in another study, providing insights into the factors affecting passenger train delays and the associated severity of these disruptions [20]. Lastly, a combined simulation-optimization approach for robust timetabling on main railway lines was introduced, optimizing timetables for reliability and thereby mitigating the severity of delays in critical railway corridors [21].
The section on the severity of delays emphasizes the significance of measuring delay severity for effective management and mitigation strategies in railway operations.By quantifying delay severity through delay indices and predictive models, the paper can provide standardized metrics for evaluating train delays and their impact on rail operations, aligning with the objective of the study to assess delay severity in high-speed railways.Analyzing the root causes of delays and understanding their severity levels are essential steps in devising targeted interventions to minimize disruptions and enhance service reliability, as highlighted in the literature review. (4) Enhancing railway performance while conserving punctuality Several studies have contributed to the measurement and management of train delays.For instance, Ref. [22] proposed a stochastic model for reliability analysis of periodic train timetables, aiding in timetable optimization and better management of delays.Moreover, a machine learning approach was adopted to predict near-term train schedule performance and delay using bi-level random forests, showcasing the potential of advanced data-driven methods in delay management [23].Furthermore, delays for passenger trains on a regional railway line were explored, providing a regional perspective on delay factors and their management [24].Punctuality development and delay explanation factors on Norwegian railways were analyzed, offering a historical perspective on delay trends and contributing to the understanding of delay management strategies [25].Focusing on determining operations affected by delay in predictive train timetables, a multistage decision optimization approach for train timetable rescheduling under uncertain disruptions in a high-speed railway network was proposed, which has significant implications for enhancing the resilience of railway operations and reducing the severity of delays during unexpected events [26].These studies collectively emphasize the importance of robust delay measurement and management strategies, showcasing various approaches, including data-driven methods and optimization techniques, to better understand, predict, and mitigate the impact of train delays on rail networks.Some studies underscore the importance of robust timetabling and optimization strategies in reducing the severity of train delays and enhancing railway performance.For example, the disturbance robustness of railway schedules was evaluated, considering various external factors that can disrupt rail services and emphasizing the need for resilient scheduling strategies [27].Focusing on enhancing railway performance in Norway, factors influencing punctuality and system reliability in the Norwegian railway network were examined [28].Resilience-based approaches and disturbance robustness are essential for mitigating the impact of train delays.A quantitative analysis approach for resilience-based urban rail systems was introduced, utilizing a hybrid knowledge-based and data-driven method to assess and enhance system resilience [29].This section underscores the importance of robust delay measurement and management strategies in enhancing railway performance and conserving punctuality.By focusing on optimizing timetables, enhancing system resilience, and reducing the severity of delays during unexpected events, the paper can contribute to the overall improvement of railway operations and service reliability, aligning with the goals of measuring high-speed train delay severity.The 
literature review provides a foundation for understanding the challenges in delay management and the potential solutions for enhancing railway performance, which can inform the development of effective delay severity measurement models and strategies in the study.

The measurement of delay severity plays a crucial role in effective delay management within railway systems. Understanding the severity of train delays is essential for identifying critical areas, devising targeted mitigation strategies, and enhancing overall railway performance. By quantifying the severity of delays, railway authorities can prioritize resources, optimize scheduling, and improve service reliability to minimize disruptions and enhance passenger satisfaction.

However, there is often a lack of comprehensive delay severity measurement in railway operations, leading to challenges in identifying and addressing the root causes of delays. Without accurate measurement of delay severity, railway operators may struggle to allocate resources efficiently, resulting in prolonged disruptions, decreased operational efficiency, and potential safety risks. The absence of robust delay severity measurement can also hinder the development of proactive maintenance strategies and the timely response to disruptions, impacting the overall resilience and reliability of railway systems.

To address this lack of delay severity measurement, it is essential for railway authorities to implement systematic approaches for quantifying and categorizing delay severity levels. By utilizing advanced data analysis techniques, such as clustering models and Markov chains, railway operators can gain insights into the impact of delays on operational efficiency and passenger experience. Incorporating delay severity measurement into decision-making processes enables proactive intervention, resource optimization, and the development of targeted strategies to mitigate the impact of delays on railway operations.

In conclusion, the importance of delay severity measurement cannot be overstated in the context of effective delay management in railway systems. By enhancing the measurement and analysis of delay severity, railway authorities can improve operational efficiency, enhance passenger satisfaction, and ensure the safety and reliability of railway services. Addressing the lack of delay severity measurement through systematic approaches and advanced data analysis techniques is essential for optimizing railway performance and resilience in the face of diverse delay scenarios.
Data overview

All operational data of the trains in this study were obtained from the CTC system of the High-Speed Railway Dispatching Center of Guangzhou Railway Group. The data pertain to the operational records of trains on the Wuhan-Guangzhou High-Speed Railway for the period from January to November 2016, comprising a total of 437,689 original records. This dataset covers 14 stations along the Wuhan-Guangzhou line.

High-speed train delay classification criteria

The crux of grading the extent of high-speed train delays lies in quantifying the scope and consequences of these delays and assessing the potential safety risks in train operations. To achieve this, it is necessary to establish a comprehensive, scientific and reasonable evaluation index system for train delay classification. This system should adhere to the principles of comprehensiveness, relative independence, reasonableness, and ease of acquisition for each indicator. The indicators should objectively and sensitively reflect the actual state of train delays while complementing and coordinating with each other to depict the delay situation from various perspectives.

In previous research, some scholars have measured the degree of delays qualitatively, for example using expert scoring methods for quantification, which is subjective and lacks analysis of actual train operation data. Other scholars have selected only a single feature variable, such as the number of delayed trains or the duration of delays, which fails to reflect the delay conditions and their evolutionary patterns comprehensively and systematically.

In this section, three dimensions are chosen to form the evaluation index system for delay classification. The first dimension relates to the inherent status of delayed trains, described by the time when the delay first occurs. The second dimension pertains to the impact of delayed trains on stations, specifically the number of stations affected by delays and the average delay per station. The third dimension concerns the impact on trains, encompassing the number of trains affected by delays and the average delay per delayed train.

Initial Delay (t1): This indicator represents the arrival delay of a train at a station due to external disturbances or interference from other trains. It reflects, to some extent, the delay situation of the train itself and the impact of disturbances on its normal operation. A longer first delay time may have a greater influence on subsequent stations and trains.

Number of Stations Affected by Delays (s1): With the delay threshold set to 1 minute, this indicator counts the total number of stations at which the arrival delay of the train exceeds the threshold. It can be used to assess the longitudinal propagation of train delays.

Average Delay per Station (t2): This indicator is the ratio of the total delay time of the train at all stations to the number of stations affected by delays. It measures the average impact of a delayed train on each station. A longer average delay indicates a more severe delay event.

Number of Trains Affected by Delays (s2): After a train is affected by external disturbances, it may cause delays for subsequent trains, known as chain delays. Here, the number of trains affected by delays includes both the trains initially affected and the subsequent trains experiencing chain delays. This indicator assesses the lateral propagation of train delays, and a higher value indicates a broader range of train delays.
Average Delay per Delayed Train (t3): This indicator is the ratio of the total delay time of the affected trains at stations to the number of delayed trains. A longer average delay per delayed train indicates a more severe level of delays.

According to the original data, the five delay classification evaluation indicators above are calculated for each train, as shown in Table 2.

Because the selected indicators have different attributes, their units and value ranges differ, making them unsuitable for direct use in clustering algorithms. Therefore, to ensure the applicability of the subsequent models and algorithms, these indicators must be normalized before modelling. The aim is to eliminate the differences between indicators as far as possible while retaining the internal information of the indicators to the maximum extent. The study found that the standardized dimensionless method produced more concentrated feature-parameter distributions for the delay classification indicator data. Therefore, the standardization method is chosen here to nondimensionalize the original data, as shown in Table 3.

High-speed train delay clustering classification model

5.1 Selection of clustering method. The K-means algorithm performs well for this kind of data analysis, places few requirements on the number of clusters, and has found extensive applications in fields such as petrochemicals, biomedicine, environmental science, and transportation research. In this section, the K-means clustering algorithm is therefore chosen to classify high-speed train delay levels. K-means is computationally efficient and can handle large datasets, making it suitable for analysing the extensive real-world train delay data from high-speed railways used in this paper. The method can quickly cluster the data points into groups based on similarity, facilitating the classification of delayed trains into different severity levels. In the context of this paper, where key indicators such as the initial delay duration, station impacts, average station delay, and cascading effects of delayed trains are considered, the scalability of K-means enables a comprehensive analysis of delay dynamics. The results produced by K-means clustering are easy to interpret, as each cluster represents a distinct group of data points with similar characteristics. This interpretability is crucial for classifying delayed trains into different severity levels based on standardized delay indicators, as done in this paper: dispatchers can swiftly identify and predict delay extents, enhancing the efficiency of mitigation strategies. K-means is a flexible clustering algorithm that allows the number of clusters to be adjusted to the data; the optimal number of clusters for categorizing delayed trains into severity levels can thus be determined, providing a customized and optimized classification tailored to the characteristics of the train delay data. K-means is also robust to noise and outliers, ensuring that the clustering results are reliable and stable. In the context of analysing train delay severity, where the data may contain variations and anomalies, this robustness helps in generating consistent and meaningful clusters that reflect the varying risk levels of delayed trains accurately. By applying the K-means clustering algorithm to the standardized delay indicators, the paper effectively classifies delayed trains into severity levels, offering
valuable insights for optimizing delay management strategies in high-speed railways.

Step 4: Calculate distances and update centroids. Compute the Euclidean distance between sample point $x_i$ and each centroid $u_j$, $d_{ij} = \lVert x_i - u_j \rVert_2 = \sqrt{\sum_{k}(x_{ik}-u_{jk})^2}$. Assign $x_i$ to the category $\lambda_i$ corresponding to the smallest $d_{ij}$ and simultaneously update the cluster partitioning. Calculate the sample mean within each new cluster and use it as the new centroid, $u_j' = \frac{1}{|c_j|}\sum_{x_i \in c_j} x_i$.

Step 5: Determine whether the iteration is complete. Compare $u_j$ with $u_j'$ to determine if they are equal. If not, return to Step 4; otherwise, stop the calculation, resulting in the final cluster partitioning.

5.2 Method for optimal cluster number selection. During K-means clustering analysis, a value of K must be predefined, which directly affects the subsequent clustering results. Therefore, it is essential to evaluate and select the optimal K value based on the clustering performance obtained under different K values. As clustering algorithms are a type of unsupervised classification, generally with no predefined category labels, external evaluation methods, internal evaluation methods, and relative evaluation methods are commonly employed to assess their effectiveness scientifically.

Given that external information for the classification of high-speed train delay levels is unavailable, this section combines an internal evaluation method (the Elbow method) and relative evaluation methods (the Silhouette Coefficient, the Calinski-Harabasz Index, and the Davies-Bouldin Index) to select the relatively optimal number of clusters.

(1) Elbow method. The Elbow method evaluates clustering results based on the Sum of Squared Errors (SSE) within the dataset and can be used to select the optimal number of clusters. SSE changes as K varies. When K is smaller than the actual number of clusters, SSE decreases rapidly as K increases, until the ideal number of clusters is reached. When K exceeds the actual number of clusters, SSE decreases only gradually and approaches a flat value as K increases. Note the extreme case in which K equals n: each data point then forms a separate cluster, giving the minimum SSE, which is not the desired outcome. The relationship between SSE and K resembles the shape of an elbow, hence the name "Elbow method". SSE is calculated as

$$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{p \in C_i} \lVert p - c_i^{*} \rVert^2 \qquad (1)$$

In Eq (1), $C_i$ represents the $i$-th cluster, $p$ is a sample point in cluster $C_i$, and $c_i^{*}$ is the mean of all samples in cluster $C_i$.

(2) Silhouette Coefficient (SC). The SC evaluates the quality of clustering results by comparing the dissimilarity of data within clusters and between clusters. Lower dissimilarity within clusters and higher dissimilarity between clusters indicate better clustering results. The SC is calculated as

$$SC(i) = \frac{b(i) - a(i)}{\max\{a(i),\; b(i)\}} \qquad (2)$$

In Eq (2), $a(i)$ represents the average distance of sample $i$ to the other samples within the same cluster, and $b(i)$ is the average distance of sample $i$ to the samples in other clusters. The value of $SC(i)$ ranges from -1 to 1. A value closer to 1 indicates tighter clustering and better results; a value closer to -1 indicates poorer clustering; and a value of 0 means that the sample lies on the boundary between two clusters.
(3) Calinski-Harabasz Index (CHI). The CHI measures the compactness of the points within clusters through the sum of squared distances between points and their cluster centres, and measures the separation of the dataset through the sum of squared distances between the cluster centres and the centre of the dataset. CHI is the ratio of separation to compactness:

$$\mathrm{CHI} = \frac{\operatorname{Tr}(S_B)}{\operatorname{Tr}(S_W)} \cdot \frac{n - K}{K - 1} \qquad (3)$$

In Eq (3), $n$ represents the number of samples, $K$ is the current number of clusters, $\operatorname{Tr}(S_W)$ is the trace of the within-cluster scatter matrix, and $\operatorname{Tr}(S_B)$ is the trace of the between-cluster scatter matrix. These traces are calculated as

$$\operatorname{Tr}(S_W) = \sum_{q=1}^{K}\sum_{x \in C_q} \lVert x - c_q \rVert^2 \qquad (4)$$

$$\operatorname{Tr}(S_B) = \sum_{q=1}^{K} n_q \lVert c_q - c \rVert^2 \qquad (5)$$

where $c_q$ and $n_q$ are the centre and size of cluster $C_q$, and $c$ is the centre of the whole dataset. The larger the calculated CHI value, the better the clustering results.

(4) Davies-Bouldin Index (DBI). The DBI reflects the average similarity between clusters. The smaller the value, the better the clustering results, with a minimum value of 0. The DBI is calculated as

$$\mathrm{DBI} = \frac{1}{K}\sum_{i=1}^{K} \max_{j \ne i} \frac{s_i + s_j}{d_{ij}} \qquad (6)$$

In Eq (6), $s_i$ represents the average distance of all samples in class $i$ to the class centre, and $d_{ij}$ represents the distance between the centres of classes $i$ and $j$.

High-speed train delay static classification results

(1) Optimal cluster number selection results. Based on Eq (1), the elbow plot shown in Fig 1 is generated. The results indicate that, using the Elbow method, the optimal number of clusters K could be 4, 5, or 6. Using Eqs (2) to (6), we evaluate the clustering results with the relative evaluation methods SC, CHI, and DBI, which yields the coefficient tables for different values of K presented in Table 4.

From Table 4 it can be observed that, when evaluating with the Silhouette Coefficient, K = 2 is optimal, while for the other two evaluation indices K = 4 is optimal. Although K = 2 yields the highest average Silhouette Coefficient, the SSE is excessively large at this point, as indicated in Fig 1; therefore, K = 2 is not suitable as the optimal cluster count. Taking all of the above into consideration, the optimal cluster count for train delays is determined to be 4, meaning that the severity of high-speed train delays will be classified into four levels: A, B, C, and D. Additionally, the centroid of each cluster and the distances between the centroids were computed, as shown in Tables 5 and 6.
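If one wanted to reproduce the cluster-number selection described in Sections 5.1-5.2, the sketch below (our illustration, not the authors' code) standardizes a placeholder indicator matrix and reports the SSE (for the elbow plot), SC, CHI and DBI for several values of K using scikit-learn; the synthetic data and the column layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

# Placeholder indicator matrix (columns: t1, s1, t2, s2, t3); real values come from Table 2.
raw = np.abs(np.random.default_rng(0).normal(size=(1000, 5)))
X = StandardScaler().fit_transform(raw)          # standardization step (Table 3)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"K={k}  SSE={km.inertia_:.1f}  "
          f"SC={silhouette_score(X, km.labels_):.3f}  "
          f"CHI={calinski_harabasz_score(X, km.labels_):.1f}  "
          f"DBI={davies_bouldin_score(X, km.labels_):.3f}")
# Choose K where the SSE curve shows an elbow and SC/CHI are high while DBI is low
# (K = 4 in the paper).
```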
(3) Visualization of high-speed train delay clustering and classification results. Having selected scientific train delay classification indicators and performed the clustering analysis to obtain reasonable results, further processing may not be necessary from a computational and data-mining perspective. However, considering the trend towards smart and data-driven high-speed train operations, it is necessary to represent the train delay clustering and classification results visually, to facilitate a clear understanding for the relevant personnel. From Table 5 it is evident that, among the five selected train delay classification indicators, the initial delay (t1), the average delay per station (t2), and the average delay per train (t3) are the three most significant indicators affecting the train delay classification, ranked from most to least significant. This analytical result aligns with the actual operational requirements of high-speed train scheduling: in railway operation scheduling, the initial delay is the most intuitive indicator for assessing the severity of train delays, and a longer initial delay indicates more severe delays; next, the impact of train delays on the stations along the route is a crucial consideration, as overall system safety and stability are paramount in railway scheduling; finally, the effect of train delays on subsequent trains should be minimized to ensure high-quality passenger services on high-speed railways.

In many fields there is a need to categorize risks into different levels. For instance, in meteorology, weather warning levels are divided into four categories (very high risk, high risk, moderate risk, and low risk) based on the potential harm, urgency, and development trend of meteorological disasters. Similarly, the response levels for accidents are categorized into four classes (extremely significant, significant, major, and ordinary) following the requirements of the "National Overall Emergency Plan for Public Emergencies" (China). Generally, these levels are represented by colours such as red, orange, yellow, and blue, reflecting the severity from high to low. In this section, to support accurate judgement by train dispatchers, the visualization of the train delay clustering and classification results is based on three dimensions: the initial delay, the average delay per station, and the average delay per train. The results are depicted using colours: red (very high risk), orange (high risk), yellow (medium risk), and blue (low risk), as shown in Fig 3. It is evident that there are significant differences in the distribution of the different categories of delayed trains across the three dimensions. For example, Class A delayed trains exhibit noticeable differences from Classes B, C, and D in terms of the average delay per station.

Analysis of high-speed train delay static grading results and grading rules

(1) Analysis of high-speed train delay static grading results. From Fig 3, several characteristics of the four classes of delayed trains can be observed.

A-class Trains: These trains have the longest first delay time, average delay per station, and average delay per train. This indicates that A-class trains have the most significant impact on the stations along the route and on subsequent trains; they are typically the initially delayed trains. In terms of train operation scheduling, the occurrence of delays in A-class trains should be considered an extremely high-risk situation that can seriously disrupt the transportation organization at various stations.
B-class Trains: While these trains have shorter average delays per station than A-class trains, they have the longest average delay per train. This suggests that the existing transportation organization capacity along the route can alleviate delays for B-class trains, but they can still have a significant impact on subsequent trains. In terms of train operation scheduling, the occurrence of delays in B-class trains should be treated as a high-risk situation.

C-class Trains: These trains have relatively shorter average delays per station and per train, but their first delay times are generally longer than those of D-class trains. This indicates that the delays of C-class trains are mainly induced by A-class and B-class trains. In terms of train operation scheduling, the occurrence of delays in C-class trains should be treated as a moderate-risk situation.

D-class Trains: These trains have the shortest first delay times, average delays per station, and average delays per train. This suggests that the delay severity of D-class trains is the lowest. In terms of train operation scheduling, D-class trains can be considered low-risk situations.

In the selected dataset, the number of delayed trains at each risk level is as follows: 9731 trains in Level I (super high risk), 363 trains in Level II (high risk), 129 trains in Level III (medium risk), and 42 trains in Level IV (low risk). It is noteworthy that the number of trains categorized as high risk may be higher than the number in the medium-risk category. Therefore, more effective train organization patterns need to be adopted to minimize the impact of delayed trains on stations and subsequent trains.

(2) Grading rules for high-speed train delay static grading. Based on the clustering results of the train delay data and considering practical operational circumstances, targeted grading rules for high-speed train delays can be developed. These rules need to provide clear distinctions between different risk levels while also being smooth in their classification. Considering the three primary indicators that affect train delay grading, a descriptive statistical analysis is conducted on the first delay time (t1), the average delay per station (t2), and the average delay per train (t3) for the different categories of delayed trains. The results are presented in Table 7.

By comprehensively analysing Tables 5-7 and comparing the grading indicators for the different categories of delayed trains, a gradual decrease in delay severity from Class A to Class D trains is evident. The corresponding delay levels are set accordingly, from Level I (super high risk) down to Level IV (low risk), with the boundaries between adjacent levels defined primarily by threshold values of the average delay per train (t3) derived from the statistics in Table 7.

This comprehensive analysis and the establishment of grading rules will help improve train scheduling and operation management by providing a systematic approach to categorizing and addressing train delays. Applying these grading rules to the delay data of all trains, a sample of the resulting classification is shown in Table 8.
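As an illustration of how the resulting grading could be applied operationally (a sketch under assumptions, not the paper's implementation), the fitted scaler and K-means model can be reused to assign a severity level to a newly delayed train; mapping clusters to Levels I-IV by the centroid value of t3 follows the observation that delay severity decreases from Class A to Class D, and all numeric inputs below are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Fit on historical indicators (placeholder data); columns: t1, s1, t2, s2, t3.
hist = np.abs(np.random.default_rng(0).normal(size=(1000, 5)))
scaler = StandardScaler().fit(hist)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(hist))

# Map cluster indices to Levels I-IV by decreasing centroid value of t3 (column 4):
order = np.argsort(-km.cluster_centers_[:, 4])            # most severe centroid first
level_of_cluster = {c: lvl for lvl, c in enumerate(order, start=1)}

def delay_level(t1, s1, t2, s2, t3):
    """Return the severity level (1 = super high risk ... 4 = low risk) for one train."""
    z = scaler.transform([[t1, s1, t2, s2, t3]])
    return level_of_cluster[int(km.predict(z)[0])]

print(delay_level(t1=45, s1=6, t2=20, s2=5, t3=18))       # hypothetical delayed train
```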
Markov model for dynamic analysis of high-speed train delays

To understand the variations in delay levels during train operation and the changes in station delay responses over different time intervals, this section performs a dynamic analysis of train delay levels both in the order of stations and in the sequence of time. A Markov chain-based model for the dynamic analysis of high-speed train delays is established as follows.

(1) Markov chain state space. Let {X(t), t ∈ T} be a Markov process, representing the evolution of train delay levels during operation and the adjustment of station delay responses. Let E be the state space, i.e., the state set of the propagation chain of train delay levels in station order and the state set of the propagation chain of station delay responses in chronological order.

For all $n \ge 2$ and $t_1 < t_2 < \dots < t_n < t$, the conditional distribution of X(t) satisfies the Markov property of Eq (7):

$$P\{X(t) \le x \mid X(t_n)=x_n,\, X(t_{n-1})=x_{n-1},\, \dots,\, X(t_1)=x_1\} = P\{X(t) \le x \mid X(t_n)=x_n\} \qquad (7)$$

In Eq (7), $t_1, t_2, \dots, t_{n-1}$ represent "past" states, $t_n$ represents the "current" state, and $t$ represents the "future" state. Thus, given the times $t_1, t_2, \dots, t_n$ and their corresponding states $\{X(t_1), X(t_2), \dots, X(t_n)\}$, the next state X(t) depends only on the previous state $X(t_n)$ and is independent of $t_1, t_2, \dots, t_{n-1}$.

Based on the characteristics of the state space, Markov chains can be classified into various types. For example, if the state space is countable, like {1, 2, 3, ..., 100}, it is known as a countable-state Markov chain; if the state space is finite, it is referred to as a finite-state Markov chain.

Among the various types of Markov chains, there is a special type that excludes the influence of time and depends solely on the current state; defined on the basis of the chain {X(n), n = 0, 1, 2, ...}, it is referred to as a homogeneous Markov chain, as Eq (8) illustrates:

$$P\{X(k+1)=j \mid X(k)=i\} = p_{ij} \qquad (8)$$

In Eq (8), for all $k \in T$, $p_{ij}$ represents the transition probability from state $i$ to state $j$, independent of $k$.

(2) Markov state transition probabilities. Calculating state transition probabilities is a crucial step in applying Markov chains to model and solve practical problems. A transition probability is the probability that a Markov chain moves from its current state to another state. Let {X(m), m = 0, 1, 2, ...} be a Markov chain with state space E = {i, j, ...}, and consider the conditional probabilities

$$p_{ij}(k) = P\{X(k+1)=j \mid X(k)=i\} \qquad (9)$$

Eq (9) is the one-step transition probability of the Markov chain moving from state $i$ to state $j$ at time $k$. In the model, this corresponds to the process of a delayed train travelling directly to the next station and changing from one delay level to another, as well as the process of a station adjusting its delay response from one time interval to the next in the presence of delayed trains.

Similarly,

$$p^{(n)}_{ij}(k) = P\{X(k+n)=j \mid X(k)=i\} \qquad (10)$$

is the n-step transition probability of the Markov chain moving from state $i$ to state $j$ after $n$ steps starting at time $k$. In the model, this corresponds to the process of a delayed train reaching another station after passing through $n$ stations and changing from one delay level to another, as well as the process of a station adjusting its delay response over several time intervals in the presence of delayed trains.
From the definition of these conditional probabilities it is evident that the n-step transition probability $p^{(n)}_{ij}(k)$ depends not only on the states $i$ and $j$ but also on the starting time $k$. State transition probabilities have the following properties: $p^{(n)}_{ij}(k) \ge 0$ and $\sum_{j \in E} p^{(n)}_{ij}(k) = 1$.

The matrix composed of the n-step transition probabilities is called the n-step transition probability matrix. The one-step transition matrix of a Markov chain can be represented as Eq (13):

$$p^{(1)}(k) = \big(p_{ij}(k)\big)_{i,j \in E} \qquad (13)$$

It is worth noting that when the Markov chain is homogeneous, $p^{(1)}(k) = p^{(1)} = p$. Therefore, the one-step transition matrix of a homogeneous Markov chain can be represented as Eq (14):

$$p = \big(p_{ij}\big)_{i,j \in E} \qquad (14)$$

The n-step transition matrix $p^{(n)}$ used in this section's model can then be calculated from Eq (15):

$$p^{(n)} = \big[p^{(1)}\big]^{n} \qquad (15)$$

(3) Markov initial distribution. The initial probabilities {p_i(0), i ∈ E} of the Markov chain at the initial time $t_0$ are defined in Eq (16):

$$p_i(0) = P\{X(t_0) = i\} \qquad (16)$$

For all $i \in E$, the conditions $p_i(0) \ge 0$ and $\sum_{i \in E} p_i(0) = 1$ hold. After n steps of transition, the state distribution $p_i(n)$ of the Markov chain at time $t_n$ can be calculated as

$$p(n) = p(0)\, p^{(n)} = p(0)\, \big[p^{(1)}\big]^{n}$$

and, for all $i \in E$, $p_i(n) \ge 0$ and $\sum_{i \in E} p_i(n) = 1$.

6.2 Dynamic analysis of train delay levels according to station sequence

(1) Determination of the state set for train delay levels. In this section, the state set of the Markov chain is divided into four train delay states, R1, R2, R3 and R4, which correspond to the four delay levels I, II, III and IV when train delay levels are considered in station order.

For this analysis, actual operational data from delayed trains running between Guangzhou North Station and Yingde West Station during January to November 2016 are used. By calculating the state transition probabilities of the train delay levels, the aim is to understand the variations in train delay levels; this also serves to validate the model's reliability while avoiding redundant computations. The train delay states are determined according to the train delay classification rules, as illustrated in Table 9.

(2) Calculation of train delay level transition probabilities. Before calculating the transition probability matrix, it is necessary to count the frequencies of transitions between the different delay level states, which gives an n×n frequency matrix f, where n is the size of the discrete state set of the Markov chain. The corresponding transition frequency matrix can be obtained from Eq (20).

First, based on the actual train performance data (a sample of 8146 records), the overall transition probabilities of the train delay level states are calculated. Table 10 presents the statistics of transition frequencies between the different delay level states for train journeys between Guangzhou North and Qingyuan stations, and between Guangzhou North and Yingde West stations. It is evident that the transition frequencies between delay level states differ between these two intervals. The table gives the transition frequencies between the different train delay level states for journeys between stations, categorized by origin and destination station; for example, "R1 to R2" indicates a transition from delay level R1 to R2 for the specified station interval.
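The frequency-to-probability step described above can be sketched as follows (an illustration with hypothetical transition counts; the real counts come from Table 10): each row of the frequency matrix f is normalized to give the estimated one-step transition probabilities.

```python
import numpy as np

# Hypothetical observed delay-level transitions (R1..R4 encoded as 0..3) for one
# station interval; in the paper these counts come from Table 10.
transitions = [(0, 0), (1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3), (3, 2), (1, 1)]

n_states = 4
f = np.zeros((n_states, n_states))           # frequency matrix f
for i, j in transitions:
    f[i, j] += 1

# One-step transition probability matrix: normalize each row of f
row_sums = f.sum(axis=1, keepdims=True)
P1 = np.divide(f, row_sums, out=np.zeros_like(f), where=row_sums > 0)
print(P1.round(3))
```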
Based on the statistical results in Table 10, we can obtain a 4×4 frequency transition matrix, denoted f(1), for the train delay grade states between Guangzhou North Station and Qingyuan Station. In this matrix, each row vector represents the frequencies of transitions from a specific state to the other states, including itself. This matrix is then used to calculate the corresponding one-step transition probability matrix, denoted p(1)(1), as described above. Similarly, the frequency transition matrix f(2) and the one-step transition probability matrix p(1)(2) for the train delay grade states between Qingyuan Station and Yingde West Station can be obtained. Additionally, the intermediate frequency transition matrix f′(1) and the corresponding one-step transition probability matrix p′(1)(1) for the transition from Guangzhou North Station to Yingde West Station via Qingyuan can also be derived. These matrices provide further insight into the transition patterns and probabilities of train delay grades between the specified station pairs. The one-step transition probability matrix between Qingyuan and Yingde West is

$$p^{(1)}(2) = \begin{pmatrix} 1.000 & 0.000 & 0.000 & 0.000 \\ 0.014 & 0.966 & 0.014 & 0.006 \\ 0.000 & 0.015 & 0.723 & 0.252 \\ 0.000 & 0.001 & 0.008 & 0.991 \end{pmatrix}$$

In the same way, one can compute the one-step transition probability matrices for any other pair of stations. The one-step transition probability matrix p(1) represents the probability that a train in a certain delay state at station 1 moves to another delay state after travelling one interval to station 2. For instance, in p(1)(1), the element in the 2nd row and 4th column indicates the probability that a train travelling from Guangzhou North to Qingyuan changes its delay state from R2 to R4, which is 26.8 %.

The Mean Absolute Error (MAE) value (Eq (21)) can be used to quantify the differences in transition probabilities between the three one-step transition probability matrices p(1)(1), p(1)(2), and p′(1)(1). Calculating the MAE yields a value of 0.18, indicating that there is relatively little difference in the transition probabilities of delay states between the different station pairs.
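Under the homogeneity assumption, the multi-step matrices and the matrix comparison can be computed directly from the published one-step matrix; the sketch below uses the p(1)(2) values quoted above and a mean-absolute-difference function in the spirit of Eq (21) (the exact form of Eq (21) is not reproduced in the extracted text).

```python
import numpy as np

# One-step transition probability matrix between Qingyuan and Yingde West (from the text)
P1_2 = np.array([[1.000, 0.000, 0.000, 0.000],
                 [0.014, 0.966, 0.014, 0.006],
                 [0.000, 0.015, 0.723, 0.252],
                 [0.000, 0.001, 0.008, 0.991]])

# n-step matrices for a homogeneous chain: p^(n) = [p^(1)]^n  (Eq (15))
P2 = np.linalg.matrix_power(P1_2, 2)
P3 = np.linalg.matrix_power(P1_2, 3)

def mae(A, B):
    """Mean absolute difference between two transition matrices (in the spirit of Eq (21))."""
    return np.abs(A - B).mean()

print(P2.round(3))
print(f"MAE between the 1-step and 2-step matrices: {mae(P1_2, P2):.3f}")
```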
Hence, the propagation chain of train delay states in station sequence can be treated as a homogeneous Markov chain, and Eq (15) can be applied to calculate the two-step transition probability matrix. This matrix gives the probability that a train, starting from a certain delay state at station 1, is in another delay state after travelling two intervals to station 3. For the journey from Guangzhou North Station to Yingde West Station it is

$$p^{(2)}(1) = \big[p^{(1)}(1)\big]^{2} = \begin{pmatrix} 1.000 & 0.000 & 0.000 & 0.000 \\ 0.007 & 0.376 & 0.200 & 0.417 \\ 0.000 & 0.129 & 0.598 & 0.273 \\ 0.000 & 0.019 & 0.186 & 0.795 \end{pmatrix}$$

The calculation can be continued to derive the three-step transition probability matrix:

$$p^{(3)}(1) = \big[p^{(1)}(1)\big]^{3} = \begin{pmatrix} 1.000 & 0.000 & 0.000 & 0.000 \\ 0.009 & 0.248 & 0.245 & 0.499 \\ 0.000 & 0.136 & 0.498 & 0.366 \\ 0.000 & 0.033 & 0.233 & 0.734 \end{pmatrix}$$

Due to China's unique circumstances, the return of students and migrant workers to their hometowns results in a significant nationwide population movement within roughly 40 days centred around the Chinese New Year, known as the Spring Festival travel rush or "Chunyun" period. During this time there is a substantial increase in national passenger traffic, and the probability of unexpected incidents rises significantly, placing greater pressure on the punctual operation of high-speed trains and on the adjustment and recovery of delayed trains. Therefore, this section also calculates the transition probabilities of train delay levels using actual train performance data from the 2016 Spring Festival travel rush period (from January 24th to March 3rd), comprising a dataset of 3433 samples. The aim is to explore how the dynamic changes in train delay levels during this period differ from the overall patterns.

The transition frequencies between delay level states for trains running from Guangzhou North to Qingyuan stations during the Spring Festival travel rush period are calculated and summarized in Table 11 (state transition frequencies, Guangzhou North to Qingyuan; https://doi.org/10.1371/journal.pone.0301762.t011).

From Table 11, the 4×4 frequency transition matrix $f_s(1)$ of train delay level states from Guangzhou North Station to Qingyuan Station during the Spring Festival travel rush period can be obtained and, based on Eq (9), the corresponding one-step transition probability matrix $p^{(1)}_s(1)$ can be derived. The study above, based on the actual train operation data from January to March 2016, showed that the propagation chain of train delay levels in station order can be regarded as a Markov chain. Since this study's time frame includes the Spring Festival travel rush period, the propagation chain of train delay levels during this period can also be considered a homogeneous Markov chain, which allows Eq (15) to be used to compute the two-step and three-step transition probability matrices for this period as well.

An overall study of the transition probability matrices p(1)(1), p(2)(1), and p(3)(1) reveals the following:

• Level I delayed trains departing from Guangzhou North Station, even after adjustments over three station intervals, still struggle to reduce their delay levels. This indicates a severe level of delay for Level I trains, requiring more potent corrective measures.
• Level II delayed trains passing through three station intervals from Guangzhou North Station have their probabilities of transitioning to Level III and Level IV delays increased from 12.5% and 26.8% to 24.5% and 49.9%, respectively. However, it should not be disregarded that there is still a probability, albeit less than 0.1%, of Level II delayed trains transitioning back to Level I.

• For Level III delayed trains passing through three station intervals, the probability of transitioning to Level IV delays has also significantly increased, rising from 15.1% to 36.6%. However, due to factors such as insufficient attention from dispatch personnel to Level III delayed trains, the probability of transitioning back to Level II has increased by 4.2 percentage points.

• Level IV delayed trains passing through three station intervals still have a probability of over 70% of remaining at Level IV delays.

A comparison between the one-step transition probability matrices p(1)(1) and p(1)(2) for the segments between Guangzhou North Station and Qingyuan Station and between Qingyuan Station and Yingde West Station reveals:

• Neither segment effectively mitigates Level I delayed trains.

• For Level II delayed trains departing from Guangzhou North Station, the probability of a delay level reduction after reaching Qingyuan Station is 39.3%. Conversely, for Level II delayed trains departing from Qingyuan Station, the probability of delay reduction after reaching Yingde West Station is only 2%. This suggests that the capacity between Qingyuan Station and Yingde West Station might be lower than that between Guangzhou North Station and Qingyuan Station.

• For both segments, there is effective mitigation of Level III and IV delayed trains, with increased probabilities of delay reduction.

An overall study of the transition probability matrices p(1)(1), p(2)(1), and p(3)(1) during the Spring Festival travel rush period shows:

• Level I delayed trains still struggle to reduce their delay levels even after passing through three station intervals.

• For Level II, III, and IV delayed trains, the probabilities of delay reduction increase as they travel through station intervals. For instance, a Level II delayed train departing from Guangzhou North Station and passing through Qingyuan Station, Yingde West Station, and Shaoguan Station in sequence has probabilities of 34.7%, 50.1%, and 56.4%, respectively. It is also important to note that the incremental increase in the probability of delay level reduction diminishes as the train continues its journey, indicating an upper limit on the probability of delay level reduction during travel.
For Level II and Level III delayed trains, the probability of their delay levels decreasing during the Spring Festival travel rush period is lower than the overall probability of delay level reduction. For instance, during this period, when a Level III delayed train departs from Guangzhou North Station and completes a three-station-interval journey to Shaoguan Station, the probability of its delay level decreasing to Level IV is 34.8%. Over the entire dataset, however, the probability of a train departing from Guangzhou North Station and arriving at Shaoguan Station shifting from Level III to Level IV delay is 36.6%, which is 1.8 percentage points higher than during the Spring Festival travel rush period. Moreover, for Level IV delayed trains, the probability of their delay level increasing to Level III during the Spring Festival travel rush period is higher still, at 26.1%, which is 2.8 percentage points more than in the overall dynamic analysis.

The calculation of p(n)(1) can be continued to obtain the transition probabilities of train delay levels after passing through n stations when departing from Guangzhou North Station. On this basis, the prediction of train delay levels can be achieved. This enables station dispatch personnel to make effective predictions about the delay level a train will have on arrival at the station, based on its delay level before reaching the station. This proactive approach facilitates preparation for the arrival of delayed trains and ensures that efficient delay mitigation measures are in place beforehand.

Conclusions

In conclusion, this paper has investigated the indicators used to gauge the severity of train delays in high-speed railways and has proposed a train delay classification model and a dynamic analysis model to assess the severity of train delays. The train delay classification model employs the K-means clustering method to effectively visualize the delay levels of high-speed railway trains and to establish the relevant classification rules. The dynamic analysis model utilizes a Markov chain approach to examine the delays from the viewpoints of train stations and trains. Overall, this paper highlights the importance of measuring train delay severity and provides valuable insights into the static and dynamic analysis of train delays in high-speed railways. The main contributions of this paper are:

1. Integration of Static and Dynamic Analysis. One of the key innovations of the paper is the integration of static and dynamic analysis approaches to measure high-speed train delay severity. By combining these two methodologies, the paper provides a comprehensive understanding of delay dynamics in railway systems, offering insights into both infrastructure-related delays and real-time disruptions.

2. Application of the K-means Clustering Algorithm. The utilization of the K-means clustering algorithm to categorize delayed trains into four levels based on standardized delay indicators represents an innovative approach to classifying delay severity. This method enables efficient identification and prediction of delay extents, enhancing the effectiveness of mitigation strategies for railway operators.

3.
Markov Chain Approach for Dynamic Analysis. The paper introduces a dynamic analysis model utilizing a Markov chain approach to assess delay severity from the perspectives of train stations and trains. This methodology provides real-time insights into current disruptions, enabling prompt decision-making and adjustment of operations to minimize delays and improve railway efficiency.

4. Focus on Predictive Maintenance Strategies. By emphasizing predictive maintenance strategies based on trends in delay severity and equipment failures, the paper offers a solution to reduce downtime, enhance system reliability, and improve service quality in high-speed railways. This proactive approach contributes to the optimization of railway operations and the mitigation of severe delays.

5. Real-world Application and Practical Implications. The paper's focus on optimizing the management of delayed trains in operational scenarios using real-world train delay data from high-speed railways demonstrates a practical application of the proposed methodologies for railway delay severity measurement. The insights provided have direct implications for enhancing railway operational efficiency and resilience in the face of diverse delay scenarios.

The innovation of this paper lies in the novel approach of categorizing train delay severity using a combination of static and dynamic models based on real-world data from high-speed railways. By integrating static analysis with the K-means clustering algorithm and dynamic analysis using a Markov chain, the study offers a unique methodology for understanding and managing train delays. This approach provides valuable insights for optimizing railway operations and enhancing resilience in the face of diverse delay scenarios, and contributes to advancing the field of railway management and delay severity analysis, supporting railway authorities and researchers in optimizing delay mitigation strategies and improving overall railway performance.

Building on the current method, we plan to explore advanced predictive modeling techniques and to incorporate artificial intelligence algorithms to further enhance the accuracy and efficiency of delay severity measurement. By delving into these technologies, we aim to push the boundaries of innovation in railway delay management and contribute to the advancement of the field.

(2) High-speed train delay clustering and classification results based on the K-means algorithm. With K = 4, the K-means clustering algorithm was applied to the normalized values of the five delay classification indicators. This clustering was performed on the complete set of high-speed train delay data from the Wuhan-Guangzhou line; the total sample count was 10,265. Based on the clustering results, the train delay data were classified into clusters A, B, C, and D, as depicted in Fig 2.

Fig 3.
Fig 3. Visualization of the classified train delay K-means clustering analysis. https://doi.org/10.1371/journal.pone.0301762.g003

Comparing the grading indicators for the different categories of delayed trains, the following observations can be made. Class A delayed trains have the highest maximum and mean values for all indicators. Class A delayed trains also exhibit the highest standard deviation and variance, indicating relatively large and dispersed distributions of their grading indicator data. The kurtosis and skewness values for the grading indicators of Class A, B, C, and D delayed trains show an increasing trend, with Class A trains demonstrating negative kurtosis and skewness values, indicating a non-normal distribution.

The studied line serves stations including Changsha South Wuguang Plaza Station, Miluo East Station, Yueyang East Station, and Chibi North Station. There are 13 operational sections, with 70 pairs of high-speed trains operating daily on this line. The shortest headway between stations is 5 minutes, and the shortest headway between operational sections can reach 3 minutes. Table 1 illustrates an example of the raw data, which includes the date of operation, train number, station, actual arrival time, actual departure time, scheduled arrival time, scheduled departure time (with a time precision of 1 minute), and track occupancy information for each train.

Table 1. Example of raw train operational data (columns: Date, Train No., Station, Actual Arrival Time, Actual Departure Time, Scheduled Arrival Time, Scheduled Departure Time, Occupied Track). https://doi.org/10.1371/journal.pone.0301762.t001
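The n-step prediction discussed in the conclusions (the continued computation of p^(n)(1)) reduces to raising a one-step delay-level transition matrix to the n-th power. The sketch below is illustrative only: the transition probabilities are placeholders, not the values estimated from the Wuhan-Guangzhou data.

```python
# Hedged sketch of n-step delay-level prediction with a Markov chain.
# P[i][j] = probability that a train at delay level i on departure from a station
# is at delay level j at the next station. The numbers below are placeholders.
import numpy as np

LEVELS = ["I", "II", "III", "IV"]
P = np.array([
    [0.70, 0.20, 0.08, 0.02],   # from Level I
    [0.10, 0.60, 0.20, 0.10],   # from Level II
    [0.05, 0.15, 0.45, 0.35],   # from Level III
    [0.02, 0.08, 0.26, 0.64],   # from Level IV
])
assert np.allclose(P.sum(axis=1), 1.0)

def n_step(P, n):
    """Transition probabilities after passing through n stations, i.e. P^n."""
    return np.linalg.matrix_power(P, n)

# Example: a train leaves Guangzhou North at Level III; distribution three stations later.
p3 = n_step(P, 3)[LEVELS.index("III")]
for level, prob in zip(LEVELS, p3):
    print(f"P(level {level} after 3 stations) = {prob:.3f}")
```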
Piecewise Testable Tree Languages

This paper presents a decidable characterization of tree languages that can be defined by a boolean combination of Sigma1 formulas. This is a tree extension of the Simon theorem, which says that a string language can be defined by a boolean combination of Sigma1 formulas if and only if its syntactic monoid is J-trivial.

Introduction

Logics for expressing properties of labeled trees and forests figure importantly in several different areas of Computer Science. This paper is about logics on finite trees. All the logics we consider are less expressive than monadic second-order logic, and thus can be captured by finite automata on finite trees. Even with these restrictions, this encompasses a large body of important logics, such as variants of first-order logic, temporal logics including CTL* or CTL, as well as query languages used in XML.

One way of trying to understand a logic is to give an effective characterization. An effective characterization for a logic L is an algorithm which inputs a tree automaton, and says if the language recognized by the automaton can be defined by a sentence of the logic L. Although giving an effective characterization may seem an artificial criterion for understanding a logic, it has proved to work very well, as witnessed by decades of research, especially into logics for words. In the case of words, effective characterizations have been studied by applying ideas from algebra: a property of words over a finite alphabet A defines a set of words, that is a language L ⊆ A*. As long as the logic in question is no more expressive than monadic second-order logic, L is a regular language, and definability in the logic often boils down to verifying a property of the syntactic monoid of L (the transition monoid of the minimal automaton of L). This approach dates back to the work of McNaughton and Papert [11] on first-order logic over < (where < denotes the usual linear ordering of positions within a word). A comprehensive survey, treating many extensions and restrictions of first-order logic, is given by Straubing [16]. Thérien and Wilke [20,18,19] similarly study temporal logics over words.

An important early discovery in this vein, due to Simon [14], treats word languages definable in first-order logic over < with low quantifier complexity. Recall that a Σ1 sentence is one that uses only existential quantifiers in prenex normal form, e.g. ∃x∃y x < y. Simon proved that a word language is definable by a boolean combination of Σ1 sentences over < if and only if its syntactic monoid M is J-trivial. This means that for all m, m′ ∈ M, if MmM = Mm′M, then m = m′. (In other words, distinct elements generate distinct two-sided semigroup ideals.) Thus one can effectively decide, given an automaton for L, whether L is definable by such a sentence. (Simon did not discuss logic per se, but phrased his argument in terms of piecewise testable languages, which are exactly those definable by boolean combinations of Σ1 sentences.)
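Simon's criterion is mechanical to check once the syntactic monoid is available. The following sketch, a direct reading of the definition rather than code from any of the cited works, takes a finite monoid as a multiplication table and verifies that distinct elements generate distinct two-sided ideals.

```python
# Check J-triviality of a finite monoid given by its multiplication table.
# Elements are 0..n-1 and mul[a][b] is the product a*b.

def two_sided_ideal(mul, m):
    """The ideal M m M = {a*m*b : a, b in M}."""
    n = len(mul)
    return frozenset(mul[mul[a][m]][b] for a in range(n) for b in range(n))

def is_j_trivial(mul):
    """True iff distinct elements generate distinct two-sided ideals."""
    ideals = [two_sided_ideal(mul, m) for m in range(len(mul))]
    return all(ideals[i] != ideals[j]
               for i in range(len(mul)) for j in range(i + 1, len(mul)))

# Two tiny examples: U1 (identity 0, absorbing zero 1) is J-trivial,
# while the two-element group Z2 is not.
U1 = [[0, 1],
      [1, 1]]
Z2 = [[0, 1],
      [1, 0]]
print(is_j_trivial(U1), is_j_trivial(Z2))   # True False
```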
There has been some recent success in extending these methods to trees and forests.(We work here with unranked trees and forests, and not binary or ranked ones, since we believe that the definitions and proofs are cleaner in this setting.)The algebra is more complicated, because there are two multiplicative structures associated with trees and forests, both horizontal and a vertical concatenation.Benedikt and Segoufin [1] use these ideas to effectively characterize sets of trees definable by first-order logic with the parent-child relation.Bojańczyk [2] gives a decidable characterization of properties definable in a temporal logic with unary ancestor and descendant operators.Similarly Bojańczyk and Segoufin [3] and Place and Segoufin [13] provided decidable characterizations of tree languages definable in ∆ 2 (<) and FO 2 (<, < h ) where < denotes the descendant-ancestor relationship while < h denotes the sibling relationship.The general theory of the 'forest algebras' that underlie these studies is presented by Bojańczyk and Walukiewicz [6]. In the present paper we provide a further illustration of the utility of these algebraic methods by generalizing Simon's theorem from words to trees.In fact, we give several such generalizations, differing in the kinds of atomic formulas we allow in our Σ 1 sentences. In Section 2 we present our basic terminology concerning trees, forests, and logic.Initially our logic contains two orderings: the ancestor relation between nodes in a forest, and the depth-first, left-first, total ordering of the nodes of a forest.In Section 3 we describe the algebraic apparatus.This is the theory of forest algebras developed in [6]. In Section 4 we give our main result, an effective test of whether a given language is piecewise testable (Theorem 4.) The test consists of verifying that the syntactic forest algebra satisfies a particular identity.While we have to some extent drawn on Simon's original argument, the added complexity of the tree setting makes both formulating the correct condition and generalizing the proof quite nontrivial.We give a quite different, equivalent identity in Proposition 18, which makes clear the precise relation between piecewise testability for forest languages and J -triviality. In Section 5, we study in detail a variant of our logic in which the binary ancestor relation is replaced by a ternary closest common ancestor relation, and prove a version of our main theorem for this case.Section 6 is devoted to other variants: the far simpler case of languages defined by Σ 1 sentences (instead of boolean combinations thereof); the logics in which only the ancestor relation is present, and in which the horizontal ordering on siblings is present; and, since our algebraic formalism concerns forests rather than trees, the modifications necessary to obtain an effective characterization of the piecewise testable tree languages.We discuss some directions for further research in the concluding Section 7. An earlier, much abbreviated version of this paper, without complete proofs, was presented at the 2008 IEEE Symposium on Logic in Computer Science. Notation Trees, forests and contexts.In this paper we work with finite unranked ordered trees and forests over a finite alphabet A. Formally, these are expressions defined inductively as follows: for any a ∈ A, a is a tree.If t 1 , ... 
, t n is a finite sequence of trees, then t 1 + • • • + t n is a forest.If s is a forest and a ∈ A, then as is a tree.It will also be convenient to have an empty forest, that we will denote by 0, and this forest is such that a0 = a and 0+t = t +0 = t.Forests and trees alike will be denoted by the letters s, t, u, ... For example, the forest that we conventionally draw as When there is no ambiguity we use as instead of a(s).In particular bc stands for the tree whose root has label b and has a unique child of label c. The notions of node, child, parent, descendant and ancestor relations between nodes are defined in the usual way.We write x < y to say that x is a strict ancestor of y or, equivalently, that y is a strict descendant of x.We say that a sequence y 1 , ... , y n of nodes forms a chain if we have y i < y i+1 for all 1 ≤ i < n.As our forests are ordered, each forest induces a natural linear order on its set of nodes that we call the forest-order and denote by < dfs , which corresponds to the depth-first left-first traversal of the forest or, equivalently, to the order provided by the expression denoting the forest seen as a word.We write < h for the horizontal-order, i.e. x < h y expresses the fact that x is a sibling of y occurring strictly before y in the forest-order.Finally, the closest common ancestor of two nodes x, y is the unique node z that is a descendant of all nodes that are ancestors of both x and y. If we take a forest and replace one of the leaves by a special symbol , we obtain a context.This special node is called the hole of the context.Contexts will be denoted using letters p, q, r .For example, from the forest t given above, we can obtain, among others, the context A forest s can be substituted in place of the hole of a context p; the resulting forest is denoted by ps.If we take the context p above and if s = (b + ca), then This is depicted in the figure below.There is a natural composition operation on contexts: the context qp is formed by replacing the hole of q with p.This operation is associative, and satisfies (pq)s = p(qs) for all forests s and contexts p and q. We distinguish a special context, the empty context, denoted .It satisfies s = s and p = p = p for any forest s and context p. Regular forest languages.A set L of forests over A is called a forest language.There are several notions of automata for unranked ordered trees, see for instance [8, chapter 8].They all recognize the same class of forest languages, called regular, which also corresponds to definability in MSO as defined below. Piecewise testable languages.We say that a forest s is a piece of a forest t if there is an injective mapping from nodes of s to nodes of t that preserves the label of the node together with the forest-order and the ancestor relationship.An equivalent definition is that the piece relation is the reflexive transitive closure of the relation {(pt, pat) : p is a context, a is a node, t is a forest or empty} In other words, a piece of t is obtained by removing nodes from t while preserving the forest-order and the ancestor relationship.We write s t to say that s is a piece of t.In the example above, a(a + b) + c is a piece of t. We extend the notion of piece to contexts.In this case, the hole must be preserved while removing the nodes: The size of a piece is the size of the corresponding forest, i.e. 
the number of its nodes.The notions of piece for forests and contexts are related, of course.For instance, if p, q are contexts with p q, then p0 q0.Also, conversely, if s t, then there are contexts p q with s = p0 and t = q0. A forest language L over A is called piecewise testable if there exists n ≥ 0 such that membership of t in L is determined by the set of pieces of t of size n or less.Equivalently, L is a finite boolean combination of languages {t : s t}, where s is a forest.Every piecewise testable forest language is regular, since given n ≥ 0, a finite automaton can calculate on input t the set of pieces of t of size no more than n. Logic.Regularity and piecewise testability correspond to definability in a logic, which we now describe.A forest can be seen as a logical relational structure.The domain of the structure is the set of nodes.The signature contains a unary predicate P a for each symbol a of the label alphabet A, plus possibly some extra predicates on nodes, such as the descendant relationship, the forest-order or the closest common ancestor.Let Ω be a set of predicates.The predicates Ω that we use always include (P a ) a∈Σ and equality, hence we do not explicitly mention them in the sequel.We use the classical syntax and semantics for first-order logic, FO(Ω), and monadic second order logic, MSO(Ω), building on the predicates in Ω.Given a sentence φ of any of these formalisms, the set of forests that are a model for φ is called the language defined by φ.In particular a language is definable in MSO(<, < h ) iff it is regular [8, chapter 8]. A where the formula γ is quantifier-free and uses predicates from Ω. Initially we will consider two predicates on nodes: the ancestor order x < y and the forest-order x < dfs y.Later on, we will see other combinations of predicates, for instance when the closest common ancestor is added, and the forest-order is removed. It is not too hard to show that a forest language L can be defined by a Σ 1 (<, < dfs ) sentence if and only if it is closed under adding nodes, i.e. holds for all contexts p, q and forests t.Moreover this condition can be effectively decided given any reasonable representation of the language L. We will carry out the details in Section 6.1. We are more interested here in the boolean combinations of properties definable in Σ 1 (<, < dfs ).It is easy to see that: Proposition 2.1.A forest language is piecewise testable iff it is definable by a boolean combination of Σ 1 (<, < dfs ) sentences. One direction is immediate as for any forest s, the set of forests having s as a piece is easily definable in Σ 1 (<, < dfs ).For instance the sentence ∃x, y, z, u P a (x) ∧ P a (y) defines the language of forests having a(a + b) + c as a piece. For the other direction, notice that for any language definable in Σ 1 (<, < dfs ), by disambiguating the relative positions between each pair of variables, one can compute a finite set of pieces such that a forest belongs to the language iff it has one of them as a piece.For instance the sentence ∃x, y, z, u P a (x) ∧ P a (y) defines the language of forests having a(a + b) + c, c + a(a + b) or ca(a + b) as a piece.This result does not address the question of effectively determining whether a given regular forest language admits either of these equivalent descriptions.Such an effective characterization is the goal of this paper: The problem.Find an algorithm that decides whether or not a given regular forest language is piecewise testable. 
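Before turning to the algebraic machinery, the combinatorial definitions above can be made concrete. In the sketch below, a forest is represented as a nested tuple of (label, children) pairs; this encoding and the brute-force closure under single-node deletions are our own illustration of the definitions (removing a node splices its children into its place, which preserves the forest-order and the ancestor relation), not an algorithm from the paper, and they are exponential rather than practical.

```python
# A forest is a tuple of trees; a tree is (label, forest_of_children).
# Removing a node splices its children into its former position, which is the
# single-step relation used above; pieces are the closure of such deletions.

def size(forest):
    return sum(1 + size(children) for _, children in forest)

def remove_one_node(forest):
    """All forests obtained from `forest` by deleting exactly one node."""
    results = []
    for i, (label, children) in enumerate(forest):
        results.append(forest[:i] + children + forest[i + 1:])      # delete this root
        for smaller in remove_one_node(children):                   # or a node below it
            results.append(forest[:i] + ((label, smaller),) + forest[i + 1:])
    return results

def is_piece(s, t):
    """True iff forest s is a piece of forest t."""
    if s == t:
        return True
    if size(s) >= size(t):
        return False
    return any(is_piece(s, u) for u in remove_one_node(t))

def pieces_up_to(t, n):
    """All pieces of t with at most n nodes (naive closure under deletions)."""
    seen, frontier = {t}, {t}
    while frontier:
        frontier = {u for f in frontier for u in remove_one_node(f)} - seen
        seen |= frontier
    return {p for p in seen if size(p) <= n}

a, b, c = "a", "b", "c"
s = ((a, ((a, ()), (b, ()))), (c, ()))              # a(a + b) + c
t = ((a, ((c, ()), (a, ()), (b, ()))), (c, ()))     # a(c + a + b) + c
print(is_piece(s, t))                               # True: delete the inner c
```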
As noted in the introduction, the corresponding problem for words was solved by Simon, who showed that a word language L is piecewise testable if and only if its syntactic monoid M(L) is J -trivial [14]; that is, if distinct elements m, m ′ always generate distinct two-sided ideals.Note that one can test, given the multiplication table of a finite monoid M, whether M is J -trivial in time polynomial in |M|: for each m = m ′ ∈ M, one calculates the ideals MmM and Mm ′ M and then verifies that they are different.Therefore, it is decidable if a given regular word language is piecewise testable.We assume that the language L is given by its syntactic monoid and syntactic morphism, or by some other representation, such as a finite automaton, from which these can be effectively computed. We will show that a similar characterization can be found for forests; although the characterization will be more involved.For decidability, it is not important how the input language is represented.In this paper, we will represent a forest language by a morphism into a finite forest algebra that recognizes it.Forest algebras are described in the next section. Forest algebras Forest algebras.Forest algebras were introduced by Bojańczyk and Walukiewicz as an algebraic formalism for studying regular tree languages [6].Here we give a brief summary of the definition of these algebras and their important properties.A forest algebra consists of a pair (H, V ) of monoids, subject to some additional requirements, which we describe below.We write the operation in V multiplicatively and the operation in H additively, although H is not assumed to be commutative.We denote the identity of V by and that of H by 0. We require that V act on the left of H.That is, there is a map such that w (vh) = (wv)h for all h ∈ H and v, w ∈ V .We further require that this action be monoidal, that is, for all h ∈ H, and that it be faithful, that is, if vh = wh for all h ∈ H, then v = w . We further require that for every g ∈ H, V contains elements ( + g) and (g + ) such that ( + g)h = h + g, (g + )h = g + h for all h ∈ H. Observe, in particular, that for all g, h ∈ H, (g + )(h + ) = (g + h) + , so that the map h → h + is a morphism embedding H as a submonoid of V . Let A be a finite alphabet, and let us denote by H A the set of forests over A, and by V A the set of contexts over A. Clearly H A forms a monoid under +, V A forms a monoid under composition of contexts (the identity element is the empty context ), and substitution of a forest into a context defines a left action of V A on H A .It is straightforward to verify that this action makes (H A , V A ) into a forest algebra, which we denote A ∆ .If (H, V ) is a forest algebra, then every map f from A to V has a unique extension to a forest algebra morphism α : A ∆ → (H, V ) such that α(a ) = f (a) for all a ∈ A. In view of this universal property, we call A ∆ the free forest algebra on A. We say that a forest algebra (H, V ) recognizes a forest language L ⊆ H A if there is a morphism α : A ∆ → (H, V ) and a subset X of H such that L = α −1 (X ).We also say that the morphism α recognizes L. It is easy to show that a forest language is regular if and only if it is recognized by a finite forest algebra. Given L ⊆ H A we define an equivalence relation ∼ L on H A by setting s ∼ L s ′ if and only if for every context p ∈ V A , ps and ps ′ are either both in L or both outside of L. 
We further define an equivalence relation on V A , also denoted ∼ L , by p ∼ L p ′ if for all s ∈ H A , ps ∼ L p ′ s.This pair of equivalence relations defines a congruence of forest algebras on A ∆ .The quotient (H L , V L ) is called the syntactic forest algebra of L. The projection morphism of A ∆ onto (H L , V L ) is denoted α L and called the syntactic morphism of L. α L always recognizes L and it is easy to show that L is regular iff (H L , V L ) is finite. Idempotents and aperiodicity.We recall the well known notions of idempotent and aperiodicity.If M is a finite monoid and m ∈ M, then there is a unique element e = m n , where n > 0, such that e is idempotent, i.e., e 2 = e.If we take a common multiple of these exponents n over all m ∈ M, we obtain an integer ω > 0 such that m ω is idempotent for every m ∈ M. Observe that while infinitely many different values of ω have this property with respect to M, the value of m ω is uniquely determined for each m ∈ M. Let (H, V ) be a forest algebra.Since we write the operation in H additively, we denote powers of h ∈ H by n • h, where n ≥ 0. As noted above, H embeds in V , so any ω > 0 that yields idempotents for V serves as well for H.That is, there is an integer ω > 0 such that v ω is idempotent for all v ∈ V , and ω • h is idempotent for all h ∈ H. We say that a finite monoid M is aperiodic if it contains no nontrivial groups.Since the set of elements of the form m ω m k for k ≥ 0 is a group, aperiodicity is equivalent to having m ω = m ω+1 for all m ∈ M. In this case we can take ω = |M|.All the finite monoids that we encounter in this paper are aperiodic.In particular, every J -trivial monoid is aperiodic, because all elements of a group in a finite monoid generate the same two-sided ideal. Pieces.Recall that in Section 2, we defined the piece relation for contexts in the free forest algebra.We now extend this definition to an arbitrary forest algebra (H, V ).The general idea is that a context v ∈ V is a piece of a context w ∈ V , denoted by v w , if one can construct a term (using elements of H and V ) which evaluates to w , and then take out some parts of this term to get v. Let (H, V ) be a forest algebra.We say v ∈ V is a piece of w ∈ V , denoted by v w , if α(p) = v and α(q) = w hold for some morphism and some contexts p q over A. The relation is extended to H by setting g h if g = v0 and h = w 0 for some contexts v w . As we will see in the proof of Lemma 3.1, in the above definition, we can replace the term "some morphism" by "any surjective morphism".The following example shows that although the piece relation is transitive in the free algebra A ∆ , it may no longer be so in a finite forest algebra. Example: Consider the syntactic algebra of the language {abcd }, which contains only one forest, which in turn has just one path, labeled by abcd .The context part of the syntactic algebra has twelve elements: an error element ∞, and one element for each infix of abcd .We have a aa = ∞ = bd bcd but we do not have a bcd . We will now show that in a finite forest algebra, one can compute the relation in time polynomial in |V |.The idea is to use a different but equivalent definition.Let R be the smallest relation on V that satisfies the following rules, for all v, v ′ , w , w ′ ∈ V : Over any finite forest algebra the relations R and are the same. 
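Corollary 3.2 below rests on computing R by saturation. The concrete inference rules of Lemma 3.1 are not reproduced in this extraction, so the following sketch only shows the shape of such a fixpoint computation, with the rules abstracted as functions; the example rule (closure under composition) is consistent with the proof's use of a rule for composed contexts, but the authoritative rule set is the one given in the paper.

```python
# Generic saturation sketch: start from the base pairs and apply closure rules
# until nothing new is derivable. Each pass is polynomial and the relation can
# grow at most |V|**2 times, so the whole computation is polynomial in |V|.

def saturate(base_pairs, rules):
    """Least relation containing base_pairs and closed under the given rules."""
    relation = set(base_pairs)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(relation) - relation
            if new:
                relation |= new
                changed = True
    return relation

def composition_rule(mul):
    """Hypothetical rule shape: if (v, w) and (v', w') are in R, so is (v v', w w')."""
    def rule(rel):
        return {(mul[v][v2], mul[w][w2]) for (v, w) in rel for (v2, w2) in rel}
    return rule
```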
In any finite algebra, the relation R can be computed by applying the rules until no new relations can be added.This gives the following corollary: Corollary 3.2.In any given finite forest algebra, the relation on contexts (also on forests) can be calculated in polynomial time. Proof of Lemma 3.1.We first show the inclusion of R in .Let α : A ∆ → (H, V ) be any surjective morphism.A simple induction on the number of steps used to derive v R w , produces contexts p q with α(p) = v and α(q) = w .The surjectivity of α is necessary for starting the induction in the case R v. For the opposite inclusion, suppose v w .Then there is a morphism α : A ∆ → (H, V ) and contexts p q such that v = α(p), w = α(q).We will show that α(p) R α(q) by induction on the size of p: • If p is the empty context, then the result follows thanks to the first rule in the definition of R. If p = a then from p q it follows that q = q 1 aq 2 for some contexts q 1 , q 2 and using the first three rules in the definition of R we get that and hence p R q. • If there is a decomposition p = p 1 p 2 where p 1 and p 2 are not empty contexts, then from p q there must be a decomposition q = q 1 q 2 with p 1 q 1 and p 2 q 2 .By induction we get that α(p 1 ) R α(q 1 ) and α(p 2 ) R α(q 2 ).Then α(p) R α(q) follows by using the third rule in the definition of R. • Suppose now p = s + or p = + s.We can assume that s is a tree, since otherwise the context p can be decomposed as (s 1 + )(s 2 + ).Since s is a tree, it can be decomposed as a(p ′ 0), with a being a context with a single letter and the hole below and p ′ a context smaller than p.By inspecting the definition of , there must be some decomposition q = q 0 (a(q ′ 0) + q 1 ) or q = q 0 (q 1 + a(q ′ 0)), with p ′ q ′ .By the induction assumption, α(p ′ ) R α(q ′ ).From this the result follows by applying rules three, four and five in the definition of R.This argument shows that if v w with respect to a particular morphism α, then v R w and consequently v w with respect to every morphism.Thus we have also established the claim made above that the relation on H is independent of the underlying morphism. Piecewise Testable Languages The main result in this paper is a characterization of piecewise testable languages: Theorem 4.1.A forest language is piecewise testable if and only if its syntactic algebra satisfies the identity The identity (4.1) is illustrated in Figure 1.In view of Corollary 3.2, an immediate consequence of Theorem 4.1 is that piecewise testability is a decidable property.Corollary 4.2.It is decidable if a regular forest language is piecewise testable. Proof.We assume the language is given by its syntactic forest algebra, which can be computed in polynomial time from any recognizing forest algebra.The new identities can easily be verified in time polynomial in |V L | by enumerating all the elements of V L . The above procedure gives an exponential upper bound for the complexity in case the language is represented by a deterministic or even nondeterministic automaton, since there is an exponential translation from automata into forest algebras.We do not know if this upper bound is optimal.In contrast, for languages of words, when the input language is represented by a deterministic automaton, there is a polynomial-time algorithm for determining piecewise testability [15]. 
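Concretely, the decision procedure of Corollary 4.2 enumerates the finite vertical monoid V_L and tests the identity on all pairs related by the piece relation. The displayed identity (4.1) is not reproduced in this extraction; the sketch below assumes, based on its uses in Sections 4.2 and 4.3 (u^ω v = u^ω and v u^ω = u^ω for v a piece of u), that it has this two-sided form, and the original paper should be consulted for the authoritative statement.

```python
# Sketch of the decidability check of Corollary 4.2 over the vertical monoid V_L.
# Assumed form of identity (4.1):  u^ω v = u^ω = v u^ω  whenever v is a piece of u.
# mul is the multiplication table of V_L; piece_pairs contains all pairs (v, u)
# with v a piece of u, e.g. as computed by the saturation of Corollary 3.2.

def idempotent_power(mul, u):
    """The unique idempotent among the powers u, u^2, u^3, ... (this is u^ω)."""
    e = u
    while mul[e][e] != e:
        e = mul[e][u]
    return e

def satisfies_identity(mul, piece_pairs):
    for v, u in piece_pairs:
        e = idempotent_power(mul, u)           # e = u^ω
        if mul[e][v] != e or mul[v][e] != e:   # u^ω v = u^ω and v u^ω = u^ω ?
            return False
    return True
```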
In Sections 4.1 and 4.2, we prove both implications of Theorem 4.1.Finally, in Section 4.3, we give an equivalent statement of Theorem 4.1, where the relation is not used.But before we prove the theorem, we would like to show how it relates to the characterization of piecewise testable word languages given by Simon. Let M be a monoid.For m, n ∈ M, we write m ⊑ n if m is a-not necessarily connected-subword of n, i.e. there are elements n 1 , ... , We claim that, using this relation, the word characterization can be written in a manner identical to Theorem 4.1: Theorem 4.3.A word language is piecewise testable if and only if its syntactic monoid satisfies the identity Proof.Recall that Simon's theorem says a word language is piecewise testable if and only if its syntactic monoid is J -trivial.Therefore, we need to show J -triviality is equivalent to (4.2).We use an identity known to be equivalent to J -triviality (see, for instance, [9], Sec.V.3.): 3) Since the above identity is an immediate consequence of (4.2), it suffices to derive (4.2) from the above.We only show n ω m = n ω .As we assume m ⊑ n, there are decompositions By induction on i, we show The result then follows immediately.The base i = 0, is immediate.In the induction step, we use the induction assumption to get: By applying (4.3), we have and therefore Note that since the vertical monoid V in a forest algebra is a monoid, it would make syntactic sense to have the relation ⊑ instead of in Theorem 4.1.Unfortunately, the "if" part of such a statement would be false, as we will show in Section 4.3.That is why we need to have a different relation on the vertical monoid, whose definition involves all parts of a forest algebra, and not just composition in the vertical monoid. 4.1. Correctness of the identities.In this section we show the easy implication in Theorem 4.1. Proof.Fix a language L that is piecewise testable and let n be such that membership of t in L only depends on the pieces of t with at most n nodes. We will use the following simple fact: Fact 4.5.If r is any context, p q are contexts and t is a forest, then rpt rqt. We only show the first part of the identity, i.e. Fix v u as above.By definition of ω, we can write the identity as an implication: Let k be as above.Let p q be contexts that are mapped to v and u respectively by the syntactic morphism of L. By unraveling the definition of the syntactic algebra, we need to show that holds for any context r and forest t.Consider now the forests rq ik t and rq ik pt for i ∈ N . As p q, thanks to Fact 4.5, we get When i is increasing, the number of pieces of size n of rq ik t is increasing.As there are only finitely many pieces of size n, for i sufficiently large, the two forests rq ik t and rq (i+1)k t have the same set of pieces of size n.Therefore, for sufficiently large i, the two forests rq ik t and rq ik pt have the same set of pieces of size n, and either both belong to L, or both are outside L.However, since α L (q k ) = α L (q k q k ), we have which gives the desired result. 4.2. Completeness of the identities.This section is devoted to showing completeness of the identities: an algebra that satisfies identity (4.1) in Theorem 4.1 can only recognize piecewise testable languages.We fix an alphabet A, and a forest language L over this alphabet, whose syntactic forest algebra (H L , V L ) satisfies the identity.We will write α rather than α L to denote the syntactic morphism of L, and sometimes use the term "type of s" for the image α(s) (likewise for contexts). 
We write s ∼ n t if the two forests s, t have the same pieces of size no more than n.Likewise for contexts.The completeness part of Theorem 4.1 follows from the following two results. Lemma 4.6.Let n ∈ N.For k sufficiently large, if two forests satisfy s ∼ k s ′ , then they have a common piece t in the same ∼ n -class, i.e. t s, t s ′ , t ∼ n s, and t ∼ n s ′ .Proposition 4.7.For n sufficiently large, pat ∼ n pt entails α(pat) = α(pt). Proof of the completeness part of Theorem 4.1.Take n as in Proposition 4.7, and then apply Lemma 4.6 to this n, yielding k.We show that s ∼ k s ′ implies s ∈ L ⇐⇒ s ′ ∈ L, which immediately shows that L is piecewise testable, by inspecting pieces of size k.Indeed, assume s ∼ k s ′ , and let t be their common piece as in Lemma 4.6.Since t is a piece of s with the same pieces of size n, it can be obtained from s by a sequence of steps where a single letter is removed in each step without affecting the ∼ n -class.Each such step preserves the type thanks to Proposition 4.7.Applying the same argument to s ′ , we get which gives the desired conclusion. We begin by showing Lemma 4.6, and then the rest of this section is devoted to proving Proposition 4.7, the more involved of the two results. Proof of Lemma 4.6.We begin with the following observation.Fact 4.8.Let n ∈ N and let K be a regular language.There is some constant k, such that every t ∈ K contains a piece s ∈ K of size at most k such that s ∼ n t. Proof of Fact 4.8.Let β : A ∆ → (H, V ) be a morphism into a finite forest algebra.Let m = |H|.There is a k such that every forest s of size greater than k can be written as s = q 0 q 1 • • • q m s ′ where s ′ is a forest and the q i are nonempty contexts: this is because every large enough forest contains either a collection of m siblings or a chain of length m.It follows that the sequence of values β(s ′ ), β(q m s ′ ), β(q m−1 q m s ′ ), ... , β(q 1 • • • q m s ′ ) contains a repeat, and so we can remove a subsequence of the q i and obtain a proper piece t of s such that β(s) = β(t).Thus every forest s has a piece t of size at most k such that β(s) = β(t). Now let (H, V ) be the direct product of the syntactic algebra (H K , V K ) and the quotient algebra A ∆ / ∼ n , and let β be the product of the syntactic moprhism of K and the natural projection onto the quotient by ∼ n .If s ∈ K then there is a piece t of s of size at most k such that β(s) = β(t).Thus t ∈ K and s ∼ n t, proving the Fact. We are now ready to prove Lemma 4.6.Fix n ∈ N. Notice that each ∼ n class is a regular language and ∼ n has finitely many classes.For each ∼ n -class K , Fact 4.8 gives a constant k K .Let k be the maximum of n and all these k K ; we claim the lemma holds for k.Indeed, take any two forests s ∼ k s ′ .Let t be a piece of s of size at most k with s ∼ n t, as given by Fact 4.8.Since s ∼ k s ′ , the forest t is also a piece of s ′ .Furthermore since ∼ k implies ∼ n (by k ≥ n), we get s ′ ∼ n s ∼ n t, which implies s ′ ∼ n t by transitivity of ∼ n . We now show Proposition 4.7.Let us fix a context p, a label a and a forest t as in the statement of the proposition.The context p may be empty, and so may be the forest t.We search for the appropriate n; the size of n will be independent of p, a, t.We also fix the types v = α(p), h = α(t) for the rest of this section.In terms of these types, our goal is to show that vh = vα(a)h.To avoid clutter, we will sometimes identify a with its image α(a), and write vh = vah instead of vh = vα(a)h. 
Let s be a forest and X be a set of nodes in s.The restriction of s to X , denoted s[X ], is the piece of s obtained by only keeping the nodes in X . Let s be a forest, X a set of nodes in s, and x ∈ X .We say that x ∈ X is a vahdecomposition of s if: a) if we restrict s to X , remove descendants of x, and place the hole in x, the resulting context has type v; b) the node x has label a; c) if we restrict s to X and only keep nodes in X that are proper descendants of x, the resulting forest has type h.Definition 4.9.A fractal of length k inside a forest s is a sequence A subfractal is extracted by only using a subsequence of the vah-decompositions.Such a subsequence is also a fractal.Proof.The proof is by induction on k.The case k = 1 is obvious.Assume the lemma is proved for k and n and consider the case k + 1. The set of forests which have a fractal of length k is a regular language, call it K .By Fact 4.8 applied to K , there is some constant m such that every forest in K has a piece that is also in K , and whose size is bounded by m.(In this reasoning, we do not use the parameter n of Fact 4.8, so we can call Fact 4.8 with n = 0).We can assume without loss of generality that m > n.In other words, if a forest has a fractal of length k, then it has a piece of size at most m which has a fractal of length k.This means that if a forest has a fractal of length k, then it has a fractal of length k which has at most m nodes (the number of nodes in a fractal is the number of nodes in the largest of its vah-decompositions). Assume now that pat ∼ m pt.By the induction assumption, as m > n, we have a fractal of length k inside pat.From the previous observation, this fractal can be assumed to be of size smaller than m.Hence we obtain a piece of pt which is a fractal of length k inside pt.Clearly, this resulting fractal can be extended to a fractal of length k + 1 by taking for X k +1 all the nodes of pat and for x k +1 the node a. ... The rest of this section is devoted to a proof of this proposition.The general idea is as follows.Using some simple combinatorial arguments, and also Ramsey's Theorem, we will show that there is also a large subfractal whose structure is very regular, or tame, as we call it.We will then apply identity (4.1) to this regular fractal, and show that a node with label a can be eliminated without affecting the type. A fractal such that for each i = 1, ... , k, the node x i is part of the context q i , see Fig. 2.This does not necessarily mean that the nodes x 1 , ... , x k form a chain, since some of the contexts q i may be of the form + t.Lemma 4.12.Let k ∈ N.For n sufficiently large, if there is a fractal of length n inside pat, then there is a tame fractal of length k inside pat. Proof.The main step is the following claim.Claim 4.13.Let m ∈ N.For n sufficiently large, for every forest s, and every set X of at least n nodes, there is a decomposition s = qq 1 • • • q m s ′ where every context q i contains at least one node from X . Proof.Let Y be the smallest set of nodes that contains X and is closed under closest common ancestors.If n is chosen large enough, either s[Y ] consist of more than m trees, or it contains a node having more than m children, or s[Y ] contains a chain of length bigger than m.We are thus left with three cases: • In the set Y , there is a path y 1 < • • • < y m+1 .For i ∈ {1, ... 
, m + 1}, consider the set of nodes Each set Y i contains at least one node of X , by definition of the set Y .The decomposition in the statement of the lemma is chosen so that context q i corresponds to the set Y i .The context q corresponds to all nodes that are not descendants of y 1 , and the forest s ′ corresponds to all descendants of y m+1 .• There is a node y ∈ Y such that at least m + 1 children of y have some node from Y (and therefore also X ) in their subtree.Let t be the forest containing all proper descendants of y.By assumption on y, the forest t can be decomposed as t = t 1 + • • • + t m+1 so that each of the forests contains at least one node from X .For the decomposition in the statement of the lemma, we define q to be the set of nodes outside t, which includes y, and we define q i to be t i + and s ′ as t m+1 .• The forest s can be decomposed as t = t 1 + • • • + t m+1 so that each of the forests contains at least one node from X .We conclude as in the previous case but with an empty q. We now come back to the proof of the lemma.For k ∈ N let n be the number defined by Claim 4.13 for m = k 2 .Let We apply Claim 4.13, with X = {x 1 , ... , x n } and obtain a decomposition s = qq 1 • • • q m s ′ .For each i = 1, ... , m the context q i contains at least one node of X .We chose arbitrarily one of them and denote it by x n i .Unfortunately, the function i → n i need not be monotone, as required in a tame fractal.However, we can always extract a monotone subsequence, since any number sequence of length k 2 is known to have a monotone subsequence of length k [10] We now assume there is a tame fractal with the node x i belonging to the context q i .The dual case when the decomposition is s = qq k • • • q 1 s ′ , corresponding to a decreasing sequence in the proof of Lemma 4.12, is treated analogously. The general idea is as follows.We will define a notion of monochromatic tame fractal, and show that vah = vh follows from the existence of large enough monochromatic tame fractal.Furthermore, a large monochromatic tame fractal can be extracted from any sufficiently large tame fractal thanks to the Ramsey Theorem. Let i, j, l be such that 0 ≤ i < j ≤ l ≤ k.We define u ijl to be the image under α of the context obtained from q i+1 • • • q j by only keeping the nodes from X l (with the hole staying where it is).We define w ijl to be the image under α of the context obtained from q i+1 • • • q j by only keeping the nodes from X l \ {x l }.Straight from this definition, as X l ⊆ X l+1 we have w ijl u ijl and u ijl u ij(l+1) (4.4) A tame fractal is called monochromatic if for all i < j < l and all i ′ < j ′ < l ′ taken from {1, ... , k}, we have Note that in the above definition, we require j < l, even though u ijl is defined even when j ≤ l. We apply the following form of Ramsey's Theorem (see, for example, Bollobas [7]): Let c, r , k be positive integers.Then there exists an integer N with the following property.Let |S| ≥ N, and suppose that the subsets of S of cardinaility r are colored with c colors.Then there exists a subset T of S with |T | ≥ k such that all subsets of T with of cardinality r have the same color. Let ω be the exponent associated to the syntactic forest algebra (H L , V L ) as defined in Section 3. If there is a tame fractal of size N inside s, then the map {i, j, l} → u ijl gives us a coloring of the cardinality 3 subsets of {1, ... 
, N} with |V L | colors.By Ramsey's Theorem, if N is sufficiently large, there is a monochromatic fractal of length k = ω + 1 inside s. We conclude by showing the following result: Lemma 4.14.If there is a monochromatic tame fractal of length Proof.Fix a monochromatic tame fractal The type of s[X k \ {x k }] is decomposed the same way, only u (k −1)kk is replaced by w (k −1)kk .Therefore, the lemma will follow if Since the fractal is monochromatic, and since k = ω + 1 the above becomes kk .By (4.4) and monochromaticity we have Therefore identity (4.1) can be applied to show that both sides are equal to u ω 01k .Note that we use only one side of identity (4.1), u ω v = u ω .We would have used the other side when considering the case when s = qq k • • • q 1 s ′ .The first reason is that identity (4.1) refers to the relation v w .One consequence is that we need to prove Corollary 3.2 before concluding that identity (4.1) can be checked effectively. The second reason is that we want to pinpoint how identity (4.1) diverges from Jtriviality of the context monoid V .Consider the forest language "all trees in the forest are of the form aa". It is easy to verify that the syntactic forest algebra of this language is such that V is J -trivial.But this language is not piecewise testable, since for any k > 0, the forests k • aa and k • aa + a contain the same pieces of size at most k, but the first of these forests is in the language, while the second is not. The proposition below identifies an additional condition (depicted in Figure 3) that must be added to J -triviality.Proposition 4.15.Identity (4.1) is equivalent to J -triviality of V , and the identity Proof.One implication is obvious: both J -triviality and (4.5) follow from (4.1).For the other implication, we assume V is J -trivial and that (4.5) holds.We must show that if v u, then We will only show the first equality, the other is done the same way.By unraveling the definition of v u, there is a morphism and two contexts p q over A such that α(p) = v and α(q) = u. The proof goes by induction on the size of p. If p can be decomposed as p 1 p 2 with p 1 , p 2 nonempty, then we have p 1 q and p 2 q and, by induction, α(q) ω • α(p 1 ) = α(q) ω , α(q) ω • α(p 2 ) = α(q) ω .Hence we get: If p consists of single node with a hole below, then we have q = q 0 pq 1 for some two contexts q 0 , q 1 , and therefore also u = u 0 vu 1 for some u 0 , u 1 .The result then follows by J -triviality of V (recall that J -triviality implies identity (4.3)): In the above, we used twice identity (4.3):Once when adding u 0 to u ω , and then when removing u 0 v from after u ω . The interesting case is when p = + s for some tree s.In this case, the context q can be decomposed as q 1 ( + t)q 2 , with s t.We have Thanks to identity (4.3), the above can be rewritten as It is therefore sufficient to show that s t implies The proof of the above equality is by induction on the number of nodes that need to be removed from t to get s.The base case s = t follows by aperiodicity of H, which follows by aperiodicity of V , itself a consequence of J -triviality.Consider now the case when t is bigger than s.In particular, we can remove a node from t and still have s as a piece.In other words, there is a decomposition t = q 0 q 1 t ′ such that s q 0 t ′ .Applying the induction assumption, we get Furthermore, applying identity (4.5), we get Combining the two equalities, we get the desired result. 
Closest common ancestor According to the definition of piece in Section 2, t = d (a + b) is a piece of the forest s = dc(a + b).In this section we consider a notion of piece which does not allow removing the closest common ancestor of two nodes, in particular removing the node c in the example above.The logical counterpart of this notion is a signature where the closest common ancestor (a three argument predicate) is added. Recall that in a forest s we say that a node z is the closest common ancestor of the nodes x and y, denoted z = x ⊓ y, if z is an ancestor of both x and y and all other nodes of s with this property are ancestors of z.Note that the ancestor relation can be defined in terms of the closest common ancestor, since a node x is an ancestor of y if and only if x is the closest common ancestor of x and y.We now say that a forest s is a cca-piece of a forest t, and write this as s t, if there is an injective mapping from nodes of s to nodes of t that preserves the label of the node together with the forest-order and the closest common ancestor relationship (the ancestor relationship is then necessarily preserved).An equivalent definition is that the cca-piece relation is the reflexive transitive closure of the relation {(pt, pat) : p is a context, a is a node, t is a tree or empty} Notice the difference with the notion of piece as defined in Section 2, where t could be an arbitrary forest.Similarly we say that a context p is a cca-piece of the context q, p q, if there is an injective mapping from p to q as above that also preserves the hole. A forest language L is called cca-piecewise testable if there exists n > 0 such that membership of t in L depends only on the set of cca-pieces of t of size n. As before, every cca-piecewise testable language is regular and an analogue of Proposition 2.1 holds as well. Recall that the ancestor relation can be expressed using the closest common ancestor relation hence Σ 1 (⊓, < dfs ) could be replaced by Σ 1 (⊓, < dfs , <) in the statement of Proposition 5.1.A first remark is that there are more cca-piecewise testable languages than there are piecewise testable ones.Hence the identities that characterize piecewise testable languages are no longer valid.In particular, in the syntactic algebra of a cca-piecewise testable language, the context monoid V may no longer be J -trivial.To see this consider the language L of forests over {a, b, c} that contain the cca-piece a(b + c).This is the language "some a is the closest common ancestor of some b and c".Then, for all n, the context p = (ab) n is not the same as the context q = (ab) n a as p(b + c) ∈ L while q(b + c) ∈ L. Hence the identity (uv) ω = (uv) ω u does not hold in the syntactic context monoid of L. However as we noted earlier, any J -trivial monoid satisfies this identity.Note however that p and q satisfy the equivalence pt ∈ L iff qt ∈ L for all trees t.The characterization below is a generalization of this idea of distinguishing trees from forests. We call a context a tree-context if it is nonempty and has one node that is the ancestor of all other nodes, including the hole. 
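The closest common ancestor itself is easy to compute once nodes are identified by their access paths (the sequence of child indices from a root). The following small sketch, again our own illustration rather than anything from the paper, returns the longest common prefix of two paths and recovers the ancestor relation from it, mirroring the remark above that x is an ancestor of y if and only if x is the closest common ancestor of x and y.

```python
# Nodes are identified by access paths: (i1, ..., ik) means "child ik of ... of
# the i1-th root of the forest". Illustrative encoding only.

def closest_common_ancestor(x, y):
    """Longest common prefix of the two paths, or None if the nodes lie in
    different trees of the forest (so no common ancestor exists)."""
    k = 0
    while k < min(len(x), len(y)) and x[k] == y[k]:
        k += 1
    return x[:k] if k > 0 else None

def is_ancestor(x, y):
    """x is a (non-strict) ancestor of y iff x is the cca of x and y."""
    return closest_common_ancestor(x, y) == x

# In a(b + c(d + e)): d has path (0, 1, 0), e has path (0, 1, 1),
# and their closest common ancestor is the c node, path (0, 1).
print(closest_common_ancestor((0, 1, 0), (0, 1, 1)))   # (0, 1)
print(is_ancestor((0,), (0, 1, 0)))                    # True
```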
In the presence of the closest common ancestor, the situation is more complicated as well: cca-piecewise testability of a forest language L is not determined by the syntactic forest algebra alone.To obtain an algebraic characterization of this class of languages, it is necessary to look at the syntactic morphism α L : A ∆ → (H L , V L ) that maps each (h, v) to its ∼ L -class, and not just the the image of this morphism.(We can be considerably more precise about this: The distinction is that the cca-piecewise testable languages do not form a variety of languages in the sense described by Eilenberg [9].In particular, this family of languages lacks the crucial property of being closed under inverse images of morphisms between free forest algebras; this fails if the morphism maps some generator a to the empty context, or to a context of the form p + s, where p is a context and s is a nonempty forest.However cca-piecewise testable languages satisfy all the other properties of varieties of languages and in particular they are closed under inverse images of homomorphisms that are "tree-preserving", i.e., the image of a is a tree-context p for all a. Varieties of forest languages are discussed in [4].) We extend the cca-piece relation to elements of a forest algebra (H, V ) in the presence of a morphism α : A ∆ → (H, V ) as follows: we write v w if there are contexts p q that are mapped to v and w respectively by the morphism α.There is a subtle difference here with the definition of defined in Section 2: the relation on V depends on the morphism α! Similarly we define the notion of g h for g, h ∈ H. The elements of V that are images under the morphism α of a tree-context are called tree-context-types.Similarly, the elements of H that are images of a tree are called treetypes (it is possible for an element to be an image of both a tree and a non-tree, but it is still called a tree-type here).Note that the notions of tree-type and of tree-context-type are relative to α. Theorem 5.2.A forest language L is cca-piecewise testable if and only if its syntactic algebra and syntactic morphism satisfy the following identities: whenever h is a tree-type or empty, and v u are tree-context-types, and Because of the finiteness of the syntactic forest algebra (H L , V L ) one can effectively decide whether an element of one of these monoids is the image of a tree-context or of a tree.Whether or not v u or g h holds can be decided in polynomial time using an algorithm as in Corollary 3.2 based on the following equivalent definition of : Let (H, V ) be a forest algebra and α a surjective morphism from A ∆ → (H, V ).Let then R be the smallest relation on V that satisfies the following rules, for all v, v ′ , w , w ′ ∈ V : For any finite (H, V ) and surjective morphism α, the relations R and are the same. Proof.We first show the inclusion of R in .A simple induction on the number of steps used to derive v R w , produces contexts p q with α(p) = v and α(q) = w .Moreover p (q) is a tree-context whenever u (v) is a tree-context-type.The surjectivity of α is necessary for starting the induction in the case R v. For the inclusion of in R, we show that α(p) R α(q) holds for all contexts p q.The proof is by induction on the size of p: • If p is the empty context, then the result follows thanks to the first rule in the definition of R. 
If p = a then from p q it follows that q = q 1 aq 2 for some contexts q 1 , q 2 and using the first and second rule in the definition of R we get that R α(q 1 ), R α(q 2 ), and α(a)Rα(a)α(q 2 ).Hence using the third rule in the definition of R we get the desired result by composition.• If there is a decomposition p = p 1 ap 2 where p 1 , p 2 are contexts, then from p q there must be a decomposition q = q 1 aq 2 with p 1 q 1 and p 2 q 2 .By induction we get that α(p 1 ) R α(q 1 ) and α(p 2 ) R α(q 2 ).Applying the second rule to the latter we get that α(ap 2 ) R α(aq 2 ).We can now apply the third rule to derive α(p) R α(q).• If there is a decomposition p = p 1 p 2 where p 1 , p 2 are non empty contexts and p 1 is of the form (s + + t), then from p q there must be a decomposition q = q 1 q 2 with p 1 q 1 and p 2 q 2 and where q 1 is of the form (s ′ + + t ′ ).We conclude by induction and using the fourth rule in the definition of R. • The remaining case is when p = (t + ) (or p = + t) where t is a tree of the form ap ′ 0 for some context p ′ .Then from p q we have q = aq ′ 0 + q 1 for some contexts q 1 , q ′ , with p ′ q ′ .By induction we have α(p ′ ) R α(q ′ ).Using the second rule we get α(ap ′ ) R α(aq ′ ). Using the last rule we get α(p) R α(aq ′ 0 + ).By the first rule we have R α(q 1 ).We conclude using the fourth rule. This implies that Theorem 5.2 yields a decidable characterization of the cca-piecewise testable languages. Corollary 5.4.It is decidable if a regular forest language is cca-piecewise testable. The proof of Theorem 5.2 follows the same outline as that of the proof of Theorem 4.1, but the details are somewhat complicated.5.1.Proof of Theorem 5.2.The proof that (5.1) and (5.2) are necessary is the same as Section 4.1.The only difference is that instead of Fact 4.5, we use the following.Fact 5.5.If r is any context, p q are tree-contexts, and t is a tree or empty, then rpt rqt. We now turn to the completeness proof in Theorem 5.2.The proof is very similar to the one of the previous section, with some subtle differences. As before, we fix a language L whose syntactic forest tree algebra (H, V ) satisfies all the identities of Theorem 5.2.We write α for the syntactic morphism. We now write s ∼ n t if the two forests s, t have the same cca-pieces of size n.Likewise for contexts. The main step is to show the following proposition. Proposition 5.6.For n sufficiently large, if t is a tree or empty, then pat ∼ n pt entails α(pat) = α(pt). Theorem 5.2 follows from the above proposition in the same way as Theorem 4.1 follows from Proposition 4.7 in the previous section.The reason why we assume that t is either a tree or empty is because when s is an cca-piece of s ′ , then s can be obtained from s ′ by iterating one of the following two operations: removing a leaf, or removing a node which has only one child.Hence during the pumping argument yielding Theorem 5.2 from Proposition 5.6 it is enough to preserve the type only for these operations.We thus concentrate on showing Proposition 5.6. 
We will now redefine the concept of fractal for our new, closest common ancestor setting.The key change is in the concept of a vah-decomposition.We change the notion of x ∈ X being a vah-decomposition of s as follows: all conditions of the old definition hold, but new conditions are added.First we require that s[X ] be a closest common ancestor piece of s, in particular this implies that if two elements of X have a closest common ancestor in s then this closest common ancestor is also in X .Moreover either x has no descendants in X ; or there is a minimal element of X that has x as a proper ancestor.In other words, the part of s[X ] that corresponds to h is either empty, or is a tree.In particular, s[X \ {x}] is a closest common ancestor piece of s[X ]; which is the key property required below.From now on, when referring to a vah-decomposition, we use the new definition.In particular in the concept of a fractal x 1 ∈ X 1 , ... , x k ∈ X k inside s we now have that for each i, x i ∈ X i is a vah-decomposition of s in the new sense. The proof of the following lemma is exactly the same as its counterpart in Section 4.2 (Lemma 4.10) and is therefore omitted.Lemma 5.7.Let k ∈ N.For n sufficiently large, if t is a tree or empty, then pat ∼ n pt entails the existence of a fractal of length k inside pat. such that either: • Each q i is a tree context whose root node belongs to X i \ {x i }. • Each q i is a context of the form + t i , with t i a forest.Lemma 5.8.Let k ∈ N.For n sufficiently large, if there is a fractal of length n inside pat, then there is a cca-tame fractal of length k inside pat. Proof.The proof is essentially the same as for the counter part in Section 4.2 (Lemma 4.12); only this time we need to be more careful to satisfy the more stringent requirements in a cca-tame fractal. Let m = 2k + 2. Using the same reasoning as in the proof of Lemma 4.12, if n is large enough then we may extract a subfractal of length m where either: • All the nodes x 1 , ... , x m have the same closest common ancestor.In this case, we can extract a cca-tame subfractal, where each context is of the form + t i .• The set Y = {y : y is a closest common ancestor of some x i , x j } contains a chain y 1 < • • • < y m , such that for each i ≤ m, the set Y i = {z : z ≥ y i and z ≥ y i+1 } contains at least one of the node x i .(There is a second case, where the nodes y 1 , ... , y m are ordered the other way: with y i+1 an ancestor of y i .This case is treated analogously.)In particular, y i is the closest common ancestor of x i and any of the nodes x i+1 , ... , x m .Since X i+1 contains both x i and x i+1 , each node y i belongs to the set X i+1 .As we may have x i = y i , the desired cca-tame fractal is obtained as follows: We use x 2 ∈ X 2 , x 4 ∈ X 4 , ... , x 2k ∈ X 2k as the fractal (recall that m = 2k + 2); while the decomposition qq 1 ... q k s ′ is chosen so that q i has its root in y 2i−1 , and its hole in y 2i+1 . Recall the definition of u ijl and w ijl as the image under α of the context obtained from q i+1 • • • q j by restricting s to X l and X l \ {x l }, respectively.Note that because of the new definition of fractals we have: if the q i are tree-contexts then u ijl , w ijl are tree-context-types (5.4)The definition of monochromaticity is the same as in the previous section and Ramsey's Theorem gives.Lemma 5.9.If there is a cca-tame fractal of sufficiently large size inside pat, then there is a monochromatic cca-tame fractal of size m = ω + 2 inside pat. 
We will now take a monochromatic cca-tame fractal, and conclude by showing that α(pat) = α(pt). Proof.Fix a monochromatic cca-tame fractal of size m = ω + 2 and let k = m − 1.Since x k ∈ X k is a vah-decomposition, the statement of the lemma follows once we show that α assigns the same type to the forest s Recall that the type of the forest s[X k ] can be decomposed as follows (the case where ) and notice that if q m is a tree-context then h is a tree-type.Therefore, the lemma will follow if Since the fractal is monochromatic, and since k = ω + 1, the above becomes 3) and monochromaticity, we have We now have two cases.If all the q i are tree-contexts, we conclude using identity (5.1) which can be applied because of (5.5), and the fact that h is then a tree-type and (5.4).If all the q i are contexts of the form + f i , we conclude from (5.5) using identity (5.2). 5.2. An equivalent set of identities.In this section, we give a set of identities that is equivalent to the one used in Theorem 5.2.The rationale is the same as in Proposition 4.15: we want to avoid the use of v w in the identities. Proposition 5.11.The conditions on the syntactic morphism stated in Theorem 5.2 are equivalent to the following equalities: (uv) ω h = (uv) ω uh (5.6) whenever h is a tree-type or empty, and whenever u and v are tree-context-types, and whenever u is a tree-context-type or empty and g, h are tree-types or empty. The rest of Section 5.2 is devoted to showing the above proposition. It is immediate to see that identity (5.1) implies identity (5.7) and that identity (5.1) implies identity (5.8).We now show that identities (5.1) and (5.2) imply identity (5.6).Let u and v be two context-types and h be a tree-type.We want to show that (uv) ω h = (uv) ω uh. We consider several cases.• In the first case we assume that u = u 1 u 2 for some tree-context-type u 2 .In that case we have: Notice now that u 2 v u 2 vu 1 u 2 vu 1 and that u 2 vu 1 u 2 u 2 vu 1 u 2 vu 1 .As u 2 is a tree-contexttype, all the context-types involved are tree-context-types and we can use identity (5.1) twice and replace u 2 v by u 2 vu 1 u 2 .This yields: And we have (uv By idempotency, this yields the desired result: As v 2 is a tree-context-type, all the context-types involved are tree-context-types and we can use identity (5.1) twice and replace v 2 by v 2 u.This yields: And we have (uv) ω h = (uv) ω (uv) ω uh = (uv) ω uh • When none of the above cases works, we must have u = f 1 + + f 2 and v = g 1 + + g 2 .In that case we have (uv) ω h = ω •(f 1 +g 1 )+h +ω •(g 2 +f 2 ), and we conclude using identity (5.2) as f 1 (f 1 + g 1 ) and f 2 (f 2 + g 2 ). We now consider the converse implication in Proposition 5.11.Assume that identities (5.6)-(5.8)hold.We show that identities (5.1) and (5.2) are satisfied.We first show the following lemma: Lemma 5.12.If u is a tree-context-type, v, w , w ′ are (not necessarily tree) context-types with w ′ w , and g, h are either tree-types or empty, then the following identity holds Note that the identity (5.2) is a direct consequence of the above, by taking u, v to be the empty context, and g, h to be the empty tree.We will also use the above lemma to show (5.1), but this will require some more work. Proof.The proof is by induction on the number of steps used to derive w ′ w . 
• Consider first the case when w , w ′ can be decomposed as Two applications of the induction assumption give us for all tree-type or empty g: As u is a tree-context-type we can iterate on (5.10) and then apply (5.11) in order to derive: As u is a tree-context-type, we can apply again (5.10) in the reverse direction in order to derive the desired result.• Consider now the case when w , w ′ can be decomposed as with w ′ 3 a tree-context-type or empty.We first use the induction assumption to get 13) By applying the identity (5.8), we get for all tree-type or empty g: 14) Note that it is important here that w ′ 3 h is either a tree-context-type or empty.Finally, we apply once again the induction assumption to get As u is a tree-context type, we can first iterate on (5.13), then iterate on (5.14) and finally applying (5.15) in order to get: Because u is a tree-context-type we can now apply (5.13) and (5.14) in reverse to eliminate the inner products and obtain the desired result. • Finally, consider the case when w , w ′ can be decomposed as In this case, the identity becomes: 0)g where v ′ = v(h + ).The result now follows by induction assumption with w 1 , w ′ 1 in place of w , w ′ . We now claim that all cases have been considered.Assume first that either w ′ or w consists of several trees.Then, by the definition of , w ′ and w can be decomposed into smaller forests and we conclude using the first bullet.We can thus assume that both w and w ′ are trees.If w ′ contains a node between its root and its hole then, by definition of , we can decompose w and w ′ and apply the second bullet.Similarly we can transform w using the first bullet until the third bullet can be applied. We now derive the first part of identity (5.1).Let u, v be tree-context-types such that v u, and let h be a tree-type.We show by induction on v that u ω h = u ω vh. where both v 1 and v 2 are tree-context-types then we consider v 2 first and v 1 next: It is important here that v 2 h is a tree-type. Therefore it is enough to consider the case where v is of the form α(a)( + f ) for some letter a and some forest-type f .In the sequel we write a instead of α(a) in order to improve readability.From v u we get u = u 1 a( + g)u 2 where u 1 and u 2 are tree-context-types and f g.Then we have from identity (5.6) for any tree-type h: and therefore, as a( + g)h is a tree-type we get for any tree-type h: Iterating on (5.16) we get: It will therefore be enough to show for f g.This, however, is a consequence of (5.9).The second part of identity (5.1), u ω = vu ω , is shown the same way using identity (5.7) instead of identity (5.6) and building on (5.17) below instead of (5.9).Lemma 5.13.If u is a tree-context-type, v, w , w ′ are (not necessarily tree) context-types with w ′ w , and g, h are either tree-types or empty, then the following identity holds (5.17) Proof.Identical to the proof of Lemma 5.12, applying the other side of identity (5.8). Variations In this section we show that the techniques we developed in the previous sections are fairly robust and can be adapted to many situations.We describe some of them. 6.1.Languages definable in Σ 1 .Here we treat the relatively simple case of languages defined by Σ 1 sentences (rather than boolean combinations of such formulas).We will prove: Theorem 6.1.It is decidable whether a given regular forest language L is definable by a Σ 1 (<, < dfs ) sentence. 
We will show how to do this using the syntactic forest algebra and syntactic morphism, although this could be carried out just as well using an automaton model.The argument we give is based on an idea of Pin [12] concerning ordered monoids. Let L ⊆ H A be a regular forest language, and let α L : A ∆ → (H L , V L ) be its syntactic morphism.We set The relations ≤ H L and ≤ V L are partial orders on H L and V L , respectively.These orders are compatible with the algebra operations in the sense that whenever Proof.This is straightforward from the definitions: Transitivity and reflexivity of ≤ H L are obvious.To prove antisymmetry, suppose Transitivity and reflexivity of ≤ V L are likewise trivial, and antisymmetry follows from the antisymmetry of ≤ H L and the faithfulness of the action of V L on H L .For the multiplicative properties, let h i , u i , v i be as in the statement of the Proposition. Theorem 6.3.Let L ⊆ H A be a regular forest language.The following are equivalent: • L is definable by a Σ 1 (<, < dfs ) formula. • For all contexts p, q and forests t, Proof.The first condition implies the second, because inserting new nodes in a forest does not change the < or < dfs relation among the already existing nodes. To show that the second condition implies the first, we use a pumping argument: Let n = |H L |.There exists K > 0 such that any forest s with at least K nodes has a factorization s = q 1 q 2 • • • q n t for some forest t, nonempty contexts q i .In particular, there is a factorization s = pqt with α L (t) = α L (qt).Thus a forest belongs to L if and only if it is obtained by successive insertion of nodes starting with a forest in L of size less than K .We can write a Σ 1 sentence φ that describes all the relations among nodes of the forests of size less than K that belong to L, and thus this sentence defines L. To show the equivalence of the second and third conditions, suppose the second condition holds.We need to show v ≤ V L for all v ∈ V .This says that for every forest s and every context p, s ∈ L implies ps ∈ L, which follows from the second condition.Conversely, suppose the third condition holds, and that p, q are contexts and t a forest with pt ∈ L. Then α L (pt) = α L (p) α L (t) ∈ X .By the multiplicative properties of the partial order, α L (p)α L (q)α L (t) ∈ X , and thus pqt ∈ L. Theorem 6.1 is an immediate corollary, since one can effectively compute the order ≤ V L given the syntactic algebra and syntactic morphism of L. 6.2.Commutative languages.In this section we consider forest languages that are commutative, i.e., closed under rearranging siblings. 
A forest t ′ is called a reordering of a forest t if it is obtained from t by rearranging the order of siblings.In other words, reordering is the least equivalence relation on forests that identifies all pairs of forests of the form p(s + t) and p(t + s).A forest language is called commutative if it is closed under reordering.In other words, a forest language is commutative if and only if its syntactic forest algebra satisfies the identity We say a forest s is a commutative piece of t, if s is a piece of some reordering of t.A forest language L is called commutative-piecewise testable if for some n ∈ N, membership of t in L depends only on the set of commutative pieces of t that have no more than n nodes.This definition also has a counterpart in logic, by removing the forest-order from the signature.The following proposition is immediate: Proposition 6.4.A forest language is commutative-piecewise testable iff it is definable by a Boolean combination of Σ 1 (<) formulas. If a language is commutative-piecewise testable, then it is clearly commutative and piecewise testable (in the more powerful, noncommutative, sense).Below we show that the converse implication is also true: Theorem 6.5.A forest language is commutative-piecewise testable if and only if it is commutative and piecewise testable. As piecewise testability is decidable, by Corollary 3.2, and commutativity is obviously decidable, the theorem above implies decidability: Corollary 6.6.It is decidable if a regular forest language is commutative-piecewise testable.Theorem 6.5 follows quite easily from: Lemma 6.7.Let n ∈ N.For k sufficiently large, if two forests have the same commutative pieces of size at most k, then they can be both reordered so that the resulting forests have the same pieces of size at most n. To see this, assume L is a commutative and piecewise testable forest language.We need to show that there is a k such that if t and s have the same commutative pieces of size k then t ∈ L iff s ∈ L. As L is piecewise testable there exists an n such that whenever s and t have the same pieces of size no more than n then t ∈ L iff s ∈ L. Let k be the number given by Lemma 6.7 for that n.Assume now that s and t have the same commutative pieces of size k.By Lemma 6.7 they can be reordered into respectively s ′ and t ′ such that s ′ and t ′ have the same pieces of size n.Hence s ′ ∈ L iff t ′ ∈ L. But as L is commutative this yields s ∈ L iff t ∈ L as desired. Proof of Lemma 6.7.Let P(s) be the set of pieces of s that have size at most n.As in Lemma 4.6, there is some k such that any forest s has a piece t s of size at most k with P(s) = P(t).Let now s 1 , s 2 be two forests with the same commutative pieces of size k.For i = 1, 2, consider the families To prove the lemma, we need to show that the families P 1 and P 2 share a common element.To this end, we show that for any X ∈ P 1 , there is some Y ∈ P 2 with X ⊆ Y , and vice versa; in particular, the families share the same maximal elements.Let then X = P(s ′ 1 ) ∈ P 1 .By the choice of k, the forest s ′ 1 has a piece t of size at most k with P(t) = X .Therefore t is a commutative piece of s 1 of size k.By assumption, the forest t is also a commutative piece of s 2 and therefore a piece of some reordering s ′ 2 of s 2 .Hence X ⊆ P(s ′ 2 ) ∈ P 2 . 
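To make the combinatorial objects in this argument concrete, the sketch below enumerates pieces and commutative pieces of small forests by brute force. It assumes, purely for illustration, that the piece relation is generated by repeatedly deleting a single node and splicing its children into the deleted node's position (mirroring the characterization given for horizontal pieces in section 6.4), and that commutative comparison can be done via a canonical reordering of siblings; the representation and function names are not from the paper.

```python
# A forest is a tuple of trees; a tree is a pair (label, children_forest).

def size(forest):
    return sum(1 + size(children) for _, children in forest)

def delete_one_node(forest):
    """Yield every forest obtained from `forest` by deleting exactly one node;
    the children of the deleted node are spliced into its former position."""
    for i, (label, children) in enumerate(forest):
        yield forest[:i] + children + forest[i + 1:]           # delete the i-th root
        for smaller in delete_one_node(children):              # delete a deeper node
            yield forest[:i] + ((label, smaller),) + forest[i + 1:]

def pieces_up_to(forest, n):
    """All forests with at most n nodes reachable by repeated node deletion."""
    seen, frontier, result = {forest}, [forest], set()
    while frontier:
        f = frontier.pop()
        if size(f) <= n:
            result.add(f)
        for g in delete_one_node(f):
            if g not in seen:
                seen.add(g)
                frontier.append(g)
    return result

def canon(forest):
    """Canonical form under reordering of siblings (sort recursively)."""
    return tuple(sorted((label, canon(children)) for label, children in forest))

def commutative_pieces_up_to(forest, n):
    return {canon(p) for p in pieces_up_to(forest, n)}

# Example: a(b + c) and a(c + b) have the same commutative pieces of size <= 2.
s = (("a", (("b", ()), ("c", ()))),)
t = (("a", (("c", ()), ("b", ()))),)
assert commutative_pieces_up_to(s, 2) == commutative_pieces_up_to(t, 2)
```

Such a brute-force enumeration is of course only feasible for very small forests; it is meant to illustrate the sets P(s) and the role of reordering in the proof above, not the effective decision procedure.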
Similarly we can define the notion of commutative-cca-piece and commutative-ccapiecewise testable forest language.Using the same arguments as above we can prove: Proposition 6.8.A forest language is commutative-cca-piecewise testable iff it is definable by a Boolean combination of Σ 1 (⊓) formulas.Theorem 6.9.A forest language is commutative-cca-piecewise testable if and only if it is commutative and cca-piecewise testable.Corollary 6.10.It is decidable if a regular forest language is commutative-cca-piecewise testable. 6.3.Tree languages.Our previous results were provided decidable characterizations for forest languages, and in fact the algebraic theory used here works best when forests, rather than trees, are treated as the fundamental object.Traditionally, though, interest has focused on trees rather than forests.Thus we want to give a decidable characterization of the piecewise testable tree languages or, equivalently, the sets of trees that are definable by Boolean combinations of Σ 1 sentences. For certain logics, like first-order logic over the descendant relation, or first-order logic over successor, one can write a sentence that says "this forest is a tree", and thus there is no need to treat tree and forest languages separately.For piecewise testability, we need to do something more, since the set of all trees over a finite alphabet A is not definable by a Boolean combination of Σ 1 sentences over any of the predicates mentioned in this paper. We define a tree piecewise testable language over a finite alphabet A to be the intersection of a piecewise testable forest language with the set of all trees over A. In other words this is the set of languages definable by a Boolean combination of Σ 1 (<, < dfs ) formulas when we interpret these formulas in trees.This is preferable to defining a piecewise testable tree language to be a tree language that is piecewise testable (as a forest language), since the latter definition would only define tree languages that are either finite or contain only chains (no branching).Moreover it would not correspond to the tree languages definable by a Boolean combination of Σ 1 (<, < dfs ) formulas.The cases when the pieces are assumed to be commutative and/or take into account closest common ancestor are defined analogously. We will obtain our decidability result by a general method for translating algebraic characterizations of classes of forest languages to characterizations of the corresponding classes of tree languages.This method will apply to all the cases we considered earlier: piecewise testable languages, cca-piecewise testable languages, and their commutative counterparts. First, suppose α : A ∆ → (H, V ) is a surjective forest algebra morphism.Recall that we denote by H A the set of all forests of A. Based on α, we define an equivalence relation on H A : We write s ∼ t if for all contexts p such that ps and pt are both trees (this happens if p is a tree-context or if p is the empty context and both t and s are trees) we have α(ps) = α(pt).Notice that if s and t are such that α(s) = α(t) then s ∼ t and that if s and t are both trees then s ∼ t implies α(s) = α(t) (take p = in the definition of ∼).It is clear that if s ∼ t then for any context q, qs ∼ qt.Thus ∼ defines a forest algebra congruence on A ∆ .Let be the projection morphism onto the quotient by this congruence.We call α ′ the tree reduction of α.From the remark above it follows that if t and s are both trees then α Let F be a family of forest languages over A. 
We say that a set F of surjective forest algebra morphisms with domain A ∆ characterizes F if a forest language L belongs to F if and only if L is recognized by some morphism in F. We will further assume that F is closed in the following sense: suppose α : A ∆ → (H 1 , V 1 ) belongs to F, and β : (H 1 , V 1 ) → (H 2 , V 2 ) is a morphism onto a finite forest algebra.Then βα belongs to F. Theorem 6.11.Let F and F be as above, and let L ⊆ H A be a set of trees.Then there is a forest language K ∈ F such that L consists of all the trees in K if and only if the tree reduction of the syntactic morphism α L of L belongs to F. Proof.Let L be a tree language, α L be its syntactic morphism and let α ′ L : A ∆ → (H ′ L , V ′ L ) be its tree reduction. Assume first that there is a forest language K such that L consists of all the trees in K .Let α K : A ∆ → (H K , V K ) be the syntactic morphism of K .By definition, α K ∈ F. Fix h ∈ H K and let t, s be forests such that α K (t) = h = α K (s).We show that α ′ L (s) = α ′ L (t).Suppose this is not the case.Then there exists a context p such that ps and pt are both trees but α L (ps) = α L (pt).By definition of α L this means that there exists a context q such that qps ∈ L but qpt ∈ L. From qps ∈ L we know that qps is a tree, hence, as pt is a tree, qpt must also be a tree.By hypothesis this implies qps ∈ K but qpt ∈ K , contradicting α K (t) = α K (s). Since V ′ L acts faithfully on H ′ L , it follows that for any contexts p and q, α K (p) = α K (q) implies α ′ L (p) = α ′ L (q).Thus α ′ L = βα K for some morphism β : . By hypothesis on F this implies that α ′ L ∈ F. Conversely, suppose that α ′ L belongs to F. Let X = α ′ L (L) and set K = (α ′ L ) −1 (X ).From the hypothesis it follows that K ∈ F. Assume that t is a tree such that α ′ L (t) ∈ X .By definition of X , there is a tree s ∈ L such that α ′ L (s) = α ′ L (t).But as α ′ L is the tree reduction of α L , we have α ′ L (s) = α ′ L (t) implies α L (s) = α L (t) and therefore t ∈ L. Hence L is the set of trees of K . As a result we have: Corollary 6.12.It is decidable if a regular tree language is tree (commutative) (cca-)piecewise testable. Proof.We only give the proof for the piecewise testable case.The other cases are handled similarly. Let F be the family of piecewise testable forest languages over A, and let F be the family of morphisms from A ∆ onto finite forest algebras that satisfy the identities of Theorem 4.1.Notice that from Proposition 4.15 it follows that if α ∈ F then βα ∈ F for all onto morphism β.Hence F and F satisfy the hypothesis of Theorem 6.11. Consequently, a regular tree language L is tree piecewise testable if and only if the tree reduction of α L belongs to F. 
It remains to show that we can effectively compute the image of the tree reduction given α L .Consider h ∈ H L and notice that all the forests in α −1 L (h) agree on α ′ L .Hence the procedure amounts to deciding which pairs of elements of the syntactic forest algebra are identified under the reduction, which we can do as long as we know which elements are images under α L of trees.It is easy to see that if an element of H L is the image of a tree, then it is the image of a tree of depth at most |V L | in which each node has at most |H L | children, so we can effectively decide this as well.6.4.Horizontal order.We could also consider other natural predicates over forests.Recall for instance the definition of horizontal-order with x < h y expresses the fact that x is a sibling of y occurring strictly before y in the forest-order. Correspondingly we say that s is a horizontal-piece of t, denoted s t, if there is an injective mapping from nodes of s to nodes of t that preserve the horizontal-order and the ancestor relationship.An equivalent definition is that the piece relation is the reflexive transitive closure of the relation {(pt, pat) : p is a context, a is a node, t is a forest or empty and either t is empty or a does not have a sibling in pat} From this notion of horizontal-piece we derive the notion of horizontal-piecewise testability as expected and the very same proofs as in Section 4 yield: Proposition 6.13.A forest language is horizontal-piecewise testable iff it is definable by a Boolean combination of Σ 1 (< h , < dfs ) formulas.Theorem 6.14.A forest language is horizontal-piecewise testable if and only if its syntactic algebra satisfies the identity u ω v = u ω = vu ω (6.1) for all u, v ∈ V L such that v u.This implies decidability of horizontal-piecewise testability and it would be interesting to see what would be the corresponding equivalent set of identities that does not make use of , in the spirit of Proposition 4.15. A straightforward adaptation of Section 5 would also give a decidable characterization of definability by a Boolean combination of Σ 1 (<, < h , ⊓). Conclusion/discussion Simon's theorem on J -trivial monoids has emerged as one of the fundamental results in the algebraic theory of automata on words.The principal contribution of the present paper has been to show that the use of forest algebras leads to a natural generalization of this theorem to trees and forests.In proving this generalization we have introduced a number of new techniques that we believe will prove useful in the continuing development of the algebraic theory of tree automata. Let us briefly indicate a few directions for further research.There is a purely algebraic formulation of Simon's theorem, stating that every finite J -trivial monoid M is the quotient of a finite monoid N that admits a partial order compatible with the multiplication in N and in which the identity is the maximum element.Our new results have a similar formulation: Every finite forest algebra satisfying the identities of Section 4 is the quotient of an algebra that admits compatible partial orders on both its horizontal and vertical components.In fact, Straubing and Thérien [17] have proved this order property of finite J -trivial monoids directly, yielding a quite different proof of Simon's theorem.It would be interesting to know whether such an argument is also possible for forest algebras. 
In the word case, the Boolean combinations of Σ 1 -definable languages form the first level of a hierarchy whose union is the class of first-order definable languages. Little is known about the higher levels of this hierarchy, apart from the fact that it is strict. Indeed, the problem of effectively characterizing the languages definable by Boolean combinations of Σ 2 -sentences has been open for many years. In contrast, the first-order definable languages themselves constitute one of the first classes for which an effective algebraic characterization was given: these are exactly the languages whose syntactic monoids are aperiodic (McNaughton and Papert [11]). The corresponding problem for trees and forests, however, remains open: we possess non-effective algebraic characterizations for the forest languages definable by first-order sentences over the ancestor relation, and for the related subclasses CTL and CTL* (see Bojańczyk et al. [5]), but the problem of finding effective tests for membership of a language in any of these classes remains one of the greatest challenges in this work.

Figure 1: The identity u ω = u ω v, with v a piece of u; the gray nodes are from v.

Lemma 4.10. Let k ∈ N. For n sufficiently large, pat ∼ n pt entails the existence of a fractal of length k inside pat.

Figure 3: The identity ω(vuh) = ω(vuh) + vh, with the white nodes belonging to u.
Towards an optimised deep brain stimulation using a large-scale computational network and realistic volume conductor model Objective. Constructing a theoretical framework to improve deep brain stimulation (DBS) based on the neuronal spatiotemporal patterns of the stimulation-affected areas constitutes a primary target. Approach. We develop a large-scale biophysical network, paired with a realistic volume conductor model, to estimate theoretically efficacious stimulation protocols. Based on previously published anatomically defined structural connectivity, a biophysical basal ganglia-thalamo-cortical neuronal network is constructed using Hodgkin–Huxley dynamics. We define a new biomarker describing the thalamic spatiotemporal activity as a ratio of spiking vs. burst firing. The per cent activation of the different pathways is adapted in the simulation to minimise the differences of the biomarker with respect to its value under healthy conditions. Main results. This neuronal network reproduces spatiotemporal patterns that emerge in Parkinson’s disease. Simulations of the fibre per cent activation for the defined biomarker propose desensitisation of pallido-thalamic synaptic efficacy, induced by high-frequency signals, as one possible crucial mechanism for DBS action. Based on this activation, we define both an optimal electrode position and stimulation protocol using pathway activation modelling. Significance. A key advantage of this research is that it combines different approaches, i.e. the spatiotemporal pattern with the electric field and axonal response modelling, to compute the optimal DBS protocol. By correlating the inherent network dynamics with the activation of white matter fibres, we obtain new insights into the DBS therapeutic action. Introduction Abnormal functioning of the basal ganglia (BG)thalamo-cortical circuit plays a central role in several movement disorders.One main characteristic of the altered circuitry in Parkinson's disease (PD) and dystonia is a change in oscillatory patterns of the cortico-BG network [20,59,86].In PD, abnormally pronounced β-activity emerges in the entire BG network.This is thought to underlie the severe motor symptoms of PD, such as hypokinesia [16,43,76].The synchronous elevated β BG rhythm should, in principle, also affect thalamic activity by sending strong inhibitory signals via GPi (globus pallidus interna) and SNr (substantia nigra pars reticulata) efferents to motor nuclei of the thalamus, i.e. ventral anterior and ventrolateral nuclei (VA and VL) [27].Studies in MPTP-treated monkeys and optogenetic mouse models on activation of inhibitory GPi-thalamic projections show that GPi stimulation results in rebound burst discharges in the thalamic nuclei [44,46] after the initial inhibition [49].The enhanced bursting activity due to hyperpolarisation of thalamic neurons comes from an increased inhibitory BG output tone and a Ttype calcium channel-dependent mechanism [1,46].In summary, this would predict a slowing, malfunctioning of thalamic oscillatory activity [27]. 
Deep brain stimulation (DBS) of the BG was shown to be an efficient treatment for movement disorders [19,80], but its therapeutic mechanism is still not fully understood.The main hypotheses put forward so far are (a) suppression of neuronal activity of the target area, (b) neuroprotective and plasticity effects (including structural plasticity), and (c) enhanced neuronal activity that results in firing pattern alterations, in particular, disruption of hypersynchronised β-band oscillations [15,32,43,46,64].Indeed, recordings in animal models [15,46,85] and observations from computational network models [28,63,64,67] suggest that DBS in the nucleus subthalamicus (STN) results in a more periodic and regular firing at higher frequencies in the BGthalamic network. Certain progress in predicting therapeutic DBS effects was achieved using the computational concept of the volume of tissue activated [11,12].Although it became a standard approach when estimating DBS outcomes with computational models, it does not consider the anatomy of brain fibre tracts and, importantly, does not take into account network effects.A qualitative advance in stimulation modelling is an estimation of DBS-induced activation of neuronal pathways and propagation of activity in the respective networks.For various neurological diseases, computational and clinical studies show a correlation between axonal firing in specific pathways and the patient condition [10,29,35,42].Nevertheless, it is not well understood how the pathways' activation affects the neural activity of the respective networks.Prediction of DBS outcome based on network models is not novel, but the induced stimulation current is usually simulated as an increased abstract activation [63,64,69].In this paper, we couple pathway activation modelling (PAM) with a computational network of the BGthalamo-cortical circuit in order to optimise DBS treatment for Parkinson's motor symptoms. Firstly, we use structural connectivity obtained in previous multi-modal imaging data analysis [13,50,62].Secondly, we simulate neuronal populations in different areas (GPe/GPi, STN, etc), placing and connecting neurons using the aforementioned structural connectivity.Each node-neuron is modelled with a variation of Hodgkin-Huxley's current-balance equations [36,64,73].Then, integrating this high dimensional nonlinear system, we simulate spatio-temporal patterns consistent with healthy (normal) and Parkinsonian states.We define a biomarker for thalamic spatio-temporal activity to quantify the pathological thalamic behaviour in the PD state.Using the network, we optimise pathway per cent activation (i.e. per cent of activated fibres) based on the deviation of the biomarker from the healthy state.After the optimal input is determined, it is assigned as the goal function for the electric field optimisation via adjustment of the electrode position and the stimulating current.The ultimate aim of the study is to obtain a DBS protocol that provides the therapeutic pattern encompassing the entire BGthalamo-cortical network. 
Data sources To describe the structural connectivity of the network, data from different studies on human brain anatomy were utilised.The BG nuclei and their substructures were taken from the DISTAL Atlas [13,22], and the Melbourne Subcortex Atlas [74] was used to delineate substructures of the thalamus, while the relevant cortical regions were selected using the Brainnetome Atlas parcellation [23].Fibre tracts classified to pathways in the vicinity of the STN were taken from [62], and projections of the VA nucleus to motor-cortical regions, required to complete the BGthalamo-cortical network, were extracted from the structural group connectome of 90 PPMI PD-patients GQI [50], post-processed in [22].All data were represented in the Montreal Neurological Institute space. Structural connectivity of the BG-thalamo-cortical network To investigate DBS network effects in PD, the classic circuit model of the BG-thalamo-cortical network was employed (figure 1(A)) [55].It consists of three inputs from motor-cortical regions: the direct pathway that passes through the striatum and continues as a GABAergic projection to the GPi and SNr; the indirect pathway that also traverses the striatum but has GABAergic projections to the GPe (external segment of the globus pallidus), which in turn inhibits the STN, GPi and SNr; and the hyperdirect pathway through which the STN receives direct excitatory input from the cortical areas.The glutamatergic efferents of the STN innervate the GPe, GPi and SNr.GABAergic projections of the two latter nuclei to the VA and the VL regions of the thalamus represent the output of the BG circuit [7].The circuit loop is closed by excitatory input of the thalamus to the motorcortical regions. The intrastructural connectivity of the network was obtained from fibre tracts that were either already assigned to pathways [62], classified based on their position relative to the involved structures (thalamo-cortical projections), or substituted by direct current modulations (striatal efferents, see below).The classified fibre tracts are presented in figure 1.Furthermore, the fibres from [62] were grouped depending on the subregions of brain structures they connect.In this study, the simulated network (figure 1(B)) does not include the substantia nigra (SN).The dopaminergic projections of the SNc (substantia nigra pars compacta) have a low level of myelination [33].They are hence less excitable (by approximately two orders of magnitude) by extracellular electric fields [72].The SNr is often paired with the GPi in the context of the direct/indirect pathways, although the fibre trajectories related to STN-DBS electrodes are distinctly different.To ensure homogeneity of the BG pathways, only those pathways present in [62] were employed, which did not include the striatonigral and the nigrothalamic projections. Modelling structural connectivity using complex network theory and pathway classification The topological structure of the brain network plays a critical role in the emergent neuronal activity and brain functionality.Structural brain connectivity features were modelled by defining neuronal elements as nodes and the interconnections among them as edges of the network.The underlying connectivity can be described and simplified using complex network theory [9,70]. 
We assimilate the network structure of section 2 to build a realistic computational model that provides insights into the mechanisms behind the brain functionality or dis-functionality (i.e.movement disorders) and predict the neuronal dynamics on multiple scales [37,38].Specifically, we build a directed network using the previously described structural connectivity.The intrastructural connectivity within the areas (STN, GPe GPi, VA, motor cortex (MC)) was defined assuming an internal complex network structure (see section 2.3.2). Construction of the inter-connectivity using structural data analysis We express the structural connectivity of section 2.2 using network theory, i.e. the connectivity is written as a graph G = (V, E), where V is the set of nodes and E represents the set of edges.The nodes of the structural network are defined as points in the threedimensional space and correspond to the starting and ending points of a fibre tract.Also, due to limitations of imaging, the network resolution is set at 1 mm 3 .The structure of the network is described with the adjacency (or connectivity) matrix A: if there is a fibre tract starting at position p i = (x i , y i , z i ) and ending at p j = (x j , y j , z j ) then A(i, j) = 1; otherwise A(i, j) = 0.At the given resolution of 1 mm 3 , the network contains 134 STN nodes, 244 and 246 GPe and GPi nodes, respectively, 833 thalamic nodes and 2070 cortical nodes [68]. Internal connectivity using small world topology In this section, we model the internal connectivity of each region (i.e.how the nodes-neurons are connected within a region) using small-world structures [4,9,70,83].In such small-world complex networks [60,83], each node interacts with its k nearest neighbours; additionally, a few randomly chosen remote connections (with a small probability p) within the area are also formed [83].Small-world structures are commonly used/emerged in computational and clinical neuroscience [4-6, 17, 24, 58, 66, 68].Note that the model only considers the respective motor parts of the different structures and not any other functionally distinct areas. The GPe/GPi, VA and MC layers are modelled as separate small-world networks.Each node increases the initial number of connections (or the degree of the node) by k = 20 degrees on average.The new link lies at a distance ⩽5 mm (defining the local neighbourhood).We assume that 95% of the neurons are interconnected locally.Nonetheless, the small-world topology [83] also allows remote connections (at a distance >5 mm) with a small probability of 5%.The phenomenological values of k and p in this model are chosen such that the network resembles both random and lattice structure as it appears in many studies in neuroscience [4-6, 17, 24, 58, 66].With the chosen values, the connectivity structure resembles real neuronal connectivity as it is shown, for example, in the work of [17] and [58]. For the STN, we chose a modified sparse small world approach inspired by experimental findings of [2,30].In our model, 80% of STN does not form local STN connections, and the remaining 20% of STN neurons have an average of 25 connections each, while few of these are randomly chosen remote connections [30,68,69]. 
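As an illustration of how such an internal wiring could be generated, the sketch below augments each node's connectivity by, on average, k = 20 links, drawing roughly 95% of them from neighbours within 5 mm and 5% from remote nodes of the same region. This is only a minimal sketch of the construction described above, assuming a simple per-node sampling scheme; the actual model, its directionality conventions and its random-number handling may differ.

```python
import numpy as np

def internal_small_world(coords, k=20, local_radius=5.0, p_remote=0.05, rng=None):
    """Add, on average, k outgoing links per node of one region:
    ~95% to neighbours within `local_radius` (mm), ~5% to remote nodes.
    `coords` is an (N, 3) array of node positions in mm."""
    rng = rng or np.random.default_rng(0)
    n = len(coords)
    adj = np.zeros((n, n), dtype=bool)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    for i in range(n):
        local = np.where((dist[i] <= local_radius) & (np.arange(n) != i))[0]
        remote = np.where(dist[i] > local_radius)[0]
        n_remote = rng.binomial(k, p_remote)      # a few long-range links
        n_local = k - n_remote                    # the rest stay in the neighbourhood
        if len(local):
            adj[i, rng.choice(local, size=min(n_local, len(local)), replace=False)] = True
        if len(remote):
            adj[i, rng.choice(remote, size=min(n_remote, len(remote)), replace=False)] = True
    return adj

# Example: 200 nodes scattered in a 10 mm cube standing in for one nucleus.
rng = np.random.default_rng(1)
A = internal_small_world(rng.uniform(0.0, 10.0, size=(200, 3)), rng=rng)
print("mean out-degree:", A.sum(axis=1).mean())
```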
Modelling the dynamics of the thalamo-cortical BG circuitry We construct neuronal populations in the different areas (GPe, GPi, STN, etc), placing and connecting neurons using the aforementioned connectivity of the augmented network.Depending on the region, each node-neuron is modelled with a variation of Hodgkin-Huxley current balance equations [36,64,73], i.e. the membrane potential of the ith node is given by: where C is the membrane capacitance and V i represents the membrane potential.The first term on the right-hand side of equation ( 1) represents the summation of ionic currents, the second, the summation of synaptic currents and the third term represents the DBS stimulus (which is simulated for all BG areas by different levels of direct current injection, although far from the stimulation site, distant nuclei are activated by synaptic connections and probably remain little affected by the stimulation electrode) or sensorimotor inputs in the cases of the thalamus and motor areas.Different areas show different properties for the currents in equation (1) The model contains the following brain regions: the nucleus STN, the globus pallidus externa (GPe) GPi, the thalamus (VA) and the MC region.We model the connection between neurons according to the augmented structural connectivity of section 2.3.2.In the supplementary section, a detailed mathematical description is given. Macroscopic biomarker of pathological thalamic activity in the Parkinsonian state The highly synchronised activity (characteristic for PD) coincides with enhanced bursting activity of the BG and thalamus [75,86].Activity in the β-band range is found in both healthy and Parkinsonian states, and it is thought to be associated with the control of ongoing motor activities [59].In the Parkinsonian state, this activity is significantly enhanced [59] [43]; however, the reason for this enhancement is not fully understood [21,59]. The increased BG inhibitory output reduces the excitability of postsynaptic thalamic neurons [44].As a result of the increased inhibitory drive on thalamic motor nuclei, thalamic neurons fire in low-frequency burst discharges [27,46].This pathological activity refers to rebound bursting [27,46,49], as thalamic neurons rhythmically recover from hyperpolarisation, involving T-type calcium channel activation [27,46], and likely also HCN channels [45].This altered behaviour is related to Parkinsonian changes, such as tremors and hypokinesia, but also arousal states in rats, monkeys, and humans [46,49].In order to quantify the thalamic activity, we define a macroscopic variable that takes into account precisely these two main characteristics of the disturbed abnormal thalamic behaviour, i.e. a decreased neuronal activity in conjunction with an increased bursting behaviour: where X represents the stimulated state (i.e.healthy (normal)/Parkinsonian/DBS), meaning that the parameters of the model are tuned in accordance to each simulated state, see also section 3.1, BR is the average number of the bursting blocks (in the thalamic area) and SP represents the average number of spikes (in the thalamus).The parameter a ∈ [0, 1] defines a convex combination and controls the behaviour of the macroscopic function f a , which summarises thalamic activity by counting the relative amount of bursting and spiking activity. 
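The following is a minimal sketch of how a single node governed by the current-balance equation (1) could be advanced in time. Only a leak current stands in for the full set of ionic currents, and the synaptic and DBS terms are passed in as callables; the actual channel kinetics, parameter values and integration scheme of the model are given in the supplementary material and are not reproduced here.

```python
import numpy as np

def integrate_node(t_end=300.0, dt=0.01, C=1.0, g_leak=0.1, E_leak=-65.0,
                   I_syn=lambda t, V: 0.0, I_dbs=lambda t: 0.0):
    """Forward-Euler integration of C dV/dt = -I_ion - I_syn + I_dbs for one node.
    Here I_ion is reduced to a leak current g_leak*(V - E_leak); in the full model
    it is the sum of the Hodgkin-Huxley-type currents of the respective region."""
    n_steps = int(t_end / dt)
    V = np.empty(n_steps)
    V[0] = E_leak
    for i in range(1, n_steps):
        t = i * dt
        I_ion = g_leak * (V[i - 1] - E_leak)
        dV = (-I_ion - I_syn(t, V[i - 1]) + I_dbs(t)) / C
        V[i] = V[i - 1] + dt * dV
    return V

# Example: a constant depolarising input standing in for a DBS/sensorimotor drive.
trace = integrate_node(I_dbs=lambda t: 2.0)
```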
Experimentally and computationally, it is challenging to identify bursts because bursting sets a parameter (mean inter-spike interval) according to intrinsic properties of the detected burst spike trains [14].Here, we follow the simplifying method of [14] to detect bursting blocks.When the mean value of several successive inter-spike intervals is not larger than 15 ms, this behaviour is defined as burst firing. Another macroscopic variable that is used in this paper is the mean voltage activity V of neurons in an area (e.g.STN); specifically, we define: The mean voltage activity V (related to the local field potential) is utilised for the characterisation of synchronised rhythm (through the Fourier spectrum) under different states (healthy, Parkinsonian or DBS). Pathway per cent activation of DBS using the network model The effectiveness of the DBS treatment depends on (a) which specific pathways are activated and (b) on the level of activation [35,40].In the network simulations, we use a one-compartment soma neuron model for all areas.Thus, the activation of an edge (i.e. an axon) due to DBS is modelled as a supra-threshold DBS stimulus on the neuron node, which is the ending point of the axon (i.e. the postsynaptic neuron). We define the per cent activation A x→y from nucleus x to y as: where N A,x→y is the number of activated edges from x → y and N x→y is the total number of edges (axons) from x to y nucleus.In this study, we analyse the following per cent activations, A 1 = A MC→STN , A 2 = A STN→GPe , A 3 = A STN→GPi , and finally A 4 = A GPi→VA .We developed an algorithm using the Metropolis optimisation [47] for randomly chosen activated nodes (described in the supplementary material) consistent with per cent activation A x→y . For each set of the parameters A = (A 1 , A 2 , A 3 , A 4 ), the model produces a different neuronal activity whose effectiveness is evaluated from the objective function.The objective function should capture the main impact of DBS, i.e. to increase the thalamic neuronal activity and simultaneously reduce thalamic bursting activity (see also section 2.5) Substituting the form of f, i.e. equation ( 2), we obtain: The values of the model parameters A i , i = 1, . .., 4, were estimated numerically by minimising objective function i.e. equation (6). The minimisation problem was solved using the MATLAB function fmincont, a gradient-based method for nonlinear problems with constraints for the function class C 1 [52].The step size tolerance was set to tol(X) = 0.001 and the function tolerance was set to tolF = 1.The maximum number of iterations was set to 100.The optimal firing rates now are referred to as A optimised .They will constitute the target vector of equation ( 7) of section 2.8 in order to compute corresponding DBS protocols. Estimating realistic pathway per cent activation While the optimised per cent activation, suggested by the network, is defined on abstract neurons, it is important to consider spatial constraints of the activation extent related to the mutual proximity of various neuronal circuits, including regions of avoidance, e.g. 
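Two of the quantities defined above are simple enough to state directly in code: the burst-detection rule (successive inter-spike intervals whose mean does not exceed 15 ms) and the per cent activation A x→y. The sketch below is illustrative only; the minimum number of spikes per burst and the tie-breaking of runs are assumptions, since only the 15 ms criterion is specified in the text.

```python
import numpy as np

def count_burst_blocks(spike_times, max_mean_isi=15.0, min_spikes=3):
    """Count burst blocks: maximal runs of successive spikes whose mean
    inter-spike interval stays at or below `max_mean_isi` (in ms).
    `min_spikes` is an assumed minimum number of spikes per burst."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    blocks, run = 0, []
    for isi in isis:
        if np.mean(run + [isi]) <= max_mean_isi:
            run.append(isi)                      # extend the current burst candidate
        else:
            blocks += (len(run) + 1 >= min_spikes)
            run = []
    return blocks + (len(run) + 1 >= min_spikes)

def percent_activation(n_activated_edges, n_total_edges):
    """A_{x->y} = N_activated / N_total for the projection from nucleus x to y."""
    return n_activated_edges / n_total_edges

# Example: a 130 Hz regular train (ISI ~ 7.7 ms) forms one long burst block,
# whereas 20 Hz tonic firing (ISI = 50 ms) contains none.
assert count_burst_blocks(np.arange(0, 300, 7.7)) == 1
assert count_burst_blocks(np.arange(0, 300, 50.0)) == 0
```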
the internal capsule implicated in motor side effects for STN-DBS [77]. Note that the external stimulation alters the transmembrane potential on both axons and somata with dendritic trees, but the former have been shown to be more excitable by external stimuli [61]. Therefore, here we quantify activation by the axonal response to DBS. In figure 1, projections that are expected to be reinforced by stimulation are shown in thick lines, and only on these projections were axon models allocated. It is important to note that the DBS-induced extracellular field might affect not only pathways projecting from/to the STN but also passing fibres, such as the pallidothalamic tract. Additionally, we monitored activation in the corticofugal tract, to control for current protocols that can cause capsular side effects, and in the associative portion of the pallidosubthalamic projections, whose stimulation might affect cognitive function.

To probe the initiation of action potentials in the neuronal tissue, a double cable model of the myelinated axon was employed [54]. This widely used model of the mammalian nerve fibre describes a detailed morphology of internodal segments and complex ion channel dynamics. Importantly, no tonic firing was modelled at this stage, so that action potentials could be elicited only by the DBS-induced field. The initial membrane potential was set to −80 mV, and fibre diameters of 5.7 µm were chosen, except for the hyperdirect pathway, where the collateral diameter was set to 3.0 µm. It should be noted that the majority of fibres in the BG are thinner [26,51]; however, larger diameters were also reported [79]. An array of previously published computational works uses fibre diameters of, or close to, 5.7 µm [31,35,42,54], and consequently we chose to adopt this value. To reduce computational costs, axon lengths for long projections were limited (e.g. 16 mm for the hyperdirect pathway), since the electric field magnitude further from the STN is negligible. Seeding of an axon was always initiated at the fibre tract's segment closest to the stimulation contacts, and in total, the algorithm allocated 6735 axon models (see figure 2).

Figure 2. The electrode is shown at the position that was optimal for the normalisation of the VA-thalamic activity. Note that, in order to reduce the computational effort, the axons do not cover the whole length of the pathways but only segments exposed to the high electric field. (B1), (B2) Activation status of the network pathways and other projections (corticofugal and associative pallidosubthalamic), respectively. By solving the cable equation for the computed distribution, responses of passive axons to the stimulation are quantified. If the axon elicits an action potential, it is considered activated. Note that some axons traverse the electrode and/or the encapsulation layer; those are considered 'damaged' and excluded from the simulation. (C) Electrode tip placement is optimized within a 4 mm bounding box centred at the motor aspect of the STN.
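The allocation of axon models can be viewed as a purely geometric preprocessing step: for each selected fibre, find the segment closest to the active contacts, keep only a limited length of the fibre around it, and discard fibres that intersect the electrode body or encapsulation layer. The sketch below illustrates only this bookkeeping; it does not implement the double-cable axon model itself, and the cylindrical damage test, the names and the numerical values other than the 16 mm window are assumptions.

```python
import numpy as np

def allocate_axon(fibre_points, contact_center, max_length_mm=16.0,
                  electrode_radius_mm=0.75):
    """Return the portion of a fibre (polyline of 3D points, in mm) that an axon
    model would be seeded on, or None if the fibre is 'damaged' (here approximated
    as passing within `electrode_radius_mm` of the lead axis -- a simplification)."""
    pts = np.asarray(fibre_points, dtype=float)
    c = np.asarray(contact_center, dtype=float)
    # crude damage test: any point closer than the electrode radius in the xy-plane
    radial = np.linalg.norm(pts[:, :2] - c[:2], axis=1)
    if np.any(radial < electrode_radius_mm):
        return None
    # seed at the fibre segment closest to the stimulation contacts
    seed = int(np.argmin(np.linalg.norm(pts - c, axis=1)))
    # keep a window of cumulative arc length max_length_mm centred on the seed
    seglen = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seglen)])
    keep = np.abs(arclen - arclen[seed]) <= max_length_mm / 2
    return pts[keep]
```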
To compute the spatial distribution of the induced extracellular field, a volume conductor model was developed following the approach described in [10].In brief, the heterogeneity of conductivity was obtained from patient-specific MRI segmentations, while tissue anisotropy was assessed based on the normative diffusion data [87], warped into patient (native) space using routines implemented in Lead-DBS toolbox [39].All calculations were performed within the simulation platform OSS-DBS [11], specifically developed for iterative DBS modelling that encompasses geometry and mesh generation, electric field computations and quantification of the induced neuronal activation.The accuracy of the field modelling was ensured by an adaptive mesh refinement algorithm of OSS-DBS that assessed the convergence of the electric potential on the axonal compartments. Computing optimal electrode position using the per cent activation of the neuronal network model The optimised pathway per cent activation defined the goal function for the electric field optimisation where A i is the per cent activation in the ith pathway. Here, we quantified the difference between activation patterns with the Canberra distance [48] that was previously shown to be effective in a correlation model for the PD motor symptoms' improvement [10].Furthermore, we rejected solutions that yielded per cent activation above 5% in the corticofugal tracts, which are implicated in motor contractions.The following DBS parameters were set as the optimisation variables: the coordinates of the implantation site, the tilt of the electrode in the sagittal and coronal planes, and the delivered currents through the electrode contacts.The coordinates of the implantation were bounded by a 4 mm box centred on the STN, and the sagittal and coronal tilts were varied between 29.2 • -45 • and 10 • -30 • , respectively [34].The DBS signal was defined as a 130 Hz 60 µs train pulse.Its amplitude was limited by 2.5 mA for concentric and 2.0 mA for segmented contacts.This not only ensured that the charge density per phase lies below the threshold of 0.03 mC cm −2 [3,53] but also restricted the optimisation algorithm to more energyefficient protocols. In this study, an axon was considered to have been activated if the discrete event of an action potential initiation was observed in response to the altered extracellular field.Therefore, the standard derivative-based methods are less suitable for optimisation.Furthermore, the goal function and the distribution of the projections implicated a non-convex problem with multiple local minima.After preliminary testing, the generalised simulated annealing [84], implemented in the Python package Scipy [81], was chosen as the optimisation algorithm.The optimisation of the current was performed utilising the linearity of Laplace's equation.It allows for avoidance of the recalculation of the electric field for any new combination of currents/potentials for the given electrode position.The algorithm works in the following order for an electrode with N contacts: The computationally intensive operations, such as the iterative solution of the electric field problem and the cable equation for thousands of axons were efficiently parallelized on a 24-Core 3.8 GHz workstation with 256 GB of memory.The electrode placement optimizer converged after ≈34 days of continuous computations. 
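A condensed sketch of the inner current-optimisation loop is given below (the full numbered procedure is reproduced immediately below, before the results). It exploits the linearity of Laplace's equation: per-contact unit solutions are scaled by the vector S and superposed, pathway activation is re-evaluated, and the Canberra distance to the target rates is minimised with SciPy's generalised simulated annealing, while protocols exceeding 5% corticofugal activation are rejected. The activation routine is left as a placeholder; in the study it involves solving the cable equation for every allocated axon, and the exact pathway ordering and penalty value are assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.spatial.distance import canberra

A_TARGET = np.array([0.878, 0.464, 0.562, 0.052])  # optimal rates from the network model

def pathway_activation(phi_scaled):
    """Placeholder: in the actual pipeline this solves the cable equation for each
    axon exposed to phi_scaled and returns the per cent activation of
    [hyperdirect, STN->GPe, STN->GPi, GPi->VA, corticofugal]."""
    raise NotImplementedError

def goal(S, phi_unit):
    # Superposition of per-contact unit solutions; phi_unit has shape
    # (n_contacts, n_axonal_compartments, n_time_steps).
    phi_scaled = np.tensordot(S, phi_unit, axes=1)
    act = pathway_activation(phi_scaled)
    if act[4] > 0.05:                    # capsular side-effect guard (corticofugal > 5%)
        return 1e6
    return canberra(act[:4], A_TARGET)

def optimise_currents(phi_unit, max_ma=2.0, s0=None):
    bounds = [(-max_ma, max_ma)] * phi_unit.shape[0]
    return dual_annealing(goal, bounds=bounds, args=(phi_unit,), x0=s0, maxiter=200)
```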
The contact-current optimisation referred to in section 2.8 proceeds as follows for an electrode with N contacts:

1: Choose a new electrode position with the outer-loop optimiser.
2: For the frequencies in the power spectrum of the DBS rectangular pulse, compute spatial distributions of the electric potential for each contact, setting this contact to 1 V, the others to floating potentials, and grounding the exterior of the computational domain.
3: Apply the inverse Fourier transformation (IFT) to obtain N solutions ϕ 1V (r, t).
4: Find the optimal currents by scaling the solutions with the vector S and superpositioning (i.e. solve the optimisation problem for S):
• If available, choose S from previous iterations as the initial guess.
• With the scaled potential ϕ scaled (r, t) on the axonal compartments, compute the pathway activation and evaluate equation (7).
5: Return the result to the outer-loop optimiser.

Simulating normal state

Parameters were tuned to simulate healthy conditions during the initiation of motor activity (as described in the supplementary material). Figure 3 depicts the neuronal activity of the nuclei under normal conditions. The STN, GPe and GPi show sparse firing with low levels of correlation of neuronal activity between the STN and the pallidal nuclei (except for one rather synchronised firing period, which intrinsically emerged in the network at approx. 650 ms, followed by an approx. 50 ms pause in the GPe, and to a lesser extent in the GPi). Generally, STN neurons form an almost random pattern and fire irregularly. Neuronal activity usually consists of spiking and rare bursting activity (see figure 3(B)). The Fourier analysis of the macroscopic mean voltage V STN (see equation (3)) shows a wide, nearly stable spectrum, with a small peak at around 20 Hz and a drop from 500 Hz onwards (expected to reflect intercalated firing of different neurons, with ever smaller inter-spike intervals becoming increasingly unlikely; see also the supplementary material).

The GPe-GPi activity is plotted in figures 3(b) and (c). Like the STN activity, it is also characterised by sparse firing, while individual neurons can show both bursting and spiking activity (b). The frequency content of the mean voltage V for both GPe and GPi shows peaks in the low β-band, underlying a weak rhythmicity at approx. 15 Hz.

The resulting time-dependent activity of neurons in the VA thalamus and MC is shown in figures 3(d) and (e), respectively. In contrast to the STN and GP, both thalamic and motor areas show a dense spiking activity in the γ-band with a peak at 48 Hz, with a broader range of γ-band activity (between 45 and 60 Hz) for the MC (see also the supplementary material).
Consistent with disturbed SNc-striatal pathway activation, in the model, we increase the conductances g STN , g STN→GPe , g STN→GPi , while decreasing the internal GPe conductance.Simultaneously, we increase the inhibition from GPi to the thalamus by increasing the synaptic conductance g GPiVA (see supplementary material, section 3). Figure 4 shows the network dynamics under Parkinsonian conditions.The main emergent characteristic of the BG network activity is the synchronous activity, see figures 4(A) and (B), particularly in STN and GP, peaking at 24 Hz.The neurons, in turn, often fire in bursts (B). The increased GPi activity is projected directly through the network to the VA and VL regions of the thalamus.An immediate consequence of this projection is that the ventral thalamic neurons change from regular spiking (healthy behaviour, at approx.47 Hz) to bursting activity (at approx.8 Hz).Due to the complex network structure of the thalamus, the ventral thalamic bursting activity, in turn, is projected to the VA thalamus through remote inter-thalamic connections [65].For motor-cortical activity, this results in a slight slowing, reaching a peak now at 37 Hz, rather than 47 Hz under healthy conditions. The time-dependent activity of all neurons in the thalamus and MC is shown in figures 4(d) and (e), respectively.As already described, both thalamic and motor areas show an altered power spectrum compared to the healthy conditions: The power spectrum of thalamic activity peaks at low frequency, i.e. 8 Hz (low α), resulting mainly from the silencing of the thalamus and the emergence of low-frequency bursting.The spectral analysis of the MC shows a low γ to high β band activity, with the main component shifted to 37 Hz (from 47 Hz in the healthy state); see the supplementary material. Simulating DBS using optimal per cent activation of the network In this section, we present the optimisation results for the network interaction dynamics based on the minimisation of the biomarker of the thalamic activity defined in section 2.6 with equation (6).For this, the network structure and the model parameters reflect the Parkinsonian state.The optimal values of the per cent activation found are: A MC→STN = 0.878 (from MC to STN), A STN→GPe = 0.464, A STN→GPi = 0.562, A GPi→Tha = 0.052, see also figure 5. DBS induces dramatic alternations in firing dynamics in all three BG nuclei.In the STN, neurons fire tonically, and overall they show a significant peak in the Fourier spectrum at 43 Hz, see figure 6(a)-(III), with further dampened peaks at harmonic frequencies, as reported by [78].Looking more specifically at individual neurons (all identical in properties, but heterogeneously connected), four classes of response behaviour can be differentiated in this nucleus: About 34% follow the stimulation at 130 Hz, showing additionally subthreshold depolarization (that might be attributed to either intrasubthalamic connections or GPe inhibition).Another 30% do not follow this rhythm but fire action potentials at 43 Hz.About 22% of the neurons remain unaffected by the stimulation and just fire single action potentials during the entire 300 ms period.The remaining cells stay utterly silent at hyperpolarised potentials (see section 5 and figure 1 of the supplementary material). 
As a result of this activity, neurons in the STN, GPe and GPi regions also show fast tonic activity, again with the first component at ≈43 Hz, corresponding to one-third of the neurons (the STN driving these cells). However, an even higher peak appears at ≈130 Hz, in resonance with the DBS and corresponding to the fast-firing cells of the STN (another third of the neurons), presumably driving these cells.

As a consequence of STN DBS, thalamic activity is reset to fire tonically, albeit not at the 47 Hz rhythm of the healthy state but at a slightly faster pace (58 Hz). In turn, neurons in the MC revert to firing faster, close to the second peak of the healthy state (58 Hz). So overall, the activity is approaching healthy conditions again. As outlined above, the mean activity V in the thalamus peaks at 58 Hz, with a wider activation range in the interval Hz, which shows a broad γ-band activation close to the previous healthy condition. Analogous behaviour is present in the MC, with a characteristic γ-band activation and the highest peak at 58 Hz. These findings underscore that during DBS, in all nuclei, there is a bandwidth shift to the γ range, overwriting the pathological β synchronisation of the Parkinsonian state, see figure 4.

Figure 6 (caption fragment): For comparison, the power spectrum of the healthy condition is also depicted in blue. Under DBS, the activity in STN, GPe and GPi remains highly synchronised, particularly between GPe and GPi, at even higher peak frequencies (43 Hz). The representative neurons in the thalamus and cortex, i.e. (d) and (e), become tonic again, at higher frequencies (58 Hz), in the γ range, and thus assume an activity pattern similar to the healthy condition.

Matching the optimal per cent activation in the volume conductor model

The question addressed in this study is whether an electrode position and current protocols can be found that approximate the optimised per cent activation A optimised predicted by our network model. While it may be possible to disrupt pathological activity by optimising stimulation patterns in the network model, it might still be problematic to translate this into feasible stimulation settings if the pattern is not reproducible with standard DBS systems.

The electrode placement and current optimisation allowed us to reach nearly the same per cent activation (see figure 9) using the directional lead (Boston Scientific Vercise™, Marlborough, MA, USA), except for the hyperdirect pathway, the activation of which was limited by the detrimental capsular stimulation.

Discussion

This study proposes a biomarker-based optimisation paradigm for DBS treatment by combining complex spatio-temporal patterns emerging in large-scale networks with modelling of pathway activation. Rather than optimising a volume of tissue activated, the combined model extends this approach and aims to optimise realistic spatio-temporal discharge patterns in the motor nuclei of the thalamus. This directly associates the network's structural connectivity and the inherent dynamics with the trajectories of white matter fibres, whose activation can be quantified by PAM. It allowed us to evaluate different electrode positions and stimulation protocols with respect to pathological bursting vs. spiking thalamic activity, serving as biomarkers.
The large-scale network model produces realistic spatio-temporal patterns with enhanced β activity (24 Hz, see figure 4) in the BG during the simulation of the Parkinsonian state.This pathological emergent behaviour is in good agreement with experimental findings in animal models of PD and with increased burst firing in patients [16,43,75,76].Specifically [76], analysed eight patients with PD and DBS and showed that pathological symptoms of bradykinesia, dyskinesia and tremor were accompanied by STN multi-unit activity showing a β-peak (≈20 Hz ± 6 Hz) [76]. Additionally, in our model, not only the frequency spectrum of the BG activity is shifted in the Parkinsonian state, but also that of the thalamic activity: the latter shows a slowing of activity, with a shift in the spectrum, from γ-to slow α-band (8 Hz; see figure 4).These emergent spatio-temporal dynamics of the thalamus are characterised by decreased thalamic tonic spiking and increased bursting activity. These results are confirmed by several studies in patients and human and animal models [27,44,56].Both the reduced thalamic activity (i.e.reduced firing rate) [27,56], and, more specifically and importantly, an increase of power in the 3-13 Hz band (mean at 8 Hz, i.e. in the low α-band) with the reduction of γband spiking activity in VA neurons of the thalamus was shown in Parkinsonian nonhuman primates [44].In the current study, simulating the neuronal network with Parkinsonian conditions, the number of bursting periods of VA thalamic neurons increased as a result of the enhanced BG inputs (see figure 8(B)).An increased number of bursts was also reported in the study of Kim et al [46]: using an optogenetic mice model in which specific BG outputs were selectively perturbed, multiple Parkinson-like symptoms could be evoked.Specifically, Kim et al [46], using light trains of 20 Hz in GPI (imitating Parkinsonian β rhythm), induced high-frequency muscle and lowfrequency tremor activity.The authors showed that these symptoms depended on the number of thalamic neurons that exhibit bursting behaviour. The optimal per cent activation of fibres under DBS (computed by our method) re-establishes higher frequency dynamics in the BG network.Thus, the activity peak in STN, GPe and GPi shifts from β-to γ-band, i.e. from 24 to 43 Hz (see figure 6).Similarly, the thalamus shows an activity shift from 8 Hz (α) to 58 Hz (γ), and in the MC, this results in a shift from 37 to 58 Hz (i.e. from low to higher γ).DBS simulations at optimal activation rate showed that a proportion of STN neurons depict unusually wide action potentials during DBS (33% of STN neurons).We assume that these neurons experience unphysiologically strong activation during DBS.Although there are no measurements of action potentials in vivo under these conditions, our model predicts that under such conditions, calcium currents are activated, which are strong enough to result in underlying [Ca2+]-spikes. 
GABAergic synaptic plasticity effects during DBS

The simulation highlights the importance of the depression of GABAergic activity (either by direct silencing or by desensitisation) for the efficacy of DBS. For the latter, a postsynaptic loss of sensitivity for the ligand still bound to the receptor is well described for GABAergic synapses [8]. In addition, a depletion of the readily releasable transmitter pool, as a presynaptic loss of synaptic efficacy, is possible. Such transmitter depletion has been described for hippocampal GABAergic interneurons, and it depends on the firing history of the synapse and, as a consequence, on the frequency of the applied DBS. Thus, at high frequencies (>100 Hz), transmitter release is suppressed shortly after the onset of the stimulus [57,82]. It is possible that pallidal neurones also show this behaviour.

Our model proposes that precisely this suppression (be it via desensitisation or depletion) is one part of the DBS mechanism: high-frequency DBS induces a loss of GABAergic synaptic efficacy, leading to a disruption of the pathological signal transmission through the pallido-thalamic network. In figure 7, we show three single thalamic neurons under DBS, but under two different conditions: in the first (blue traces), desensitisation of GABAergic synapses is taken into account, and in the second (black traces), it is not. These examples demonstrate that when desensitisation of GABAergic pallidal projections is present, thalamic neurons continue to fire tonically. Otherwise, the majority of cells in the motor thalamus actually fall silent (figures 7(A) and (B)) due to the ongoing inhibition. Only a small fraction escapes it and continues to fire (figure 7(C)).

Two possible mechanisms of DBS: pre- and postsynaptic GABAergic silencing

The modelling of DBS suggests two possible mechanisms of DBS acting in parallel, as shown in figure 8. In this figure, the critical importance of the level of pallido-thalamic inhibition is demonstrated. Under healthy conditions, this level was simulated as extremely low (close to 0.01 pA), with minimal fluctuations, assuming a locomotive activity of the network. In essence, this releases thalamic neurons into firing tonically, and the firing frequency is fine-tuned by minimal changes in inhibitory drive (figure 8(A)-(I)-(III)). Under Parkinsonian conditions, the GPi becomes much more active (due to disinhibition in the direct pathway and due to facilitation in the indirect pathway), resulting in much higher pallido-thalamic inhibition levels (0.1-0.4 pA; baseline and peaks, respectively). This silences thalamic neurons for most of the time and only occasionally allows for burst-like behaviour (figures 8(B)-(I)-(III)), particularly after rhythmic and strong inhibitory episodes ((B)-(II)).
Looking at this example more closely, the burst firing occurs in the wake of an inhibitory event but before the inhibition has entirely ceased, suggesting that the inhibitory hyperpolarisation is instrumental in initiating the burst. Under DBS, the high-frequency activation of the STN efferents also drives the GPi to higher activity. Nevertheless, pallido-thalamic inhibition levels drop to levels close to the healthy situation. Why is this? As already explained, one mechanism of this reduction of the inhibitory drive is postsynaptic desensitisation [8] at high-frequency pallidal firing; an example is seen in figure 8(C)-(I). Besides this postsynaptic process, a presynaptic one also appears to occur: a complete silencing of inhibition due to DBS. One possible explanation is that massive STN activation due to DBS also activates the excitatory STN-GPe pathway (or, actually, the pathway is activated directly), and via this route also the inhibitory GPe-GPi pathway, which would silence at least some of the GPi neurons. Such an example is seen in figure 8(C)-(II). The simulation also allows for a third insight: DBS is not always perfectly effective. In figure 8(C)-(III), the pallido-thalamic inhibition escapes suppression and remains nearly as active as under Parkinsonian conditions (in fact, pallidal afferents appear to fire in the range of 60-70 Hz, about half the DBS frequency of 130 Hz). As a consequence, the thalamic neuron falls silent, also because the inhibitory firing frequency appears to be too low to allow for postsynaptic desensitisation.

Optimised electrode position closely matches the clinically efficient placement for motor symptoms

The optimisation results for the current and electrode placement demonstrated that a conventional DBS system can induce activation patterns similar to those proposed by the network to restore VA-thalamic activity close to healthy conditions. A complete restitution is not reached, since (a) DBS induces artificial activity that does not match physiological conditions and (b) anatomical constraints limit the stimulation (e.g. due to the proximity of the corticofugal tract). The optimised electrode position closely matched the clinically efficient placement in the motor aspect of the STN [18]. It should be emphasised that the computational network operates with abstract activations and can generate physiologically implausible solutions, which need to be further evaluated with the biophysical model (see figure 9(A)). In this case, the suggested hyperdirect activation level was high. As was shown in [41], activation of the HDP is also highly correlated with activation of the corticofugal tract, which is associated with motor contractions, dysarthria and other side effects. This was taken into account for the current optimisation in our biophysical model, where the HDP activation reached 57.4%, comparable to the values reported in [41]. The per cent activations computed in the biophysical model were again tested in the network model and created comparable VA dynamics (see figure 9(B)).

Furthermore, the optimised currents used both polarities, resulting in a strong local electric field, with only a 0.9 mA flow to the remote grounding, i.e. the casing (see figure 9). Importantly, one should bear in mind that various modelling approximations and uncertainties affect the electric field computations. Hence, the provided values should rather be used as an initial setting, to be further adjusted depending on electrophysiological recordings and patient response.
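The matching step described above can be phrased as a small constrained fit: choose contact currents so that predicted per cent activations approach the network-optimal targets while capping corticofugal activation. The sketch below is a deliberately abstract stand-in; the linear activation proxy and all names are assumptions, not the paper's actual pipeline (which uses a volume conductor model and PAM). The four target values are those reported in figure 5, and the 5% corticofugal cap and 1.3 mA per-contact limit are the values quoted in the text:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear sensitivity of each pathway's activation to each
# contact current (rows: pathways, cols: contacts). In the real pipeline
# this mapping comes from the volume conductor model plus axon simulations.
rng = np.random.default_rng(0)
S = np.abs(rng.normal(size=(5, 8))) * 0.1
targets = np.array([0.878, 0.464, 0.562, 0.052])   # from figure 5
CAP_CORTICOFUGAL = 0.05                            # 5% cap, as in the paper

def activations(i):
    return np.clip(S @ i, 0.0, 1.0)                # crude activation proxy

def loss(i):
    return np.sum((activations(i)[:4] - targets) ** 2)

cons = [{"type": "ineq",
         "fun": lambda i: CAP_CORTICOFUGAL - activations(i)[4]}]
bounds = [(-1.3, 1.3)] * 8                         # per-contact limit in mA
res = minimize(loss, x0=np.zeros(8), bounds=bounds, constraints=cons)
print("currents (mA):", np.round(res.x, 2))
```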
Model's limitations

The present study constitutes a computational approximation of the BG-thalamo-cortical network with the following assumptions and limitations. The connectivity structure was based on normative atlases; however, the methods can be adapted to individual patient tractography by replacing the network adjacency matrix A of section 2.3.1. The synaptic coupling was tuned according to the theory of direct/indirect pathway activation in Parkinson's disease to produce β-band oscillatory activity within the BG as a pathophysiological marker of PD. Further, the intrastructural connectivity in the nuclei was assumed to take the form of small-world complex structures. This novel approach in BG modelling is reasonably justified by previous modelling and experimental publications [4-6, 17, 24, 58, 66]. As a limitation of the model, the exact structure of the connectivity at this microscopic level is not known. Hence, it remains to be clarified in the future how this can be analogously modelled. In the current study, the striatal input to GPe and GPi was simplified as homogeneous constant currents to all neurones in the nuclei, but with different values between GPe and GPi, as well as between the healthy and Parkinsonian conditions.

The approach for the modelling and quantification of pathway activation from the network model in this study differs from [25]. In the current approach, as explained in the methods section 2.6, neurons in a nucleus (STN, GPe, etc.) are modelled as single-soma neurons; however, in the computation of pathway activation, we used a multicompartment axon model. Based on experimental findings [27,44,46,49], in the current study we introduced new biomarkers (the bursting and spiking behaviour of thalamic neurons, inspired by the experiment of [46]), thus focusing on restoring thalamic functional, dynamic behaviour. Other possible options for biomarkers could be envisaged, for example the local field activity of the MC or BG, which would focus on β-band suppression [25,43,63,71]. Finding optimal biomarkers to guide DBS, with the clinical scores as the outcome measure, would be the gold standard in future studies.

From a future modelling perspective, one important topic will be the study of dynamic network variations with respect to DBS parameters (targeting, pulse width, frequency, shape, etc). Functional brain partitioning, which can be reached by combining model dynamics and network structure, may offer a computational approach and a systematic method that connects the clinical score with the biomarkers used for DBS optimisation.
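The small-world intranuclear connectivity named among the assumptions can be generated, for illustration, with the classic Watts-Strogatz construction; the node count, neighbourhood size and rewiring probability below are placeholders, not the paper's values:

```python
import networkx as nx
import numpy as np

# Watts-Strogatz small-world graph: each of n nodes is wired to its k
# nearest neighbours on a ring, then each edge is rewired with probability p.
n, k, p = 100, 4, 0.1            # illustrative values only
G = nx.watts_strogatz_graph(n, k, p, seed=0)

# Binary adjacency matrix, usable as the intranuclear block of a larger
# network adjacency matrix A (cf. section 2.3.1 of the paper).
A = nx.to_numpy_array(G)
print(A.shape, "mean degree:", A.sum(axis=1).mean())
```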
Figure 1. Basal ganglia-thalamo-cortical circuits. (A) Circuit model comprising all main connections. MC/PMC: motor and premotor cortical regions, respectively. (B) Simulated reduced circuit model with highlighted (bold arrows) connections affected by STN-DBS. The synaptic connections between the striatum and the globus pallidus pars externa/interna (GPe and GPi) were modelled by different constant currents. (C) The structural connectivity of the simulated network is based on the pathway atlas of the human motor network constructed from multimodal data, including diffusion, histological and structural MRI data, fused to a virtual 3D rendering [62]. (C1) Projections from GPi to the thalamus (VA nucleus) are shown in red, connections between the nucleus subthalamicus (STN) and GPe are shown in green, and projections from STN to GPi are shown in violet. (C2) Connections between the motor cortex (MC/PMC) and the thalamus were obtained by classifying fibre tracts from [50] and are shown in orange. Projections from the motor cortex to STN (hyperdirect pathway) are shown in blue. Nuclei are shown in the following colours: GPe in light grey, GPi in dark grey, STN in dark orange, thalamus (VA nucleus) in yellow and motor cortex (M1) in light grey. Adapted from [68]. CC BY 4.0.

Figure 2. Pathway activation modelling. (A) The extracellular electric potential distribution on computational axons is computed in the volume conductor model for a given current protocol. Here the electrode is shown at the position that was optimal for the normalisation of the VA-thalamic activity. Note that, in order to reduce the computational effort, the axons do not cover the whole length of the pathways but only segments exposed to the high electric field. (B1), (B2) Activation status of the network pathways and of other projections (corticofugal and associative pallido-subthalamic), respectively. By solving the cable equation for the computed distribution, the responses of passive axons to the stimulation are quantified. If an axon elicits an action potential, it is considered activated. Note that some axons traverse the electrode and/or the encapsulation layer; those are considered 'damaged' and excluded from the simulation. (C) The electrode tip placement is optimised within a 4 mm bounding box centred at the motor aspect of the STN.

Figure 3. Representation of the whole network dynamics under healthy conditions. Each row represents the indicated area (STN, GPe, GPi, Tha/VA and MC). Under healthy conditions, there is only loosely correlated activity in STN, GPe and GPi, with tonic activity in the thalamus and motor cortex, at frequencies peaking at 47 Hz, i.e. in the γ range. (I) Raster plot representation. Black dots represent activated neurons (i.e. action potentials defined as transients passing V = −15 mV to positive values) against time (in ms) and space (i.e. neuron index in the nuclei). (II) The middle column depicts the time series of one representative neuron in each of the nuclei. (III) The right column shows the power spectrum of the mean voltage activity; see equation (3). The axes in (III) are on a logarithmic scale.

Figure 4. Representation of the network dynamics under the Parkinsonian state. Each row represents the indicated area (STN, GPe, GPi, Tha/VA and MC). Under Parkinsonian conditions, there is a marked increase in synchronisation among STN, GPe and GPi, with the frequency peaking at 24 Hz, i.e.
within the β range. Altogether, this results in a lower thalamic firing frequency (major peak at 8 Hz and a secondary one at 37 Hz, i.e. in the low β and low γ ranges). (I) Raster plot representation. Black dots depict activated neurons (i.e. action potentials defined as transients passing V = −15 mV to positive values) against time (in ms) and space (i.e. neuron index in the nuclei). (II) The middle column depicts the time series of one representative neuron in each of the nuclei. (III) The right column shows the power spectrum of the mean voltage activity; see equation (3). For comparison reasons, the power spectrum of the healthy conditions is depicted in blue.

Figure 5. DBS-induced per cent fibre activation, optimised in the network using the biomarker of the VA-thalamic activity. The optimal values of the percentages of activated fibres found are: A_MC→STN = 0.878 (from MC to STN), A_STN→GPe = 0.464, A_STN→GPi = 0.562 and A_GPi→Tha = 0.052.

Figure 6. Representation of the network dynamics under DBS conditions using the optimal per cent activation values of section 3.3. Each row represents the indicated area (STN, GPe, GPi, Tha/VA and MC). (I) Raster plot representation. Black dots depict activated neurons (i.e. action potentials defined as transients passing V = −15 mV to positive values) against time (in ms) and space (i.e. neuron index in the nuclei). (II) The middle column depicts the time series of one representative neuron in each of the nuclei. (III) The right column shows the power spectrum of the mean voltage activity; see equation (3). For comparison reasons, the power spectrum of the healthy conditions is also depicted in blue. Under DBS, the activity in STN, GPe and GPi remains highly synchronised, particularly between GPe and GPi, at even higher peak frequencies (43 Hz). The representative neurons in the thalamus and cortex, i.e. (II)(d) and (e), become tonic again, at higher frequencies (58 Hz), in the γ range, and thus assume an activity pattern similar to the healthy condition again.

Figure 7. Single-unit activity of three neurons in the ventral anterior thalamus under STN-DBS. In all cases (A)-(C), the blue traces depict neuronal activity considering GABAergic desensitisation, and the black traces show neuronal activity with non-desensitising GABAergic inhibition. If desensitisation is present, thalamic neurons continue to fire tonically in the γ-band in all instances (A)-(C). If desensitisation is not occurring, most of the neurons are actually inhibited and silent ((A) and (B)), and only some continue to be active (C).

Figure 8. Time series representation of the activity (right ordinate, red traces) of three different thalamic neurons in the network (three rows) under normal (A), Parkinsonian (B) and DBS (C) conditions. The total synaptic current (from GPi to Tha) is given on the left ordinate. (A) Modelling the healthy state, the three examples show tonic firing of the thalamic neuron, whose frequency depends on the inhibition level. Although this is generally low, there is still some inhibition present in examples 1 and 3, and virtually none in example 2.
(B) Simulating the Parkinsonian state, essentially three similar behaviours are observed: under these conditions, in all examples, the inhibition level is generally much higher than under healthy conditions, varying between 0.1 pA as the basal level and 0.4-0.6 pA at peak inhibition. As a consequence, thalamic activity is strongly reduced, and only intercalated firing can be observed. While in principle intermittent bursting of thalamic neurons can be seen in all cases, the higher the rhythmicity and organisation of the inhibition become (see example 2 with very rhythmic inhibitory drive), the more pronounced the rebound burst firing in thalamic neurons upon release from inhibition. (C) Under conditions of DBS, two different scenarios lead to a restoration of thalamic firing: in example 1, inhibition is low but still present, indicating that pallido-thalamic projections are active, but the response is desensitised postsynaptically. As a result, the thalamic neuron fires tonically at a lower frequency. In example 2, a second mechanism takes over: in this case, inhibition is completely lost, possibly due to presynaptic silencing of GPi (due to GPe projections, for example). Again, this results in tonic firing, but now at a higher frequency. Example 3 shows that DBS is not acting perfectly: here, the inhibition level is not very low (neither desensitisation nor silencing work perfectly), and as a consequence the thalamic neuron is mainly silent, as under Parkinsonian conditions.

Figure 9. Optimised electrode placement for the suppression of thalamic rebounding. (A) The optimal electrode position was found in the motor aspect of the STN (highlighted in black). All contacts of the directional lead were active, with both polarities, but the amplitude on a single contact did not exceed 1.3 mA, and the total current summed up to 0.9 mA. The obtained per cent activation deviated by less than 10% from the network-based optimal rates, except for the hyperdirect pathway, where the targeted rate was 30% higher. The reason for the limited activation is the spread of the current to the corticofugal pathway (not shown here); in the model, its per cent activation was capped at 5% to avoid capsular side effects. GPa and GPm: associative and motor portions of the globus pallidus, respectively. (B) Power spectrum of the VA/Tha activity in three cases, i.e. in a healthy state (black), under conditions of DBS using the thalamic biomarker to optimise network behaviour (red, network), and under conditions of DBS using electric field optimisation around the electrode (blue, electric field). (C) Power spectra of GPi in three cases, i.e. in a healthy state (black), under conditions of DBS using the thalamic biomarker to optimise network behaviour (red, network), and for the per cent activation achieved by electric field optimisation (blue, electric field). The power spectra for the latter case are computed in the network using, however, the values of per cent activation depicted in (A).
12,572
2023-11-21T00:00:00.000
[ "Engineering", "Medicine", "Computer Science" ]
Time to scale up molecular surveillance for anti-malarial drug resistance in sub-Saharan Africa

Artemisinin resistance has emerged and spread in the Greater Mekong Sub-region (GMS), followed by artemisinin-based combination therapy failure due to both artemisinin and partner drug resistance. More worryingly, artemisinin resistance has recently been reported and confirmed in Rwanda. Therefore, there is an urgent need to strengthen surveillance systems beyond the GMS to track the emergence or spread of artemisinin and partner drug resistance in other endemic settings. Currently, anti-malarial drug efficacy is monitored primarily through therapeutic efficacy studies (TES). Even though essential for anti-malarial drug policy change, these studies are difficult to conduct, expensive, and may not detect the early emergence of resistance. Additionally, results from TES may take years to become available to the stakeholders, jeopardizing their usefulness. Molecular markers are additional and useful tools to monitor anti-malarial drug resistance, as samples collected on dried blood spots are sufficient to monitor known and validated molecular markers of resistance, and could help detect and monitor the early emergence of resistance. However, molecular markers are not monitored systematically by national malaria control programmes, and are often assessed in research studies, but not in routine surveillance. The implementation of molecular markers as a routine tool for anti-malarial drug resistance surveillance could greatly improve surveillance of anti-malarial drug efficacy, making it possible to detect resistance before it translates into treatment failures. When possible, ex vivo assays should be included, as their data could be a useful complement, especially when no molecular markers are validated.

Background

The development of resistance to the currently used anti-malarial drugs is threatening the major gains in malaria control and elimination made over the last decade. Artemisinin resistance, defined as delayed parasite clearance following treatment with artemisinin monotherapies or artemisinin-based combination therapy (ACT), has been associated with specific mutations in the Plasmodium falciparum kelch 13 gene (Pfk13) [1]. Those validated molecular markers were initially observed in the Greater Mekong Sub-region (GMS), followed by ACT failures due to both artemisinin and partner drug resistance [2][3][4][5][6][7][8][9]. The recent reports of a validated molecular marker of artemisinin resistance in Rwanda [10,11], and its association with delayed parasite clearance [12], are a major threat to malaria control and elimination in sub-Saharan Africa. Even though the continued high efficacy of the first- and second-line anti-malarial drugs (artemether-lumefantrine and dihydroartemisinin-piperaquine) in Rwanda is reassuring, an improved scheme for monitoring anti-malarial drug resistance is warranted to mitigate the spread of artemisinin resistance and ACT failure. Currently, the World Health Organization (WHO) recommends therapeutic efficacy studies (TES) for monitoring drug efficacy and resistance [13], whereas molecular markers and ex vivo monitoring are optional. There is no doubt that anti-malarial drug policy change should be based on TES results; however, molecular and ex vivo data have played an important role in confirming and monitoring artemisinin and partner drug resistance in the GMS [3,9,14,15].
Indeed, partial resistance to artemisinin is difficult to assess in vivo, especially in high-transmission settings where acquired immunity is a major confounding factor [16][17][18]. Moreover, TES are time- and resource-consuming, and may be difficult to conduct in low-transmission settings, where the risk of de novo emergence of resistance is highest [19][20][21], due to the low number of patients.

Molecular markers as early warning tools

Molecular markers offer an additional strategy to monitor the early emergence and spread of anti-malarial drug resistance, are not impacted by host immunity, and may be more cost-effective when implemented for routine surveillance. Retrospectively, it has been suggested that partial sulfadoxine-pyrimethamine resistance had multiple origins, including areas of high transmission in Eastern Africa [22,23]; interestingly, the same region seems to be a hotspot for partial artemisinin resistance [12,24]. Today, tools (molecular markers) to closely monitor the early emergence and spread of artemisinin and partner drug resistance are available. This knowledge should be used to establish a comprehensive molecular surveillance system to avoid the mistakes of the past, when the spread of resistance to anti-malarial monotherapies was detected at a late stage, contributing to thousands of deaths in the meantime. Molecular markers cannot predict treatment outcome at an individual level; however, their increase often precedes that of treatment failures [25]. While monitoring Pfk13 mutations is of paramount importance, monitoring partner drug resistance molecular markers is also crucial [26]. Indeed, a high prevalence of Pfk13 validated molecular markers is not usually associated with treatment failure [27], as evidenced by recent data from Rwanda, where the efficacy of both first- and second-line treatments is still high despite the increasing prevalence of the Pfk13 561H mutation (Table 1).

Currently, molecular surveillance is often done retrospectively based on convenience sampling, and does not provide an accurate estimation of resistance at a national or sub-regional level. For example, when looking at the available molecular data for Rwanda, there are large spatiotemporal gaps [10][11][12]28], and no clear trend is discernible (Table 1). However, the high prevalence of the confirmed artemisinin resistance marker 561H at two sites (Masaka and Rukara) is worrisome, especially as the samples were collected in 2018, and the current situation could be worse. Moreover, the prevalence of this marker increased from 7 to 20% in Masaka between 2015 and 2018 (Table 1); even though the small sample size does not allow for definitive conclusions, it is likely that the prevalence is even higher now, and only routine molecular monitoring could accurately assess the trend. A routine surveillance system would also capture the spatiotemporal dynamics of the molecular markers of interest [29][30][31][32]. The data could then be used to inform and calibrate mathematical models of malaria transmission aimed at guiding intervention strategies, for example to select the sites for TES, using specific thresholds of artemisinin and partner drug resistance marker prevalence to assess either delayed parasite clearance or treatment failure, respectively [21,33].
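The caveat about small sample sizes can be made concrete: a prevalence estimate from a survey carries a binomial confidence interval that is wide when n is small. A sketch using Wilson's score interval; the 7% and 20% figures are from Table 1, but the denominators below are invented purely for illustration:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical counts reproducing the reported prevalences (n is NOT from the paper):
for year, k, n in [(2015, 4, 57), (2018, 12, 60)]:
    lo, hi = wilson_ci(k, n)
    print(f"{year}: {k}/{n} = {k/n:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

With samples of this size, the two intervals overlap, which is exactly why the text cautions against definitive conclusions from the observed rise.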
Indeed, it is difficult to predict where resistance will emerge, and a more flexible scheme with rotating sites for TES, based on molecular marker prevalence and model predictions, may be more appropriate for the early detection of resistance. Mathematical models could also be used to predict when treatment failure could occur based on the molecular marker prevalence, giving policymakers more time to prepare a change in anti-malarial drug treatment policy [34,35]. Logistically, sample collection would require only dried blood spot collection at selected health centres when patients have a confirmed malaria diagnosis. The molecular analysis could be centralized at a regional, national or sub-regional laboratory to maximize the cost-effectiveness of the surveillance system [36]. With the increasing availability of high-throughput techniques and assays for genotyping molecular markers of resistance in malaria-endemic countries, there is an opportunity to strengthen the capacity of National Malaria Control Programmes (NMCPs) for molecular monitoring of anti-malarial drug resistance [37,38]. Cross-border collaboration is also critical, especially in regions such as the Great Lakes, where validated resistance markers have been detected in different countries, including Rwanda, the Democratic Republic of the Congo, Kenya, and Tanzania (Fig. 1). Regional monitoring is key to tracking resistance, as parasites will spread quickly from one country to another. The establishment of regional reference laboratories associated with regional data repositories could facilitate the prompt detection of resistance and the early implementation of mitigation strategies; and the malaria community must leverage the different initiatives on the continent to improve access to the infrastructure and technical expertise for high-throughput molecular analyses [39].

Ex vivo assays: a useful, but difficult, tool to implement

Ex vivo assays have played an important role in monitoring artemisinin resistance in the GMS [3,15,40,41]. Even though they are often used as phenotypic assays to validate molecular markers of resistance, where available they can be useful to monitor resistance. Indeed, as for molecular markers, immunity is not a confounding factor for ex vivo assays, even though parasite culture may reduce the complexity of the infection by preferentially selecting specific clones. Ex vivo assays can be a particularly valuable tool for drugs with no validated molecular markers, such as lumefantrine and pyronaridine, the former being the partner drug of the most widely used ACT, and the latter the most recent ACT partner drug approved by the WHO. However, compared to molecular markers, ex vivo assays are difficult to implement, as fresh blood is required for parasite culture, and the high intra- and inter-assay variability limits their usefulness for assessing spatiotemporal dynamics [42].

Fig. 1 Map showing countries with WHO-validated (in red) and candidate (in blue) artemisinin resistance markers in the Great Lakes region (adapted from [11, 12, 24]).

Conclusions

Anti-malarial drug resistance is a serious threat to malaria control and elimination, and resistance monitoring is crucial to maintain the high efficacy of the current anti-malarial drugs. Anti-malarial drug efficacy monitoring schemes should take full advantage of molecular and ex vivo culture techniques, as they may be the most appropriate tools to provide early warning signals of anti-malarial drug resistance in high-transmission settings.
Reinforcing routine molecular surveillance programmes could help detect the emergence and spread of artemisinin and partner drug resistance at an earlier stage, before it translates into treatment failures.
2,112.6
2021-10-13T00:00:00.000
[ "Medicine", "Biology" ]
Formulation of Composite Materials Containing Tengiz Sulfur-Oil Production Waste

This article considers the secondary semi-finished products, or sulfur wastes, that arise from the oil production and refining industries, which are the main economic sector of the country. The negative impact of their vast storage territories on the environment, the flora and fauna and the local population is reviewed from both the literature and practice. It has been scientifically proven that the open-air storage of sulfur waste by enterprises in this area causes irreparable damage to human life. Ways of using the sulfur residues of oil production and refining as one of the main raw-material components in the production of rubber products were studied. The results of the study of representative sulfur samples using modern physicochemical and electron-microscopic methods showed its applicability. The research was conducted on the serial formulations of rubber compounds for the lightweight tire filler cord with Tengiz sulfur and for the tire tread. The results of the study of the effect of Tengiz waste sulfur on the physicochemical and technical-operational properties of breaker and tread rubber are presented. Tengiz sulfur reduces the amount of sulfur in the formulation without reducing the rate of vulcanization, which ultimately leads to an increase in the quality of the rubber. In addition, it was found that the use of Tengiz sulfur allows the elastic properties of the rubber to be regulated.

INTRODUCTION

One of the most important tasks with a clear socio-ecological direction is the study of the state of the environment, the forecasting of its changes under the influence of anthropogenic impacts, and the determination of safe levels of man-made ecological burdens. The importance of environmental issues and environmental activities is constantly growing: in the modern world, substances that are dangerous to humanity and the natural ecosystem are released into the environment and constantly accumulate in its various elements. Environmental pollution is increasing due to the widespread introduction of energy-intensive and chemical technologies, the production of new chemical products, increased international sales of chemicals and technologies, as well as insufficient environmental control in almost all areas of human activity [Minister of Environmental Protection of the Republic of Kazakhstan, 2007].

Elemental sulfur from Kazakhstan's oil is a very valuable raw material for chemical companies. However, the bulk of the afore-mentioned chemical substance is still stored near oil production facilities. Sulfur in Tengiz, in the form of large solid blocks, is stored in specially equipped areas called "sulfur emissions". The huge amount of waste generated in oil production, i.e. sulfur (today more than 8 million tons of product are stored in the "sulfur emissions"), is of great concern, because under local climatic conditions sulfur can be converted into many compounds. The above-mentioned sulfur massifs are located in the sanitary protection zone of the Tengiz Gas Processing Plant, in the carbonated zone under the influence of flue gases containing carbon, hydrogen and various metals. The direction of the wind towards the sulfur storage area increases its effect. At the junction of the atmosphere and the sulfur, micro-regions of ventilation of different intensities appear along the entire surface of the sulfur massif.
During strong winds, sulfur particles are spread over a considerable distance in the air basin. At the same time, they can settle onto the surface of earth or water, or react with other chemicals, resulting in the transition to new harmful substances. Therefore, the main problems in oil production at Tengiz are contaminated soil and groundwater, the release of sulfur dust, as well as the release of hydrogen sulfide into the atmosphere [Zharylkasyn et al., 2016a].

A number of methods for the desulfurization of oil and oil products are known [ST RK 1052-2002]. Outdated oil desulfurization technologies and the irrational usage of secondary products have a negative impact on the ecological state of the country's environment. In this regard, the development and implementation of waste-free, low-waste and highly efficient technologies is especially relevant and promising. The technology of processed heat-resistant composite materials must comply with the provisions of the "Sanitary and epidemiological requirements for the establishment…".

The residues of oil production and refining technologies, the assessment of the environmental impact of sulfur as a secondary product, and professional safety and the safety of process equipment in the production of heat-resistant composite materials (technical rubbers) can be the basis for the development of a technology for the production of heat-resistant composite materials. Sulfur is a vulcanizing agent for many rubber products, including tires. The ever-increasing demands on the quality of tires require the development of effective components of rubber compounds. Particular attention is paid to the development of vulcanizing agents [Zharylkasyn et al., 2015; Zharylkasyn et al., 2016b].

It should be noted that the problem of storing sulfur, which belongs to hazard class IV, is still relevant today. This chemical causes inflammation of the mucous membranes of the eyes and upper respiratory tract, skin irritation, and diseases of the gastrointestinal tract. In addition, the accumulation and storage of sulfur in the open has a negative impact on the environment and human life [Babina, 2010]. Petroleum and associated petroleum gases contain up to 14% hydrogen sulfide. Hydrogen sulfide, separated together with the associated gases from the oil supplied to the production site, is decomposed into water and sulfur in the Claus unit. The separated liquid sulfur is directed to granulation, to tanks or to sulfur waste storage facilities. Lump sulfur is sulfur stored in the form of blocks in waste dumps [Podavalov, 2010]. Today, the world's oil and gas refineries produce about 50 million tons of sulfur a year. As a result of the initial refining of oil from associated components, 1 million tons of sulfur are produced at the Tengiz gas and oil refinery alone [Speight, 2020].

Sulfur is a major vulcanizing agent for many composite compounds based on synthetic rubbers, including technical rubber products [Turebekova, 2016]. There are special requirements for its quality and chemical composition: the degree of purity and a high degree of dispersion of the product. These characteristics determine the vulcanizing activity of sulfur, its dispersion in rubber, as well as the technological and technical properties of rubber and rubber compounds. Polymeric sulfur makes it possible to reduce the amount of sulfur during the process without decreasing the rate of vulcanization, which leads to an increase in the quality of the rubber.
The usage of polymer sulfur allows the elastic properties of the produced rubber to be adjusted [Behera and Prasad, 2020; Nadirov, 2003; Otarbaev et al., 2019]. On the basis of a preliminary study of the physical and chemical properties of oil production and refining wastes and the activation of their dispersing properties in the specified direction, a new technology for obtaining thermostable composite materials using Tengiz sulfur will improve the technical and economic performance of the oil production and refining industries; moreover, it has economic, social and environmental significance [Vetoshkin, 2019; Wandelt, 2018; Kalygin, 2004].

The main goal of the study was to eliminate the residues of oil production and processing, namely sulfur, by using it as a vulcanizing agent in the production of technical rubber, a heat-resistant composite material. In order to achieve this goal, the study set the following tasks: to study the possibility of using Tengiz sulfur in rubber compounds; and to research the technological and physical-mechanical properties of technical rubber, a composite material containing Tengiz sulfur.

MATERIAL AND METHODS

The research methods include physical, chemical and physical-mechanical analyses, as well as technical and operational tests. The physicochemical analysis of the raw materials was carried out in the specialized laboratory for methods of physical and chemical analysis "Sapa". The technical and operational tests were carried out in the "Complex laboratory of modern test methods" of the South Kazakhstan University named after M. Auezov.

The preparation of the rubber mixture was carried out in the laboratory mixer PD 630315/315. The temperature of the front shafts is 50-60°С and of the back ones 60-70°С. The mixing was carried out on laboratory shafts with the following characteristics:
• shaft diameter: 160 mm.

During mixing, the following process parameters were observed:
• mixing time;
• shaft clearance;
• shaft surface temperature.

The cleaned and cut rubber passes through the gap between the shafts until a thin layer is formed. The mixture is cut frequently to achieve high-quality plasticization, thereby changing the direction of the deformation force. The ingredients were added in accordance with theoretical rules: first, the emollients, accelerators, activators and plasticizers were added. Technical carbon was added several times in small amounts; technical carbon scattered on a flat sheet was re-added into the mixture. The vulcanizing agents were added at the end. The rubber compound was regularly cut during the mixing process. The ready mixtures were taken off the shafts in the form of rolled blanks.

The determination of Shore hardness was carried out according to GOST 263-75 [1988]. The hardness of rubber is characterized by the resistance of the rubber to the compressive force of a metal needle or ball indenter under the action of a spring force or load. The hardness is determined by the depth of indentation of the needle by the compressed spring when the base of the instrument touches the surface of the specimen. Pressing the needle moves the link proportionally on the instrument scale. The test piece is a plate with parallel surfaces. When measuring, the distance between the measuring points must be at least 5 mm, and the distance from any measuring point to the edge of the sample must be at least 13 mm. The sample thickness should be at least 6 mm. The test temperature should be (23 ± 2)°С.
The hardness is measured at no fewer than three points in different parts of the sample. The test result is the arithmetic mean of all measurements, rounded to the nearest whole number.

The tensile properties of the samples are determined according to GOST 261-79 [1980]. For testing, the samples are taken according to the tolerances and marked in accordance with GOST 270-75 [2008]. The test pieces are cut from rubber plates with a thickness of 2 ± 0.2 mm or 1 ± 0.2 mm. The frequency of the deformation is set, the temperature in the chamber is brought to the set value, and the amounts of dynamic and static displacement of the machine clamps corresponding to the specified deformations of the samples are set, which are determined by the length of the working zone. According to the calculation formulas, the values of the following indicators were determined. The initial cross-section of the sample S_0 (m²) is

S_0 = b_0 · h_0,

where b_0 is the initial width of the sample (mm) and h_0 is the initial thickness of the sample (mm). The relative elongation at break is

E_z = (l_1 − l_0)/l_0 · 100%   (4)

where l_1 is the length of the working area of the sample one minute after "rest" and l_0 is its initial length.

The tensile test method for rubber according to GOST 262-93 [2002] consists of stretching a cut piece and measuring the load at which the test piece breaks. The tests are carried out on a tensile testing machine, and the samples are cut with a special tool. The tear resistance of rubber Σ_z (N/m) is calculated using the following formula:

Σ_z = P_r / h_k,

where P_r is the load causing rupture of the cut specimen (N) and h_k is the initial sample thickness (m).

RESULTS AND DISCUSSION

The experiments were carried out to determine the optimal amount of polymeric sulfur needed to obtain a vulcanizate with the best usage properties. Polymeric sulfur is obtained from the melt by the sudden cooling of molten sulfur in a hardening medium. The polymeric sulfur was obtained in the form of a light-yellow dispersed powder. Tengiz sulfur was added to the rubber mixture to partially or completely replace ordinary sulfur. Several types of primary reference rubber compounds, given in the accompanying tables, were used in the course of the research on the production of composite materials (Tables 1-4).

The preparation of the rubber additives in stages 1 and 2 was carried out on the PD 630315/315 laboratory crusher. The temperature of the front shaft of the crusher is 50-60°С, of the back one 60-70°С. The cleaned and chopped rubber passed through the gap between the shafts until a thin skin was formed. The additive was cut frequently for quality plasticization, thus changing the direction of the deformation force. The ingredients of the first stage were introduced in accordance with the theoretical principles: initially, the softeners, bulk ingredients, activators and plasticizers were introduced. Technical carbon was introduced several times in small quantities; technical carbon scattered in a flat tin pan was re-introduced into the additive. There were no problems during the mixing process. The polymeric sulfur and the accelerator were introduced in the second stage. A positive effect of the studied polymeric sulfur on the technological properties of the standard rubber compounds was observed. Polymeric sulfur easily penetrates into the rubber mixture. The distribution of the polymer sulfur in the mixture is satisfactory; it does not require changes in the order of crushing and vulcanization. The rubber mixture was cut regularly during the mixing process.
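To make the indicator formulas above concrete, a small worked example with entirely hypothetical measurements (none of the numbers come from the paper); it simply evaluates S_0, E_z and Σ_z as defined:

```python
# Hypothetical specimen measurements (illustrative only, not from the paper)
b0_mm, h0_mm = 6.0, 2.0          # initial width and thickness of the sample
l0_mm, l1_mm = 25.0, 90.0        # working-zone length before the test / 1 min after "rest"
P_r_newton = 120.0               # load at rupture of the cut specimen

S0_m2 = (b0_mm / 1000.0) * (h0_mm / 1000.0)    # initial cross-section, m^2
E_z = (l1_mm - l0_mm) / l0_mm * 100.0          # relative elongation at break, %
sigma_z = P_r_newton / (h0_mm / 1000.0)        # tear resistance, N/m

print(f"S0 = {S0_m2:.2e} m^2, Ez = {E_z:.0f} %, Sigma_z = {sigma_z:.0f} N/m")
```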
The finished mixture was taken off the grinder in the form of sheets and blanks. After the first stage, the rubber mixture rests for at least 2 hours. To determine the physical and mechanical parameters, the samples were vulcanized in the RDE 800x800 electric vulcanization press at a temperature of 155°C for about 20 minutes.

The determination of the strength of the vulcanizates was carried out in accordance with GOST 270-75 [2008]. For testing, the samples were taken according to the tolerances and marked according to GOST 270-75 [2008]. Samples of rubber plates with a thickness of 2 ± 0.2 mm or 1 ± 0.2 mm were used for the test. The results of the determination of the physical and mechanical properties of the vulcanizates are shown in Figures 2-4. The results shown in the figures prove that the physical and mechanical properties of the vulcanizate meet the control standards when this sulfur is used as a vulcanizing agent. Thus, the suitability of polymeric sulfur as a vulcanizing agent and its positive effect on the properties of rubber were established.

As can be seen in Figure 2, sample 3, which corresponds to the control norm, is 2 MPa lower than the reference value. The samples with a polymeric sulfur content of 2-2.5 ms (mass parts) have the highest conditional tensile strength compared to the control norm. Increasing the amount of polymer sulfur to 3.0 mass parts leads to a decrease in this strength. According to the data in Figure 3, the relative elongation at break of the vulcanizates in all versions is higher than the control norm and the reference version; however, version 2 can be noted with the highest value of 287%. Figure 4 shows that the relative elongation at break is 265% and the tensile strength is 118 MPa. The Shore hardness index corresponds to the control norm, and to 72 conventional units in the reference version.

According to the results of the study, the optimal amount of polymeric sulfur, at which the maximum physical and mechanical properties of the vulcanizates are achieved, is 1.3 parts per 100 mass parts of rubber. The dosage of technical sulfur in the reference recipe of this order is 1.8 mass parts per 100 mass parts of rubber, which is 0.5 parts more than when polymer sulfur is used. This is probably due to the activity of the polymeric sulfur, which is completely involved in the vulcanization reaction, forming very strong sulfur bridges, so that even small amounts result in a partial increase in the strength properties of the vulcanizates. As a practical application of the research results, we believe that the introduction of the technology for the production of the rubber compounds used in the production of rubber products at the OJSC "Elastopolimet", confirmed by ICT industrial tests, may result in a reduced technogenic load on the environment due to the consumption of the sulfur.

The positive effect of polymer sulfur on the technological properties of the investigated rubber compounds was established. Polymer sulfur is easily added into the rubber compound. The distribution of polymer sulfur in rubber is satisfactory; it does not require changing the order of grinding and vulcanization. According to the tables, the compound with 2-2.5 ms of polymer sulfur has the highest conventional tensile strength in comparison with the reference sample. Increasing the amount of polymer sulfur to 3.0 ms leads to a decrease of this strength index.
The optimum increase in the conditional stress at 300% elongation and in the hardness is reached at a polymer sulfur content of 2-2.5 ms. A further increase in the amount of polymer sulfur leads to a decrease in hardness from 78.5 to 75 conventional units. The test results show that the best set of physical and mechanical properties of the rubber is achieved when the amount of polymer sulfur is 1.5 ms. According to the research results, the optimal amount of polymer sulfur at which the maximum physical and mechanical properties of the tread rubber are achieved is 1.3 parts per 100 mass parts of rubber. In the standard reference formula, the dosage of technical sulfur is 1.8 parts per 100 parts by weight of rubber, which is 0.5 parts higher than with polymer sulfur. This is probably due to the activity of the polymer sulfur, which fully participates in the vulcanization reaction, forming very strong sulfur bridges, from which, even at small amounts, a partial increase in the strength properties of the vulcanizates occurs.

CONCLUSIONS

Formulations of composite materials containing Tengiz sulfur were developed, and technological and physical-mechanical tests were carried out to confirm the quality of the obtained composite material, technical rubber. The influence of Tengiz sulfur on the vulcanization time and the properties of the rubber was studied. The dependence of the quality of the composite materials on the Tengiz sulfur content was determined. The optimal dose of Tengiz sulfur in the cord filler formulation was determined as 3.5 mass parts per 100 mass parts of rubber. It was found that the usage of Tengiz sulfur leads to an increase in the strength properties, including the heat resistance, of the rubber filler, due to an increase in the number of intermolecular bonds in the elastomeric matrix resulting from the reaction of the sulfur in the formulation. The technology of manufacturing the rubber compounds and their vulcanization was described. The quality of the obtained composite materials meets the requirements of the technological regulations. The recommended filler compositions significantly extend the service life of the outer car tire.
4,341.4
2021-07-01T00:00:00.000
[ "Materials Science" ]
Noise Estimation for Image Sensor Based on Local Entropy and Median Absolute Deviation

Noise estimation for image sensors is a key technique in many image pre-processing applications such as blind de-noising. The existing noise estimation methods for additive white Gaussian noise (AWGN) and Poisson-Gaussian noise (PGN) may underestimate or overestimate the noise level for heavily textured scene images. To cope with this problem, a novel homogenous-block-based noise estimation method is proposed in this paper to calculate these noises. Initially, the noisy image is transformed into the map of local gray statistic entropy (LGSE), and the weakly textured image blocks can be selected with several biggest LGSE values in a descending order. Then, the Haar wavelet-based local median absolute deviation (HLMAD) is presented to compute the local variance of these selected homogenous blocks. After that, the noise parameters can be estimated accurately by applying maximum likelihood estimation (MLE) to analyze the local mean and variance of the selected blocks. Extensive experiments on synthesized noisy images were conducted, and the experimental results show that the proposed method not only estimates the noise of various scene images with different noise levels more accurately than the compared state-of-the-art methods, but also improves the performance of the blind de-noising algorithm.

Introduction

Images captured by charge-coupled device (CCD) image sensors are affected by many types of noise, which derive from the CCD image sensor arrays, the camera electronics and the analog-to-digital conversion circuits. Noise is the most concerning problem in imaging systems, for it directly degrades the image quality. Therefore, image de-noising is a critical procedure to ensure high-quality images. Most existing de-noising algorithms process noisy images under the assumption that the noise level parameters are given. However, this does not hold for real noisy images, because the noise level parameters are generally unknown in the real world. Blind de-noising, which does not request any prior knowledge, is a high-profile topic of study. How to accurately estimate the noise parameters is a challenging issue in the blind de-noising field; hence, noise level estimation has attracted many studies, and a number of noise estimation algorithms have been developed [1][2][3]. Actually, estimating the CCD noise level function (NLF) amounts to extracting the ground-truth noise from a noisy image by finding the proximal noise probability density function (PDF). Several comprehensive noise estimation approaches work by collecting irregularly shaped smooth blocks. For their simplicity and efficiency, block-based noise estimation methods have been drawing more and more attention recently. However, there is a challenge in extracting the homogenous blocks, which can easily influence the performance of the estimation. Some previous studies have mentioned that entropy can be used for image segmentation and texture analysis [23][24][25]. Meanwhile, other studies have also reported that entropy is robustly stable to noise [26,27]. However, there is little research on applying entropy to estimate image sensor noise. Considering the good performance of entropy, the local gray statistical entropy (LGSE) is proposed in this paper to extract blocks with weak textures.
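The precise LGSE definition appears later in the paper and is not reproduced in this excerpt. As a generic illustration of the underlying idea, ranking blocks by a histogram-based local entropy so that weakly textured candidates can be selected, here is a sketch in which the block size, bin count and 8-bit range are assumptions:

```python
import numpy as np

def block_entropy_map(img, block=16, bins=32):
    """Shannon entropy of the gray-level histogram of each non-overlapping block.

    A generic stand-in for the paper's LGSE; block size, bin count and the
    assumed 8-bit value range are illustrative choices, not the paper's.
    """
    h, w = img.shape
    ent = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            p, _ = np.histogram(patch, bins=bins, range=(0, 255))
            p = p / p.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

# Rank blocks by entropy; per the abstract, the paper selects weakly textured
# blocks from the biggest LGSE values in descending order:
# ent = block_entropy_map(noisy_image)
# flat_order = np.argsort(ent, axis=None)[::-1]
```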
By making the utmost of the block-based method and the robust MAD-based method, a novel noise estimator is designed, which includes the LGSE for block selection and the Haar wavelet-based local median absolute deviation (HLMAD) for local variance calculation. In our work, we also address Poisson-Gaussian noise (PGN), which is less studied but better models the actual camera noise. The existing PGN sensor noise estimators can be classified as scatterplot-fitting-based methods, least-squares-fitting-based methods and maximum-likelihood-estimation-based methods. The scatterplot-fitting-based methods group a set of local means and variances of the selected blocks and fit those scatter points using linear regression [28][29][30]. The least-squares-fitting-based methods [31][32][33] use a least-squares approach to estimate the PGN parameters by sampling weakly textured blocks and fitting a group of sampled data. In [34,35], the maximum-likelihood-estimation-based methods still extract smooth blocks first, and then exploit the maximum likelihood function to estimate the PGN parameters. The three categories of PGN estimators are equally popular for estimating PGN parameters. Specifically, in this paper we choose maximum likelihood estimation (MLE) to process the extracted local mean-variance data for PGN estimation.

There are four contributions in this paper:
(1) We analyze the noise sources of an imaging camera, and choose simple but effective noise models in the image domain to describe most of the noise forms of an imaging sensor.
(2) In conventional noise estimation methods, the textures and edges of noisy images are eliminated first. Inspired by this principle, an effective method using the LGSE to select homogenous blocks without high-frequency details is proposed in this paper. As far as we know, this is a novel approach to utilizing local gray entropy in noise estimation.
(3) By comparing traditional noise estimation methods, we find that the local median absolute deviation performs robustly in computing the local standard deviation, and that MLE can overcome the weakness of scatterplot-based estimation. By combining these two methods and their advantages, a reliable noise estimation method is established.
(4) Due to these measures, a robust image sensor noise level estimation scheme is presented, and it is superior to some of the state-of-the-art noise estimation algorithms.

The structure of this paper is organized as follows. The noise model of the image sensor in the image domain is discussed, and the AWGN model and the PGN model are presented, in Section 2. The LGSE- and HLMAD-based noise estimation algorithm and all the details of the novel and robust method are presented in Section 3. Experimental results and discussions on synthesized noisy datasets are given in Section 4. Finally, Section 5 concludes the paper.

Image Sensor Noise Model

Irie et al. [5,6] indicated that the CCD camera noise model includes comprehensive noise sources: reset noise, thermal noise, amplifier noise, flicker noise, circuit readout noise and photon shot noise. Here, the reset noise, thermal noise, amplifier noise, flicker noise and circuit readout noise vary only in the additive spatial domain and can generally be included in the readout noise. Hence, the total noise of a CCD camera can be defined as the combination of the readout noise (an additive Gaussian process) and the photon shot noise (a multiplicative Poisson process) [5].
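This decomposition can be written compactly. A sketch in LaTeX, where the signal-proportional form of the shot-noise term is the standard assumption for Poisson statistics rather than a formula quoted from this paper, and k is an illustrative gain-dependent constant:

```latex
% Total temporal noise variance at a pixel with mean signal level I:
% the readout term is signal-independent, the shot term grows with I.
\sigma^2_{\mathrm{total}}(I) = \sigma^2_{\mathrm{read}} + \sigma^2_{\mathrm{shot}}(I),
\qquad
\sigma^2_{\mathrm{shot}}(I) = k\, I .
```

On a log-log photon transfer curve this gives a flat, readout-dominated region at low illumination and a rising, shot-dominated region at high illumination, which is exactly the behaviour the experiment below measures.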
In the CCD noise analysis experiment, we used a 532 nm He-Ne laser to generate light beams, chose a CCD97 (produced by e2v, England) in the standard mode (without using the multiplicative gain register) to perform the photo-electric conversion, and selected a 16-bit data acquisition card (0-65535 DN) to collect the images. In this experiment, a diaphragm was used to adjust the intensity of the illumination, and a polaroid and optical lens were used to preserve the illumination uniformity on the CCD97 light receiving surface. In order to ensure the stability of the scene and the uniformity of the light received by the CCD97, a photomultiplier tube was selected as a wide-range micro-illuminometer to monitor the stability of the brightness at the CCD light receiving surface before the experiment. In addition, averaging the 25 images in each group suppresses the output fluctuation of the He-Ne laser and improves the accuracy of the system test. To identify and calculate the total noise, readout noise and shot noise of the image sensor according to the measurements of [5,6], 100 image groups at different illumination intensities were captured in an optical darkroom; each group had 25 consecutive images without illumination and 25 consecutive images with illumination. Because the images were captured under uniform illumination intensity, the total noise variance in each group equals the average of the variances of the 25 consecutive images taken at a given illumination. The variance of the readout noise in each group is computed by averaging the variances of the 25 images captured in dark conditions.
The shot noise variance in each group is the difference between the total noise variance and the readout noise variance. By measuring the noise variances of these images, the photon transfer curve (PTC) and its logarithm were computed. The experimental platform and the computed PTC curve are shown in Figure 1. Figure 1a shows the experimental platform of the CCD noise analysis. Figure 1b depicts that the total noise of the image sensor contains signal-independent readout noise (red dots) and signal-dependent shot noise (green dots). It can be seen from Figure 1b that the noise curve is almost flat at low illumination levels, implying that the noise (mostly readout noise) in dark regions of an image is signal independent, while as the illumination increases, the shot noise becomes more and more signal dependent.

According to this noise analysis of the CCD image sensor, two parametric models are used to model the image sensor noise: the generalized zero-mean additive white Gaussian noise (AWGN) model and the Poisson-Gaussian noise (PGN) model.

Additive White Gaussian Noise Model

Among the various sources of CCD camera noise, the reset noise, thermal noise, amplifier noise and circuit readout noise can be seen as additive spatial and temporal noise. Since all of these noise sources obey a zero-mean white additive Gaussian distribution, their joint noise model can be simplified as

f(x, y) = f_S(x, y) + f_N(x, y),

where (x, y) is the pixel location, and f(x, y), f_S(x, y) and f_N(x, y) denote the original noisy image, the noise-free image and the random noise image at (x, y), respectively. Generally, the random noise image f_N(x, y) is assumed to be white additive Gaussian noise with zero mean and unknown variance σ²_n, i.e., f_N(x, y) ∼ N(0, σ²_n). In the AWGN model, because images always have rich textures, estimating the unknown variance σ²_n remains a difficult issue for blind de-noising. To solve this problem, a novel noise variance estimation method is proposed in this paper.

Poisson-Gaussian Noise Model

For a practical image sensor noise model, the signal-dependent shot noise cannot be ignored. As depicted in Figure 1b, the measured shot noise increases significantly with increasing irradiation. Since the shot noise follows a Poisson distribution, the actual CCD camera noise model can be treated as a mixed model of multiplicative Poisson noise and additive white Gaussian noise,

f(x, y) = f_S(x, y) + η(x, y),

where η(x, y) is the mixed total noise image at (x, y). The mixed total noise η(x, y) contains a signal-dependent (Poisson distributed) part and a signal-independent (Gaussian distributed) part, with p_S(x, y) following a Poisson distribution, that is p_S(x, y) ∼ P(f_S(x, y)), and, as mentioned above, f_N(x, y) ∼ N(0, σ²_n) at (x, y). Hence, the noise level function becomes

σ²_η(x, y) = ρ f_S(x, y) + σ²_n,

where σ²_η(x, y) is the local total noise variance of the mixed noise model, f_S(x, y) is the expected intensity of the noise-free image, which is approximately equal to the local mean value f̄(x, y) of the original noisy image, and ρ and σ²_n represent the Poisson noise parameter and the Gaussian noise variance, respectively.
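To make the two models concrete, the following minimal Python/NumPy sketch synthesizes AWGN- and PGN-corrupted images from a noise-free image f_S (values taken as floats on the 0-255 scale). The scaling of the Poisson component so that its variance equals ρ·f_S is our reading of the noise level function above; the paper itself only states p_S(x, y) ∼ P(f_S(x, y)), so this generator is an illustration rather than the authors' exact procedure.

import numpy as np

rng = np.random.default_rng(0)

def add_awgn(f_s, sigma_n):
    # AWGN model: f = f_S + f_N, with f_N ~ N(0, sigma_n^2)
    return f_s + rng.normal(0.0, sigma_n, size=f_s.shape)

def add_pgn(f_s, rho, sigma_n):
    # Poisson part scaled so that its variance equals rho * f_S, matching
    # sigma_eta^2 = rho * f_S + sigma_n^2 (the scaling convention is our
    # assumption, not stated explicitly in the paper).
    poisson_part = rho * rng.poisson(f_s / rho)
    return poisson_part + rng.normal(0.0, sigma_n, size=f_s.shape)

For example, add_pgn(f_s, rho=0.5, sigma_n=np.sqrt(10)) reproduces the ρ = 0.5, σ²_n = 10 setting used in the experiments reported later.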
In the PGN model, our task is mainly to estimate the two parameters ρ and σ²_n. We can calculate the local mean f̄(x, y) of the selected image blocks and estimate the local total variance σ̂²_η(x, y); a set of local mean-variance pairs is then extracted to estimate ρ and σ²_n accurately by MLE.

Proposed Homogenous Blocks Selection Method

Primarily, a group of image blocks of size (M + 1) × (N + 1) is generated from the input noisy image of size R × C by sliding a rectangular window pixel-by-pixel, giving the block set B = {b_i}, i = 1, …, BN, where BN is the total number of blocks. The local mean of the nearly noise-free signal is computed by averaging all the noisy pixels in the block,

f̄(x, y) = (1 / ((M + 1)(N + 1))) Σ_(i,j) f(i, j),   (7)

where the local window is around (x, y) and the window size is (M + 1) × (N + 1). The proposed homogenous block selection method is then introduced as follows.

In information theory, Shannon entropy [36] effectively reflects the information contained in an event. The local gray statistic method is similar to Shannon entropy, but the modified method focuses on how disordered the gray texture values are. Firstly, given the original noisy image, the local map of normalized pixel grayscale values is calculated as

g(i, j) = f(i, j) / (Σ_(i,j) f(i, j) + ξ),   (8)

where i ∈ (x − M/2, x + M/2) and j ∈ (y − N/2, y + N/2) index the locations (i, j) in the local window around (x, y), f(i, j) is the gray level of the original noisy image, g(i, j) is the normalized pixel grayscale value, and ξ = 10⁻⁹ is a small constant used to prevent the denominator from becoming zero. Then, the proposed LGSE is defined as

H_le(x, y) = −Σ_(i,j) g(i, j) log g(i, j),   (9)

where H_le(x, y) is the local gray entropy of the 3 × 3 center pixels and all of the (M + 1) × (N + 1) neighboring pixels around the location (x, y). According to the theory of Shannon entropy, the LGSE reflects the degree of dispersion of the local grayscale texture: when the local gray distribution is uniform, the LGSE is relatively large, and when the local gray level distribution is highly dispersed, the LGSE is small. Because the LGSE is the combined outcome of all pixels in the local window, the local entropy itself is robust to noise. Six different noise-free textured blocks and their corresponding LGSE values are given in Figure 2; as shown there, the weakly textured blocks have relatively higher LGSE values. Note that weak texture is used here as a generic term that includes not only smooth blocks but also some slightly textured blocks. In Figure 3, the noisy image blocks are the noise-free image blocks of Figure 2 with noise added.
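As a sanity check of Equations (8) and (9) as reconstructed above, the sketch below computes one LGSE value per window position. The valid-region window handling and the natural logarithm are our own choices, since the text does not pin them down.

import numpy as np

def lgse_map(f, win=15, xi=1e-9):
    f = f.astype(float)
    h, w = f.shape[0] - win + 1, f.shape[1] - win + 1
    out = np.empty((h, w))
    for x in range(h):
        for y in range(w):
            block = f[x:x + win, y:y + win]
            g = block / (block.sum() + xi)      # Eq. (8): normalized gray map
            g = g[g > 0]                        # treat 0 * log(0) as 0
            out[x, y] = -(g * np.log(g)).sum()  # Eq. (9): local entropy
    return out

A uniform window makes all g(i, j) roughly equal, which maximizes the entropy; this is precisely why the weakly textured blocks in Figure 2 score the highest LGSE values.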
Homogenous Blocks Selection Based on LGSE with Histogram Constraint Rule

Dong et al. [33] indicated that the image gray level histogram and the block texture degree have a strong relationship. Inspired by this observation, a gray level histogram-based constraint rule is introduced to first filter out the image blocks with disorderly rich textures, so as to reduce or even overcome the disturbance of too many outliers.
In [19,28,30], the estimated noise level functions (NLFs) all have a comb shape in the scatterplots. These comb-shaped NLFs indicate that pixel intensities approaching the highest or lowest value can be disastrous for noise estimation, because the brightest or darkest pixels may generate a number of outliers, ultimately reducing the precision of the estimate. Therefore, we first exclude the blocks whose pixel mean value is too dark (0-15; the pixel intensity range is 0 to 255) or too bright (240-255) in the gray level histogram:

e_i = 1 if 15 < f̄_i < 240, and e_i = 0 otherwise,   (10)
b_i ← e_i · b_i,   (11)

where f̄_i is the mean gray value of the i-th block, e_i is the exclusion rule for the darkest or brightest blocks, and b_i is the updated value of the i-th block after the darkest/brightest block removal operation. I_r ∈ [16, 239] is the residual gray level of B after the dark or bright block removal operation. The probability of occurrence of I_r can be computed by

P(I_r) = hist(I_r) / T_PN,   (12)

where hist(I_r) denotes the frequency of I_r in the gray level histogram and T_PN is the total number of non-zero-valued pixels of B. Blocks whose gray values do not occur frequently may also have high LGSE values, which would generate some undesirable isolated values. Therefore, rather than dealing with all gray levels, we only explore the gray levels with high occurrence, and we eliminate the blocks whose frequency of occurrence falls below a certain threshold. To this end, a joint probability (13) is first introduced, in which φ(f̄_i) = hist(f̄_i)/max[hist(I_r)] is the frequency of f̄_i relative to the maximum frequency of I_r, and thr is the threshold used to evaluate and remove blocks whose mean gray value occurs with low frequency; in terms of the Bayes rule, the corresponding conditional probability (14) follows. By using Equations (12)-(14), the blocks whose occurrence probability of mean gray value f̄_i is smaller than the conditional probability value are eliminated. This gray level histogram-based constraint rule can be summarized simply: we arrange the gray levels of the pixels in descending order, select the blocks for which φ(f̄_i) is greater than thr, and renew the block set B. To be specific, we set thr = median(φ(f̄_i)) to remove disorderly blocks. Hence, the blocks whose mean gray values do not occur frequently are treated as disorderly, richly textured blocks, and the LGSE values of those blocks are set to zero. In Figure 4, the whole procedure of the histogram-based constraint rule for eliminating outliers is shown using the "Birds" image as an example. Figure 4a shows the original "Birds" image; its gray level histogram is shown in Figure 4b; Figure 4c is the result of selecting gray levels with the histogram constraint rule to overcome outliers; and in Figure 4d, the green dots cover the residual pixel blocks after the operation of the histogram constraint rule.
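The constraint rule translates into a few lines of NumPy. Rounding the block means to integer gray levels is our own implementation choice, and degenerate inputs (e.g., an image where every block is dark) are not handled in this sketch.

import numpy as np

def histogram_constraint(block_means):
    m = np.round(np.asarray(block_means)).astype(int)
    keep = (m >= 16) & (m <= 239)           # Eqs. (10)-(11): drop dark/bright
    hist = np.bincount(m[keep], minlength=256)
    phi = hist / max(hist.max(), 1)         # phi(f_i) = hist(f_i)/max[hist(I_r)]
    thr = np.median(phi[m[keep]])           # thr = median(phi(f_i))
    keep &= phi[np.clip(m, 0, 255)] > thr   # drop rarely occurring mean levels
    return keep                             # boolean mask over the blocks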
The histogram constraint rule is a simple pretreatment that removes undesirable outliers, but it cannot by itself deliver reliable weakly textured blocks. To further suppress the disturbance of the textures of the residual blocks, the LGSE-based homogenous image block selection rule (15) is proposed, in which τ is the selected minimum value of the LGSE, maxH_le is the maximum value of the LGSE, and δ is the selection ratio that controls the weakly textured block selection. Based on statistics and observation of the noise estimation processing of 134 scene images, the selection ratio is empirically set to δ = 10% in our experiments; this setting means that only 10% of the block set is selected as homogenous blocks. An example of homogenous block selection on the "Birds" image is shown in Figure 5, where the 15 × 15 red boxes outline the finally selected homogenous blocks. The whole homogenous block selection procedure for "Barbara" and "House" is shown in Figure 6.
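Because the exact closed form of τ in rule (15) is not recoverable from the text, the sketch below implements the stated effect directly: among the blocks that survive the histogram rule, keep the δ = 10% with the largest LGSE values.

import numpy as np

def select_homogenous(lgse_values, keep_mask, delta=0.10):
    lgse_values = np.asarray(lgse_values)
    idx = np.flatnonzero(keep_mask)                  # survivors of the rule
    order = idx[np.argsort(lgse_values[idx])[::-1]]  # descending LGSE
    n_sel = max(1, int(delta * lgse_values.size))
    return order[:n_sel]                             # indices of chosen blocks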
Haar Wavelet-Based Local Median Absolute Deviation for Local Variance Estimation

In this paper, a robust Haar wavelet-based local median absolute deviation (HLMAD) method is presented to estimate the standard deviation of a noisy image block. The Haar wavelet, one of the simplest and most commonly used wavelet transforms, can be exploited to detect noise thanks to its good regularity and orthogonality. In the first scale of the 2-D Haar wavelet analysis, the HH coefficients represent the high frequency component, that is, the noise component in the selected homogenous blocks. The sub-band wavelet coefficients (HH) of the first scale of the 2-D Haar wavelet transform are calculated in two steps (Equations (16) and (17)): f(x, y) is the original noisy image, H_1x(x, y) is the high frequency component of the first step, which applies a 1-D Haar analysis to each row of the noisy image, and H_2xy(x, y) contains the high frequency coefficients of the full 2-D Haar wavelet analysis. Since the MAD is a more stable statistical deviation measure than the sample variance, it works much better across noise distributions; moreover, the MAD is more robust to outliers in datasets than the sample standard deviation. By applying the robust MAD described in [15,16] to estimate the noise variance, the local block standard deviation σ̂_η(x, y) is derived as

σ̂_η(x, y) = MAD[H_2xy(i, j)] / k,   (18)

where MAD[H_2xy(i, j)] is the median absolute deviation of the high frequency coefficients H_2xy(i, j) of the 2-D Haar wavelet, the constant of proportionality is k = Φ⁻¹(3/4) = 0.6745, and σ̂_η(x, y) is the estimated standard deviation of the selected homogenous image block.

Maximum Likelihood Estimation for Multi-Parameter Estimation

According to the central limit theorem, the distribution of a sample approximately follows the normal distribution when the number of sampling points is sufficient. Therefore, by fitting the NLF model σ²_η(x, y) = ρ f_S(x, y) + σ²_n, the likelihood function (19) of the dataset of selected homogenous blocks can be written down, where T is the total number of selected blocks, the dataset D is a set of local pixel mean value and local total variance pairs (f̄, σ̂²_η), and d_i = (f̄_i, σ̂²_ηi), with f̄_i and σ̂²_ηi the pixel mean value and the estimated noise variance of the i-th block; the corresponding ln-likelihood is (20). According to the nature of the maximum likelihood estimator, the two parameters ρ̂ and σ̂²_n can be estimated by maximizing the likelihood L(ρ, σ²_n) over the selected samples (21). Here, the gradient descent method (22) is applied to solve the maximum likelihood problem, and the two parameters ρ̂ and σ̂²_n of the NLF can be estimated reliably.
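A compact sketch of both steps follows. The 2 × 2 separable high-pass used for the HH band is the textbook first-scale Haar filter, and under the Gaussian-residual reading of the central-limit argument above the MLE of the NLF reduces to a least-squares line through the mean-variance pairs; the paper instead maximizes the likelihood by gradient descent, so treat this closed form as an equivalent shortcut under that assumption.

import numpy as np

def hlmad_sigma(block):
    b = np.asarray(block, dtype=float)
    b = b[:b.shape[0] // 2 * 2, :b.shape[1] // 2 * 2]  # even dims for 2x2 Haar
    h1x = (b[:, 0::2] - b[:, 1::2]) / np.sqrt(2.0)     # 1-D Haar on rows: H_1x
    hh = (h1x[0::2, :] - h1x[1::2, :]) / np.sqrt(2.0)  # then columns: H_2xy
    mad = np.median(np.abs(hh - np.median(hh)))
    return mad / 0.6745                                # Eq. (18), k = 0.6745

def fit_nlf(means, variances):
    # Least-squares fit of sigma_eta^2 = rho * f_S + sigma_n^2; equal to the
    # MLE when the residuals are Gaussian (the assumption stated above).
    rho, sigma_n2 = np.polyfit(np.asarray(means), np.asarray(variances), 1)
    return rho, sigma_n2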
Proposed Noise Parameter Estimation Algorithm for the AWGN Model

In the most commonly used image sensor noise model, the AWGN model, the noise standard deviation is the only parameter that needs to be estimated. Based on the methods and theory above, an accurate noise standard deviation estimation algorithm is established. Firstly, we divide the noisy image into a group of blocks by sliding a window pixel-by-pixel and compute the mean gray value of each block. The LGSE of each block is calculated, and the LGSE values of the darkest and brightest blocks are excluded. Next, the LGSE values of the residual blocks are sorted in descending order, and the homogenous blocks are extracted by selecting the several largest LGSE values. Then, the local standard deviation of each selected block is computed by the HLMAD, building up a cluster of local standard deviations. Finally, the noise standard deviation is estimated by picking the median of the cluster,

σ̂_n = median[σ̂_η],   (23)

where σ̂_η is the cluster of local standard deviations σ̂_η(x, y) and σ̂_n is the estimated noise standard deviation for the AWGN model. The whole proposed noise parameter estimation algorithm for the CCD sensor AWGN model is summarized in Algorithm 1.

Algorithm 1. Proposed noise parameter estimation algorithm for the AWGN model
Input: Noisy image f, window size N, block selection ratio δ.
Output: Estimated noise standard deviation σ̂_n.
Step 1: Group a set of blocks by sliding the window pixel-by-pixel.
Step 2: Compute the mean gray value f̄(x, y) of all pixels in each block.
Step 3: for block index k = 1:BN do
    Compute the LGSE of the block according to (8) and (9).
end for
Step 4: Exclude the LGSE of the blackest and whitest blocks according to (10) and (11).
Step 5: Select homogenous blocks based on the residual LGSE using (15).
Step 6: for homogenous block index t = 1:T do
    Obtain the local standard deviation of the homogenous block by HLMAD according to (16)-(18).
end for
Step 7: Estimate the noise standard deviation σ̂_n using the median estimator (23).
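Wiring the sketches together gives an illustrative end-to-end version of Algorithm 1 (it reuses hlmad_sigma from the previous sketch and, for brevity, walks a non-overlapping grid of blocks, whereas the paper slides the window pixel-by-pixel).

import numpy as np

def estimate_awgn_sigma(f, win=15, delta=0.10):
    f = np.asarray(f, dtype=float)
    blocks, entropies = [], []
    for x in range(0, f.shape[0] - win + 1, win):
        for y in range(0, f.shape[1] - win + 1, win):
            b = f[x:x + win, y:y + win]
            if not (15 < b.mean() < 240):             # Step 4: drop dark/bright
                continue
            g = b / (b.sum() + 1e-9)
            g = g[g > 0]
            blocks.append(b)
            entropies.append(-(g * np.log(g)).sum())  # Step 3: LGSE
    order = np.argsort(entropies)[::-1]
    n_sel = max(1, int(delta * len(blocks)))          # Step 5: top-delta blocks
    sigmas = [hlmad_sigma(blocks[i]) for i in order[:n_sel]]  # Step 6
    return float(np.median(sigmas))                   # Step 7, Eq. (23)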
Proposed Noise Parameter Estimation Algorithm for the PGN Model

We also work on the PGN model, which is the most realistic model for practical image sensor noise. The proposed estimation algorithm for the PGN model largely agrees with the proposed algorithm for the AWGN model. In contrast to Algorithm 1, for the PGN model we add the gray level histogram-based constraint rule to first remove disorderly blocks, and we compute the local mean value of the selected blocks. Then, the local mean-variance pairs (f̄, σ̂²_η) of the selected blocks are obtained. Finally, by applying the MLE estimator to the local mean-variance pairs (f̄, σ̂²_η), the two parameters ρ̂ and σ̂²_n of the PGN model are estimated credibly. The whole proposed noise parameter estimation algorithm for the image sensor PGN model is summarized in Algorithm 2.

Algorithm 2. Proposed noise parameter estimation algorithm for the PGN model
Input: Noisy image f, window size N, block selection ratio δ.
Output: Estimated NLF parameters ρ̂ and σ̂²_n.
Step 1: Group a set of blocks by sliding the window pixel-by-pixel.
Step 2: Compute the mean gray value f̄(x, y) of all pixels in each block.
Step 3: for block index k = 1:BN do
    Compute the LGSE of the block according to (8) and (9).
end for
Step 4: Exclude the LGSE of the blackest and whitest blocks according to (10) and (11).
Step 5: Utilize the gray level histogram-based constraint rule to remove disorderly blocks, and exclude the LGSE of low-frequency gray values according to (12)-(14).
Step 6: Select homogenous blocks based on the residual LGSE using (15).
Step 7: for homogenous block index t = 1:T do
    Compute the mean gray value of all pixels in the selected block using (7).
    Obtain the local variance of the homogenous block by HLMAD according to (16)-(18).
end for
Step 8: Estimate the NLF parameters ρ̂ and σ̂²_n by applying the MLE to the local mean-variance pairs according to (19)-(22).

Experimental Results

In this section, a series of experiments on synthesized noisy images of various scenes is conducted to evaluate the performance of the proposed noise estimation algorithm. Furthermore, several state-of-the-art and classical algorithms are selected for performance comparison. The experiments were performed in Matlab 2016a (developed by MathWorks, Massachusetts, USA) on a computer with a 3.2 GHz Intel i5-6500 CPU and 16 GB of random access memory.

Test Dataset

The test dataset is composed of 134 images, from which synthesized noisy images were generated by adding the AWGN or PGN component; it draws on three typical test sets: the classic standard test images (CSTI), the Kodak PCD0992 images [37] and the Berkeley segmentation dataset (BSD) images [38]. The CSTI set contains 10 classic standard test images of size 512 × 512, shown in Figure 7. The Kodak PCD0992 set contains 24 images of size 768 × 512 or 512 × 768 released by the Eastman Kodak Company for unrestricted usage; many researchers use it as a standard suite for image de-noising testing. The BSD contains 100 test images and 200 training images of size 481 × 321 or 321 × 481; in our experiments, we randomly selected 25 images from the Berkeley test set and 75 images from the Berkeley training set. The test dataset therefore covers various scene images, and testing on it shows that the algorithm is suitable for many scenarios.

Figure 7. Classic standard test images for synthesized noisy images: "Barbara," "Lena," "Pirate," "Cameraman," "Warcraft," "Couple," "Peppers," "Bridge," "Hill" and "Einstein."

Effects of Parameters

In this part, an experiment is conducted to compare the performance of the proposed noise estimation algorithm under various parameter configurations. To quantify the accuracy of the proposed algorithm, the mean error of the noise level estimation is used to compare the effects of different parameters, where σ²_added is the added noise variance, σ²_est is the group of estimated noise variances, and σ̄²_est is the average value of the estimated noise variances. In this parameter selection experiment, we set the noise level σ_n ∈ [0, 30]. Obviously, the more accurate the noise estimation algorithm, the smaller the mean error. The proposed noise estimation algorithm is block-based, and its performance is related to the block size. In this experiment, the test dataset of 134 selected images was processed by the proposed algorithm with different block sizes. The mean error bars for the different block sizes are shown in Figure 8, which indicates that the block size N = 15 is most suitable for these noisy scene images.
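Since the text above specifies the mean error only through its symbols, a plain reading is the average absolute deviation of the estimates from the added noise level; the sketch below uses that reading as a stand-in, which may differ from the authors' exact definition.

import numpy as np

def mean_error(estimates, added):
    # mean |estimated - added| over a batch of noisy test images
    return float(np.mean(np.abs(np.asarray(estimates, dtype=float) - added)))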
Comparison to AWGN Estimation Baseline Methods

In this part, four classical noise level estimation methods and three state-of-the-art noise level estimation methods are used in comparison experiments to evaluate the performance of the proposed AWGN level estimation method. Immerkaer's fast noise variance estimation (FNVE) [9], Khalil's median absolute deviation (MAD) based noise estimation [39], Santiago's variance mode (VarMode) noise level estimation [40] and Zoran's discrete cosine transform (DCT) based noise estimation [12] were chosen as the classical noise level estimation methods. Olivier's nonlinear noise estimator (NOLSE) [11], Pyatykh's principal component analysis (PPCA) based noise estimation [17] and Lyu's noise variance estimation (EstV) [13] were selected as the state-of-the-art noise estimation methods. Figure 9 shows three example images ("Barbara", "House", "Birds") selected from the three image datasets, together with the corresponding noise level estimation results obtained by the compared methods. The original images are shown in the first row, and Figure 9d-f shows their respective noise level estimation results. As shown in Figure 9, the DCT-based noise estimator performs well at low noise levels but underestimates the noise at higher levels; the FNVE, MAD, VarMode and NOLSE estimators easily overestimate the noise level; and the EstV estimator underestimates the noise level in every instance.
For most noise levels, both the PPCA-based AWGN estimator and the proposed AWGN estimator work well, and in these three scenes our proposed method outperforms the other existing noise estimation methods. To further prove the capability of the proposed method, we expanded the experiments to the full test dataset of 134 images. Tables 1-3 show the results of the different AWGN estimation algorithms on the CSTI images, the Kodak PCD0992 images and the BSD images, respectively, with added noise levels σ_n = 0, 1, 5, 10, 15, 20 and 25. As shown in Tables 1-3, the FNVE, MAD, NOLSE and VarMode estimators overestimate the noise level, because these filter-based methods fail to fully account for the effects of image textures on noise estimation. The DCT and EstV estimators work well at low noise levels, even better than the proposed estimator when the noise level is below five, but seriously underestimate the noise under high noise conditions. The reason is that these two methods use band-pass statistical kurtosis to extract the noise components and eliminate the interference of textures with the noise estimation.
At higher noise levels, these estimators remove too many noise components while removing textures, which leads to underestimation of the noise. Compared with the above methods, the PPCA estimator performs better, yet in most cases the proposed AWGN estimation method is superior even to the PPCA estimator. Generally speaking, the experimental results prove that the proposed AWGN estimator not only is suitable for most noise levels, but also has better estimation capability than the compared state-of-the-art methods.

For the PGN model, the CPG, RCF, SPGN and LPCA estimators [35] were selected as baseline methods because they are well studied and widely used for assessing new PGN estimators. Figure 10 shows three example images and their corresponding estimated NLF results. The noisy images with ρ = 0.5 and σ²_n = 10 are shown in the first row; the second row displays the NLF results of the proposed estimation method; and the third row shows the estimated NLF results of the other PGN estimation methods. It can be seen in Figure 10 that the CPG, SPGN and RCF methods overestimate the NLF in all three examples, while both the LPCA-based PGN estimator and the proposed PGN estimator work well, with the proposed method working better than LPCA on "Barbara" and "Birds" at noise parameters ρ = 0.5 and σ²_n = 10. As shown in Tables 4-6, the CPG, RCF and SPGN estimators overestimate the parameters of the NLF. Tables 4 and 5 show that the proposed PGN estimator works better than the other PGN estimators on the CSTI images and the Kodak PCD0992 images in all cases. Table 6 shows that the proposed PGN estimator and the LPCA-based PGN estimator deliver comparable estimation results and are better than the other estimators on the BSD images. This indicates that the LPCA-based PGN estimator performs well for BSD images of size 481 × 321, whereas the proposed PGN estimator obtains the best performance across all three image datasets, meaning that it works more stably for images of different sizes. Consequently, compared with these representative PGN estimators, the proposed PGN estimator is more accurate for various scenarios at different PGN parameters.

Blind de-noising is a pre-processing application that needs the estimated noise parameters. In this part, we show that the proposed AWGN and PGN estimators can promote the performance of existing blind de-noising algorithms. We chose the state-of-the-art de-noising algorithm BM3D [42] to remove the estimated AWGN from noisy images, and selected the representative PGN de-noising algorithm VST-BM3D [43] to eliminate the estimated PGN. In order to make a reliable comparison of the similarity between the noise-free image and the de-noised image, two image quality assessment measures are used in this paper: (1) the structural similarity index measurement (SSIM) [44] and (2) the peak signal-to-noise ratio (PSNR). The SSIM is defined as

SSIM(f, f_S) = [(2 u_f u_{f_S} + C_1)(2 σ_{f,f_S} + C_2)] / [(u_f² + u_{f_S}² + C_1)(σ_f² + σ_{f_S}² + C_2)],

where f is the noisy image, f_S is the noise-free image; u_f and u_{f_S} are the mean values of the noisy image and the noise-free image, respectively; σ_f, σ_{f_S} and σ_{f,f_S} are the standard deviations of the noisy image and the noise-free image and their covariance, respectively; and C_1 and C_2 are small positive constants used to avoid the denominator becoming zero. The PSNR is written as

PSNR = 10 log₁₀(255² / MSE),

where MSE is the mean squared error,

MSE = (1/(R·C)) Σ_{r=1}^{R} Σ_{c=1}^{C} [f(r, c) − f_S(r, c)]²,

where the size of the input noisy image is R × C and (r, c) is the pixel location.
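The two quality measures can be written down directly from the formulas above. Global whole-image statistics and the common constants C_1 = (0.01·255)² and C_2 = (0.03·255)² are assumptions on our part; the paper only calls them small positive constants.

import numpy as np

def ssim_global(a, b, L=255.0, k1=0.01, k2=0.03):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    ua, ub = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ua) * (b - ub)).mean()          # covariance sigma_{f, f_S}
    return ((2 * ua * ub + c1) * (2 * cov + c2)) / \
           ((ua ** 2 + ub ** 2 + c1) * (va + vb + c2))

def psnr(a, b, L=255.0):
    mse = ((np.asarray(a, float) - np.asarray(b, float)) ** 2).mean()  # MSE
    return 10.0 * np.log10(L ** 2 / mse)                               # PSNR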
Noise Estimation Tuned for AWGN Blind De-Noising

To quantitatively evaluate the performance of the proposed noise estimation algorithm for blind de-noising, all 134 test images of the test dataset were used in the AWGN blind de-noising experiment, with added noise levels σ_n = 1, 5, 10, 15, 20 and 25. The average SSIM and PSNR comparisons of the different AWGN estimators combined with BM3D for blind de-noising on the test dataset are shown in Figure 11. Figure 11a depicts the SSIM values of the different AWGN estimators with BM3D: the proposed estimator with the BM3D de-noising algorithm attains the largest SSIM values at all noise levels, demonstrating that the proposed AWGN estimator improves the structural similarity of blind de-noising. Figure 11b shows that the proposed estimator with BM3D also attains the highest PSNR values at all noise levels, indicating that the proposed AWGN estimator decreases the distortion of blind de-noising. In summary, the proposed AWGN estimator stably increases the performance of blind de-noising.

Figure 12 shows the average SSIM and PSNR values of the different PGN estimators combined with VST-BM3D for blind de-noising at different noise parameters. The proposed PGN estimator with the VST-BM3D de-noising algorithm has the largest SSIM and PSNR values at the different noise parameters and is superior to LPCA, demonstrating that the proposed PGN estimator improves the performance of PGN blind de-noising. From the above experiments, it can be seen that the proposed AWGN and PGN estimators not only work stably for various scene images with different noise parameters, but also promote the performance of blind de-noising.
Conclusions and Future Work

A novel method based on the properties of the LGSE and the HLMAD is presented in this paper to estimate the AWGN and PGN parameters of an image sensor at different noise levels. By using the map of the LGSE, the proposed method can select homogenous blocks from an image with complex textures and strong noise. The local noise variances of the selected homogenous blocks are efficiently computed by the HLMAD, and the noise parameters can be estimated accurately by the MLE. Extensive experiments verify that the proposed AWGN and PGN estimation algorithms have better estimation performance than the compared state-of-the-art methods, including FNVE, MAD, VarMode, DCT, NOLSE, PPCA and EstV for the AWGN model, and CPG, RCF, SPGN and LPCA for the PGN model. The experimental results also demonstrate that the proposed method works stably for various scene images at different noise levels and can improve the performance of blind de-noising. Therefore, combining the proposed noise estimators with recent de-noising methods to develop a novel blind de-noising algorithm that removes image sensor noise while effectively preserving fine details in actual noisy scene images is an important direction for our future studies.
Performing legal transactions based on entries in public registers. Selected issues

The article analyses the consequences of making further legal transactions with entities which should not be disclosed in public registers due to the invalidity of the legal transactions on the basis of which the entry in the public register was made. As a result of the conducted research, it was found that, despite an absolutely invalid deed/founding act, the partnership/company can be disclosed in the National Court Register, and further legal transactions are valid. If real estate is purchased on the basis of an absolutely invalid legal transaction, the disclosure of the purchaser in the second section of the land and mortgage register is not covered by the principle of public credibility of land and mortgage registers; this also applies to subsequent business real estate trading.

Introduction

Public registers are open and the data contained in them are covered by the principle of public credibility. It is presumed that the data entered in public registers are true and consistent with the actual and legal status [Article 3, 5 of the Act on LMR&M, 2 Article 17 section 1 of the Act on NCR, 3 Article 33 of the Act on FEA 4]. Legal transactions, on the basis of which entries are made in public registers, are carried out by natural persons, legal persons and organizational units without legal personality, and it may happen that these legal transactions are absolutely invalid. In such a case, no entry should be made in the relevant public register. However, it may happen that, despite the absolute invalidity of a legal transaction, it will be disclosed in the public register. This applies to situations where a court referendary, judge or official (regarding entries in the Central Registration and Information on Economic Activity) who makes an entry does not notice that the legal transaction is absolutely invalid. This can be illustrated by two cases.

In 2014, a deed of a registered partnership was concluded between a natural person R, B sp. z o.o. (represented by a proxy appointed by a resolution of the shareholders' meeting, due to the fact that the deed was concluded between a member of the management board R and that natural person R) and C spółka z ograniczoną odpowiedzialnością S. K. A. The latter, in turn, was represented by the sole management board member of the general partner, i.e. the natural person R. In accordance with Article 126 § 1 point 2 of the CCPC, the provisions on joint-stock companies apply accordingly to the limited joint-stock partnership. Article 379 § 1 of the CCPC stipulates that in contracts between a joint-stock company and a member of its management board, the joint-stock company is represented by the supervisory board or by a proxy appointed by a resolution of the general meeting. The appropriate application of this provision to a limited joint-stock partnership in which the sole general partner is a legal person, as in the analysed case, means that in a contract between a limited joint-stock partnership and a member of the management board of the sole general partner which is a limited liability company, the limited joint-stock partnership should be represented by a proxy appointed by a resolution of the general meeting. The minutes of the general shareholders' meeting must be drawn up in the form of a notarial deed [Article 421 § 1 of the CCPC in conjunction with Article 126 § 1 point 2 of the CCPC]. In the analysed case, C spółka z ograniczoną odpowiedzialnością S. K. A.
was not represented by a proxy appointed by a resolution of the general meeting, but by a member of the management board of B sp. z o.o. This was improper representation, so the deed of the registered partnership is absolutely invalid. Nevertheless, the registered partnership was registered in the register of entrepreneurs of the National Court Register: the court referendary did not notice that C spółka z ograniczoną odpowiedzialnością S. K. A. was improperly represented and made an entry in the public register, that is, in the register of entrepreneurs of the National Court Register. A registered partnership is incorporated upon its entry in the register of entrepreneurs of the National Court Register [Article 25¹ § 1 of the CCPC]. This entry is of a constitutive nature, which means that after the signing of a deed of registered partnership there exists a so-called pre-registered partnership, and only upon entry in the register of entrepreneurs is the registered partnership incorporated as an organizational unit without legal personality [Article 33¹ § 1 of the Civil Code, 1964 5].

In 2004, the housing cooperative U concluded an agreement for the establishment of a cooperative ownership title to a dwelling unit in favour of the president of the management board of that cooperative, the natural person P. The agreement was concluded in the form of a notarial deed, and the housing cooperative U was represented by two management board members other than the natural person P. According to Article 46 § 1 point 8) [Act of the Cooperative Law, 1982], the adoption of resolutions on legal transactions between a cooperative and a member of its management board, or on transactions carried out by a cooperative in the interest of a member of the management board, and the representation of the cooperative in such transactions fall within the competence of the supervisory board; two supervisory board members, authorized by the board, are sufficient to represent the cooperative. In the analysed case, in the agreement with the president of the management board P, the housing cooperative U was improperly represented by two other management board members, while it should have been represented by two members of the supervisory board. The agreement for the establishment of a cooperative ownership title to a dwelling unit in favour of the president of the management board, the natural person P, is therefore absolutely invalid, as it was concluded in a manner inconsistent with the principles of representation, and it cannot be validated. Nevertheless, a land and mortgage register was established for the cooperative ownership title to the dwelling unit, and the natural person P was entered there as the person entitled to the said unit. The notary who drafted the notarial deed made an error, as he failed to ensure that the housing cooperative U was properly represented in the agreement with the president of its management board. The housing cooperative was a regular customer of the notary office in which the transaction was carried out, and the president of the management board, acting in the name of the housing cooperative U, had previously carried out many transactions in that notarial office. The said notarial deed was drafted by a retired notary, a deputy of the notary running the office, which partly explains why the housing cooperative U could be improperly represented.
The retired notary could not have known that the transaction was being carried out between the housing cooperative U and the president of its management board unless the appearing party informed him thereof before or during the transaction or he asked about it; in the notarial deed there was no indication that the natural person P was the president of the management board of the housing cooperative U. The court establishing the land and mortgage register for the cooperative ownership title to the premises was therefore unable to conclude, on the basis of the notarial deed, that the cooperative had been improperly represented; hence the court established the land and mortgage register for the premises and entered the natural person P as the authorized entity.

The invalidity of the above-mentioned agreements, i.e. the deed of the registered partnership and the agreement establishing the cooperative ownership title to a dwelling unit, came to the attention of the author of this part of the article when the natural person R turned to the author, who is a notary, to amend the deed of the registered partnership by making a contribution in kind in the form of real property, and when the natural person P wanted to conclude a contract of donation of the cooperative ownership title to the dwelling unit for the benefit of his daughter. During a conversation with the notary, the natural person P revealed that, at the time when the transaction establishing the cooperative ownership title to the premises was carried out, he was the president of the management board of the housing cooperative U. In both cases, the notarial transactions were refused, i.e. the amendment of the deed of the registered partnership by making a contribution in kind in the form of real estate, and the donation of the cooperative ownership title to the premises for the benefit of the daughter.

The aim of the article is to analyse the consequences of further legal transactions with entities which should not be disclosed in public registers due to the invalidity of the legal transactions that provided the basis for making an entry in the public register; the scope of the research is limited to two public registers: the National Court Register and the land and mortgage register. The study mainly uses the formal-dogmatic method based on the analysis of the applicable legal provisions. The article provides a substantive interpretation of legal provisions with the use of the directive of criticism. The following analyses were conducted: an analysis of scientific publications, an analysis of court judgments related to the topic of the article, and an analysis of the practice of applying the law and its improvement. The formal-dogmatic method is insufficient to indicate legal solutions to the real problems of the functioning of public registers described above; the text therefore also includes an axiological analysis of the existing legal regulations regarding the application of the principles of transparency and equality in economic turnover, which allows conclusions to be formulated on the directions of changes in the analysed legal provisions.

Effects of making an entry in the National Court Register on the basis of an absolutely invalid legal transaction

In the case-law [… 3740] it is pointed out that a limited liability company registered in the relevant register exists regardless of the type of shortcomings that occurred at the conclusion of the articles of association.
In both analysed cases, entries in the relevant register were made for limited liability companies incorporated on the basis of absolutely invalid articles of association. The statement of reasons for the Resolution of the Supreme Court of 10 January 1992, III CZP 140/91, specifies that, despite the significant shortcoming occurring during the company incorporation process, the legal transactions made by the company before its removal from the register are undoubtedly effective, because the company existed and could therefore acquire rights and incur obligations. Although the above Supreme Court rulings were issued at the time when the Commercial Code 6 and the Commercial Register 7 were in force, they remain fully valid. With reference to companies, i.e. the limited liability company and the joint-stock company, Article 21 § 1 point 1) of the CCPC stipulates that the registry court may decide to wind up a company entered in the National Court Register if no articles of association have been concluded, and Article 21 § 6 of the CCPC specifies that a ruling on the winding-up of a company does not impair the validity of the legal transactions of the company that has been registered. The judgment of the Court of Appeal in Białystok of 9 November 2012, I ACa 534/12 [LEX No. 1235980], concerning an improperly formed registered partnership, stipulates that, upon entry in the register, the registered partnership is given the benefit of the doubt concerning the official nature of the information published therein, and that it is reasonable to limit the effects of a defectively concluded founding act, which is mainly dictated by the security of trade and the protection of the partnership's business partners who concluded contracts with it unaware of the defects of the founding act. From the above-mentioned provisions of the Code of Commercial Partnerships and Companies and the court rulings, it can be concluded that the registration in the National Court Register of a company whose deed is absolutely invalid results in the company being formed and operating in trade, and that legal transactions made after its registration in the National Court Register are valid and fully effective. Registration in the National Court Register of an entity whose deed/founding act is absolutely invalid does not, however, cure the absolute invalidity of that deed/founding act. This applies to every entity subject to entry in the National Court Register, i.e. entrepreneurs and entities that are not entrepreneurs, e.g. foundations and associations not conducting business activity. The above practical examples indicate that there are cases of registration in the public register, i.e. the National Court Register, of entities established under contracts that are absolutely invalid, because, among other things, the court referendary or the judge failed to notice that the articles of association or the deed are absolutely invalid. Such entities may operate in trade for a long time, concluding various short- and long-term contracts, which are valid in accordance with Article 21 § 6 of the CCPC, the Resolution of the Supreme Court of 10 January 1992, III CZP 140/91, and the judgment of the Court of Appeal in Białystok of 9 November 2012, I ACa 534/12. However, this state of affairs may not last long.
Persons establishing companies, foundations, associations and other entities may not be aware that a legal transaction aimed at establishing a given entity, under which the entity was registered in the National Court Register, is absolutely invalid. This also applies to most business partners of these entities. In everyday trading between companies, associations and foundations, when sales agreements and similar contracts are concluded, and between the aforementioned entities and consumers, it is very rare that the parties check the articles of association or deeds in terms of their validity. Entry of the entity in the National Court Register is a sufficient condition to conclude agreements both within the ordinary course of business and going beyond the ordinary course of business. In practice, however, it may happen that another participant in a business transaction will notice that the deed/founding act of the entity disclosed in the National Court Register is absolutely invalid. Such cases may occur when an entity seeks to obtain bank financing or concludes a lease agreement. Such contracts are usually concluded for quite long periods, and ensuring the return of financial resources/repayment of obligations under the lease agreement requires, among other things, becoming familiar with the deed/founding act establishing the entity, in order to check whether the conclusion of a financing agreement (credit, guarantee, surety, bond issue, leasing) requires the consent of any of its bodies, e.g. the shareholders' meeting, the general meeting, the supervisory board, etc. By becoming familiar with the deed/founding act, employees of, for example, a bank may find that the deed in question is absolutely invalid and cannot be validated. In such cases, it should be assumed that the principle of public credibility of a public register is excluded and the entity that has discovered that the deed/founding act of an entity disclosed in the National Court Register is invalid cannot conclude a financing agreement with the entity entered in the National Court Register on the basis of an absolutely invalid deed/founding act. The same applies to any other participant in trading who knows that the deed/founding act preceding an entry in the National Court Register is absolutely invalid. Any entity that decides to conclude a contract with an entity disclosed in the National Court Register, despite being aware that the deed/founding act preceding the entry is absolutely invalid, acts in bad faith. This also applies to the entity disclosed in the National Court Register, whose deed/founding act preceding the entry is absolutely invalid, and which is aware of the invalidity thereof. Absolute invalidity is an objectively existing condition that anyone can invoke (ab initio and erga omnes), and there is no need for it to be previously established by a court [Radzikowski, 2015, p. 193]. Any entity that notices that a company/partnership, foundation or association disclosed in the National Court Register was entered in the said register in spite of an absolutely invalid deed/founding act cannot enter into legal transactions with such companies/partnerships, foundations or associations, because it would act in bad faith. This also applies to persons performing professions of public trust, i.e. professional lawyers who are asked, for example, to amend the deeds/founding acts, e.g. notaries. 
The effectiveness of employment contracts concluded with entities that should not be entered in the National Court Register According to Article 3 [the Labor Code], organisational units 8 and natural persons have the capacity to be an employer. These entities are entitled to the employer's status if they hire employees. [Footnote 8: The literature of the subject emphasises that one can distinguish three types of organisational units having the legal capacity to employ employees. These are: legal persons (e.g. joint-stock companies and limited liability companies); independent organisational units without legal personality which may acquire rights and incur liabilities on their own behalf and may sue and be sued (registered, professional, limited and limited joint-stock partnerships, companies under organisation, tenants' associations, ordinary associations); and some organisational units forming part of legal entities (Z. Hajn).] However, it follows from Article 22 § 1 of the LC, as well as from the nature of the employment relationship, that it is enough to employ one employee to become an employer. Employers may be independent organisational units without legal personality, which may acquire rights and incur liabilities on their own behalf, sue and be sued, if they hire at least one employee. A registered partnership, as an organisational unit, can be regarded as an employer. However, it needs to be considered whether a defective registered partnership, which should not be entered in the National Court Register, may have the employer's status and whether employment contracts concluded with such a partnership will be effective. At the same time, it should be pointed out that the legislator, in the current state of law, ignores the issue of defective partnerships. The analysis of Polish legal literature indicates that few authors attempt to analyse the legal consequences associated with the formation of a defective partnership [Nowak, 2010, p. 325]. Some authors point out that a defective deed of registered partnership, in principle, does not affect the subjectivity of the partnership [Nowak, 2010, p. 325]. In support of this position, it should be pointed out that at the moment of registration the partnership comes into existence as a legal entity, which is directly stated in the case of registered partnerships in Article 25¹ of the CCPC. It should also be emphasized that upon entry in the register, a registered partnership enjoys the benefit of the doubt related to the official nature of the information contained and published therein. Thus, employees who concluded a contract of employment not knowing about the defects of the partnership's founding act should not suffer any adverse effects, in accordance with the principle of free access to the court register. In addition, this position is also supported by the principle of permanence of the employment relationship and the principle of protection of the employee's interest, expressed, among others, in the case of a person who, not being at least 14 years of age, nevertheless concluded an effective contract of employment in light of the law. 9 With regard to the above-mentioned arguments, it should be stated that the effects of concluding a defective deed of registered partnership should be limited to ex nunc effects only. 10 This position is also reflected in case-law. 11 
The Court of Appeal in Bialystok found that a defectively incorporated registered partnership is still effective against both third parties and in the internal relations between the partners. 12 [Footnote 9: The Supreme Court found that "a minor who, not being at least 14 years of age, has entered into a contract of employment and suffered a bodily injury in connection with employment as a result of an accident, is entitled to seek – before the bodies appointed to settle disputes over claims of employees on account of the employment relationship – …".] Therefore, it should be acknowledged that the defective incorporation of a registered partnership has no impact on its status as an employer, and the employment contracts concluded with such a partnership are effective. These contracts should also provide grounds for retirement and disability rights, and contributions paid to the Social Insurance Institution (ZUS) should not be questioned, among other things on the grounds of a defectively incorporated registered partnership. Similar conclusions apply to any other entity disclosed in the National Court Register which concluded an employment contract, despite the fact that its deed/founding act is absolutely invalid. Taxes paid by entities that should not be disclosed in the National Court Register The constitutive entry in the National Court Register enables the entities disclosed therein to participate in business trading, earn revenues, incur costs, generate income and pay taxes on the income obtained. This means that such entities can be taxpayers and tax remitters. The invalidity of legal transactions preceding the entry in the National Court Register does not affect the possibility of seeking a refund of tax paid by entities that should not be disclosed in the National Court Register. This follows, among other things, from the provisions discussed below. The fact that such entities operate in trade, even though the deed/founding act preceding their entry in the National Court Register is absolutely invalid, means that these entities are taxpayers. The absolute invalidity of the deed/founding act of an entity disclosed in the National Court Register is not, for example, a prerequisite for the application of Article 6 of the Goods and Services Tax Act 13 to transactions that are subject to the goods and services tax, carried out after the disclosure of the entity in the National Court Register. The partnership deed is subject to tax on civil law transactions pursuant to Article 1 section 1 point 1) letter k) of the Act on Tax on Civil Law Transactions. Where the partnership deed/founding act is absolutely invalid, there is no tax obligation under Article 3 section 1 point 1 of the Act on Tax on Civil Law Transactions, and the tax paid, if any, is not due and constitutes an overpayment subject to refund [Radzikowski, 2015, pp. 192-193]. Pursuant to Article 199 § 3 of the Tax Ordinance Act, the tax authority may conclude ex officio that the partnership deed/founding act is absolutely invalid. Termination of the legal existence of the entity disclosed in the National Court Register in spite of the invalidity of the legal transaction preceding the entry Based on Article 23 of the Act on the NCR, the registry court examines the deed, e.g. of a registered partnership, for irregularities. Where the registry court finds that the deed is invalid (among other things, due to improper representation of the parties), it should refuse registration ex officio [Nowak, 2007, p. 26]. 
Nevertheless, as in the case described in the introduction, it may happen that a registered partnership is entered in the register despite the invalidity of its deed. The question arises how to treat such defects, which, despite their existence at the stage of registration, were revealed only after the entry of a registered partnership in the National Court Register. It should be noted at the same time that in the current state of law there is no provision in the Act on the NCR regulating this matter. 14 Therefore, it is necessary to consider the possibilities that are available in this respect and are provided for, among others, in the CCPC. The literature of the subject presents several proposals to solve the problem of a defective partnership, which would fill the legal gap caused by the lack of regulations in this respect and would not require a unanimous resolution of the partners [Article 58 of the CCPC]: declaration of invalidity of the partnership on the basis of Article 189 of the Code of Civil Procedure, dissolution of the partnership on the basis of Article 63 of the CCPC, or treating the partnership in accordance with the rules applicable to companies through appropriate application of Article 21 of the CCPC in conjunction with Article 33¹ of the Civil Code [Nowak, 2010, pp. 325-326]. In this study, considerations regarding a defective partnership will be examined within the scope of the last two proposals. Declaration of invalidity of the partnership on the basis of Article 189 of the Code of Civil Procedure would mean that transactions performed by the defective partnership are invalid, with ex tunc effect. As mentioned above, upon registration a partnership is established as a legal entity, even if the deed of the partnership is invalid. Acknowledging that the partnership has no legal capacity would constitute a significant threat to the partnership itself, its employees and its business partners. In addition, with regard to partnerships/companies already registered in the National Court Register, the literature and case-law present a correct standpoint, namely that it is advisable to limit the effects of a defective founding act for the sake of the safety of business trading and the protection of business partners who concluded contracts with the partnership/company while being unaware of the defects of the founding act [Szajkowski, 2008, p. 795]. Therefore, it should be noted that the proposal to declare the partnership invalid on the basis of Article 189 of the Code of Civil Procedure is not appropriate in respect of a defective registered partnership due to its excessively far-reaching effects. As regards the dissolution of the partnership pursuant to Article 63 of the CCPC, it should be pointed out that in accordance with this article each partner may, for important reasons, demand that the partnership be dissolved by the court. However, the legislator did not include in the analysed provision an exhaustive list of important reasons. The decision was left to the partners, who can determine them in the deed of the partnership, and to the court. 15 The literature of the subject assumes that important reasons may be of objective or subjective (culpable and non-culpable) character [Kidyba, 2017]. Some authors emphasize that objective reasons, which make any further existence of the partnership unreasonable or virtually impossible, include, for example, a permanent inability of the partnership, resulting from structural reasons, to function in a competitive market. 
On the other hand, subjective reasons of non-culpable character include, among others, a chronic illness of the partners whose personal work for the company, due to their qualifications, is indispensable. The subjective reasons of culpable character are, for example, the conduct of criminal activity by the other partners under the cover of the partnership, or persistent violation by the other partners of the rights of the partner bringing the action [Rodzynkiewicz, 2014]. In the literature of the subject, it is also noted that important reasons for dissolution of a partnership must occur after the conclusion of the deed of the partnership [Nowak, 2010, p. 332], i.e. in the case of "subsequent" defects in the partnership relationship, which should be distinguished from a situation in which any irregularities that might potentially be regarded as "important reasons" for the dissolution already existed even before the formation of the partnership [Nowak, 2010, p. 332]. It should be pointed out that Article 63 of the CCPC does not limit important reasons only to those that occur after the formation of the partnership. Therefore, it seems reasonable to assume that the assessment of the validity of the reasons for dissolution of the partnership should lie with the court on the basis of the specific factual circumstances and regardless of whether the reasons arose before or after the formation of the partnership. The most appropriate approach is a general interpretation of an important reason, under which it covers all the circumstances whose existence or continuation creates an obstacle to the existence of the partnership. It should also be emphasized that the partner may make such a claim at any time. Competence in this area is vested neither in persons who have a legal interest nor in the court, which cannot act in this respect ex officio. De lege ferenda one should postulate the dissolution of the partnership for important reasons also by the court ex officio or at the request of persons who have a legal interest. The proposal to treat a defective registered partnership in the same manner as defective companies finds its justification in the scope of legal effects produced by a defective company. The provision of Article 21 of the CCPC provides grounds to state that a decision to dissolve the company does not affect the validity of the legal transactions of the company that has been registered. The analysed article, in contrast to Article 63 of the CCPC, allows for the dissolution of the company by the court, which can act ex officio. The cited provision of Article 21 of the CCPC also limits in time the possibility of requesting a company's dissolution. It should be pointed out, however, that the linguistic interpretation of Article 21 of the CCPC clearly indicates that the analysed provision applies only to companies, i.e. limited liability companies and joint-stock companies, and does not include partnerships. In addition, one should agree with the position presented in the literature that Article 21 of the CCPC may be de lege lata applied to defective partnerships due to the fact that, among other things, "in the period from the entry into force of the CCPC until the entry into force of the Act of 14 February 2003 on amendment of the Civil Law Act and Some Other Acts, which introduces to the legal system, inter alia, Article 33¹ of the CC, there were grounds to apply the said provisions by analogy, whereas beginning from 26 September 2003, these provisions could be applied accordingly" [Nowak, 2010, p. 334]. 
16 Nevertheless, the above position can be considered an example of a praeter legem interpretation. In connection with the above, the de lege ferenda postulate should be made that the solution adopted in Article 21 of the CCPC should also be extended to partnerships. Bearing in mind the above, it should be stated that after registration, the legal existence of a partnership may be challenged only with the use of the instruments provided for in the CCPC. A defective registered partnership can be dissolved pursuant to Article 58 of the CCPC and Article 63 of the CCPC. However, the dissolution of such a partnership can be effected only by the partners of the registered partnership. Thus, it should be pointed out that the legislator has differentiated the legal situation of partnerships and companies. However, such a differentiation has no legal justification. In addition, it places a registered partnership in a less favourable legal position than a company, in particular due to the period in which it is possible to demand dissolution of a company. De lege ferenda one should postulate the introduction of the same legal solutions for defective partnerships and companies. Effects of further real estate trading in case of a disclosure of the owner in the land and mortgage register despite an absolutely invalid contract as the basis for entry in the land and mortgage register In real estate trading, the absolute invalidity of a legal transaction takes place in the following cases: when a legal person was improperly represented; when a foreigner acquired real estate without the required permit of the competent minister of internal affairs [Article 6 section 1 of the Act on the Acquisition of Real Estate by Foreigners]; or when a sale was made in violation of the statutory pre-emption right which is vested under the Act in the State Treasury, a local government unit, a co-owner of the real estate, a tenant [Article 599 § 2 of the CC], or a co-holder of the cooperative ownership title to a dwelling unit [Article 17² section 6 of the Act on Housing Cooperatives]. In each of the above-mentioned cases, despite the conclusion of an absolutely invalid agreement to transfer the ownership of real estate, the perpetual usufruct right to land or the cooperative ownership title to the premises, and, in the case of the pre-emption right, the conclusion of an absolutely invalid sales contract, the purchaser may be disclosed in the land and mortgage register as a result of an error made by the notary drawing up the notarial deed and by the court referendary making an entry in the land and mortgage register in the ICT system. The Supreme Court in its judgment of 24 January 2002, III CKN 405/99 [LEX No. 53722], ruled that the principle of public credibility of land and mortgage registers does not protect the purchaser if the agreement to transfer the ownership of the real estate was invalid. The entity that is disclosed in the land and mortgage register, on the basis of an absolutely invalid agreement, as the owner, the perpetual usufructuary or the holder of the cooperative ownership title to the premises does not, while performing further disposal-related transactions, effectively transfer the titles disclosed in the second section of the land and mortgage register if it does not hold them itself. 
It should be consistently assumed that further trading in the real estate, the perpetual usufruct right to land or the cooperative ownership title to premises does not protect the subsequent purchasers, regardless of whether they act in bad or good faith, when one of the preceding legal transactions was absolutely invalid. A notary drawing up the notarial deed should read the document that entitles the seller to act in the capacity of the owner, perpetual usufructuary of the land or holder of the cooperative ownership title to a dwelling unit in order, among other things, to verify whether the document in question, for example a sales contract in the form of a notarial deed, is valid. If the notary determines that the contract under which the purchase was made is absolutely invalid, he/she is obliged to refuse to carry out any further notarial actions. A notary is not authorised, and therefore has no obligation, to inform the land and mortgage register court of the inconsistency between the legal state disclosed in the land and mortgage register and the actual legal state. It may happen that the person for whom the notary refuses to perform any further notarial actions due to the absolute invalidity of the previous legal transaction will approach another notary, who will not notice the said invalidity of the preceding legal transaction or will not become familiar with the notarial deed on the basis of which the acquisition took place. The latter cases may occur when the seller assures that he or she does not have a copy of the notarial deed of the preceding contract and the notary does not read the copy of such a notarial deed which is filed in the land and mortgage register. The seller, informed by another notary that the contract under which he/she was disclosed in the second section of the land and mortgage register is absolutely invalid, may, when trying to sell the real estate, intentionally conceal the content of the contract from another notary. A notary is not obliged to read the document on the basis of which the seller is disclosed in the second section of the land and mortgage register, although he or she should do so as a matter of professional diligence. The principle of public credibility of land and mortgage registers does not protect dispositions made free of charge or dispositions made for the benefit of a buyer acting in bad faith (Article 6 § 1 of the Act on LMR&M). For the existence or non-existence of the principle of public credibility, the good or bad faith of the seller does not matter. The further the real estate is traded after a contract that is absolutely invalid, the more difficult it will be to discover that one of the previous purchases was absolutely invalid. For example, the second purchaser has a copy of the notarial deed of the contract under which he/she purchased the real estate but does not have the purchase contract of his/her predecessor. He or she presents to the notary only the copy of the last notarial deed, and the notary is not obliged to verify in the land and mortgage register all the preceding documents that provided grounds for entries in the second section of the land and mortgage register. If it is discovered that the contract under which the entry in the second section of the land and mortgage register was made is absolutely invalid, it is necessary to conduct court proceedings to reconcile the inconsistency between the legal state of the real property disclosed in the land and mortgage register and the actual legal state (Article 10 of the Act on LMR&M). 
Summary The possibility of anyone invoking the absolute invalidity of a legal transaction does not solve the problem of the presence in business trading of entities disclosed in the National Court Register despite the fact that their deed/founding act is invalid, nor the problem of further trading in real estate, the perpetual usufruct right to land or the cooperative ownership title to premises when the entry in the second section of the land and mortgage register has been made despite an invalid agreement for the transfer of ownership, the perpetual usufruct right or the cooperative ownership title to the premises. The partners/shareholders of partnerships/companies disclosed in the National Court Register despite the invalidity of the deed/founding act, instead of dissolution, will seek to sell the shares or transfer all the rights and obligations to other persons, i.e. to cease to be partners/shareholders in the defective partnerships/companies. The same applies to the person disclosed in the second section of the land and mortgage register on the basis of an absolutely invalid contract. Instead of reconciling the content of the land and mortgage register with the actual legal state under Article 10 of the Act on LMR&M, which is unfortunately time-consuming and expensive, such a person may strive to sell the real estate, the perpetual usufruct right to land or the cooperative ownership title to the premises, provided that the notary performing the legal transaction and the purchaser do not notice that the grounds for its acquisition were absolutely invalid. This does not solve the problem of absolutely invalid legal transactions that give rise to subsequent events, but rather accumulates it. In order to improve the security of business trading in the event of a discovery that disclosure of an entity in the National Court Register or in the second section of the land and mortgage register has been made on the basis of an absolutely invalid legal transaction, any person who notices such an irregularity should be entitled to submit an application to the registry court or the land and mortgage register court to examine whether the legal transaction on the basis of which the entry has been made is valid. There is a danger that this entitlement may be abused by persons intending to impede real estate trading or undermine the credibility of entities disclosed in the National Court Register. If the court finds that the application is unfounded, it should obligatorily charge the costs of the application to the applicant and award damages therefrom for the benefit of the entities disclosed in the second section of the land and mortgage register, as well as order the publication of notices correcting the unfounded application at the expense of the applicant. The provision of Article 21 of the CCPC should apply to all commercial companies and partnerships, i.e. to both partnerships and companies, as this would enable the court to dissolve partnerships in the same situations as companies.
9,230.2
2019-07-30T00:00:00.000
[ "Law", "Economics" ]
Simulation of the Mode Dynamics in Broad-Ridge Laser Diodes In this publication a new approach to simulate the mode dynamics in broad-ridge laser diodes is presented. These devices exhibit rich lateral mode dynamics in addition to the longitudinal mode dynamics observed in narrow-ridge laser diodes. The mode dynamics are strongly influenced by higher order effects, which are described by effective interaction terms and can be derived from the band structure and the carrier scattering in the quantum well. The spatial dependency of the pump current densities plays a crucial role in the lateral mode dynamics, and thus a Drift-Diffusion model is employed to calculate the current densities with an additional capturing term. Eduard Kuhn. I. INTRODUCTION Fabry-Pérot type laser diodes find use in a diverse range of applications such as laser displays [1], [2], [3], [4] and projection [5], [6], [7], exhibiting phenomena related to mode competition [8], [9]. These lasers often demonstrate mode hopping, wherein the relative activity levels of different longitudinal modes change with respect to time due to an antisymmetric interaction among these modes. In a recent development, a similar effect has been noted in broad area laser diodes, where multiple lateral modes are present [10]. This mode interaction can be explained by the beating vibrations of carrier densities within the quantum well, which appear when several longitudinal modes are concurrently active [11]. The most common approach for simulating mode dynamics in these devices involves using rate equations, which entail formulating equations of motion for the photon numbers of the different optical modes and the quantum well carrier densities. An alternative method, the traveling wave method, involves solving a partial differential equation for the electrical field [12], [13], [14], [15], [16], [17], [18]. Nonetheless, thus far, simulations of mode dynamics using the traveling wave method have not succeeded in reproducing the experimental results. As the mode dynamics are strongly influenced by higher order effects such as beating vibrations of the carrier densities or spectral hole burning, it is insufficient to consider a spatially constant carrier density in the rate equations. One possibility is to use spatial and energy-dependent distribution functions [19], but these calculations are very computationally expensive. Alternatively it is possible to add an effective mode interaction term to the rate equations for the photon numbers, which was initially derived for maser devices by Lamb et al. [20]. In the literature this term can be found for single lateral mode Fabry-Pérot laser diodes [11], [21], [22], [23], [24], [25] and also for devices with multiple lateral modes [26]. 
Another important aspect for the simulation of Fabry-Pérot laser diodes with multiple lateral modes is the spatial dependency of the pump current densities. A higher current density in a specific area can result in an increased carrier density, leading to higher optical gain. Consequently, the modal gain of lateral modes increases when their mode functions reach a maximum in that region. Thus, it is important to accurately describe these pump current distributions in order to describe the lateral mode dynamics. For this purpose the Drift-Diffusion equations are often used in the literature to describe carrier transport and to study the properties of semiconductor laser devices [27], [28], [29], [30], [31]. In this paper a new method is presented, where the rate equations with an effective mode interaction term are combined with an improved description of carrier transport using the Drift-Diffusion equations. As the time scale of the mode dynamics is in the region of 100 ns, using a time-dependent Drift-Diffusion model would be too computationally expensive. Thus the Drift-Diffusion equations are solved for a steady state. The resulting pump current densities are used in a subsequent mode dynamics simulation, where only the carriers in the quantum wells are considered in the dynamic equations. In Section III, simulation results obtained with this method are shown for a green nitride laser diode with a ridge width of 10 μm. A. Mode Function The goal of this publication is the simulation of the mode dynamics in Fabry-Pérot type laser diodes. For this purpose the optical field is expanded using mode functions (1), where ε_0 is the vacuum permittivity, ω_mp is the angular frequency of the mode with indices mp, B_mp is the respective mode coefficient and u_mp(r) is the mode function. The goal of the mode dynamics simulation is to know how the mode coefficients B_mp, or the number of photons in each mode S_mp = |B_mp|², change with respect to time. It is assumed that the ridge width of the laser diode is small compared to the resonator length in order to be able to separate the longitudinal and transverse contributions of the mode functions u_mp(r). Here the index m is used to number the transverse modes t_m and p the longitudinal modes g_p. The coordinates are chosen so that the optical field propagates in the y direction and the device is grown in the z direction. For transverse electric (TE) modes the polarization primarily points in the x direction, t_m(x, z) = e_x t_m(x, z), and the transverse modes are given by the eigenvalue equation (2) [32] for a given refractive index profile n(x, z, ω) and the interface conditions for the electromagnetic field. In the simulations this equation is solved for a frequency ω_0 near the gain maximum in order to obtain the transverse mode functions t_m(x, z), their respective effective refractive indices n_eff,m(ω_0) and the group refractive indices. The group refractive index determines the longitudinal mode spacing and is used to calculate the mode frequencies (3), where L is the resonator length. The normalization of the mode functions is given by [33] ∫ dx dz n²(x, z, ω) |t_m(x, z)|² = 1, so that the photon numbers are given by the absolute squares of the expansion coefficients B_mp in (1).
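To make the relation between cavity geometry, group index and the longitudinal mode comb concrete, the following short Python sketch evaluates the mode spacing Δω = πc/(L n_gr) (the expression quoted in Section III) and the resulting comb of mode frequencies around a reference frequency ω_0. The cavity length, group index and reference wavelength below are illustrative assumptions, not the parameters of the device simulated in this paper.

```python
import numpy as np

# Illustrative values (assumed, not taken from the paper): a ~1 mm cavity
# and a group index typical for a nitride laser diode near 520 nm.
c = 2.998e8          # speed of light (m/s)
L = 1.0e-3           # resonator length (m), assumed
n_gr = 3.5           # group refractive index, assumed
lambda0 = 520e-9     # reference wavelength near the gain maximum, assumed

omega0 = 2 * np.pi * c / lambda0          # reference angular frequency
delta_omega = np.pi * c / (L * n_gr)      # longitudinal mode spacing

# Comb of longitudinal mode frequencies around omega0 (p = -5 ... 5)
p = np.arange(-5, 6)
omega_p = omega0 + p * delta_omega
wavelengths_nm = 2 * np.pi * c / omega_p * 1e9

print("mode spacing: %.3e rad/s" % delta_omega)
print("mode wavelengths (nm):", np.round(wavelengths_nm, 3))
```

In wavelength terms this spacing corresponds to roughly λ_0²/(2 n_gr L), i.e. a few hundredths of a nanometer for millimeter-long cavities, which is consistent with the dense longitudinal mode combs discussed below.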
B. Mode Dynamics The equations of motion for the carriers and photon numbers can be derived in the Heisenberg picture [33]; for the carrier densities n_e and n_h they are approximately given by (4) [26]. The first two terms describe the losses due to spontaneous emission and nonradiative processes. The next two terms are used to describe the pumping and diffusion of carriers inside the quantum wells, while the last term gives the losses due to stimulated emission. The stimulated emission term is proportional to the photon numbers S_mp, and the imaginary part of the susceptibility Im χ is evaluated at the respective mode frequencies. The losses due to spontaneous emission B(n_e, n_h) and the susceptibility χ(ω, n_e, n_h) depend on the carrier densities and are calculated using Fermi-Dirac distributions with a fixed temperature. The equations of motion for the photon numbers are given by (5) [26], [33]. The first three terms denote changes of the photon numbers due to stimulated emission, spontaneous emission and losses, respectively. The last term is used to describe an effective interaction between the optical modes due to third order effects. Without this last term there would be no mode competition effects, and in a simulation a steady state would be reached after a few nanoseconds in which only the longitudinal mode with the highest gain would be active. The effective mode interaction is derived in [26] and is given by a sum over all the other transverse and longitudinal modes (6). Here the mode interaction between two modes is proportional to their corresponding photon numbers S_mp and to an integral of the absolute squares of their respective transverse mode functions at the position of the quantum well, |t_m(x, z_QW)|², multiplied with a function G(Δω, n_e, n_h). This function determines the strength of the mode interaction and depends on the frequency difference ω_nq − ω_mp and on quantum well properties such as the carrier densities. A change of the carrier densities causes the susceptibility of the quantum wells to change. The resulting perturbations of the mode functions can also be included in the calculations by solving the one-dimensional eigenvalue equation (7) [34], where t̃_m is the corrected transverse mode function at the position of the quantum well and ñ_eff,m is the corrected effective refractive index. The effective dielectric function is constructed from t_0(x, z), the solution of the two-dimensional eigenvalue equation (2), from n²(ω_0, x, z), the respective refractive index profile, and from the change of the dielectric function in the quantum well. It is possible to solve the equations of motion for a given pump term, for example a constant pump current density. For laser diodes with larger ridge widths multiple transverse modes participate in the mode dynamics. For an accurate simulation of the mode dynamics it is therefore important to know the spatial dependency of the pump term, in order to estimate which transverse modes benefit based on their respective lateral mode profiles. For example, if the pump term has a global maximum at a certain point, then transverse modes will be important when their respective mode functions also have a maximum near that point.
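The structure of the coupled carrier/photon equations (4)-(5) and of the effective interaction term (6) can be illustrated with a deliberately reduced two-mode toy model. The sketch below is not the model of this paper: it collapses the spatially resolved carrier densities to a single carrier number, replaces the gain and the function G(Δω, n_e, n_h) by constant placeholder coefficients, and splits the interaction into an assumed symmetric and antisymmetric part; all numerical values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (all assumed): differential gain, transparency
# carrier number, photon/carrier lifetimes, pump rate, spontaneous-emission
# factor, and symmetric/antisymmetric mode-interaction strengths.
G_N, N_tr = 1.0e4, 1.0e8
tau_p, tau_n = 2.0e-12, 2.0e-9
P, beta = 1.5e17, 1.0e-5
G_sym, G_asym = 5.0e4, 2.0e4

def rhs(t, y):
    N, S1, S2 = y
    g1 = G_N * (N - N_tr)
    g2 = 0.999 * G_N * (N - N_tr)        # mode 2 has slightly lower gain
    # Photon equations: gain, cavity loss, cross-mode interaction, spontaneous seed
    dS1 = (g1 - 1.0 / tau_p - (G_sym + G_asym) * S2) * S1 + beta * N / tau_n
    dS2 = (g2 - 1.0 / tau_p - (G_sym - G_asym) * S1) * S2 + beta * N / tau_n
    # Carrier equation: pumping, nonradiative/spontaneous loss, stimulated emission
    dN = P - N / tau_n - g1 * S1 - g2 * S2
    return [dN, dS1, dS2]

sol = solve_ivp(rhs, (0.0, 50e-9), [N_tr, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=[1.0, 1e-3, 1e-3])
N_end, S1_end, S2_end = sol.y[:, -1]
print("carrier number %.3e, photon numbers %.3e / %.3e" % (N_end, S1_end, S2_end))
```

The antisymmetric part of the placeholder coupling is what makes the two modes compete in a direction-dependent way; without the interaction term both modes would simply settle into a steady state determined by their gain difference, which is exactly the behavior described above.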
C. Drift-Diffusion Model For a more detailed calculation of the pump term we perform a steady state calculation, where the carriers in the bulk material are treated separately from the carriers in the quantum wells. For the bulk carriers we use the Drift-Diffusion equations [35], [36], which are of the form ∇·(μ^M_e n^3D_e ∇μ_e) = R_e(n^3D_e, n^3D_h, φ) (and analogously for holes), where μ^M_e,h are the mobilities, μ_e,h are the chemical potentials, φ is the electrostatic potential and C is the charge density due to doping. The relationship between the bulk densities n^3D and the chemical potentials is given by the Boltzmann distribution as shown in [35], but it is possible to use other statistical functions [27], [28]. The equations for the quantum well carriers and the photon numbers are given by (4) and (5), where the time derivatives are set to zero. There are no mode dynamics in the steady state calculation; therefore it is sufficient to consider only one longitudinal mode per transverse mode, and here the mode with the frequency closest to the gain maximum ω_0 is used. The charge density of the quantum well carriers is included in the Poisson equation as an additional charge density, where d_QW is the quantum well thickness. The capture of the bulk carriers into the quantum wells is included in the recombination term, where the second term describes the losses due to spontaneous emission, SRH and Auger recombination [35]. The capture term is only considered for |z − z_QW| < d_QW/2 and is of the form C_e,h n^3D_e,h η_e,h(n_e,h, n^3D_e,h). The constants C_e,h determine the strength of the capture process, and the efficiencies η_e,h are given by an expression in which both f^Bulk_k,kz and f_k are Fermi-Dirac distributions that reproduce the densities n and n^3D for a fixed temperature. Similar capture terms can be found in the literature [31], [37]. For the bulk carriers the k·p method is used to calculate the three-dimensional band structure using the parameters of the material surrounding the quantum well. For the carriers in the quantum well the k·p method is used as well; here a one-dimensional differential equation is solved in order to obtain the two-dimensional band structure. The pump term in (4) is then given by the resulting capture rate. This way the equilibrium current densities can be calculated for every voltage and can be included in the dynamic simulations. However, in order to include changes of the pump current densities due to changes of the quantum well densities in the dynamic simulations, the equation C_e,h d_QW n^3D,0_e,h η_e,h(n_e,h, n^3D,0_e,h) = j^0_e,h is solved for an effective three-dimensional density n^3D,0_e,h using the two-dimensional density and the current density from the equilibrium simulation. The pump term in (4) is then given by the corresponding capture expression evaluated with the instantaneous quantum-well densities.
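The bookkeeping that links the equilibrium Drift-Diffusion solution to the pump term of the dynamic simulation can be sketched as follows. The capture efficiency η below is only a placeholder (a simple filling-dependent saturation), not the Fermi-Dirac expression used in the paper, and all constants are assumed values; whether j⁰ is interpreted as a charge or a particle current density is also an assumption here. The point is merely the two steps described above: solve C d_QW n^3D,0 η = j⁰/q for an effective bulk density, then reuse that density in the dynamic pump term.

```python
import numpy as np
from scipy.optimize import brentq

q = 1.602e-19      # elementary charge (C)
d_QW = 2e-7        # quantum well thickness (cm), i.e. 2 nm
C_e = 2e10         # capture constant (1/s), assumed
n_sat = 4e13       # sheet density at which the placeholder eta vanishes (cm^-2), assumed

def eta(n2d, n3d):
    """Placeholder capture efficiency: decreases as the quantum well fills."""
    return max(0.0, 1.0 - n2d / n_sat)

def effective_bulk_density(j0, n2d_eq):
    """Solve C_e * d_QW * n3d0 * eta(n2d_eq, n3d0) = j0/q for n3d0.

    j0 is the equilibrium pump current density (A/cm^2) and n2d_eq the
    equilibrium quantum-well sheet density (cm^-2), both taken from the
    steady-state Drift-Diffusion solution.
    """
    f = lambda n3d: C_e * d_QW * n3d * eta(n2d_eq, n3d) - j0 / q
    return brentq(f, 1e14, 1e22)

def pump_rate(n2d, n3d0):
    """Pump term for the dynamic QW equation (carriers per cm^2 and second)."""
    return C_e * d_QW * n3d0 * eta(n2d, n3d0)

n3d0 = effective_bulk_density(j0=5.0e3, n2d_eq=1.0e13)   # 5 kA/cm^2, 1e13 cm^-2
print("effective bulk density: %.2e cm^-3" % n3d0)
print("pump rate at n = 2e13 cm^-2: %.2e cm^-2 s^-1" % pump_rate(2.0e13, n3d0))
```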
III. RESULTS In this section results for an example structure with a single quantum well and a ridge width of 10 μm that exhibits multiple transverse modes are shown. The structure is described in more detail in Table II in the appendix. To calculate the two-dimensional mode functions in (2) the dielectric function from [38] is used. Additionally the dielectric function of the quantum well is considered for a reference carrier density of 1 × 10¹³ cm⁻². As mentioned before, the band structure of the quantum well is required to compute the optical properties such as the susceptibility. For this purpose the k·p Hamiltonian proposed by Chuang et al. [39], [40], [41] is used; the method is described in more detail in [26]. The thickness of the InGaN quantum well used in the simulations is 2 nm, the indium concentration is 28 %, and the material parameters are taken from [40]. The formulas for the susceptibility and the mode interaction term are given in the appendix. For the charge carrier mobilities the model described in [42] is used. The Drift-Diffusion equations are solved using the finite volume method with the Scharfetter-Gummel scheme for the carrier current densities [43]. The remaining simulation parameters are shown in Table I. In Fig. 1 the current densities and the bulk carrier recombination are shown for a voltage above threshold. With the chosen parameters most of the carrier recombination is caused by the capture term at the position of the quantum well. The steady-state simulation also gives access to the photon numbers S_mp, which allows the calculation of the optical power, shown in Fig. 2 as a function of the current I. This can be useful for the calculation of the threshold current and for choosing the loss parameters in order to reproduce experimental results. The corresponding pump current densities are shown in Fig. 3 for different voltages. Due to the low hole mobility the pump current densities are high below the ridge and small in the other regions, but as the n-contacts are on the sides and far away from the ridge, the pump current densities have a maximum near the edge of the ridge. This maximum is more pronounced for larger voltages. The method can also be used for structures with multiple quantum wells. In this case the pump current densities depend strongly on the capture coefficients, which is shown in Fig. 4 for a structure with two quantum wells. For equal capture constants the current densities are split equally between the quantum wells. However, due to the lower hole mobility the hole capture constant is expected to be larger. As expected this results in a higher pump current density for the quantum well on the p-side. Using these results it is possible to perform a mode dynamics simulation, as shown in Fig. 5 for different currents. These simulations are carried out by solving the equations of motion (4) and (5), which yield the photon numbers S_mp over time. In addition the one-dimensional eigenvalue equation (7) is solved at every time step in order to correct the mode profiles and to calculate the effective refractive index and the group refractive index. These refractive indices are then used in (3) to calculate the mode frequencies at every time step, which are shown in Fig. 6 as a function of time. At the beginning of the simulation they change due to changes of the carrier densities in the quantum well. However, after the relaxation oscillations are finished, the mode wavelengths remain constant in time and the changes do not play a role for the simulation of the mode dynamics. The simulated streak camera images of Fig. 5 are then obtained by performing a sum over all modes at every time step, where the contribution of each mode is given by a Gaussian function centered at the respective vacuum wavelength and with an amplitude given by the photon number.
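The construction of the simulated streak-camera images described above (a sum over all modes, each represented by a Gaussian of roughly 0.02 nm width centered at its vacuum wavelength and weighted by its photon number, cf. the caption of Fig. 5) is straightforward to reproduce. The sketch below uses randomly generated photon numbers and wavelengths purely as placeholders for the simulation output.

```python
import numpy as np

rng = np.random.default_rng(0)

n_t, n_modes = 400, 6
t = np.linspace(0, 100e-9, n_t)                 # time axis (s)
S = rng.random((n_t, n_modes)) * 1e4            # photon numbers (placeholder)
lam = 520.0 + 0.04 * np.arange(n_modes)         # mode wavelengths in nm (placeholder)
lam = np.broadcast_to(lam, (n_t, n_modes))

wl_axis = np.linspace(519.8, 520.4, 600)        # wavelength axis of the image (nm)
sigma = 0.02                                    # Gaussian width in nm (as quoted in Fig. 5)

# image[i, j] = sum over modes of S * exp(-(wl_j - lambda_mode)^2 / (2 sigma^2))
diff = wl_axis[None, None, :] - lam[:, :, None]
image = np.sum(S[:, :, None] * np.exp(-diff**2 / (2 * sigma**2)), axis=1)
print(image.shape)   # (n_t, len(wl_axis)): intensity vs. time and wavelength
```

The resulting array can be plotted directly as an intensity map over time and wavelength, analogous to the simulated streak-camera images.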
Fig. 6. Changes of the mode wavelengths over time for I = 150 mA for the first three transverse modes. As the changes of the mode wavelengths are caused by changes of the quantum well refractive index, the quantum well carrier number is also shown as a dashed line; the carrier number was calculated by integrating both the hole and the electron densities over the quantum well area. The effect of mode hopping can be observed, where the laser output wavelength periodically changes from lower to higher values. As multiple transverse modes participate, the contributions of the respective longitudinal modes overlap, making it difficult to identify individual modes. This has also been observed experimentally for laser diodes with an even broader ridge of 40 μm [10], and the overall mode dynamics are very similar to these experimental observations. As the effect of mode hopping is observable for small-ridge laser diodes and can be explained by an interaction of the longitudinal modes, the coupling between different transverse modes does not seem to be as significant. This can be explained by the integral for the mode interaction in (6), which is larger for two modes with the same transverse mode profile, as the overlap of the respective absolute squares is increased. To investigate this further, it can be of interest to look at the contributions of different transverse modes separately, which is shown in Fig. 7. In this case the different longitudinal modes can be observed, and the frequency separation Δω = πc/(L n_gr) is determined by the cavity length L. For the considered structure the fundamental mode is not active, because the pump current maximum is not in the middle of the structure but near the edge of the ridge, as illustrated in Fig. 8, where the transverse mode functions at the position of the quantum well are shown together with the pump current density for a current of 200 mA. If the current is increased further, more transverse modes will participate due to spatial hole burning. IV. SUMMARY In this publication a model for the simulation of mode dynamics in Fabry-Pérot type laser diodes with larger ridge widths and multiple lateral modes is presented. The carrier transport from the contacts to the active region is calculated for the steady state using the Drift-Diffusion model. The carriers in the quantum well are treated separately and are connected to the bulk carriers using a capture term. The interaction between two different modes can be derived from third-order effects and is described in the simulation by an effective interaction term. This term strongly depends on the difference of the mode frequencies and on the quantum well carrier densities. Simulation results are shown for a green nitride laser diode with a ridge width of 10 μm, but in principle this model can be used for other materials and geometries. The effect of mode hopping can be observed in the resulting mode dynamics, which can also be observed experimentally for similar structures [10]. However, the experiments show multiple mode clusters. In order to obtain the same behavior in simulations the presented model could be extended; for example, fluctuations of the indium content in the quantum wells can be taken into account in future work.
A. Calculation of the Susceptibility and the Mode Interaction For the calculation of the susceptibility and the mode interaction the same formulas as in [26] are used. In order to keep the calculations simple for the example structure, a constant scattering time τ_s is used to describe the carrier scattering. It is of course possible to use more complicated scattering terms as described in [26] in more detail. Similarly a dephasing constant γ is used to describe the homogeneous broadening, and Hartree-Fock corrections are neglected. In this case the susceptibility is given by an expression in which λ is used as the index for the hole bands and the carrier distributions are given by Fermi-Dirac distributions for a given temperature and carrier densities. The quantum well band structure ε^λ_k and the momentum matrix elements p_λ are calculated using the k·p method. The mode interaction in (6) can be split into symmetric and antisymmetric contributions with respect to the frequency difference of the two modes, Δω. For a constant scattering time the symmetric mode interaction strength is given by (10), G_S(Δω) = ω_0⁴ Im χ(ω_0) Im χ(ω_0) / Δω. The mode interaction term is shown in Fig. 9 for the parameters used in the simulations and different carrier densities. The carrier losses in the quantum well due to spontaneous emission are given by the expressions in [26], [44]. The spontaneous emission spectrum I_SE in (5) is calculated like the susceptibility, except that the factor 1 − f^e_k − f^λ_k is replaced by f^e_k f^λ_k. B. Structural Parameters The example structure with one quantum well used in the simulations has been obtained by stretching the structure in [45] in the lateral direction. The structural parameters are given in Table II. The example structure with two quantum wells is obtained by repeating the 2 nm and 8 nm layers. The p-contact is located on top of the ridge and the n-contact is on top of the substrate with a distance of 10 μm to the diode. The reference densities n^ref_e and n^ref_h are included in the two-dimensional eigenvalue equation and are chosen close to the threshold densities. Fig. 1. Solution of the stationary Drift-Diffusion model for a single QW laser diode with a ridge width of 10 μm and a voltage of 4 V. The arrows denote the carrier current densities for electrons and holes, while the colors indicate the recombination term. On the right side the region near the quantum well is shown in more detail. The positions of the quantum well and the contacts are indicated in orange. Fig. 2. The blue line shows the optical power as a function of the current for the steady state simulation. The efficiency is also shown as an orange line, which has been calculated by dividing the optical power by the electric power I·U. Fig. 3. Electron pump current densities from (9) for different voltages near the threshold. The hole pump current densities look almost identical. The grey area indicates the region below the ridge. Fig. 4. Electron pump current densities from (9) for the structure with two quantum wells, shown for different ratios C_h/C_e of the capture constants. The structure is the same as in Fig. 3 except for the additional QW and is given in Table II in the appendix.
Fig. 5. Longitudinal and transverse mode dynamics of a Fabry-Pérot laser diode with a single quantum well for four different currents. Here the output of the laser is shown as a function of wavelength and time. The data for this figure is calculated by multiplying the time-dependent photon numbers from the simulation with Gaussian functions that are centered at their respective vacuum wavelengths with a width of 0.02 nm. Fig. 7. Contributions of different transverse modes to the mode dynamics for a laser diode with a single quantum well and a current of 200 mA. Fig. 8. The first five transverse mode functions at the position of the quantum well are shown as a function of the lateral coordinate x. Additionally the electron pump current density for a current of 200 mA is shown as a blue dashed line. Fig. 9. Mode interaction strength from (10) as a function of the frequency difference Δω for different carrier densities. TABLE II. Structural parameters for the laser diode used in the calculations, adapted from [45].
5,705
2024-04-01T00:00:00.000
[ "Physics", "Engineering" ]
Interfaceless Exchange Bias in CoFe2O4 Nanocrystals Oxidized cobalt ferrite nanocrystals with a modified distribution of the magnetic cations in their spinel structure give rise to an unusual exchange-coupled system with a double reversal of the magnetization, exchange bias, and increased coercivity, but without the presence of a clear physical interface that delimits two well-differentiated magnetic phases. More specifically, the partial oxidation of cobalt cations and the formation of Fe vacancies at the surface region entail the formation of a cobalt-rich mixed ferrite spinel, which is strongly pinned by the ferrimagnetic background from the cobalt ferrite lattice. This particular configuration of exchange-biased magnetic behavior, involving two different magnetic phases but without the occurrence of a crystallographically coherent interface, revolutionizes the established concept of exchange bias phenomenology. The exchange bias (EB) effect, also referred to as unidirectional or exchange anisotropy, describes a magnetic coupling observed in core−shell nanocrystals (NCs) or thin films, generally between an antiferromagnet (AFM) and a ferro- or ferrimagnet (FM and FiM, respectively) separated by a physical interface.1 The EB effect with FM/FM, FiM/FiM, AFM/AFM, or FM/spin glass exchange interactions has also been reported,2−6 as well as more exotic systems stemming from interfacial spin configurations (that is, noncollinear or frustrated interface spins)6−8 or even in magnetic NCs holding antiphase boundaries due to their strained crystalline structure.9,10 Such coupling produces a horizontal shift in the hysteresis loop after cooling under an applied magnetic field and is often accompanied by an increase in its coercive field (H_C), endowing these systems with huge relevance in many technological applications related to permanent magnets11 or magnetic recording media.12,13 Given that EB is by definition an interfacial phenomenon dependent on a physical boundary between two well-differentiated magnetic components,1,14 fine-tuning of the dimensions, nature, and overall quality of such an interface is needed in order to control the magnetic coupling.15,16 In this context, thin interfacial layers with FM or AFM properties generated at film−substrate interfaces, driven by a structural17 or magnetic reconstruction,18 as well as spin disorder,19 can add new degrees of freedom for engineering across heterointerfaces. The origin of EB is known to lie in pinned uncompensated interfacial spins,20 but a crucial influence of the inner (bulk) pinned uncompensated spins from the AFM component was recently demonstrated.21 In fact, the interfacial spin distribution can be modified by the bulk AFM magnetic landscape, for instance, via nonmagnetic impurities or crystallographic defects, both of them conducive to AFM order dilution and the consequent AFM domain formation.21,22 These phenomena have been investigated in AFM materials, but the underlying physical mechanism can be considered for their FiM counterparts. Among the FiM candidates for the development of EB systems, combinations of different spinel ferrites stand out, given their potential in spintronics. 
23,24 These spinel-type oxides are prone to disorder and exchange processes on the cation sublattices, with the normal and the inverse spinel as the two limiting cases of an ordered sublattice occupation. The disorder, if controlled, can perturb the ideal local coordination, for instance, by inducing charge imbalances and ion vacancies, all of this having a huge impact on the heterostructure behavior. Consequently, the electrical and magnetic properties of these ferrites and, in general, of transition metal oxide heterostructures,25−27 can be modified when tailoring the interfacial and lattice characteristics through ionic motion.28−31 Nevertheless, some control of the ion migration is required to avoid the otherwise deleterious effects degrading the EB.32 Still, despite the important advances made in understanding the EB effect,1,21 its analysis in single-phase objects lacking a core−shell or a layered structure, that is, lacking a physical interface between two magnetic phases, has been reported only scarcely.11,18 Herein, we present a confined chemical treatment at the surface of single-crystallite CoFe2O4 NCs by which no change in the spinel crystalline structure is observed, but an ionic rearrangement in the subsurface and surface regions of the spherical NCs is induced. This situation offers a unique exchange coupling interaction within the same NC, without establishing a physical or crystallographically coherent interface. Yet, the magnetization reversal of the modified NCs is observed to occur in two steps and to come along with an increase in coercivity and an EB shift, suggesting the existence of a strong exchange interaction between two magnetic components. The chemical changes registered, associated with the cation rearrangement in the spinel structure, help understand the magnetism displayed and underline the possibilities of this new chemical route for the engineering of EB-related functionalities for final device applications. CoFe2O4 NCs with a spherical shape and narrow size distribution (10.6×/1.3 nm average diameter (95.5%), log-normal fit) were synthesized by seed-mediated growth (experimental details and Figure S1 in the Supporting Information (SI)). Figure 1a includes a TEM image of the NCs, and Figure 1b shows their powder XRD pattern at room temperature, which is indexed to a cubic spinel structure (Fd3̅m symmetry group) and allows secondary phases to be discarded.−35 Elemental analysis using ICP-OES indicates an average Co0.95Fe2.05O4 stoichiometry (from now on termed CoFe2O4). A fraction of the same batch of these CoFe2O4 NCs was subsequently immersed and confined in a basic aqueous medium using a water-in-oil (W/O) reverse microemulsion, that is, stabilizing them with a nonionic surfactant (Igepal CA-520) in water droplets in a hydrophobic continuous phase.36 Besides the surfactant, these water droplets of very small volume exposed the NCs to a high pH, promoting an oxidation process at their surface.36,37 This sample is hereafter labeled as CoFe2O4@Ox. 
To shed light on the effect these conditions exert on the CoFe 2 O 4 spinel structure, a high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) study of the as-synthesized (Figure 1c and d and Figure S2a and c, CoFe 2 O 4 sample) and the chemically treated (Figure 1e and f and Figure S2b and d, CoFe 2 O 4 @Ox sample) NCs was performed.These images show no clear differences in the crystalline structure of these representative NCs, lacking in both cases the core−shell structure expected from a superficial oxidation.Indeed, the spinel crystalline lattice highlighted at these high resolution images does not have defects, dislocations, or twin boundaries up to the surface.The higher resolution image included in Figure 1e, obtained along the [110] zone axis, permits us to appreciate distinct contrast associated with the positions of the atomic columns, offering an enlarged view of the defect-free spinel structure in the whole nanocrystal from the CoFe 2 O 4 @Ox sample.Interestingly, we can appreciate neither Moiréfringes nor grain or antiphase boundaries. 38Moreover, the size histogram of the samples (fitted to log-normal function) does not show apparent modifications either (Figures 1g and h), and the analysis of atomic column positions in a 2D projection of a CoFe 2 O 4 @Ox NC shows the absence of systematic strain fields (Figure S3), thus excluding the presence of interfacial strain between two crystalline phases. Yet, the magnetic behavior of the two CoFe 2 O 4 and CoFe 2 O 4 @Ox samples points to an important change in the configuration of the magnetic cations in the spinel structure.Figure 1i includes the comparison of the magnetic properties displayed by the as-synthesized CoFe 2 O 4 sample (blue curve) and by the CoFe 2 O 4 @Ox sample (red curve) at 10 K.The value of maximum magnetization registered for the CoFe 2 O 4 sample is ∼87 Am 2 /kg when applying the maximum field (7 T), close to that of the bulk cobalt ferrite saturation magnetization (M S(bulk) ∼ 90 Am 2 /kg) 39 and similar to other values reported for NCs. 40,41This is in agreement with the very good crystallinity of the NCs observed by HAADF-STEM and the stoichiometry registered.Anyhow, different effects related to a canted surface spin structure, 42 a gradient in the magnetic cations ratio moving outward from the core to the surface, 43−45 or a cationic disorder in the crystalline structure at the surface can explain the slight difference.While this value of M S when applying the maximum 7 T field drops to ∼45 Am 2 /kg for the CoFe 2 O 4 @Ox sample, the coercive field value increases (μ 0 H C = 1.06 T and μ 0 H C = 1.26 T for CoFe 2 O 4 and CoFe 2 O 4 @Ox samples, respectively).In both cases, these high values of μ 0 H C are related to the high magnetostriction of CoFe 2 O 4 , due to the strong spin−orbit coupling from the Co 2+ ions in the crystalline lattice. 
43,46In addition, the shape of the hysteresis loop has evolved after the chemical microemulsionbased treatment, with two reversals of magnetization (one of them much larger), seen as two inflection points (μ 0 H S1 and μ 0 H S2 magnetic fields) around 5 and 10 mT and 1.14 and 1.55 T when comparing the derivatives (dM/dH), shown in Figure 1j.The significantly larger reversal contribution at low field in the CoFe 2 O 4 @Ox sample clearly hints that a cationic modification took place.Similar contributions at low field were reported in samples of CoFe 2 O 4 NCs synthesized by a coprecipitation method under alkaline conditions, 34,47,48 where phase segregation (due to Co 3 O 4 and/or Fe 2 O 3 ) or a phase with a reduced crystallinity was formed, but not detected in our samples in STEM. Figure 1k displays the temperature dependent magnetization curves, measured under ZFC (zerofield-cooled) and FC (field-cooled) conditions and recorded applying a field of 10 mT.Whereas the mass magnetization value of the CoFe 2 O 4 @Ox has decreased notably in comparison to that of the initial sample (in line with the hysteresis loops), the shapes of the ZFC and FC curves are very similar.Based on these ZFC-FC curves, it is possible to estimate the energy barrier distribution (in terms of the blocking temperature, T B : 49−51 fitted to a log-normal function in agreement with the size distribution; see Figure 1g,h,l).−54 This match in the blocking temperature reflects the very similar magnetically coherent volumes of FiM material in both samples, 55 despite the fact that there are two reversals of magnetization.Such a finding is in line with the absence of an interface or any other crystalline defect in the crystalline structure and the absence of byproducts (smaller nanoparticles and/or low anisotropy magnetic phases). In order to further analyze the switching behavior in the CoFe 2 O 4 @Ox sample in terms of exchange-coupling properties, we measured the hysteresis loops under ZFC and FC conditions applying an external magnetic field of 5 T (Figure 1m).Though there is a decrease in coercivity, the FC hysteresis loop shows a negative field shift (μ 0 H E = −56.9mT) associated with an EB effect.Such coupling, usually attributed to a FM/AFM interaction, is strong enough to produce a unidirectional anisotropy that causes the observed shift.The reduced coercivity registered for the CoFe 2 O 4 @Ox sample under FC conditions (compared to that of the ZFC loop) also unveils a reduction of the effective magnetic anisotropy, likely induced by the large cooling field.−58 In our case, this observation can be understood considering the presence of pinned uncompensated spins, which align with the sufficiently large cooling field employed and induce frustration of the FiM exchange coupling.The presence of this hypothesized larger number of pinned uncompensated moments is supported experimentally by the large drop in saturation magnetization but increased coercivity.Overall, these results point to a crucial influence of the synthetic conditions on the magnetic behavior of the CoFe 2 O 4 @Ox sample with respect to the as-synthesized one. 
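The loop-shift and blocking-temperature quantities used above follow standard definitions: the exchange-bias field is the midpoint of the two coercive fields of the field-cooled loop, the coercivity is half their separation, and the energy-barrier distribution f(TB) is commonly estimated from the temperature derivative of the difference between the FC and ZFC magnetization. A minimal sketch of that bookkeeping is given below; the arrays and sign conventions are placeholders, not the authors' actual processing.

```python
import numpy as np

def coercive_fields(H, M):
    """Return the zero-crossing fields of a hysteresis loop.
    H, M: 1D arrays describing the full loop (descending then ascending branch)."""
    crossings = []
    for i in range(len(M) - 1):
        if M[i] == 0.0:
            crossings.append(H[i])
        elif M[i] * M[i + 1] < 0.0:
            # Linear interpolation of the zero crossing between samples i and i+1.
            frac = -M[i] / (M[i + 1] - M[i])
            crossings.append(H[i] + frac * (H[i + 1] - H[i]))
    return crossings  # two values are expected for a single loop

def exchange_bias_and_coercivity(H, M):
    hc_minus, hc_plus = sorted(coercive_fields(H, M))[:2]
    H_E = 0.5 * (hc_plus + hc_minus)   # loop shift (negative for the FC loop discussed here)
    H_C = 0.5 * (hc_plus - hc_minus)   # coercive field
    return H_E, H_C

def blocking_temperature_distribution(T, M_zfc, M_fc):
    """f(T_B) is proportional to -d(M_ZFC - M_FC)/dT = d(M_FC - M_ZFC)/dT."""
    f = -np.gradient(M_zfc - M_fc, T)
    area = np.trapz(f, T)
    return f / area if area != 0 else f
```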
Aiming to corroborate the idea of the confined chemical effect in the micelles as the origin of the change in the magnetic behavior observed for the CoFe2O4@Ox sample, we performed an additional mapping of the Fe and Co distributions and investigated their electronic configuration using electron energy loss spectroscopy (EELS). Figure 2a shows the elemental mapping for the CoFe2O4 and CoFe2O4@Ox samples, which indicates a similar increasing concentration of Co toward the surface of the nanocrystal in both cases. Although we cannot completely exclude an effect of electron beam irradiation, the results of different experimental techniques corroborate that CoFe2O4 or MnFe2O4 NCs synthesized via thermal decomposition typically have an increased content of Co or Mn at their surface owing to the different decomposition temperatures of the metallic precursors. 43−45,59 In this regard, the hysteresis loops of the initial sample show a very small reversal of magnetization at low field, which can stem from the cobalt patches observed at the surface and present in both native and oxidized samples. Anyway, the very large value of coercivity registered at low temperature can only be associated with the CoxFe3−xO4 stoichiometry, even with increasing values of x moving outward. Additionally, the EELS spectra of the L edges for Fe and Co in the CoFe2O4 and CoFe2O4@Ox samples (Figure 2b) reveal that (a) the L edge of Co in CoFe2O4@Ox is shifted by ca. +1.0 eV in comparison to CoFe2O4, indicating an increase in the oxidation state of Co resulting from the chemical treatment, 60,61 and (b) the ratio between the Fe L3 and Co L3 edges decreases from 2.95 in the CoFe2O4 sample to 2.59 in the CoFe2O4@Ox sample, indicating an average decrease in the iron content. These experimental results point to a partial oxidation of cobalt cations and the formation of Fe vacancies in the spinel structure at the subsurface region, providing an explanation for the presence of pinned uncompensated moments associated with the changes in the magnetic behavior of the CoFe2O4@Ox sample and, particularly, with the EB effect. To further support this relationship, we performed a Raman analysis of the as-synthesized CoFe2O4 and CoFe2O4@Ox samples in order to investigate the chemical and structural origin of the peculiar magnetic features observed. 62 This technique can register six Raman active modes from the spinel structure, namely, 2A1g, Eg, and 3T2g, characteristic of spinels with two different types of cations occupying the octahedral or tetrahedral sites, such as CoFe2O4 (see Figure S4 in the SI). 63 In general, the ferrite A1g modes appear above 600 cm−1 and are usually assigned to the motion of oxygen in the tetrahedral AO4 group along the ⟨111⟩ direction, involving a symmetric stretching of the oxygen atoms with respect to the metal ion at the tetrahedral void (T), 64 as well as the deformation of the three octahedral sites (O) nearest to each oxygen. 65 Figure 2c includes the Raman spectra registered. In the as-synthesized sample (blue spectrum), the expected modes for the spinel lattice are observed, 66 with the most intense peak at 679 cm−1 (A1g(2)) and a small shoulder at ∼620 cm−1 (A1g(1)), which stem from the presence of Fe3+ and Co2+ ions at the tetrahedral sites, respectively. 65,67 Note that the A1g(1) mode intensity is much lower than that of A1g(2), meaning that the primary contribution to the AO4 vibrations originates from the Fe3+ ions. There is less consensus regarding the origin of the other low-frequency modes (Eg and T2g), which are typically assigned to the tetrahedral unit in the Fe3O4 material 64,68 or to the octahedral unit when considering mixed spinel ferrites such as CoFe2O4 or ZnFe2O4. 69,70 In the latter case, the Eg vibrational mode has been assigned to the symmetric bending of oxygen with respect to Fe in the octahedral BO6 void 64 and is usually absent in nanocrystals. 71 The fact that this mode can be ascertained in our spectrum underlines once again the optimal crystallinity of the CoFe2O4 NCs. On the other hand, the T2g(2) mode has been reported to account solely for the Co2+ ions occupying the octahedral sites. 72 Hence, the higher intensity of the T2g(2) and A1g(2) modes compared with the A1g(1) mode confirms the predominant inverse spinel configuration anticipated for the cobalt ferrite.
The characteristic features of the spinel crystalline structure are still present in the CoFe 2 O 4 @Ox spectrum (red curve).However, the relative intensities of the A 1g and T 2g vibration modes have notably changed, the T 2g ( 2) mode (at ∼470 cm −1 ) being the most prominent feature in the Raman spectrum.Interestingly, the vibrational modes T 2g (3) and A 1g (1) begin to merge into one broad band owing to the pronounced red-shift of the A 1g (1) mode, now located at ∼600 cm −1 (see also Figure S4).This shift is usually associated with structural distortions and/or the presence of a different set of cations at the tetrahedral/octahedral sites.Taking into account the absence of strain fields (Figure S3) and the ca.+1.0 eV shift observed in the Co L edge from the CoFe 2 O 4 @Ox sample (compared to CoFe 2 O 4 ; Figure 2b), the red-shift of the A 1g (1) mode is consistent with the presence of Co 3+ cations within the spinel structure, in addition to Fe 3+ and Co 2+ . 67The resultant charge compensation of the crystalline structure may proceed via Fe vacancies 73 or a partial Fe 3+ /Fe 2+ reduction.While the presence of Fe 3+ vacancies is supported by the decrease in the Fe/Co L 3 ratio registered, which indicates a reduced iron content in the CoFe 2 O 4 @Ox sample with respect to the pristine sample, the partial Fe 3+ /Fe 2+ reduction seems less probable given the absence of observable changes in the L edge of the Fe spectrum in the EELS analysis.The fact that we have not registered the vacancies in the HAADF-STEM analysis suggests that their content must be rather small.Along these lines, the decrease in the intensity of the A 1g (2) mode can be explained by this chemical and local modification promoted by the Fe 3+ vacancies created.The oxidation of some of the Co 2+ ions to Co 3+ and the changes associated with the Fe 3+ ions raise the question whether a nonstoichiometric Co II Co III Fe III □O 4 spinel is formed at the subsurface region, but since the HAADF-STEM analysis reveals no evidence of two crystallographic phases at the core and the surface shell, the as-formed mixed ferrite spinel must be highly disordered in terms of the metallic cation distribution.This disorder is also hinted at by the presence of additional features in the Raman spectrum of the CoFe 2 O 4 @Ox sample, displaying new modes of low intensity at 420 and 760 cm −1 , for instance. An additional experiment registering the evolution of the Raman spectra as a function of the incident laser power was also performed for the two CoFe 2 O 4 and CoFe 2 O 4 @Ox systems (Figure S5).The Raman spectrum of the assynthesized sample evolves into the same signature observed for CoFe 2 O 4 @Ox when treated under a laser power of 5.82 mW (Figure S5a), which can be associated with a partial oxidation of Co 2+ cations reported above. 74Furthermore, the very similar spectra recorded at 0.42 mW, after subjecting both samples to the highest laser power (21 mW; Figure S5a and b), exhibit a notable increase in the T 2g (2) mode intensity compared to the A 1g (2) mode, in agreement with a reduced iron content due to Fe 3+ vacancy formation.The presence of these vacancies can be understood as a preliminary step prior to the transformation toward maghemite (γ-Fe 2 O 3 ) and is also corroborated by the blue-shift of the A 1g (2) mode to ∼690 cm −1 (note that the A 1g mode characteristic from maghemite occurs at ∼700 cm −1 ). 
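The relative intensities and positions of the A1g and T2g modes compared above are usually extracted by decomposing the spectrum into a sum of peaks. The sketch below does this with a Lorentzian least-squares fit; the initial centre positions are the values quoted in the text, while the spectrum itself, the peak widths and the choice of a Lorentzian line shape are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def spectrum_model(x, *p):
    # p holds (amplitude, centre, half-width) triplets for each peak plus a constant baseline.
    n = (len(p) - 1) // 3
    y = np.full_like(x, p[-1], dtype=float)
    for k in range(n):
        amp, x0, gamma = p[3*k : 3*k + 3]
        y += lorentzian(x, amp, x0, gamma)
    return y

# Approximate mode positions (cm^-1) quoted in the text: T2g(2) ~470, A1g(1) ~620, A1g(2) ~679.
initial = [1.0, 470.0, 20.0,  0.3, 620.0, 20.0,  1.5, 679.0, 20.0,  0.05]

# raman_shift and intensity stand in for the measured spectrum (synthetic here).
raman_shift = np.linspace(200.0, 800.0, 601)
intensity = spectrum_model(raman_shift, *initial) + 0.01 * np.random.randn(raman_shift.size)

popt, _ = curve_fit(spectrum_model, raman_shift, intensity, p0=initial)
for k, name in enumerate(["T2g(2)", "A1g(1)", "A1g(2)"]):
    amp, x0, gamma = popt[3*k : 3*k + 3]
    print(f"{name}: centre = {x0:.1f} cm^-1, amplitude = {amp:.2f}, HWHM = {gamma:.1f} cm^-1")
```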
Conclusively, to explain the coupling mechanism and the local magnetic configuration given the chemical changes registered by EELS and Raman spectroscopy and given the fact that there is no crystallographically coherent interface, we take the coercivity of the initial CoFe 2 O 4 NCs as a reference (μ 0 H C = 1.06 T).With this large value taken into account, the fraction of the CoFe 2 O 4 phase at the outer shell of the assynthesized NCs switches readily (μ 0 H S1 = 5 mT; Figure 1j).However, for the CoFe 2 O 4 @Ox sample, while the magnetic phase at the subsurface region now switches with a value of μ 0 H S1 = 10 mT, the CoFe 2 O 4 phase at the core follows an even larger switching field (μ 0 H S2 = 1.55 T).This can be explained by considering the presence of unpinned uncompensated moments in the cation disordered subsurface region of the CoFe 2 O 4 @Ox sample, which couple to the external field and rotate along with the FiM CoFe 2 O 4 core, resulting in a coercivity enhancement. 21,22On the other hand, the rather strong negative μ 0 H E field (−56.9 mT) indicates the presence of pinned uncompensated moments that strongly couple to the FiM lattice but do not rotate even at the maximum field (7 T).The presence of these pinned and unpinned magnetic moments can be understood as the outcome of competing interactions within the parental spinel structure, where the Co 2+ oxidation has the Fe−O−Fe and Co−O−Fe superexchange interactions disrupted, leading to a highly frustrated subsurface region.This increased magnetic frustration, boosted by the assumed cation disorder, is reflected not only in the drop in the value of magnetization down to 45 Am 2 /kg (which can be explained by the presence of low-spin Co 3+ on octahedral sites 75 ) but also in the low-anisotropy component detected during the reversal of the CoFe 2 O 4 @Ox sample.This fact hints that, besides inducing magnetic disorder, some shortrange correlated spin disorder occurs.This situation, particularly in terms of the effects stemming from the presence of Co 3+ and the iron vacancies, inducing a charge reorganization, with local modifications of the valence charge states and possible creation of defect gap states, can explain the changes not only in the magnetization but also in the electronic, ionic, and tunnel conductivities. 76Such effects require a more in-depth investigation that falls out of the scope of this study.Alternative scenarios in the attained cation distribution at the subsurface, such as a change from an inverse to a normal spinel configuration, 77,78 would lead to a magnetization enhancement and can be therefore discarded. In summary, whereas the HAADF-STEM images show NCs with a uniform crystalline structure up to the surface and without the presence of defects or strain, both EELS and Raman spectroscopy, jointly with the magnetic properties, point to the presence of a pseudo core−shell structure with no physical interface.Such unusual characteristics render the system particularly fascinating, pointing to interfaceless exchange coupling between two different distributions of magnetic cations within the parent spinel structure. Figure 1 . 
Figure 1. TEM image with general particle overview and X-ray diffraction pattern with Le Bail refinement of the CoFe2O4 NCs (a, b). HAADF-STEM images of representative CoFe2O4 (c) and CoFe2O4@Ox (e) NCs and their respective FFT images (d, f). Inset in e: zoomed-in image of the defect-free crystalline structure. Size histograms (fitted to log-normal functions) of CoFe2O4 (g) and CoFe2O4@Ox (h) NCs. Hysteresis loops measured at 10 K of the CoFe2O4 (blue) and CoFe2O4@Ox (red) samples (i) and comparison of their derivatives with the μ0HS1 and μ0HS2 magnetic fields at which the two magnetization switching events occur (j). ZFC and FC curves measured at 10 mT (k) and distribution of energy barriers f(TB) calculated from the ZFC-FC curves and derivatives (l) of the CoFe2O4 (blue dots) and CoFe2O4@Ox (red dots) samples. Hysteresis loops measured at 10 K of the CoFe2O4@Ox sample after ZFC (red) and FC at 5 T (gray) (m). Figure 2. (a) Mapping of Fe and Co distributions in CoFe2O4 and CoFe2O4@Ox NCs based on EELS. (b) Comparison of EELS spectra for the L edges of Fe and Co between the CoFe2O4 and CoFe2O4@Ox samples. The dashed lines in the Co L3 edge indicate the shift of the edge position after oxidation. (c) Stokes-shifted Raman spectra registered using a 785 nm excitation wavelength from the CoFe2O4 (blue curve) and CoFe2O4@Ox (red curve) samples.
5,815.6
2023-02-27T00:00:00.000
[ "Materials Science", "Physics" ]
AN APPROACH FOR DESIGNING PASSIVE POWER FILTERS FOR INDUSTRIAL POWER SYSTEMS BY USING GRAVITATIONAL SEARCH ALGORITHM Original scientific paper Harmonics are one of the important factors determining the energy quality in power systems. Hence, eliminating or damping undesired harmonics is crucial in electrical power systems. Due to their low cost, Passive Power Filters (PPF) are widely used in industrial power systems to limit the undesired effects of harmonics. In this study, a new approach to designing a PPF for power systems using the heuristic Gravitational Search Algorithm (GSA) is proposed. The traditional trial-and-error (TAE) method is first utilized to eliminate harmonics in our model power system. Then, GSA is adopted to minimize the effective current value at the Point of Common Coupling (PCC) in the same model system. Our results show that the proposed method with GSA eliminates the harmonics effectively. Introduction Today, power quality is important for consumers as well as for electrical energy producers. The increasing use of nonlinear loads in electrical power systems has started to create serious harmonic problems in transmission and distribution systems. This situation negatively affects the quality of the energy distributed to the customers. At the same time, the harmonic problems reduce the energy quality and affect the power systems in unwanted ways [1,2]. The harmonic currents produced by these loads create harmonic voltages in the system. The harmonic voltages in turn increase the harmonic currents through the linear and nonlinear loads [1,2,3]. There are two solutions to limit the harmonic problems and improve power quality. The first is the load analysis method. The second is the line analysis method, which utilizes active and passive power filters [3,4,5]. Due to their economical cost, PPFs are widely used in industry as a solution to power quality problems [6,7]. Heuristic methods have been gaining momentum in the filter design and harmonic elimination studies carried out in recent years. In filter design, minimization of harmonic currents and voltages has been achieved by using Particle Swarm Optimization [8]. Similarly, Chou et al. [9] have proposed an optimal design for various types of high-voltage, large harmonic power filters by using Simulated Annealing (SA) algorithms. In another study, a PPF was designed for a three-phase full-wave rectifier by using knowledge-based chaotic evolutionary algorithms [10]. Berizzi et al. have aimed to solve the sizing and placement problems of PPFs in power systems using genetic and micro-genetic algorithms [11]. In a similar study, passive filters were designed based on genetic algorithms, and the minimization was achieved by using the source current RMS value as the objective function [12]. Dehini et al. performed analyses on the power quality and production cost of passive filters using Ant Colony algorithms [13]. Last but not least, research has been conducted to regulate the power factor while eliminating harmonics [14]. In this study, a PPF was designed and a harmonic analysis was performed for an industrial power system with GSA, using MATLAB/Simulink. First, the harmonic values that disrupted the system were identified. Then, appropriate PPFs were designed for those harmonics using the proposed GSA method, and each one was applied to the system. The filters' effects on the harmonic disruptions were examined. The results show that our design conforms to the IEEE 519-1992 standard.
Passive power filters PPFs are composed of R-L-C components. The underlying idea behind PPFs is to determine the L and C values such that the filters reach their resonances at the frequencies of the undesired harmonic components [1,4]. Fig. 1 shows the circuit configurations of the passive filters [16]. Single-tuned passive filters provide a low impedance path to suppress harmonic currents effectively [6,7]. The calculation of the filter impedance is given in Eq. (1) [6], where X_C and X_L represent the capacitor and inductor reactances at the fundamental frequency, respectively. The Q_f value is computed as given in Eq. (2) [4,18,19]. Q in Eq. (5) refers to the quality factor and determines the sharpness of the filter tuning. In this respect, the filters may be of low-Q or high-Q type. The single-tuned filter quality factor is given by Eq. (5). Typical values of the quality factor vary between 30 and 100 [1,18,19]. Gravitational search algorithm According to Newton's law of gravity, there is a gravitational force between every pair of objects in the universe. Objects attract each other by this force. The principle says that two particles attract each other with forces directly proportional to the product of their masses and inversely proportional to the square of the distance between them. This condition is represented by Eq. (6), where F is the force in newtons, m1 and m2 are the masses of the two objects in kilograms, G is the gravitational constant, and R is the distance between the masses in meters [20,22,23]. Newton's second law can be expressed as a mathematical formula for the amount of force needed to accelerate an object. The acceleration is directly proportional to the magnitude of the force applied to the object and inversely proportional to the mass of the object. This condition is represented by Eq. (7), where a is the acceleration [20,22,23]. The Gravitational Search Algorithm (GSA) is a relatively new optimization algorithm. GSA works on the basis of Newton's law of gravity. GSA was proposed by Rashedi et al. [20-24]. GSA can be considered as an isolated system of masses. This system obeys Newton's laws of gravity and motion. In Newton's law of gravity, each mass attracts every other mass with a gravitational force defined by Eq. (6). In GSA, masses are defined in order to find the best solution. GSA simulates a set of masses that behave as point masses in an N-dimensional search space. According to this algorithm, masses are defined as objects, and every object represents a solution of the problem. Heavy masses, corresponding to the best solutions, move more slowly than lighter ones [20-24]. In GSA, the position of each mass in the search space is a point in that space and is taken into account as a solution to the problem. This position is given in Eq. (8). The positions of the N masses are initialized, where x_i^d represents the position of the i-th mass in the d-th dimension and N is the dimension of the search space, and the fitness of each mass is evaluated as given by Eq. (9). Taking into account the other masses' fitness values, the mass of each agent is calculated according to Eq. (10) and Eq. (11), where M_i(t) represents the mass of the i-th agent at time t, fit_i(t) is the fitness value of the i-th mass, and best(t) and worst(t) are the best and worst fitness values among all masses. Depending on the type of optimization problem (minimization or maximization), these values are taken as the minimum or the maximum [20-24]. F_ij^d(t), the force acting on mass i from mass j in the d-th dimension at iteration t, is computed as in Eq. (13), where M_pi(t) and M_aj(t) are the passive and active gravitational masses, respectively, R_ij(t) is the Euclidean distance between masses i and j at iteration t, and ε is a small constant [20-24]. The total force acting on mass i in dimension d is calculated by Eq.
(14), where rand_j is a random number in the interval [0,1] (Eq. (15)). The acceleration of mass i in the d-th dimension is then found from the total force [20-24]. By using Eqs. (10) to (17), the position and velocity values of each mass are updated repeatedly until the number of iterations is reached [26]. The GSA flowchart is given in Fig. 2 [24,25]: evaluate the fitness of each mass, update the worst and best of the population and G, calculate the acceleration, speed and position, and check whether the stop criterion is satisfied. Problem definition In this study, we used MATLAB/Simulink to model our system given in Fig. 3 [3]. The initial power factor prior to the PPF adoption is 0,82 [3]. Tab. 1 lists the harmonic values before the PPF application [3]. As seen in Tab. 1, the harmonic distortion rate of the current is 13,2 %. Fig. 4 illustrates the harmonic currents as a bar graph. The measurement point is the PCC bus, which is shown in Fig. 3. The current values are the total current values obtained from the PCC bus. All the harmonic values are RMS values obtained from the powergui block in MATLAB/Simulink. Additionally, Fig. 5 shows the current and voltage wave shapes of the system. The Simulink model is shown in Fig. 6. Nonlinear loads were modelled by current sources. Motor loads were modelled as impedance loads [3,4,6]. Because the passive filters provide reactive power compensation at the fundamental frequency, the system's reactive power needs must first be identified [1,6,7]. Eq. (14) computes the reactive power needs, where Q_c is 2100 kVAr [6]. The filter's reactance at its resonance level is computed using Eq. (15). According to the industrial power system's short-circuit/load current ratio, the IEEE 519-1992 standard requires the harmonic current threshold to be 5 % [3,15]. Nevertheless, the total harmonic current value is observed to be 13,2 %, as given in Tab. 1. It is assumed that the industrial power system runs under full load. The Simulink model in Fig. 6 was used, and the filters were determined based on the obtained harmonic current values. First, analyses with various power factors were conducted for the 5th and 7th harmonic amplitudes. Then, the same analysis was repeated for the 5th, 7th, and 11th harmonics. In the first stage of the study, the filters were designed by TAE so as to draw the least amount of current. The design was then repeated using GSA. To reduce the risk of resonance problems in the power system, the filter tuning frequencies are taken slightly lower than the harmonic order (e.g., the 5th-harmonic filter is tuned to the 4,8th harmonic), which corresponds to a detuning of about 4 %. The reactive power values are obtained according to this resonance setting. The results are given in Tabs. 2, 3 and 4. The aim when running GSA is to minimize the effective (RMS) value of the Point of Common Coupling (PCC) current, as expressed by the objective function in Eq. (16). This is the line-current value least affected by harmonics. While the algorithm is running, the filters' reactive power values are treated as variables with the constraints given by Eq. (17). In addition, the system's power factor is added to the algorithm with the constraint given by Eq. (18). We observed that the 5th and 7th harmonics have the highest currents, so we used them as the basis for designing the filters. As in similar studies utilizing GSA, in the first case it was determined that the 5th and 7th harmonics have the highest currents, and filters were then applied to these harmonics. After the filters were applied to the system, the resulting harmonic amplitudes, harmonic current and harmonic voltage values, and power factor values are displayed in Tab. 2. As seen in Tab.
2, the PCC current value drops in a filter less system from 363,9 A to 318,5 A When we had a system with the power factor of 0,952 and the reactive power of 2100 kVAr.Current and voltage harmonics values without a filter are measured as 13,2 % and 5,49 % respectively.With a filter using GSA, we measured lower values as 6,05 % and 3,05 %.We measured these values where the power factor was 0,951 and the reactive power was 2078 kVAr.Nevertheless, these reduced values are above the IEEE standard values.Because of the constraints, According to the IEEE 519-1992 standards, harmonic currents must be 5 % or less.We repeated the process by increasing the reactive power value to 2600 kVAr and redesigned filters for 5 th , 7 th , and 11 th harmonics.Because appropriate values were not obtained in the filter study with the 5 th and 7 th harmonics, the process was repeated.In this second case simulation, the filters were distributed as seen in Tab. 3 1510, 710 and 380 kVAr for TAE and redesigned filters for 5 th , 7 th , and 11 th harmonics.Then, we provided the results obtained by both traditional and GSA methods in Tab. 3 and showed the changes in the objective function with each iteration in Fig. 7.When we wanted to destroy the 5 th , 7 th , and 11 th , harmonics, current and voltage harmonics values without a filter were measured as 13,2 % and 5,49 % respectively.With a filter using GSA, we measured lower values as 4,48 % and 2,02 %.We measured these values where the power factor was 0,972.Those reduced values are in compliance with the IEEE standards.In this case, with the proposed GSA the results of current and voltage wave shapes are shown in Fig. 8.The process was repeated by increasing the reactive power value to 3100 kVAr and redesigning filters for the 5 th , 7 th , and 11 th harmonics.Because individual harmonic value of the 17 th harmonic does not comply with IEEE standard.The changes in the objective function with each iteration when the GSA was run are shown in Fig. 9.According to Tab. 5, the PCC current value dropped from 363,9 A to 308,8 A against filterless system with GSA.The power factor was 0,9889.Harmonic current and voltage values without a filter were 13,2 % and 5,49 % respectively.With a filter using GSA, the measured, lower values were 4,06 % and 1,84 %.These reduced values are in compliance with the IEEE standards [15].Current and voltage wave shapes with GSA filters are displayed in Fig. 11.Current and voltage wave shapes are seen to be better, after the filters were applied to the 5 th , 7 th and 11 th harmonics, than with a filterless system.Tab. 4 gives the filter parameters obtained as a result of last GSA filter design study.Fig. 
10 shows the bar graph of harmonic current magnitudes after the last GSA-based filter was applied. The individual current magnitudes of the system with the last designed PPF based on GSA are also in compliance with the IEEE 519-1992 standard. When the reactive power constraints are given properly, the IEEE standard limits for both the total and the individual harmonic values are met by the proposed GSA optimization. Harmonics are considered to be a factor that decreases energy quality. They also affect other consumers fed from the same busbar. Moreover, additional losses in the system are caused by undesired faults in the devices. The results of the GSA-based filter designs made to resolve these harmonic problems in the system were discussed in this paper. In this study, GSA-based single-tuned passive power filter design and harmonic analysis were performed on an industrial power system. The design study was conducted with various power factors and by applying filters to various harmonics. MATLAB/Simulink was used in this study. Appropriate filter values were established with the proposed GSA technique and applied to an example power system. With these filters, the current and voltage values came to a level acceptable by the IEEE 519-1992 standard. As a result, the harmonic voltage and current values obtained from the application to an example power system with GSA are very close to the harmonic values of the classical method. However, the GSA method obtained results more quickly and efficiently, without the need for trials. In conclusion, the results show that GSA is an effective solution for the design of passive power filters. Figure 1 Passive tuned (a, b) and passive high-pass power filters (c, d, e) [16]. Figure 2 Flowchart of GSA. Figure 3 Power system where the PPF is to be adopted. Figure 4 Harmonic current magnitudes prior to PPF. Figure 6 Simulink model of the studied power system. Figure 7 Changes in the GSA objective function with each iteration (for 2600 kVAr and the 5th, 7th and 11th harmonics). Figure 9 Changes in the GSA objective function with each iteration (for 3100 kVAr and the 5th, 7th and 11th harmonics). Table 2 Harmonic analysis with filter in the first case. Table 3 Harmonic analysis with filter in the second case. Table 4 The filter parameters obtained as a result of GSA. Table 5 Harmonic analysis with filter in the third case.
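The single-tuned filter relations and the GSA update rules described above (Eqs. (1)–(18)) can be condensed into a short script. The sketch below follows the usual single-tuned filter sizing equations and the standard GSA formulation of Rashedi et al.; the bus voltage, harmonic assignments, GSA settings and, in particular, the objective function are placeholders, since in the paper the objective is the PCC RMS current evaluated from the MATLAB/Simulink circuit model.

```python
import numpy as np

def single_tuned_filter(V_ll, Q_var, h_tune, f1=50.0, Q_factor=50.0):
    """Size a single-tuned shunt filter supplying Q_var (var) at line voltage V_ll (V),
    tuned to harmonic order h_tune. Returns C (F), L (H), R (ohm)."""
    Xc = V_ll**2 / Q_var             # capacitive reactance at the fundamental
    C = 1.0 / (2.0 * np.pi * f1 * Xc)
    Xl = Xc / h_tune**2              # series reactor reactance at the fundamental
    L = Xl / (2.0 * np.pi * f1)
    X0 = np.sqrt(Xl * Xc)            # reactance at the tuned frequency
    R = X0 / Q_factor                # damping resistance from the quality factor
    return C, L, R

def gsa_minimize(objective, bounds, n_agents=30, n_iter=100, G0=100.0, alpha=20.0, eps=1e-12):
    """Minimal gravitational search algorithm (after Rashedi et al.)."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = lo + np.random.rand(n_agents, dim) * (hi - lo)
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(n_iter):
        fit = np.array([objective(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        best, worst = fit.min(), fit.max()
        span = best - worst
        m = (fit - worst) / span if span != 0 else np.ones(n_agents)  # 0..1, best -> 1
        M = m / (m.sum() + eps)
        G = G0 * np.exp(-alpha * t / n_iter)          # decaying gravitational constant
        F = np.zeros_like(X)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                R = np.linalg.norm(X[i] - X[j])
                F[i] += np.random.rand(dim) * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
        a = F / (M[:, None] + eps)                    # acceleration
        V = np.random.rand(n_agents, dim) * V + a     # velocity update
        X = np.clip(X + V, lo, hi)                    # position update within the bounds
    return best_x, best_f

# Example: choose the reactive powers (var) of three filters (5th, 7th, 11th, detuned)
# to minimize a placeholder objective standing in for the simulated PCC RMS current.
def placeholder_pcc_current(q):
    return abs(q.sum() - 2.6e6) / 1e6 + 0.1 * np.ptp(q) / 1e6   # purely illustrative

bounds = [(1e5, 2e6)] * 3
q_opt, f_opt = gsa_minimize(placeholder_pcc_current, bounds)
filters = [single_tuned_filter(34.5e3, q, h) for q, h in zip(q_opt, (4.8, 6.8, 10.8))]  # 34.5 kV assumed
```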
3,677.8
2015-04-22T00:00:00.000
[ "Engineering" ]
Using Silver Nano-Particle Ink in Electrode Fabrication of High Frequency Copolymer Ultrasonic Transducers: Modeling and Experimental Investigation High frequency polymer-based ultrasonic transducers are produced with electrodes thicknesses typical for printed electrodes obtained from silver (Ag) nano-particle inks. An analytical three-port network is used to study the acoustic effects imposed by a thick electrode in a typical layered transducer configuration. Results from the network model are compared to experimental findings for the implemented transducer configuration, to obtain a better understanding of acoustical effects caused by the additional printed mass loading. The proposed investigation might be supportive of identification of suitable electrode-depositing methods. It is also believed to be useful as a feasibility study for printed Ag-based electrodes in high frequency transducers, which may reduce both the cost and production complexity of these devices. Introduction The polymer vinylidene fluoride (PVDF) and the copolymer obtained from vinylidene fluoride and trifluoroethylene [P(VDF-TrFE)] have been used extensively as materials for piezo-and pyro-electrical sensors and in piezoelectric transducers [1][2][3]. The copolymer is often preferred since it can be OPEN ACCESS deposited directly onto a substrate by various methods (e.g., spin coating, bar coating, dip coating and spraying). It was recently shown that [P(VDF-TrFE)] also can be screen-printed into all-printed devices such as touch sensors [4]. In general, printed sensors and the merging of printed devices and electronics, can have a large potential for cost reduction in production [5,6], but a number of challenges have to be solved to efficiently integrate what is typically a large number of different materials (both organic and inorganic). For many sensor and transducer applications, the properties of conductive layers or electrodes are important, e.g., in terms of conductivity and transparency. In high frequency ultrasonic transducers, for example, electrodes have typically been produced by plasma and/or vacuum methods like sputtering or vacuum deposition. These methods are difficult to integrate efficiently in printing processes, and thereby limit the ability for mass production. It is therefore often preferable to use printable conductive inks (polymer-or metal-based) as electrode materials. Printable conductive polymer electrodes have previously been applied in high frequency (HF) ultrasonic transducers yielding very good impedance matched to piezoelectric films [7]. However, their low electrical conductivity affects the transducer sensitivity [8]. For electrodes used in the active part of an ultrasonic transducer, the electrode thickness also plays an important role in addition to the conductivity [9]. This is due to the additional mass introduced by the electrode material, which typically can be significantly higher when using a conductive metal-based ink as a replacement for sputtering or vacuum deposition. In the current manuscript, we have investigated both theoretically and by experiments the effects induced by a typical printed electrode in a high frequency ultrasonic transducer. A silver (Ag)-based nano-particle ink was used as the test electrode material, and different dried thicknesses of this material were applied by spin coating and drying a various numbers of layers to obtain thicknesses typical of both ink-jet and screen printing. 
Nano-particle inks like the one under test have previously shown promising characteristics for flexible device-electrode fabrication (e.g., good conductivity, adhesive strength and large-scale production possibilities) [10-14]. However, for electrodes on P(VDF-TrFE) the temperature has to be maintained below 140 °C during the sintering process, which, to some extent, limits the conductivity of the material [15,16]. Our focus has therefore been to obtain electrodes with sufficient conductivity while at the same time not imposing severe electrode mass-loading effects such as internal reflections and bandwidth reduction. The effects on the transducer properties imposed by a backing layer were also investigated. Basic Model Theory An ultrasonic sensing system is basically composed of multilayer elements including a piezoelectric film, electrodes, backing and a load medium. In the current study, we have focused on the mass loading imposed by having a thick layered electrode on the front side of the transducer. This electrode under test (hereafter denoted the EUT) is shown in Figure 1 together with a thin back side electrode (assumed to impose no mass loading), a finite backing material, and infinite loading materials on the back and front sides. Many models used for analyzing transducer properties have previously been reported [17-19]. The one which will be used here is the impedance matrix model of a three-port network. The system configuration in Figure 1 can essentially be modeled as a three-port network as shown in Figure 2. In the model, the EUT layer was separated from the piezoelectric element and treated as one of the delay lines, which imposes a mass loading on the thin front side electrode. From the figure, the piezoelectric film has forces F1 and F2 and incoming particle velocities v1 and v2 at the acoustic ports. Each acoustic port is connected to external material of equivalent acoustic impedance Z1 or Z2. Figure 2. Schematic drawing of a three-port network transducer model. The model includes a piezoelectric film in cascade with one electric port and two acoustic ports terminated by acoustic loads Z1 and Z2. Model of Acoustic Delay Line To estimate the effect caused by the acoustic layer, the slabs of acoustic impedance Z11 and Z21 (at the acoustic terminal ports, see Figure 2) were modeled as delay lines with phase constants in the materials of β11 and β21, respectively. Z1 or Z2 is obtained by impedance transformation of these acoustic layers cascaded with the terminating load, which can be calculated from the well-known impedance transformation [19]. For instance, in case the acoustic layer at port 2 (the EUT) is terminated by air (ZL2 = Zair ≈ 0), the equivalent acoustic impedance looking from port 2 of the transducer is Z2 = jZ21 tan(β21), Eq. (1), for which β21 = k21 d21, where k21 is the wavenumber in the front delay line (EUT), d21 is the thickness of the EUT layer, Z21 is the acoustic impedance of the EUT and ZL2 is the terminating load impedance. Constituent Layer Dependence of Transmitted Power Frequency Response The transducer emits power as a function of frequency, which essentially is influenced by its constituent material properties. In this section we derive a theoretical model of such an effect and compare it to COMSOL numerical modeling. To analyze the transmitted acoustic power, we start by calculating the particle velocity v generated by the piezoelectric film.
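The delay-line step above is the standard transmission-line transformation for an acoustic layer of characteristic impedance Z0, thickness d and wavenumber k terminated by a load ZL; with an air termination it reduces to the jZ0·tan(kd) form of Eq. (1), and for a thin layer it behaves as a pure mass loading (≈ jωρd). A small numerical sketch is given below; the silver and water values are placeholder material data, not the parameters used in the paper.

```python
import numpy as np

def transform_impedance(Z_load, Z0, k, d):
    """Acoustic impedance seen at the input of a layer (Z0, thickness d, wavenumber k)
    terminated by Z_load (transmission-line transformation)."""
    t = np.tan(k * d)
    return Z0 * (Z_load + 1j * Z0 * t) / (Z0 + 1j * Z_load * t)

# Placeholder material data (specific acoustic impedance in kg/m^2/s):
rho_ag, c_ag = 10.5e3, 3600.0          # printed/sintered silver layer (assumed values)
Z_ag = rho_ag * c_ag
Z_water, Z_air = 1.5e6, 400.0

f = 60e6                               # evaluation frequency (Hz)
k_ag = 2 * np.pi * f / c_ag
d_ag = 1.0e-6                          # 1 um electrode

Z_air_terminated = transform_impedance(Z_air, Z_ag, k_ag, d_ag)
Z_thin_layer = 1j * 2 * np.pi * f * rho_ag * d_ag   # thin-layer (mass-loading) approximation
print(Z_air_terminated, Z_thin_layer)

# With water in front of the electrode (pulse-echo configuration), the transducer
# instead sees the water impedance transformed through the electrode layer:
Z_front = transform_impedance(Z_water, Z_ag, k_ag, d_ag)
```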
Particle Velocity The relation between the particle velocities and the equivalent acoustic impedances at the transducer ports can be derived from the impedance matrix of the three-port network as in [20], for Fi = Zi vi (i = 1, 2 and the electric port), where θ = kp dp, kp is the wavenumber in the piezoelectric film, dp is the film thickness, kt is the electromechanical coupling coefficient, A is the effective area of an acoustic port (assumed equal for both ports), Zp is the acoustic impedance of the piezoelectric film, C0 is the clamped capacitance of the piezoelectric film and h is the piezoelectric coefficient (here h33). Acoustic Power Transmitted to Load The mean power emitted at the load port, which has equivalent acoustic impedance ZL, can be expressed as in Eq. (3), where AL is the effective area of the load port and v is the particle velocity generated in the piezoelectric film (see Equation (2)). Equations (2) and (3) show that the emitted power at the load layer is influenced by the impedance of both the backing and the load port. The effect caused by an EUT layer is evaluated by calculating the power transmitted through the layer towards the load. When the front electrode (EUT) thickness is significant, as in Figure 1, the acoustic power delivered to the load (here ZL2) can also be calculated by Equation (3). Alternatively, we use ZL2 and v2' as the impedance and particle velocity, respectively, where v2' is the particle velocity at the interface between the EUT layer and ZL2. The EUT delay line with impedance Z21 can be seen as a mechanical two-port network with v2 and v2' as the particle velocities at the left and right ports, respectively (see the right network arm of Figure 2). The value of v2' can be obtained from the EUT two-port network matrix [21]. The EUT slab thickness, however, is considered to be relatively thin compared to the wavelength in the material, which gives β21 ≪ 1, and since F2 = Z2 v2, the particle velocity v2' then follows. Constituent Layer Dependence of Transducer Electrical Properties In this section we present some analytical expressions necessary for understanding the influence of the acoustic layer on the transducer electrical properties. Analytical models were also derived based on the previously mentioned three-port network. Transducer Electrical Impedance Model The electric impedance (or inverse admittance) of loaded transducers can be determined from the impedance matrix, which analytically can be expressed using Equation (1.56) from [20]. Using trigonometric identities, this expression can be modified into Eq. (7), in which the clamped capacitance C0 involves the dielectric permittivity ε. Effect of EUT Variation on Electrical Properties As a simplified illustration, the admittance of a stand-alone transducer film (no backing material) loaded by the EUT leads to a resonance condition, Eq. (8), where vp and ρp are the phase velocity and mass density of the piezoelectric material, respectively, and ρ21 is the mass density of the EUT material (e.g., silver). The intersection plot of the left (L) and right (R) terms of Equation (8) is shown in Figure 3b as a function of frequency. The intersecting trajectory of these two curves presents graphically the shifting of fr, and also roughly illustrates the saturation area and boundary limit of the fr shifts. Effect on Electrical Properties of a Lossy Transducer with Constituent Layer System Copolymer piezoelectric materials possess two significant loss properties, which may be modeled by the dielectric loss factor tan(δe) and the mechanical loss factor tan(δm) [22]. Accounting for these loss factors in transducer response modeling was initially proposed by [23].
Additionally, the dielectric permittivity of the clamped capacitance is assumed to be frequency dependent. To implement this effect, we used a constant phase (CP) model [24], in which C(ω) follows a power law of frequency with two model parameters, for which the parameter value selection has been described in previous work [8]. In a lossy material, the acoustic impedance accounting for mechanical loss can be determined from Z = ρv, which for the piezoelectric material follows from the complex stiffness, where cD and cE are the stiffness coefficients at constant electric displacement and constant electric field, respectively, c33 is the elasticity in the thickness direction and tan(δm(p)) is the mechanical loss factor of the copolymer. In most applications, polymer transducers are fabricated with non-piezoelectric layers (e.g., a backing substrate or front layer) in order to support the piezoelectric-film structure or to damp out the signal tail (e.g., in short pulse applications). These non-piezoelectric layers will also introduce an additional mechanical loss factor. Thus, to improve model reliability, we have taken such effects into account in the transducer model; for example, in the backing polymer the material phase velocity including mechanical loss is v' = v(1 + j tan(δm(b)))^(1/2), where v is the backing material phase velocity excluding loss and tan(δm(b)) is the mechanical loss factor of the backing. For transducers with a backing substrate, the equivalent acoustic impedance at the backing port (Z1) is also defined by Equation (1) and given by Z1 = jZ11 tan(β11) (for ZL1 = air). For the EUT layer, the acoustic impedance will be Z2 ≈ jZ21 β21 (for β21 ≪ 1 and ZL2 = air). Prototyping Two polymer substrates used as non-piezoelectric materials were polyethylenimine (PEI) and polyimide (PI). These two polymers are available as commercial products of various types, such as PEI sheets and PI in rolls. The piezoelectric-sensitive film was made from PVDF copolymer, namely P(VDF-TrFE) powder (77:23 in molar ratio). Preparation of the P(VDF-TrFE) solution for piezoelectric film development was done by blending 3.5 mL of DMF (dimethylformamide) solvent with 1 g of 77:23 molar ratio P(VDF-TrFE) copolymer powder. This solution was then mixed using an ultrasonic disperser to completely dissolve the powder into a viscous fluid. Fabrication was initiated by preparing a PEI substrate of size 50 × 50 mm2 (also acting as the backing material). The polymer backing substrate was treated with plasma in a low pressure air atmosphere. This was done to promote good adhesion at the interface surface prior to rear electrode implementation (for mass-production purposes, the plasma treatment can be replaced by chemical treatment [25]). The first (rear) electrode layer was made by sputtering silver (Cressington 208HR) on the pre-treated polymer substrate. The Ag layer, with a nominal thickness of 50 nm, was then patterned by photolithography (using a pattern printed on transparent paper as a mask) and wet etching to obtain the desired electrode diameter of 3 mm. The layer of P(VDF-TrFE) solution was spin coated over the patterned electrode to achieve a film thickness close to 12 µm; two different spinning velocities (1000 rpm and 3000 rpm) were used for 9 s and 6 s, respectively. Also, to obtain a similar film thickness for the four transducer elements, the substrate was centered (at the center point of the four elements) before spinning. After spinning, the substrate was degassed in a 1 mbar vacuum atmosphere to vaporize the solvent.
The film thickness was estimated using a KLA/Tencor P6 surface profiler as explained in [7]. In order to crystalize the piezoelectric film, the sample was annealed at a temperature of 130 °C for 6 h. After annealing, the P(VDF-TrFe) surface was treated with plasma. The last transducer layer is a front electrode (EUT) made from Ag nano-particle ink (ALDRICH/719048) provided by Sigma Aldrich (St. Louis, MO, USA). The layer was implemented by spinning the ink on the piezoelectric film, which afterwards was sintered at the temperature of 130 °C for 1 h to enable the conductivity of the layer. To create a thicker EUT layer, the layer was repeatedly spun with the ink. To be able to make several different thicknesses on the same substrate, some areas were covered with tape which allows selective-thickness deposition of the ink on the substrate. For instance, to produce several different thickness layers, we first deposit ink covering the whole upper-side of the copolymer layer after annealing, then for the second ink deposition layer, area 1 was taped allowing ink to cover only areas 2, 3 and 4 (see Figure 4). Using the same approach, one can create the remaining layers. After that the layer was patterned to achieve an electrode diameter of 2.5 mm (by a similar process as described for patterning the rear electrode). Figure 4 also shows the transducer substrate which contains four transducer elements with an element pitch of 5 mm. In between the processing steps, the Ag nano-particle ink thickness was also measured using our KLA/Tencor P6 surface profiler. This instrument was also used to measure the rms surface roughness, typically ranging from 0.09 to 0.24 µm on the top of the transducer aperture. For a thin substrate such as PI, the thermal release adhesive tape (Nitto Denko, Inc., Osaka, Japan, courtesy of Teltec GmbH, Mainhardt, Germany) was used to support the thin film structure during the layer developing processes. The electrical connections out of the transducer were made by joining pin connectors to the electrode layer with conductive epoxy. To make P(VDF-TrFE) films piezoelectric, the elements were poled using a high voltage AC source at room temperature (10 periods with amplitude 825 V and frequency 0.25 Hz). The proposed AC poling schedule involving multiple dipole switching, which is often preferred compared to a DC poling, e.g., due to an enhanced transducer response and improved homogeneity of the poled area [7,8,26]. Characterization An electrical LCR analyzer (E4982A, Agilent, Santa Clara, CA, USA) with a 1-300 MHz frequency range was used to characterize the transducer electrical properties. To measure the transducer parameters, the instrument generated equally coherent stepped frequency wave magnitudes over the programmed frequency band consequently collecting the detected response signal from the transducer for calculation processing. To alleviate the capacitive effect introduced by wire and transducer conductive line, transducer-like calibration kits were made. The calibration kits were built to compensate for the pin through the legs to the piezo-sensitive area. The thickness of the patterned EUT element was measured using a KLA/Tencor P6 surface profiler. Transducer functionality was tested by measuring the acoustic response signal. Unlike the LCR analyzer system, the measurement setup used a pulse-echo system excited by signal of Gaussian 2nd derivative as described in [8] with a pulse width = 2.94 ns. The acoustic response signal of the transducers was measured. 
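The pulse-echo excitation mentioned above is a Gaussian second-derivative pulse with a quoted width of 2.94 ns. For orientation, the sketch below generates such a pulse and checks its spectral peak; the mapping of the quoted pulse width onto the Gaussian sigma and the sampling rate are assumptions, not values taken from [8].

```python
import numpy as np

def gaussian_second_derivative(t, sigma):
    """Normalized second derivative of a Gaussian (Ricker-like excitation pulse)."""
    x = t / sigma
    pulse = (x**2 - 1.0) * np.exp(-0.5 * x**2)
    return pulse / np.abs(pulse).max()

fs = 2e9                                  # sampling rate (Hz), placeholder
t = np.arange(-20e-9, 20e-9, 1.0 / fs)
sigma = 2.94e-9                           # treating the quoted 2.94 ns width as sigma (assumption)
v_exc = gaussian_second_derivative(t, sigma)

# Spectrum of the excitation, to check the band it covers relative to the transducer.
spectrum = np.abs(np.fft.rfft(v_exc))
freqs = np.fft.rfftfreq(v_exc.size, 1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f"excitation spectral peak ~ {f_peak/1e6:.0f} MHz")
```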
To measure the wave passing through the EUT layer, we consider only the reflected wave from the front side instead (which distinguished by calculating the time delay) as depicted in Figure 5. From the figure, the spacer is a microscope glass of 1 mm thickness and the reflector is highly polished surface steel of 1.5 cm thickness which ideally yields a total reflection of the wave. For a coupling media of acoustic wave, the front gap was filled up with distilled water and the setup was stabilized by applying clamping force on both side of the setup. Here, one should notice that the acoustic wave will travel through the EUT layer twice before being detected by piezoelectric film. Modeling Results of the Effect on Transmitted Power Frequency Response In this section we present results of the study in the Section 3. Initially, two different configurations of the transducer (no backing material and with polymer backing) were studied. In the model, we assume that EUT layer thickness was negligible. We used the material constants and loss factors as shown in Table 1. With front load variations (in this case 2 = 2 ), the transmitted acoustic power (TAPF) per unit area frequency response was determined by dividing Equation (3) by . Figure 6a shows the influence of the front load variation on the TAPF of the transducer with no backing material (therefore 1 = air). The variations are expressed in term of 2 ⁄ . The transducer produces the same trend as in Section 1.4.1 of [20] (higher impedance ratios yield higher magnitude, but narrower bandwidth). However, in the polymer backing case (Figure 6b), the response characteristics are different. The peak magnitude is accompanied by a broad bandwidth when For a significant EUT thickness, the effects due to electrode thickness variations were studied. As in Figure 2, we assume that the polymer backing transducer was operated with a front load ( 2 ) of water. In this case, the polymer backing length was infinite and the rear electrode thickness was neglected. Thus, 1 = 11 is the impedance of the backing polymer. With the condition 21 ≪ 1 and 21 is much larger than 2 , the impedance transformation at port 2 can be simplified to 2 = 2 + 21 21 . A variable 2 used in Equation (3) was defined by Equation (6). Thus, with the EUT thickness variation, using Equation (3) and 2 of water impedance, the TAPF at the load 2 can be plotted as shown in Figure 7. In the figure, results from numerical modeling by COMSOL simulation software are also plotted. Variations of TAPF characteristic, influenced by thickness variation, are depicted in Table 2. Here, we notice that the magnitude of acoustic power, the peak frequency and the 3dB bandwidth decreased with increases in the EUT thickness. Thicker EUT layers act as energy band pass filters with more effects at higher frequencies (right part). The thicker layer provides a higher conductivity, but will cause the reduction of bandwidth with down-shift in frequency peak magnitude. Similar effects, but resulting from the sputtered electrode transducers were also experimentally observed in [9]. Effects on Electrical Properties Direct measurement of the effect on the transducer properties in acoustic domain is relatively complicated (e.g., measurement of particle velocity in the material). It is more convenient to demonstrate the effect by comparison of analytical model and the experiment via the electrical domain. Analytical Results Computed results of electrical properties based on the theory in Section 4 are presented. 
The transducer fabricated on PEI polymer substrate thickness of 850 μm was used in this calculation. Acoustic impedances 1 and 2 and also the material loss factor were defined as in Section 4.3. Using Equation (7), the capacitance and phase of the polymer backing transducer with EUT thickness variations can be plotted as shown in Figure 8a,b. From the figure, rapid undulations in its envelope pattern are observed. What causes of the undulation and its characteristics were discussed in previous work [8]. In Figure 8, a rising tail at the low frequency part of the capacitance occurred when a constant phase model was included in the calculation. Here, one should notice that pattern shift of the capacitance undulation envelope is in accordance with the peak-phase frequency shift (Figure 8b), which are both proportional to the EUT thickness variation. It is also of interest to compare the effects when backing length is infinite ( 11 ⟶ ∞). For this condition, 1 becomes purely resistive and equal to the characteristic impedance of the backing material. Thus, by using Equation (7), plot of capacitance and phase of the transducer with finite and infinite backing slabs are compared (Figure 8c,d) The resulting curves of infinite backing length are smooth and have the same trend as the finite ones, but marginally different peak-phase frequencies are observed (90 MHz and 96 MHz). Experimental Results Several configurations of the polymer backing transducer were implemented and thereafter electrically poled before measuring the electrical properties. Most transducers were fabricated on PEI polymer thickness of 850 μm. As seen from the modeling, the substrate layer also influences the transducer properties. Thus, a couple of transducers were fabricated on the thin PI substrate (thickness of 25 μm and 50 μm). Figures 9a,b show the measured properties of the transducer whose both side electrodes were made by conventional metal sputtering (silver). The transducer with (both) electrode thicknesses of 50 nm was fabricated on PI polymer substrate of 50 μm thickness. Using material constants as in Table 1 for Equation (7), a comparison of the mathematical model to the experimental measurements can be made. The phase velocities of PI given in Table 1 were obtained by adjusting either Young's modulus or Poisson's ratio within the parameter ranges reported for the material, until a reasonable good agreement between the analytical and experimental curves have been achieved. One should notice that since the analytical model is one-dimensional, so an adjustment of any of these material values will give the same results as long as the longitudinal phase velocity is the same. For the PI polymer backing transduce thickness of 25 μm, the comparison plots are shown in Figure 9c,d. The transducer front electrode thickness of 0.98 μm was made of Ag nano-particle ink. For both transducers, better agreement between experimental and analysis data occurs in the lower frequency region. The observed differences in the high frequency regime can be due to calibration errors in the impedance measurements and/or frequency dependent errors occurring from e.g., the permittivity model or from other analytical models. A rapid undulation occurs in both capacitance and phase in a same way as seen in Section 7.2.1, but for the phase, the data were processed to only plot the envelope. From both figures, the capacitance pattern of the measurement and its responding phase peak shift is inversely proportional to the EUT thickness value. 
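The trend in the capacitance/phase curves and the peak-phase frequency shift discussed above can be cross-checked with a simplified one-dimensional model. The sketch below uses the textbook Mason-model one-port expression in place of the paper's Eq. (7), with the EUT entering through the transformed front acoustic load; all material constants, the loss factor and the infinite-backing simplification are assumptions for illustration.

```python
import numpy as np

def layer_impedance(Z_load, Z0, k, d):
    t = np.tan(k * d)
    return Z0 * (Z_load + 1j * Z0 * t) / (Z0 + 1j * Z_load * t)

def electrical_impedance(f, d_p, v_p, Z_p, kt2, C0, Z_back, Z_front, tan_de=0.25):
    """Mason-model input impedance of a piezoelectric plate with acoustic face loads."""
    w = 2 * np.pi * f
    g = w * d_p / v_p                      # acoustic phase across the film
    z1, z2 = Z_back / Z_p, Z_front / Z_p   # normalized face loads
    num = (z1 + z2) * np.sin(g) + 2j * (1 - np.cos(g))
    den = (z1 + z2) * np.cos(g) + 1j * (1 + z1 * z2) * np.sin(g)
    C = C0 * (1 - 1j * tan_de)             # lossy clamped capacitance
    return (1 - kt2 * num / (g * den)) / (1j * w * C)

# Placeholder P(VDF-TrFE), PEI backing, silver EUT and water values (all assumed):
d_p, v_p, rho_p, kt2 = 12e-6, 2400.0, 1880.0, 0.09
Z_p = rho_p * v_p
C0 = 25e-12
Z_pei, Z_water = 1.2e3 * 2200.0, 1.5e6     # backing treated as infinite (purely resistive)
rho_ag, c_ag = 10.5e3, 3600.0

f = np.linspace(20e6, 160e6, 4001)
for d_ag in (0.0, 0.2e-6, 0.5e-6, 1.0e-6, 2.0e-6):
    Z_front = layer_impedance(Z_water, rho_ag * c_ag, 2 * np.pi * f / c_ag, d_ag)
    Z_e = electrical_impedance(f, d_p, v_p, Z_p, kt2, C0, Z_pei, Z_front)
    f_peak = f[np.argmax(np.angle(Z_e))]   # peak-phase frequency of the loaded transducer
    print(f"d_EUT = {d_ag*1e6:.2f} um -> peak-phase frequency = {f_peak/1e6:.1f} MHz")
```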
Figure 11. Shifting of the peak-phase frequency versus EUT thickness.

Plots of the peak-phase frequency versus EUT thickness are shown in Figure 11. The figure compares analysis A, determined from Equation (7) with the condition of infinite backing length (\(l_{11} \to \infty\)); analysis B, determined from Equation (7) with the actual backing length (850 μm); and the experimental data. Considering the analysis data, the peak-phase frequency changes dramatically with thickness for thin EUT layers, and the change slows down as the EUT layer becomes thicker. Similar properties were observed in the case of air backing (Section 4.2). The experimental data show the same tendency as analysis B, even though some values are disparate. Also, the experimental data do not vary smoothly. A plausible explanation is that this results from variations in the thickness of the piezoelectric film (i.e., errors from film manufacturing), which yield different transducer resonant frequencies. All experimental data have a lower frequency value (compared to analysis B) for each EUT thickness. Comparing the two backing lengths (analyses A and B), the infinite-backing transducer yields a higher peak-phase frequency for each EUT thickness.

Finally, three transducers with different front electrode thicknesses (all with the Ag nano-particle ink) were characterized acoustically with the measurement setup described in Figure 5. Examples of the acoustic responses obtained from a metal reflector are shown in Figure 12, both in the time (Figure 12a) and frequency domains (Figure 12b). The frequency responses (Figure 12b) show a decreasing magnitude as the EUT thickness increases. From the figure, it is easy to determine a central frequency fc where the maximum response occurs, and a 6 dB bandwidth for each transducer; these are listed in Table 3. Figure 12 shows the same tendency as for the acoustic power previously shown in Figure 7, i.e., a decrease in the central frequency fc and the magnitude with increasing EUT thickness.

It is interesting to compare the performance of our experimental nano-silver transducers (e.g., as summarized in Table 3) with transducers having conventional sputtered electrodes. The performance of transducers with comparable film thicknesses has been reported previously, although with some differences, e.g., in terms of substrate/backing material, size of the effective area, and front-side medium. For example, in [27], unfocused P(VDF-TrFE) copolymer transducers with an upper sputtered aluminum electrode and a lower sputtered chromium/gold/chromium electrode were built on top of a similar substrate (PEI). These transducers, with the same film thickness (12 µm), yielded a center frequency of 72 MHz and a 6 dB bandwidth of 70%. These parameters are comparable to those listed in Table 3. Focused transducers made from PVDF and P(VDF-TrFE) on epoxy and aluminum backings with sputtered chromium/gold electrodes have been studied in [2,9]. In [9], using an epoxy backing with a film thickness of 10 µm, the authors obtained center frequencies varying from 35 to 44 MHz and a 6 dB bandwidth from 63% to 131%, depending on differences in the measurement conditions. The work reported in [2], with a film thickness of 10 ± 2 µm and an aluminum backing, showed a center frequency of around 38 MHz with a bandwidth of 83%.
Moreover, in [3], also using a focused P(VDF-TrFE) transducer and a film thickness of 10 µm, but with a conductive epoxy backing, the authors obtained center frequencies around 51 MHz with a bandwidth of 120%. It is interesting to notice that all of these reports on focused transducers yield smaller center frequencies than the peak frequencies listed in Table 3, while the bandwidths are either comparable or higher. We believe that these differences are mainly due to variations in the physical conditions (e.g., backing material, transducer loading, focusing, and aperture size).

Conclusions

In all electronic sensor devices, a high conductivity is desired in order to minimize resistive losses, which are known to reduce sensor/transducer sensitivity. However, as the current investigation has shown, there will always be a trade-off between increasing the electrode thickness to provide sufficient conductivity and avoiding the unwanted acoustical effects imposed by the electrode thickness. In contrast to previous studies on conductive polymers [8], our test transducers using metal-based electrodes with thicknesses typical for printed devices have identified the electrode mass loading as the most important limiting factor for high-frequency performance. Within the range of tested electrode thicknesses (all providing sufficient conductivity), ink-jet printers (or other printing devices) that can produce electrode thicknesses toward the lower end of the range (0.216 μm) will be preferred. Screen-printed electrodes, on the other hand, with reported dried thicknesses for nano-based inks from 2 μm and higher [13], might lead to a substantial reduction in the acoustic performance towards the highest frequencies.

To summarize, we demonstrated the feasibility of using an Ag nano-particle ink as electrode material in HF copolymer ultrasonic transducers implemented on a polymer substrate. A number of transducer prototypes were investigated both experimentally and by analytical/numerical methods. The proposed analytic model is particularly useful for optimizing the transducer properties (i.e., sensitivity and bandwidth) for any frequency range and for structures where the 1D wave model can be used as an approximation. A relatively good agreement was also established between the suggested constant phase model for the dielectric response and the experimental data. The proposed dielectric model can be used in the electrical matching design [28] of transducer applications requiring high sensitivity. The copolymer transducers with Ag nano-particle ink-based electrodes were able, in a pulser-receiver configuration, to send and detect a very broadband high-frequency pulse (center frequency >60 MHz and 6 dB bandwidth around 30 MHz, or 50%), but with a peak frequency and magnitude strongly related to the EUT thickness.
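As an aside on how figures of merit like those in Table 3 are extracted, the sketch below computes a central frequency and a 6 dB (half-amplitude) bandwidth from a pulse-echo record. The synthetic Gaussian-windowed pulse and all of its parameters are assumptions standing in for measured data.

```python
import numpy as np

# Sketch: extract fc and the 6 dB bandwidth from a pulse-echo trace, as reported
# in Table 3 and Figure 12. The synthetic pulse parameters are assumptions.
fs = 1e9                                  # sampling rate, Hz (assumed)
t = np.arange(0, 2e-6, 1 / fs)
fc_true = 65e6                            # assumed true center frequency
pulse = np.exp(-((t - 1e-6) / 25e-9) ** 2) * np.sin(2 * np.pi * fc_true * t)

spec = np.abs(np.fft.rfft(pulse))
freq = np.fft.rfftfreq(len(pulse), 1 / fs)

peak = spec.argmax()
fc = freq[peak]                           # frequency of maximum response
half = spec[peak] / 2                     # -6 dB corresponds to half amplitude
band = freq[spec >= half]                 # frequencies within the -6 dB band
bw = band[-1] - band[0]

print(f"fc = {fc/1e6:.1f} MHz, 6 dB bandwidth = {bw/1e6:.1f} MHz ({100*bw/fc:.0f}%)")
```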
6,762
2015-04-01T00:00:00.000
[ "Materials Science", "Physics" ]
The electrophysiology of language production: what could be improved

Recently, the field of spoken-word production has seen an increasing interest in the use of the electroencephalogram (EEG), mainly for event-related potentials (ERPs). These are exciting times to be a language production researcher. However, no matter how much we would like our results to speak to our theories, they can only do so if our methods are formally correct and valid, and reported in ways that allow replicability. Inappropriate practices in signal processing and statistical testing, when applied to our investigations, may render our conclusions invalid or non-generalizable. Here, we first present some issues in signal processing and statistical testing that we think deserve more attention when analysing data, reporting results, and making inferences. These issues are not new to electrophysiology, so our sole contribution is to reiterate them in order to provide pointers to literature where they have been discussed in more detail and solutions have been proposed. We then discuss other issues pertinent to our investigations of overt word-production because of the effects (and potential confounds) that speaking will have on the signal. Although we cannot provide answers to some of the issues raised, we invite researchers in the field to jointly work on solutions so that the topic of the electrophysiology of word production can thrive on solid grounds.

IMPROVING PRE-PROCESSING

A common step in ERP analysis is filtering. In many studies, all we can find regarding the filtering procedure are cutoff values. However, this is incomplete information, since a filter has other important parameters that affect the outcome of the filtering procedure. Different software packages vary in their default values for these parameters.
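As a concrete illustration of how many of these parameters there are, the sketch below designs and applies one possible filter with SciPy; the sampling rate, cut-offs, type, and order are illustrative assumptions, not recommendations for any particular ERP study.

```python
from scipy import signal

# One possible band-pass filter for continuous EEG; all values are assumptions.
fs = 500.0          # sampling rate, Hz (assumed)
low, high = 0.1, 30.0  # cut-off frequencies, Hz (assumed)
order = 4

# Butterworth IIR filter, applied forwards and backwards (zero-phase). The
# two-pass application doubles the effective order and changes the -3 dB point,
# exactly the kind of detail that differs between software defaults and should
# therefore be reported alongside the cut-off values.
sos = signal.butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

def filter_eeg(x):
    return signal.sosfiltfilt(sos, x)  # forward-backward (two-pass) filtering

print(f"SciPy Butterworth band-pass, order {order} "
      f"(two-pass, effective order {2*order}), {low}-{high} Hz, "
      f"direction: forward-backward (zero-phase)")
```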
Researchers should not only try to understand how different filter parameters affect the signal studied (e.g., Widmann et al., 2014), but also, at a minimum, report the following in addition to the cut-off values (Picton et al., 2000; Gross et al., 2013): the software used for filtering, the filter type, the order, the direction of the filter (forward, backward, or both), and whether any changes were made to default parameters.

Another common step is to define a pre-stimulus baseline period, which can then be used to normalize the rest of the signal. This pre-stimulus baseline provides a good indication of the signal-to-noise ratio (SNR) in the data. If an ERP difference post-stimulus is similar in magnitude to pre-stimulus differences, the post-stimulus difference is likely noise, not an effect induced by our manipulation (e.g., Woodman, 2010). Therefore, even if one does not apply baseline correction, the signal should always be displayed including a pre-stimulus interval so that the SNR can be evaluated.

IMPROVING STATISTICAL ANALYSIS

Results that cannot be explained by mere chance are highly informative for our theories. However, by using statistical tests inappropriately, we may make incorrect inferences regarding the probability of our results. An example is the well-known increase of the family-wise error rate (FWER, the probability of at least one false positive among all the tests performed at some alpha level) associated with the common practice of testing multiple time windows for significance (see Supplementary Material for an example). Alternatively, certain time points/windows may be selected for statistical testing on the basis of some criterion. However, a biased selection of this criterion also results in an inflation of false positives (Kilner, 2013). Under a different approach, successive univariate tests are conducted and effects are considered significant if the number of adjacent significant time points exceeds a pre-determined threshold (Guthrie and Buchwald, 1991). Piai et al. (2014) showed the problems associated with an incorrect determination of this threshold, leading to increased FWER. When possible, we should opt for statistical tests that provide nominal FWER control while maintaining statistical power, such as cluster-based statistics (Maris and Oostenveld, 2007; Pernet et al., 2014). Other valuable recommendations are provided in Allen et al. (2012) and Rousselet and Pernet (2011).

UNSOLVED ISSUES

For many years the dominant notion was that muscle activity associated with overt production would contaminate the EEG signal. Recently, that view has changed, and the increasing number of ERP studies employing overt production is claimed to support the feasibility of combining EEG with overt speech. However, an increasing number of ERP studies employing overt production does not confirm that measuring ERPs with overt production is unproblematic. Moreover, even though it has been argued that "artifact-free brain responses can be measured up to at least 400 ms post-stimulus presentation" (Ganushchak et al., 2011, p. 5), the question we should ask is what constitutes an artifact in the context of overt production. Myogenic speech-related artifacts may precede speech onset by up to 500 ms, compromising the potentials recorded on the scalp (e.g., Brooker and Donald, 1980). If RTs differ consistently between conditions, the speech-related artifacts could also contaminate the pre-speech signal with consistent timing differences, resulting in an artifactual ERP effect.
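Returning to the FWER point above, a small worked example shows how quickly the error rate grows when k time windows are tested at alpha = 0.05 each. The independence assumption below is a simplification (neighbouring EEG windows are correlated), so the real inflation will differ, but the direction of the problem is the same.

```python
# Worked example: FWER when testing k independent windows at alpha each.
alpha = 0.05
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k  # P(at least one false positive) under H0
    print(f"{k:2d} windows tested: FWER = {fwer:.3f}")
# Output: 0.050, 0.143, 0.226, 0.401 -- far above the nominal 5% level.
```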
Artifacts aside, another problem is physiological in nature. We are interested in which (and when) differences emerge in the waveforms time-locked to a stimulus as a function of our experimental manipulation. We know that our manipulation elicits a difference between conditions (an effect) in vocal response times (RTs). However, breathing and articulation are functions controlled by the brain. So RT differences are likely to be accompanied by systematic differences between the conditions in the relative timing of speech-related artifacts and of brain activity related to the control of speech that are independent of linguistic effects. This problem is well known and has been mentioned, for example, by Luck (2005) in his rules of ERP experimental design and interpretation: "Be cautious when the presence or timing of motor responses differs between conditions" (p. 97). Researchers have measured movement-related cortical potentials preceding mouth opening (Deecke et al., 1986; Yoshida et al., 1999) and speech-related breathing cortical potentials preceding phonation (Tremoureux et al., 2014). Importantly, these cortical potentials may precede mouth opening and phonation by 600 ms or more (Yoshida et al., 1999; Galgano and Froud, 2008). If conditions differ consistently with respect to when participants prepare to move their mouth, which is likely given any systematic RT differences, ERP differences between conditions could emerge as a function of these potentials. In this case, the effect is truly neural, yet it does not directly reflect the cognitive function of interest. Finally, suppression of the auditory system may occur before speech onset (Ford et al., 2007; Wang et al., 2014). This raises the possibility that the speech-output efference copy affects the ERPs measured for conditions differing in RT, since the timing of auditory suppression would systematically differ between conditions.

Electromyographic (EMG) activity recorded from mouth muscles provides valuable information on this issue. EMG activity, either directly related to the articulation of the response or merely preparatory, can start as early as 250 ms after stimulus onset (Riès et al., 2012, 2014), as exemplified in Figure 1A1 for one word-naming trial. The lower panel (Figure 1A2) shows how EMG activity is consistently increased (indicated by the warmer colors) prior to speech onset (indicated by the solid black line) at the single-trial level (see Supplementary Material for details). Neural activity to move these muscles must precede the first measurable excitation on the muscle itself, so we cannot know whether potentials preceding speech reflect our cognitive manipulation only, or are already overlapping with the neural signals needed to control the mouth muscles.

Figure 1B provides another example of this issue. Participants' ERPs from one and the same picture-naming condition were split by their median RT, creating two surrogate conditions (mean longest RTs = 992 ms, mean shortest RTs = 729 ms). As Figure 1B1 shows, a simple difference in RT between two "conditions" can result in ERP differences of at least 1 µV starting as early as 200 ms. This early ERP difference is significant with various statistical tests (see Supplementary Material for details), with p-values between 0.008 and 0.056.
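The median-split illustration above can be reproduced in outline with simulated data. The sketch below is a toy model under assumed parameters (an RT distribution, a speech-preparation component whose onset tracks the RT, and a noise level); it is not our analysis pipeline, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: trials from ONE condition, split by RT into two surrogate
# "conditions". All parameters below are assumptions for illustration.
n_trials, n_times = 200, 300          # 300 samples ~ 600 ms at 500 Hz
rts = rng.normal(860, 150, n_trials)  # simulated naming RTs, ms

# Simulate a speech-preparation component whose onset tracks the RT: activity
# ramps up earlier on fast trials. This is the confound, not a cognitive effect.
t = np.arange(n_times) * 2.0          # time axis, ms
erp = np.array([1.0 / (1 + np.exp(-(t - (rt - 500)) / 50)) for rt in rts])
erp += rng.normal(0, 0.2, erp.shape)  # measurement noise

fast = rts <= np.median(rts)
diff = erp[fast].mean(axis=0) - erp[~fast].mean(axis=0)
print(f"max surrogate 'effect': {diff.max():.2f} (arb. units), "
      f"earliest |diff| > 0.1 at {t[np.abs(diff) > 0.1].min():.0f} ms")
```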
Note that dichotomizing a variable, as we do here, is not a recommended practice in statistics (MacCallum et al., 2002); our median-split approach is only meant to illustrate this point and should not be taken as a valid approach to investigate the relation between RTs and the electrophysiology of language production. Of course, one could argue that RTs are shorter or longer for some cognitive reason, so the differences shown in Figure 1B simply reflect cognitive processes associated with this slowing down. Moreover, the timing of this ERP "effect" could be taken as an indication that our manipulation tapped a certain cognitive process. The question is whether we should interpret the ERP effect in Figure 1B as reflecting a cognitive function of interest even though the ERPs come from the same cognitive manipulation. Rather, given that the observed ERP waveform is a sum of latent components, we should consider the possibility that our observed ERP effects are the net result of latent components reflecting a manipulated cognitive factor and latent components with consistent timing differences reflecting cortical activity related to low-level aspects of speaking, such as breathing and muscle control. In fact, this remark has been made very recently with respect to the breathing potentials preceding phonation: "the findings [...] indicate the need to take the respiratory component of speech and its cortical determinants into account when conducting and interpreting such studies" (Tremoureux et al., 2014). The extent to which these issues may be especially pertinent to ERP components closer to articulation onset also deserves attention. Note that, although our example shows more positive-going ERPs for trials with shorter RTs, this relation should not be taken as a rule across studies. This is because different studies are likely to obtain different configurations of latent components reflecting breathing, muscle control, and cognitive factors, and these different configurations are likely to yield different observed ERP waveforms (e.g., Luck, 2005).

One may ask whether early effects are problem-free in this respect. However, the answer is complicated by the physiological issues described above in combination with technical issues. One of them may be caused by acausal filters (i.e., a filter applied forwards and then backwards). Due to this procedure, later slow components (possibly speech-related potentials and artifacts) can affect earlier parts of the signal (Acunzo et al., 2012), artificially creating an "ERP effect" that seems early enough to be the cognitive component of interest rather than speech-related. The extent to which this factor could affect the signal that language-production researchers study is, to our knowledge, largely unknown. Additionally, if the low-pass filter is not appropriately designed for the data in question, speech-related cortical potentials and artifacts may end up smeared for tens of milliseconds before and after the event of interest (e.g., VanRullen, 2011; Widmann and Schröger, 2012), artificially creating differences that are early enough to seem related to the cognitive function of interest. Again, the extent to which this factor could affect the signals we study is largely unknown. In fact, the issues raised with respect to RT and breathing-pattern differences are not exclusive to spoken-word production.
Timing differences in manual responses, or in breathing, heart rate, and skin conductance (e.g., in studies on emotion), are likely to raise some of the problems discussed here. In conclusion, we need to consider that the ERP differences observed between conditions differing in RT partly reflect the relative difference in the timing of brain activity related to speaking (breathing and mouth movements) and of other speech-related artifacts, in addition to our cognitive manipulation. We should also consider the possibility that our cognitive manipulation is not necessarily reflected in a stimulus-locked ERP component and that the ERP differences we observe reflect speech-related potentials only. Researchers could record mouth EMG activity and breathing rate in addition to scalp EEG to assess whether these pre-speech potentials overlap with the ERP effects observed over the scalp. These issues need to be addressed so that our field can move forward on a solid foundation.
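To make the acausal-filtering concern raised above concrete, the following sketch low-pass filters a late, slow deflection once causally and once with a forward-backward (zero-phase) pass; the sampling rate, cutoff, filter type, and event timing are illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Demonstration: a zero-phase (forward-backward) low-pass filter lets a late
# deflection leak BACKWARDS in time, whereas a causal (forward-only) filter
# only smears it later. Parameters are illustrative assumptions.
fs = 500.0
t = np.arange(0, 1.0, 1 / fs)       # 1 s epoch
x = np.zeros_like(t)
x[t >= 0.6] = 1.0                   # "late" slow potential starting at 600 ms

sos = signal.butter(4, 5.0, btype="lowpass", fs=fs, output="sos")
causal = signal.sosfilt(sos, x)     # forward only
acausal = signal.sosfiltfilt(sos, x)  # forward + backward (zero phase)

def onset(y):
    """First time point at which the output departs visibly from zero."""
    return t[np.abs(y) > 0.01].min()

print(f"causal output starts at  {onset(causal)*1000:.0f} ms (never before 600 ms)")
print(f"acausal output starts at {onset(acausal)*1000:.0f} ms (before the event!)")
```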
3,042.2
2015-01-13T00:00:00.000
[ "Computer Science" ]
The Enforcement of Counter-Radicalism through Educational Values (Analytical Analysis of the Book of I'tiqād by Al-Bukhārī)

Radicalism is an understanding of social life that stands far from reconciliation and communication and comes close to acts of violence. This misconception is very dangerous, particularly when it is made in the name of religion itself. Evidently, the religious principle that always encourages a good and ethical way of life seems to have been misunderstood, perhaps abused. Radicalism might be handled and prevented by implementing the teachings of Imam al-Bukhārī, e.g. the I'tiqād, a clear understanding of religion and outstanding teaching practices. The aim of this research is to analyze the contents of the book for its values of Islamic teaching concerning harmony and living together, as a foundation of educational values countering radicalism. The research employed a fully qualitative method with documentary data related to the research, which were then analyzed descriptively. The results of the study revealed that Imam al-Bukhārī, in his book I'tiqād al-Bukhārī, proposed four (4) educational values or elements of peace in Islamic teachings, namely: (1) harmony with the public at large, (2) harmony with the government, (3) harmony with fellow Muslims, and (4) harmony with believers of other religions. These educational values could serve as an instructional approach for the enforcement of counter-radicalism applicable to Islamic education in Indonesia.

... education teachers' role in making preventive efforts to avoid radicalism. Another study was conducted by Rahmawati (2014) on the Deradicalization of Religious Understanding in Yusuf Qardhawi's Thought from the Perspective of Islamic Religious Education. This research focuses on Yusuf Qardhawi's thoughts on radicalism and deradicalization efforts. Research by Rosanita (2016) reviewed the Perceptions of Islamic Religious Education Teachers on Religious Radicalism: Multisite Studies in SMA Negeri 1, SMK Negeri 1, and MA Negeri 1 Kota Mojokerto. This study discusses the perceptions, influencing factors, and preventive efforts of Islamic Education teachers at SMAN 1, SMKN 1, and MAN 1 Kota Mojokerto regarding religious radicalism. Thus, the research by Maulidah Rohmatika and Umu Arifah Rahmawati is very different from the author's research: Maulidah Rohmatika's research emphasizes teachers' role in preventing an understanding of radicalism in their students, while Umu Arifah Rahmawati's research emphasizes the discussion of Yusuf Qardhawi's thoughts. The research performed by Devi Rosanita has similarities with this study; the difference lies in the object of the study. Based on the explanation above, the author will examine "The Enforcement of Counter-Radicalism through Educational Values (Analysis of the I'tiqād Book by Al-Bukhārī)." This study aims to determine the values of counter-radicalism in the I'tiqād Book by al-Bukhārī and their relevance to Islamic education in Indonesia.

RESEARCH METHOD

The study of the I'tiqād al-Bukhārī book by Imam al-Bukhārī is library research. In this study, a series of activities were carried out concerning library data collection, reading, taking notes, and processing research materials (Zed, 2004). The data sources were books, magazines, journals, newsletters, newspapers, papers, and other media related to this research. The approach used in this research is bibliographic.
This approach uses historical methods to search for, analyze, interpret, and generalize facts that constitute the opinions of experts on a problem (Nazir, 2014). This research focuses on the thoughts and ideas written by Imam al-Bukhārī in his book I'tiqād al-Bukhārī as the primary source. Apart from this primary source, this study also uses secondary sources, namely various relevant references (Saraswati) related to the research problem. Data analysis in this study uses descriptive analysis, carried out by describing and analyzing the data (Ratna, 2019). The type of data analysis used in this study is content analysis, a technique that allows researchers to study human behavior indirectly through the analysis of their communication (Fraenkel, 2012); conclusions are then drawn through inductive reasoning.

RESULTS AND DISCUSSION

The root word of radicalism is radical. In its linguistic meaning, radical is neutral; it can relate to positive as well as negative things. In the KBBI, the term radical is associated with several definitions, ranging from the primary meaning of something fundamental, to intensely demanding changes in laws and governance, to progress in thinking or acting. The meaning of radical is thus linguistically neutral, not directly referring to a particular value or to a positive or negative view. The term differs from terror, which means to scare other parties; terror is always carried out in negative ways and frightens other parties (Muchith, 2016). In practice, radicalism becomes an understanding or current that demands social and political change or renewal through violence (Ratna, 2019). Radicalism is also considered an understanding or way of thinking that becomes the basis for committing acts of violence or terror (Muchith, 2016).

The phenomenon of radicalism and violence occurring in society is a complex reality. Many factors have caused or triggered the emergence of radicalism, such as unfavourable socio-economic conditions, unfairness, globalization, modern imperialism, and narrow religious understanding. The accumulation of these various factors fosters radical attitudes and understandings, which can encourage someone to choose the path of joining acts and networks of terrorism (BNPT, 2016). Today, the phenomenon of radicalism has become a challenge for many countries, including Indonesia. Radicalism might happen in every aspect of life. It is encouraged by various factors, whether a wrong or narrow understanding of religious teachings, social injustice, poverty, politics, social inequality, etc. (Kusmanto et al., 2015). Therefore, radicalism is a serious challenge and threat, and it is necessary to counter it. The effort of countering radicalism is called persuasive deradicalization (D. Syamsuddin, 2016). One of the efforts that can be applied is enforcing counter-radicalism through educational values as taught by the ulamas through their works, such as the book I'tiqād al-Bukhārī by Imam al-Bukhārī.

Short biography of Imam Al-Bukhārī

Imam al-Bukhārī's full name was Abū 'Abdillāh Muḥammad bin Ismā'īl bin Ibrāhīm bin al-Mugīrah bin Bardizbah al-Ju'fī. He was born on a Friday after the Friday prayer, 13 Shawwal 194 H, in Bukhara. When he was still a child, his father, Ismā'īl, died, which made him an orphan (yatim) raised under his mother's care (Wahyudi, 2009).
Muḥammad bin Aḥmad bin Faḍl al-Balkhi said: I heard my father say, "Muḥammad bin Ismā'īl's sight was taken away from him (he was blind) since childhood." One time his mother had a dream in which she saw Ibrāhīm (AS), who said to her, "O woman, Allah has returned your son's sight because of your frequent prayers." When morning came, it was true that Allah had returned the vision of Muḥammad bin Ismā'īl (Adz-Dzahabi, 2008). As a teenager, he (raḥimahullāh) began to travel in search of hadith, in 210 AH. He moved from place to place in his country, searching for hadiths, and then lived in the Hejaz for six years. After that, he went to Syria, Egypt, Jazirah, Basra, Kufa, and Baghdad.

Imam al-Bukhārī was known to have a very strong memory. People said that he could memorize a book with one reading. He was a very zuhud and wara' person, far from the life of rulers and leaders. He was courageous and generous. His ability to memorize, given by Allah SWT, was a tremendous asset that made him an expert in the science of hadith. He was able to memorize hadiths and understand them in detail (Al-Utsaimin, 2008). Imam al-Bukhārī came to study with the Shaykhs in Basrah but never wrote the hadiths down, so they asked him, "Why don't you write?"; he then recited all the hadiths he had heard from memory, and the total came to more than fifteen thousand (Jauzi, 2006). The ulamas of his period and afterwards praised him greatly. Imam Aḥmad said, "Khurasan never produced someone as great as him." Ibn Khuzaimah said, "Under this sky, there is no one who knows better and memorizes the hadith of Rasūlullāh saw more than Muḥammad bin Ismā'īl al-Bukhārī" (Al-Utsaimin, 2008). He was a mujtahid in the field of fiqh, meticulous in drawing legal conclusions from a hadith, as can be seen in the chapter titles of his ṣaḥīḥ book (Al-Utsaimin, 2008).

Overview of the Book I'tiqād Al-Bukhārī

The complete title of the book is I'tiqād Abī 'Abdillāh Muḥammad bin Ismā'īl al-Bukhārī fī Jamā'ah min as-Salaf allażīna Rawā 'anhum, which was passed on by Muḥammad Ziyad bin 'Umar at-Tuklah in 1433 H (Al-Bukhārī, 2012b). This book is the result of Imam al-Bukhārī's thought on 'aqīdah, which was later narrated by Imam al-Lālikā'ī (died 418 H) in the book Syaraḥ Uṣūl I'tiqād Ahlussunnah wal Jamā'ah. It contains the results of the agreement of 1080 teachers of Imam al-Bukhārī who were ahlul hadith. The narrator of this book mentions some of Imam al-Bukhārī's teachers from various regions, namely the Hejaz, Mecca, Medina, Kufa, Basra, Wasith, Baghdad, Sham, and Egypt, gathered over 46 years (Al-Lālikā'ī, 1995). According to Imam al-Bukhārī, there are ten foundations of 'aqīdah on which the scholars have agreed. The following is a brief description of the ten points in the book I'tiqād al-Bukhārī: first, the beliefs by which a person can correctly be called a Muslim; second, belief in the Al-Qur'an; third, belief in destiny; fourth, infidelity; fifth, the attitude towards the companions of the Prophet Ṣallallāhu 'alaihi wa Sallam; sixth, bid'ah deeds; seventh, following the Sunnah of the Prophet Ṣallallāhu 'alaihi wa Sallam; eighth, the attitude towards the authorities; ninth, the attitude towards fellow Muslims; and tenth, praying for the leader.
Educational Values of Counter-Radicalism Based on the Book of I'tiqād Al-Bukhārī

Deradicalization can be done persuasively through education and by giving a correct understanding of various things, including religion. Education comprises all life situations that affect individual growth and lasts a lifetime; therefore, life is education, and education is life (Budiyanto, 2013). Education takes place in the context of multi-dimensional individual relationships: in the individual's relationship with God, with fellow humans, with nature, and even with themselves (Sajidiman, 2012). Education is also a social activity that allows society to exist and develop (Bahroni, 2009), acting not only on the quality of thinking but also on morals and kinesthetic intelligence (Muchith, 2016).

In connection with deradicalization efforts in the social dimension, it seems that radicalism can only be reduced and countered by other social movements, namely the anti-radicalism movement. The response of religious leaders is an expression of counter-radicalism. The attention of the government and the public to radicalism reflects that radicalism is a serious problem that must be confronted (Kusmanto et al., 2015). Counter-radicalization is one of the BNPT strategies to prevent radicalism and terrorism, either through various activities such as socialization, dialogue, workshops, intelligence activities, etc., or by empowering civil society forces such as religious organizations, non-governmental organizations, educational institutions, religious leaders, traditional leaders, mass media, former terrorists and so on, as an effort to counteract radicalism in society (Tahir, 2016).

The ulamas have taught the basic principles of religion correctly, as taught by the ulamas of previous eras, the companions of the Prophet, and the Prophet Muhammad SAW, including through one of the works of Imam al-Bukhārī, namely the book I'tiqād al-Bukhārī. This book addresses various aspects of the creed that are fundamental to Muslims, including the educational values of counter-radicalism. The analysis shows four main pillars of the value of counter-radicalism education in the I'tiqād al-Bukhārī. The four pillars rest on one main foundation, namely "peace," and their relevance can be drawn to Islamic education in Indonesia. The educational values of counter-radicalism contained in the book I'tiqād al-Bukhārī include the following.

Being peaceful with the public

I'tiqād al-Bukhārī stated that Islam has directed its people always to practice the Sunnah whenever and wherever they are. Doing good deeds to neighbors, whether they are Muslims or not, is one of the forms of practicing the Sunnah of Rasūlullāh. Allah has commanded His servants not to hurt their neighbors' hearts or bodies, as Allah SWT said: "Worship Allah alone and associate none with Him. And be kind to parents, relatives, orphans, the poor, near and distant neighbours, close friends, needy travellers, and those bondspeople in your possession. Surely Allah does not like whoever is arrogant, boastful." (QS an-Nisā: 36)

To be a good Muslim towards the community, neighbors, and guests, a Muslim should not say anything that hurts others and should behave well so that others feel comfortable. Rasūlullāh Ṣallallāhu 'alaihi wa Sallam said: Meaning: "No one of you becomes a true believer until he likes for his brother what he likes for himself." (As-Suyuti, 1996)
From the explanation above, it can be seen that holding on to the Sunnah of Rasūlullāh Ṣallallāhu ʻalaihi wa Sallam, such as being kind to neighbors and guests, contains the value of peace with the public. By following the Sunnah, a peaceful, loving society will be created and conflict in muamalah will be prevented.

Being peaceful with the government

In I'tiqād al-Bukhārī, Imam al-Bukhārī emphasizes the word of Allah as follows: Meaning: "O believers! Obey Allah and obey the Messenger and those in authority among you." (QS an-Nisā: 59). Rebellion against a legitimate government can cause much harm; therefore, the ulama forbid rebellion against the government. Rasūlullāh Ṣallallāhu 'alaihi wa Sallam said: Meaning: "Whoever intends to advise one with authority, then he should not do so publicly. Rather, he should take him by the hand and advise him in seclusion. If he accepts the advice, then all is well. If he does not accept it, then he has fulfilled his duty." (Albani, 1980)

In Islamic teachings, the people who may advise the leader are those who have high knowledge and understand the country's political problems; they are the ulama who have reached the level of ijtihād and are experts in their field. The union of the ulama and the 'umarā' (leaders) will create a peaceful country. In addition, in dealing with an unjust leader, Rasūlullāh Ṣallallāhu 'alaihi wa Sallam advised patience, as in his sayings: Meaning: "After me there will be favouritism and many things that you will not like. They (his Companions) said: Messenger of Allah, what do you order one to do if anyone from us has to live through such a time? He said: You should discharge your own responsibility (by obeying your Amir), and ask God to concede your right (by guiding the Amir to the right path or by replacing him by one more just and God-fearing)." (Al-Mubarakfuri, 1999)

Rasūlullāh Ṣallallāhu 'alaihi wa Sallam said: Meaning: "After me you will see others given preference to you, but you should remain patient till you meet me at the Haud (Al-Kauthar in Jannah)." (Al-Asqalani, 2001)

Rasūlullāh Ṣallallāhu 'alaihi wa Sallam also said: Meaning: "Amirs will be appointed over you, and you will find them doing good as well as bad deeds. One who hates their bad deeds is absolved from blame. One who disapproves of their bad deeds is (also) safe (so far as Divine wrath is concerned). But one who approves of their bad deeds and imitates them (is doomed). People asked: Messenger of Allah, shouldn't we fight against them? He replied: No, as long as they say their prayer." ("Hating and disapproving" refers to liking and disliking from the heart.) (Al-Naysabūrī, 2003)

So, it can be concluded that when witnessing the evil deeds of leaders, the right course of action is to reject them in one's heart. Rebellion against a leader is only allowed when the Muslims see clear heathenism in the leader, and it can only be carried out when the Muslims have power equal to or greater than the leader's and the rebellion will not cause greater harm, because Islam is a peaceful religion, in line with the understanding of the Prophet's companions.

Being peaceful with fellow Muslims

Imam al-Bukhārī in his book I'tiqād al-Bukhārī emphasized one of Allah SWT's words, in Surah an-Nisā' verse 48: Meaning: "Indeed, Allah does not forgive associating others with Him in worship, but forgives anything else of whoever He wills."
(QS an-Nisā': 48)

Through these words, Imam al-Bukhārī emphasized the dangers of hastily declaring someone an infidel. Labeling fellow Muslims as infidels can lead to great dangers, such as murder, because an infidel's blood is considered lawful to shed. Thus, it is an obligation for all Muslims not to label someone an infidel easily, because a Muslim's blood, property, and honor must be protected. Shaykh Muḥammad Nāṣiruddin al-Albānī Raḥimahullāh said, "Every perpetrator of immorality, especially those spreading the currently widespread forms of crime, such as adultery, drinking khamar and others, falls under kufr 'amali. They should not be condemned as unbelievers merely because they commit immorality and justify their actions, except after their heart's intentions become apparent and they believe that what Allah Ta'ālā and His Messenger have forbidden is not forbidden; when we find out that they have fallen into such an offense in their heart, they can be judged as infidels who have left Islam" (Z. A. Syamsuddin & Abidin, 2015). There is also the rule on declaring a specific individual an infidel; that, too, must be referred to the judge in a sharia court. As Ibn Taimiyyah Raḥimahullāh said: Meaning: "And it is not permissible for a person to declare any of the Muslims an unbeliever, even though he has erred and made mistakes, until the proof has been established against him and the matter has been made clear to him. And whoever's Islam is confirmed with certainty, it is not removed from him by mere doubt; rather, it is only removed after the proof is established and the doubts (syubhat) are eliminated" (Al-Harrani, 2004). Rasūlullāh Ṣallallāhu 'alaihi wa Sallam also said:
4,166.4
2020-11-11T00:00:00.000
[ "Education", "Philosophy", "Political Science" ]
Gli1-mediated tumor cell-derived bFGF promotes tumor angiogenesis and pericyte coverage in non-small cell lung cancer

Background Tumor angiogenesis inhibitors have been applied for non-small cell lung cancer (NSCLC) therapy. However, drug resistance hinders their further development. Intercellular crosstalk between lung cancer cells and vascular cells is crucial for anti-angiogenic drug (AAD) resistance. However, the understanding of this crosstalk is still rudimentary. Our previous study showed that Glioma-associated oncogene 1 (Gli1) is a driver of NSCLC metastasis, but its role in lung cancer cell-vascular cell crosstalk remains unclear. Methods Conditioned medium (CM) from Gli1-overexpressing or Gli1-knockdown NSCLC cells was used to educate endothelial cells and pericytes, and the effects of these media on angiogenesis and the maturation of new blood vessels were evaluated via wound healing assays, Transwell migration and invasion assays, tube formation assays and 3D coculture assays. A xenograft model was used to establish the effect of Gli1 on tumor angiogenesis and growth. Angiogenic antibody microarray analysis, ELISA, luciferase reporter assays, chromatin immunoprecipitation (ChIP), bFGF protein stability and ubiquitination assays were performed to explore how Gli1 regulates bFGF expression. Results Gli1 overexpression in NSCLC cells enhanced the endothelial cell and pericyte motility required for angiogenesis, whereas Gli1 knockdown in NSCLC cells had the opposite effect on this process. bFGF was critical for this enhancement of tumor angiogenesis: bFGF treatment reversed the Gli1 knockdown-mediated inhibition of angiogenesis. Mechanistically, Gli1 increased the bFGF protein level by promoting bFGF transcriptional activity and protein stability. Importantly, suppressing Gli1 with GANT-61 obviously inhibited angiogenesis. Conclusion The Gli1-bFGF axis is crucial for the crosstalk between lung cancer cells and vascular cells. Targeting Gli1 is a potential therapeutic approach for NSCLC angiogenesis. Supplementary Information The online version contains supplementary material available at 10.1186/s13046-024-03003-0.

Introduction

Non-small cell lung cancer (NSCLC), the most common type of lung cancer, has a high mortality rate [1]. Although many improvements have been made in NSCLC therapy, including therapies targeting epidermal growth factor receptor (EGFR) mutations or ALK rearrangements, immunotherapy and surgery, the prognosis of NSCLC patients is still dismal. These therapeutic approaches have demonstrated unprecedented survival benefits in selected patients. However, the emergence of drug resistance has blocked their further application. Currently, the survival rate of patients with NSCLC is only approximately 20%. Hence, it is necessary to identify prognostic biomarkers and develop novel therapeutic approaches for NSCLC [2,3].
NSCLC tumors are highly vascularized. A high microvessel density often indicates a poor clinical prognosis in NSCLC patients [4,5]. Tumor angiogenesis is necessary for the occurrence, development and metastasis of NSCLC. Thus, antiangiogenic approaches have been developed for NSCLC therapy. Many antiangiogenic agents targeting the VEGF-A/VEGFR2 signaling pathway, such as bevacizumab, ramucirumab and nintedanib, are currently used for the treatment of NSCLC patients or are in development [6-8]. Combinations of these angiogenesis inhibitors with checkpoint inhibitors, chemotherapy and EGFR inhibitors have significantly improved the survival of NSCLC patients [9,10]. However, only a subset of patients benefit from these angiogenesis inhibitors, and their effects are generally incomplete and temporary. In addition, many patients exhibit antiangiogenic drug (AAD) resistance. Therefore, further elucidation of the underlying mechanisms is necessary for the development of effective therapeutic approaches for NSCLC [6,7].

The majority of AADs used in the clinic primarily affect tumor vascular endothelial cells, with no effects on tumor cells and other perivascular cells. Many patients suffer from resistance after AAD therapy. Recently, crosstalk between tumor cells and vascular cells was identified as a crucial factor for AAD resistance and angiogenesis [11]. During AAD therapy, cancer cells accelerate the acquisition of AAD resistance by activating fatty acid oxidation metabolism [12]. Glioblastoma cells promote bevacizumab resistance by upregulating YKL-40 and ZEB1 expression and by mesenchymal transition [13]. These findings suggest that tumor cell-perivascular cell interaction is crucial for AAD resistance. However, our understanding of the intercellular crosstalk between lung cancer cells and perivascular cells is still rudimentary [14,15].

Glioma-associated oncogene 1 (Gli1) is a member of the zinc finger protein family. It is crucial for the activation of the canonical hedgehog (Hh)/Gli1 signaling pathway, which is frequently activated in many cancers, including lung cancer, breast cancer and basal-cell carcinoma. When the pathway is activated, Gli1 translocates from the cytoplasm to the nucleus, leading to the expression of various target genes and signal transduction through downstream signaling pathways [16]. Gli1 is a driver of cell survival and CSC stemness features [17]. Our previous study showed that Gli1 was upregulated in NSCLC and was a critical driver of NSCLC metastasis [18]. Moreover, inhibition of Gli1 nuclear translocation obviously suppressed angiogenesis in NSCLC [19], indicating that Gli1 may be involved in the angiogenesis of NSCLC. However, whether Gli1 is involved in the intercellular crosstalk between lung cancer cells and vascular cells is still unclear.
Herein, we confirmed that Gli1 promotes tumor angiogenesis by regulating the crosstalk between lung cancer cells and vascular cells. Gli1 overexpression in NSCLC cells enhanced the migration, invasion and tube formation of endothelial cells and increased the microvessel density in mouse xenografts, whereas genetic inhibition of Gli1 in NSCLC cells resulted in the opposite effects on endothelial cells. Gli1 modulation also had a similar effect on pericyte coverage. Angiogenic antibody microarray analysis identified bFGF as a downstream regulator of Gli1. Additionally, bFGF was found to be critical for the Gli1-mediated enhancement of angiogenesis. Moreover, mechanistic research revealed that Gli1 increased bFGF protein expression by promoting bFGF transcriptional activity and prolonging bFGF protein stability. Finally, we also showed that therapeutic blockade of Gli1 using GANT-61 obviously suppressed tumor angiogenesis. The findings of this study suggest that Gli1 is a promising therapeutic target for attenuating angiogenesis in NSCLC. This study also provides intriguing insights into the crosstalk between NSCLC cells and vascular cells during the regulation of tumor angiogenesis.

Patient specimens and immunohistochemistry (IHC)

Primary NSCLC tissues and adjacent noncarcinoma samples were obtained from patients who underwent curative resection at the Sixth Affiliated Hospital of Guangzhou Medical University. These patients were histopathologically diagnosed with NSCLC and had available survival data. All patients signed written informed consent prior to tissue collection. A cohort composed of 90 pairs of NSCLC tissues and the corresponding noncarcinoma tissues was constructed by Shanghai Xinchao Biotech (Shanghai, China). A tissue microarray was used for the IHC assay to detect bFGF expression. Tumor cells stained for Gli1 or bFGF were counted per field of view. The expression of bFGF was scored based on staining intensity and distribution [18]. The correlation between Gli1 and bFGF expression was analyzed based on the IHC scores.

Animals

Five-week-old male BALB/c (nu/nu) mice and adult female Sprague-Dawley (SD) rats were obtained from Hua Fukang Experimental Animal Center (Beijing, China). All experimental procedures conducted in the animal studies were approved by the laboratory animal ethics committee of Guangzhou Medical University.

Cell lines and culture conditions

Human umbilical vein endothelial cells (HUVECs), human microvascular endothelial cell line-1 (HMEC-1) cells and human brain vascular pericytes (HBVPs) were provided by ScienCell Research Laboratories (Carlsbad, CA). The human NSCLC lines NCI-H1299, NCI-H1703, NCI-H460 and A549 were provided by the American Type Culture Collection (ATCC, Manassas, Virginia). HUVECs were cultured in ECM. HBVPs were maintained in PM. HMEC-1 cells were cultured in DMEM supplemented with 10% FBS. The human NSCLC cell lines were cultured in RPMI 1640 medium containing 10% FBS and 1% penicillin-streptomycin (v/v). All cells were maintained in a humidified incubator containing 5% CO₂.
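As an illustration of the Gli1-bFGF IHC correlation analysis described above, the sketch below applies Spearman's rank correlation, a common choice for ordinal IHC scores, to simulated scores for 90 tissue pairs. The scoring range and all values are assumptions, and the paper does not state which correlation statistic was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated per-patient IHC scores (ordinal composite of staining intensity and
# distribution, here assumed to range 0-12); bFGF scores are generated to track
# Gli1 for illustration. None of these values are experimental data.
gli1 = rng.integers(0, 13, 90)                           # 90 paired NSCLC tissues
bfgf = np.clip(gli1 + rng.integers(-3, 4, 90), 0, 12)    # scores tracking Gli1

rho, p = stats.spearmanr(gli1, bfgf)
print(f"Spearman rho = {rho:.2f}, p = {p:.2e} (n = 90)")
```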
Conditioned medium (CM) generation

The stable cell lines, including NCI-H1299 shNC/NCI-H1299 shGli1, NCI-H1703 shNC/NCI-H1703 shGli1, A549 NC vector/A549 Gli1 vector, and NCI-H460 NC vector/NCI-H460 Gli1 vector cells, were established in our previous study [18]. These cells were maintained in RPMI 1640 medium containing 10% FBS and 1% penicillin-streptomycin (v/v). When the cell density reached approximately 80%, the medium was replaced with fresh medium containing 50% RPMI 1640 medium and 50% ECM (v/v) for 24 h to generate CM for endothelial cells, or with fresh medium containing 50% RPMI 1640 medium and 50% PM (v/v) for 24 h to generate CM for HBVPs. The media were collected and filtered through a 0.22 μm filter (SLCPR33RB, Millipore). The supernatants were then aliquoted and frozen at -80 °C. The CM was used for the subsequent endothelial cell or pericyte experiments.

Angiogenic antibody microarray analysis

The human angiogenic antibody microarray was obtained from R&D. The CM of A549 NC vector/A549 Gli1 vector cells was collected and used for the detection of angiogenic factors following the manufacturer's instructions. Briefly, the angiogenic antibody membranes were incubated with blocking buffer and then incubated with 1.5 mL of CM from A549 NC vector/A549 Gli1 vector cells at 4 °C overnight. After washing with PBS, the antibody membranes were incubated with streptavidin-horseradish peroxidase solution for 1.5 h at room temperature. Protein expression was detected with an enhanced chemiluminescence (ECL) kit and a ProScan HT Microarray Scanner (PerkinElmer, MA). The densities of individual spots were analyzed using the Scan Array Express software.

ELISA assay

An ELISA assay was performed to detect the bFGF concentration in the CM of the four paired cell lines with Gli1 overexpression or knockdown. These cells were cultured in the corresponding culture medium for 48 h. After that, the medium was collected and centrifuged. The supernatant was used for the ELISA according to the manufacturer's instructions.

Wound healing assay

The cells were plated in 6-well plates and incubated until they were approximately 90% confluent. The cell layer was then scratched using a 10 μL sterile pipette tip to generate a wound and then exposed to CM from NSCLC cells for 8 h. The migration ability was indicated by the width of the gap between the two edges of the wound, which was measured using a microscope at 0 h and 12 h.

Transwell migration and invasion assays

HUVECs and HMEC-1 cells (2 × 10⁴) were incubated with NSCLC cell CM for 48 h. Then, the cells were collected, suspended in 100 µL of serum-free medium and seeded in the upper chamber of a Transwell insert. The lower chamber was filled with 650 µL of fresh medium, and the cells were incubated for 24 h. The cells were then fixed with 4% paraformaldehyde and stained with crystal violet solution. After the cells on the upper surface of the chamber membrane were removed, the cells attached to the lower surface of the chamber membrane were observed using a light microscope. For the Transwell invasion assay, the procedures were largely similar to those used for the Transwell migration assay, except that the chamber membranes were precoated with Matrigel.
Tube formation assay

The wells of 96-well microwell plates were coated with 50 µL of Matrigel, and the plates were kept at 37 °C for solidification. Endothelial cells that had been exposed to the indicated CM were seeded in the 96-well plates at a density of 2 × 10⁴ cells per well. After incubation for another 6 h, the capillary-like structures were observed with a light microscope.

Aortic ring assay

The aortic ring assay was performed in a 96-well plate with the wells precoated with 50 µL of Matrigel. The thoracic aorta was isolated from female SD rats weighing 200 g in a sterile manner and then cut into rings approximately 1 mm long. Each ring was seeded in a Matrigel-coated 96-well plate and covered with another 80 µL of Matrigel. After 2 h, 100 μL of CM was added to the rings. The CM was replaced every two days. The rings were observed and photographed on Day 8 with a microscope (Olympus IX71, Japan). The number of microvessels was quantitated with the image analysis program IPP 6.0.

HBVP adhesion assay

The HBVPs were exposed to CM for 48 h, collected and placed in a 24-well plate at a density of 1 × 10⁵ cells/well. After culturing for 1 h, the cells were sequentially fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. The adherent cells were observed and captured using an Olympus BX53 upright microscope. Image-Pro Plus 6.0 was used to analyze the adhesion ability of the HBVPs.

Coculture of tubule-like structures of endothelial cells and pericytes

HUVECs labeled with PKH26 (red) were placed in Matrigel-coated 96-well plates at a density of 3 × 10⁴ cells per well and incubated for 2 h to allow the cells to form a capillary network. HBVPs pretreated with CM from NSCLC cells were labeled with PKH67 (green) and seeded into the endothelial capillary network at a density of 1 × 10⁴ cells per well. The tube structures and the recruitment of HBVPs were observed and photographed using an EVOS XL Core imaging system at 0 h, 1 h and 10 h. The time of HBVP addition was set as 0 h.

In vivo mouse xenograft assay

The four pairs of stable NSCLC cells were subcutaneously injected into the left flanks of 5-6-week-old male nude (BALB/cA-nu) mice (1 × 10⁷ cells per mouse). Once the tumor volume reached approximately 50 mm³, the tumors were measured using a caliper and the tumor volume was calculated with the formula 0.5 × width² × length, for a total of 22 days. The mice were then anesthetized, and the tumors were excised, weighed and photographed. Subsequently, the tumors were fixed in 4% paraformaldehyde for further histological and immunohistochemical assays. To assess the antiangiogenic effect of GANT-61, NCI-H1299 cells were subcutaneously inoculated into the left flanks of mice. Once the tumor volume reached approximately 50 mm³, the tumor-bearing mice were randomly divided into a vehicle group and a 50 mg/kg GANT-61 group, with 6 mice per group. The mice were intraperitoneally injected with saline or 50 mg/kg GANT-61 every other day for a total of 24 days.
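For reference, a minimal helper implementing the tumor volume formula quoted above; the example dimensions are arbitrary.

```python
# Tumor volume formula used in the xenograft experiments above
# (volume = 0.5 * width^2 * length, dimensions in mm, volume in mm^3).
def tumor_volume(width_mm: float, length_mm: float) -> float:
    return 0.5 * width_mm ** 2 * length_mm

# Example: a 4 mm x 6 mm tumor -> 0.5 * 16 * 6 = 48 mm^3, just under the
# 50 mm^3 enrollment threshold used above.
print(tumor_volume(4, 6))  # 48.0
```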
Immunofluorescence assay of OCT-embedded tissues

Tumor tissues fixed in paraformaldehyde were frozen in OCT (Tissue Tek, USA) and subsequently cut into 5-µm-thick sections. The tumor sections were permeabilized with 0.25% Triton X-100 and then incubated with anti-CD31 and anti-α-SMA primary antibodies. After washing with PBS, the tumor sections were labeled with the corresponding Alexa Fluor 488- or Alexa Fluor 594-conjugated secondary antibodies. DAPI staining was applied for nuclear labeling. Images were acquired with a confocal microscope (LSM 800, Zeiss).

Tumor vascular permeability assay

Tumor vascular permeability was evaluated as previously described [20]. In brief, the mice were intravenously injected with rhodamine-labeled dextran (70 kDa). Twenty-four hours later, the mice were anesthetized. The tumors were excised, fixed and then frozen in OCT. Dextran in the tumor sections (5 µm) was observed and photographed using a confocal microscope. The data are presented as mean fluorescence intensity values calculated with ImageJ software (National Institutes of Health).

Luciferase reporter assay

The sequence of the bFGF promoter (1179 bp) was cloned and inserted into the pGL3-Basic-Luc vector to generate the pGL3-bFGF-promoterWT-Luc vector. The two mutated bFGF promoter fragments (from bp -180 to -168 and from bp +70 to +82) were constructed by RT-qPCR and were then cloned and inserted into the same reporter vector. The pGL3-bFGF-promoterMut1-Luc plasmid ("TTT GGG TGG TGC" mutated to "CCC AAA CAA CAT") and the pGL3-bFGF-promoterMut2-Luc plasmid ("TGT GGG GGG TGG" mutated to "CAC AAA AAA CAA") were constructed using a site-directed mutagenesis kit (Genechem, China) according to the manufacturer's instructions. 293T cells were seeded in 24-well plates and transfected as indicated with the pGL3-bFGF-promoterWT-Luc/pGL3-bFGF-promoterMut1-Luc, pGL3-bFGF-promoterMut2-Luc/pGL3-bFGF-promoterMut1+Mut2-Luc and Renilla pRL-TK plasmids using Lipofectamine 3000 following the manufacturer's instructions. Forty-eight hours later, the cells were collected and lysed using cold lysis buffer (Promega). The cell lysates were transferred to a 96-well plate, and the luminescence intensity was measured with a microplate reader. Firefly luciferase activity was normalized to Renilla luciferase activity as the internal control.

Chromatin immunoprecipitation (ChIP) assay

A ChIP assay kit was employed following the manufacturer's protocol. In short, the cells were collected and exposed to 1% formaldehyde to crosslink chromatin-associated proteins to DNA. Subsequently, the nuclear pellet was resuspended in ChIP Buffer and subjected to sonication. After incubation of the nuclear pellet suspension with anti-Gli1 antibody- or anti-IgG-conjugated Dynabeads protein A/G at 4 °C overnight, the DNA-protein complexes were purified using DNA purification magnetic beads and then eluted with DNA elution buffer. The levels of the target sequences were determined by RT-qPCR. The primers are listed in Supporting Table 1.

bFGF protein stability assay

The stability of the bFGF protein was measured by CHX pulse-chase experiments. Briefly, cells were exposed to 2 μM CHX for 0 h, 0.5 h, 1 h, 2 h, 4 h and 6 h. Then, the cells were harvested, lysed with RIPA buffer and subjected to Western blotting to measure the bFGF protein level.
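A pulse-chase experiment like the one above is typically summarized by a protein half-life. The sketch below fits first-order decay to band intensities by linear least squares on the log scale; the intensity values are invented for illustration, and the first-order decay model is our assumption.

```python
import numpy as np

# Sketch: estimate bFGF half-life from a CHX chase, assuming first-order decay
# intensity(t) = exp(-k*t). The band intensities below are made-up illustration
# values, not the paper's data.
time_h    = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])       # CHX exposure times, h
intensity = np.array([1.00, 0.90, 0.78, 0.62, 0.38, 0.24])  # normalized bFGF signal

slope, intercept = np.polyfit(time_h, np.log(intensity), 1)
k = -slope                   # decay constant, 1/h
half_life = np.log(2) / k

print(f"decay constant k = {k:.3f} 1/h, estimated half-life = {half_life:.2f} h")
```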
Ubiquitination assay
The cells were treated with the proteasome inhibitor MG132 (0.5 µM) for 6 h and lysed with immunoprecipitation lysis buffer. Proteins (500 µg per sample) were incubated with anti-bFGF antibody- or normal IgG-bound Protein A+G beads at 4 °C overnight. The reaction was terminated by the addition of loading buffer. Ubiquitination was analyzed by Western blotting with antibodies against bFGF and ubiquitin.

Statistical analysis
Statistical analyses were performed with GraphPad Prism 7 (GraphPad, San Diego, CA, USA), and the data are displayed as means ± SDs. Most of the in vitro experiments were conducted three times independently. Comparisons between two groups were analyzed using an unpaired t test, and one-way analysis of variance (ANOVA) was used to evaluate the statistical significance of differences among three or more groups. p < 0.05 was considered to indicate statistical significance.

Gli1 overexpression in NSCLC cells promoted tumor angiogenesis
To clarify the function of Gli1 in angiogenesis, CM from A549 NC vector/A549 Gli1 vector and NCI-H460 NC vector/NCI-H460 Gli1 vector cells was used as a chemoattractant for naive HUVECs and HMEC-1 cells (Fig. 1A and Supporting Fig. 1). The wound-healing assay showed that, compared with CM from A549 NC vector or NCI-H460 NC vector cells, CM from A549 Gli1 vector or NCI-H460 Gli1 vector cells significantly promoted the migration of endothelial cells (Fig. 1B and Supporting Fig. 2A). Concordantly, the Transwell migration and invasion assays demonstrated that endothelial cells exposed to CM from A549 Gli1 vector or NCI-H460 Gli1 vector cells exhibited enhanced migration and invasion abilities (Fig. 1C-D). An elevated lumen-forming ability of endothelial cells exposed to CM from A549 Gli1 vector or NCI-H460 Gli1 vector cells was also observed (Fig. 1E). Furthermore, the aortic ring assay demonstrated that the CM of A549 Gli1 vector and NCI-H460 Gli1 vector cells markedly augmented aortic ring sprouting (Fig. 1F). In addition, considering that pericyte coverage is critical for blood vessel maturation [21], we also evaluated the role of Gli1 in pericyte adhesion and recruitment. The pericyte adhesion assay showed that Gli1 overexpression in A549 and NCI-H460 cells significantly increased the adhesion ability of HBVPs (Fig. 1G). Consistently, the 3D coculture assay showed that most of the HBVPs (green) educated by A549 Gli1 vector or NCI-H460 Gli1 vector cells had migrated to the tube structures (red) at 2 h, and the tubes had developed into stronger and more stable structures at 12 h (Fig. 1H-I).

We next confirmed these findings in a xenograft model (Fig. 2A).
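The group comparisons described in the statistical analysis above (unpaired t test for two groups, one-way ANOVA for three or more) can be reproduced outside Prism. Here is a minimal sketch using SciPy; the measurements are hypothetical:

```python
from scipy import stats

# Hypothetical triplicate measurements, for illustration only
control = [100.0, 95.0, 102.0]
treated = [148.0, 155.0, 150.0]
third = [120.0, 118.0, 125.0]

# Two groups: unpaired (independent two-sample) t test
t_stat, p_t = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")

# Three or more groups: one-way ANOVA
f_stat, p_f = stats.f_oneway(control, treated, third)
print(f"F = {f_stat:.2f}, p = {p_f:.4f}")
```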
Fig. 1 The CM of Gli1-overexpressing NSCLC cells promoted angiogenesis in vitro and ex vivo. A Flowchart of the evaluation of the effect of Gli1-overexpressing A549 and NCI-H460 cells on endothelial cells and aortic rings. B-E The effect of CM from A549 Gli1 vector and NCI-H460 Gli1 vector cells on the migration, invasion and tube formation abilities of HUVECs and HMEC-1 cells. HUVECs and HMEC-1 cells were treated with CM for 48 h and subjected to wound healing (B), Transwell migration (C), Transwell invasion (D) and tube formation (E) assays. F Gli1 overexpression in NSCLC cells increased vessel sprouting in the aortic ring assay. G The CM of Gli1-overexpressing cells promoted the adhesion ability of HBVPs. H Schematic of the coculture of tubule-like structures of endothelial cells and pericytes. I CM from Gli1-overexpressing cells enhanced the recruitment of HBVPs to tubes formed by endothelial cells. All the data are shown as mean ± SD, n = 3. ** p < 0.01; *** p < 0.001 vs A549 NC vector or NCI-H460 NC vector cells.

Compared with the A549 NC vector group, the A549 Gli1 vector group exhibited increased tumor growth, as revealed by the tumor growth curve, tumor weight statistics and tumor appearance (Fig. 2B-C and Supporting Fig. 3). Pathological analysis showed that the necrotic area in A549 Gli1 vector tumors was reduced compared with that in A549 NC vector tumors, whereas the Ki67 index and CD31 staining were increased (Fig. 2D-E). The IF assay also revealed that α-SMA-positive pericyte coverage around the CD31-positive cells was obviously increased in A549 Gli1 vector tumors (Fig. 2F). In addition, the mice were injected with 70-kDa rhodamine-labeled dextran to assess tumor vascular permeability, and the results showed that vascular permeability was dramatically increased (Fig. 2G). Similar results were obtained with NCI-H460 cells. Tumors in the NCI-H460 Gli1 vector group tended to be larger (Fig. 2H-J) and to have a higher Ki67 index, a greater microvessel density (Fig. 2K-L), greater α-SMA-positive pericyte coverage surrounding CD31-positive cells (Fig. 2M) and more dextran leakage (Fig. 2N). These results strongly suggest that Gli1 overexpression promotes tumor angiogenesis and increases pericyte coverage.

Genetic inhibition of Gli1 in NSCLC cells impaired tumor angiogenesis
To further verify the function of Gli1 in angiogenesis, we used the CM of NCI-H1299 shNC/NCI-H1299 shGli1 and NCI-H1703 shNC/NCI-H1703 shGli1 cells to educate endothelial cells and rat thoracic aortas (Fig. 3A). The results demonstrated that Gli1 silencing in NCI-H1299 and NCI-H1703 cells had an inhibitory effect on tumor angiogenesis-related processes. CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells obviously reduced the migration (Fig. 3B-C and Supporting Fig. 4A-B), invasion (Fig. 3D and Supporting Fig. 4C) and tube formation (Fig. 3E and Supporting Fig. 4D) of HUVECs and HMEC-1 cells. Consistently, the aortic ring assay verified that CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells also significantly reduced the microvessel density surrounding the aortic ring (Fig. 3F). Moreover, CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells also obviously decreased the adhesion ability of HBVPs (Fig. 3G) and the recruitment of HBVPs to the endothelial capillaries (Fig. 3H-I).

NCI-H1299 shNC/NCI-H1299 shGli1 cells were injected subcutaneously into nude mice to establish a xenograft model, and the effect of Gli1 on tumor angiogenesis was further evaluated (Fig. 4A). The growth of NCI-H1299 shGli1 tumors was remarkably suppressed, as indicated by the tumor volume curve, tumor appearance and tumor weight statistics (Fig. 4B-C and Supporting Fig. 5).
A larger necrotic area and fewer Ki67-positive cells and microvessels were detected in the NCI-H1299 shGli1 xenografts (Fig. 4D-E). IF analyses and permeability assays also revealed less coverage of α-SMA-positive pericytes surrounding CD31-positive cells (Fig. 4F) and less dextran leakage (Fig. 4G) in xenografts formed from NCI-H1299 shGli1 cells. These results were confirmed in NCI-H1703 shNC/NCI-H1703 shGli1 xenografts. The growth of tumors generated from NCI-H1703 shGli1 cells was obviously attenuated (Fig. 4H-J), accompanied by a larger necrotic area, a lower Ki67 index and fewer CD31-stained blood vessels (Fig. 4K-L). Furthermore, pericyte coverage and dextran leakage were obviously attenuated in NCI-H1703 shGli1 tumors (Fig. 4M-N). Taken together, these results indicate that genetic inhibition of Gli1 in NSCLC obviously impairs angiogenesis-related processes and blood vessel function.

Basic fibroblast growth factor (bFGF) is necessary for the Gli1-mediated effects on angiogenesis
To identify Gli1-regulated angiogenic factors in NSCLC cells, a human angiogenesis antibody array was used to determine the differences in angiogenesis-related protein expression between A549 NC vector and A549 Gli1 vector cells. Interestingly, several proteins, including angiopoietin-1, CXCL16, endothelin-1, bFGF, VEGF, PDGF-BB, IL-1 beta and FGF acidic, were obviously upregulated in A549 Gli1 vector cells. Among them, bFGF showed the greatest upregulation (Fig. 5A-B). ELISA and Western blotting confirmed that Gli1 overexpression increased bFGF expression and secretion (CM), whereas Gli1 silencing decreased bFGF expression and secretion (Fig. 5C-D). IHC staining of a tissue microarray (containing 90 paired normal lung tissues and NSCLC tissues) showed that bFGF expression in NSCLC tissues was obviously greater than that in normal tissues (Fig. 5E-F and Supporting Fig. 6). RT-qPCR analysis of 11 representative paired NSCLC tissues and normal tissues also confirmed that bFGF expression was markedly increased in NSCLC tissues (Fig. 5G). Kaplan-Meier analysis revealed that NSCLC patients with a high bFGF level had shorter overall survival times than those with a low bFGF level (Fig. 5H). In addition, we found that the bFGF level in Gli1-overexpressing xenografts was obviously greater than that in the control xenografts (Fig. 5I). Conversely, bFGF expression was reduced in xenograft tumors formed from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells (Fig. 5J). Taken together, these results indicate that Gli1 may promote tumor angiogenesis by positively regulating bFGF expression.

To verify the necessity of bFGF as a downstream target of Gli1 in mediating angiogenesis, we used a neutralizing antibody to block secreted bFGF in the CM of A549 Gli1 vector and NCI-H460 Gli1 vector cells. The wound healing, Transwell migration and Transwell invasion assays validated that the Gli1-mediated increases in the migration, invasion and tube formation of HUVECs and HMEC-1 cells were obviously attenuated by anti-bFGF antibody treatment (Fig. 6A-D and Supporting Fig. 7A-D). Anti-bFGF antibody treatment also decreased the extent to which CM from A549 Gli1 vector and NCI-H460 Gli1 vector cells increased aortic ring sprouting (Fig. 6E). Moreover, similar results were obtained in the pericyte adhesion and 3D coculture assays. Supplementation with CM from A549 Gli1 vector and NCI-H460 Gli1 vector cells significantly enhanced the adhesion and recruitment of HBVPs to capillaries, but anti-bFGF antibody treatment obviously attenuated these effects (Fig. 6F-G and Supporting Fig. 7E).
We also supplemented the CM of NCI-H1299 shGli1 and NCI-H1703 shGli1 cells with recombinant bFGF to further verify the effect of bFGF. The wound healing, Transwell migration, Transwell invasion and tube formation assays showed that CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells obviously reduced the migration, invasion and tube formation of endothelial cells, and this loss of in vitro angiogenesis was restored when bFGF was added to the CM (Fig. 6H-K and Supporting Fig. 8A-D). Consistently, the aortic ring assay (Fig. 6L), the HBVP adhesion assay (Fig. 6M) and the coculture assay with endothelial cells (Fig. 6N and Supporting Fig. 8E) also confirmed these results. Collectively, these findings clearly suggest that bFGF is a critical downstream mediator of Gli1 in angiogenesis and blood vessel maturation.

Fig. 3 The CM from Gli1-knockdown NSCLC cells suppressed angiogenesis in vitro and ex vivo. A Schematic of the treatment of endothelial cells and aortic rings with CM from Gli1-knockdown NSCLC cells. B-E The CM from NCI-H1299 shGli1 or NCI-H1703 shGli1 cells suppressed the migration (B and C), invasion (D) and tube formation (E) abilities of HUVECs and HMEC-1 cells. F The CM from NCI-H1299 shGli1 or NCI-H1703 shGli1 cells reduced vessel sprouting in rat aortic rings. G-I The CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells inhibited HBVP adhesion (G) and recruitment to newly formed blood vessels (H and I). All the data are shown as mean ± SD, n = 3. ** p < 0.01; *** p < 0.001 vs NCI-H1299 shNC or NCI-H1703 shNC cells.

Fig. 5 bFGF was involved in the Gli1-mediated enhancement of tumor angiogenesis. A Human angiogenesis array analysis of CM from A549 NC vector or A549 Gli1 vector cells. The red boxes indicate the 8 significantly upregulated proteins in the CM of A549 Gli1 vector cells compared with that of A549 NC vector cells. B Summary of the relative signal intensities of the upregulated cytokines in the human angiogenesis array analysis. C The bFGF protein level in the CM of the four paired Gli1-overexpressing or Gli1-knockdown cell lines was detected by ELISA, n = 3. D Western blotting of bFGF expression in the four paired Gli1-overexpressing or Gli1-knockdown NSCLC cell lines, n = 3. E-F bFGF expression in 80 paired NSCLC tumor tissues and their corresponding noncancerous lung tissues was detected by IHC. Representative images are shown in (E), and the statistical data are displayed in (F). *** p < 0.001 vs adjacent tissues. G RT-qPCR determination of bFGF mRNA in 11 paired NSCLC tumor tissues and their corresponding adjacent tissues. *** p < 0.001 vs adjacent tissues. H Kaplan-Meier analysis of NSCLC patients with high and low bFGF expression. I-J bFGF expression in xenograft tumors of the two paired Gli1-overexpressing NSCLC cell lines and the two paired Gli1-knockdown NSCLC cell lines, detected by IHC. All the data are shown as mean ± SD. *** p < 0.001 vs NC vector group or shNC group.
Fig. 6 bFGF was critical for the Gli1-mediated enhancement of angiogenesis and pericyte coverage. A-D Anti-bFGF treatment abolished the stimulatory effect of CM from A549 Gli1 vector or NCI-H460 Gli1 vector cells on the migration, invasion and tube formation of endothelial cells, as indicated by wound healing (A), Transwell migration (B), Transwell invasion (C) and tube formation (D) assays. E Anti-bFGF attenuated the enhancement of vessel sprouting by CM from A549 Gli1 vector or NCI-H460 Gli1 vector cells. F, G The bFGF-neutralizing antibody weakened the Gli1-mediated enhancement of HBVP adhesion (F) and recruitment (G). H-K bFGF supplementation rescued the inhibitory effect of CM from Gli1-knockdown NSCLC cells on the migration (H and I), invasion (J) and tube formation (K) of endothelial cells. L bFGF rescued the vessel sprouting of aortic rings educated with CM of NCI-H1299 shGli1 and NCI-H1703 shGli1 cells. M-N bFGF treatment enhanced the adhesion (M) and recruitment (N) of HBVPs that were treated with CM from NCI-H1299 shGli1 or NCI-H1703 shGli1 cells. All the data are shown as mean ± SD, n = 3. *** p < 0.001 vs NC vector group or shNC group; ### p < 0.001 vs Gli1 vector group or shGli1 group.

Gli1 increases the bFGF protein level by regulating bFGF transcriptional activity and protein stability
To elucidate the link between Gli1 and the transcription of bFGF, several experiments were conducted. RT-qPCR analysis showed that Gli1 positively regulated bFGF mRNA expression: the bFGF mRNA level was dramatically upregulated in A549 Gli1 vector and NCI-H460 Gli1 vector cells and obviously reduced in the corresponding Gli1-knockdown cells (Fig. 7A). The dual-luciferase reporter assay showed that Gli1 overexpression significantly enhanced bFGF promoter-driven luciferase activity (Fig. 7B). Two potential Gli1-binding sites in the bFGF promoter region, encompassing bp −180 to −168 and bp +70 to +82 (Supporting Table 2), were identified. We mutated these two binding sites and found that the luciferase activity in the pGL3-bFGF-promoter-Mut1-Luc and pGL3-bFGF-promoter-Mut2-Luc groups was strongly decreased in comparison with that in the pGL3-bFGF-promoter-WT-Luc group (Fig. 7C). Additionally, the ChIP-PCR assay revealed that the relative enrichment of the bFGF promoter in the anti-Gli1 group was significantly greater than that in the anti-IgG group (Fig. 7D). These results suggest that Gli1 regulates bFGF transcription.

As bFGF is a highly labile protein [22], we also evaluated the effect of Gli1 on bFGF protein stability. Cells were stimulated with a nontoxic concentration of CHX for 0-6 h to block nascent protein synthesis, after which bFGF expression was analyzed via Western blotting. The results showed that Gli1 overexpression strikingly stabilized bFGF: the bFGF protein in A549 NC vector and NCI-H460 NC vector cells was rapidly degraded, decreasing to an undetectable level within 4 h, whereas bFGF was still detectable at the 6 h time point in A549 Gli1 vector and NCI-H460 Gli1 vector cells. In contrast, Gli1 knockdown accelerated this degradation process in NCI-H1299 and NCI-H1703 cells (Fig. 7E and Supporting Fig. 9). In addition, GANT-61 treatment obviously reduced bFGF expression, but treatment with MG132 (a proteasome inhibitor) attenuated this effect (Fig. 7F).
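One way to summarize CHX chase data such as those above is to fit a first-order decay to the quantified band intensities and report a half-life. Below is a minimal sketch with hypothetical densitometry values; the log-linear fit is an assumed analysis choice, not the paper's stated procedure:

```python
import numpy as np

# Hypothetical band intensities normalized to t = 0
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])            # hours of CHX exposure
signal = np.array([1.00, 0.78, 0.60, 0.35, 0.13])  # densitometry units

# First-order decay: signal = exp(-k t), so ln(signal) is linear in t
slope, intercept = np.polyfit(t, np.log(signal), 1)
k = -slope                       # decay rate constant (1/h)
print(f"k = {k:.2f} /h, t1/2 = {np.log(2) / k:.2f} h")
```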
Further ubiquitination assays demonstrated that Gli1 overexpression reduced the degradation of bFGF in A549 and NCI-H460 cells, whereas genetic inhibition of Gli1 accelerated bFGF degradation (Fig. 7G). These results indicate that Gli1 stabilizes bFGF by repressing the ubiquitin-proteasome pathway. In addition, considering that bFGF/FGFR1 and bFGF/FGFR2 play different roles in endothelial cells and pericytes, we also evaluated the effect of CM on the bFGF/FGFR1 and bFGF/FGFR2 signaling pathways. The results showed that CM from A549 Gli1 vector and NCI-H460 Gli1 vector cells significantly increased the levels of bFGF and FGFR1 in endothelial cells and upregulated bFGF and FGFR2 expression in HBVPs, whereas CM from NCI-H1299 shGli1 and NCI-H1703 shGli1 cells decreased FGFR1 and FGFR2 levels (Supporting Fig. 10). All of these results suggest that Gli1 regulates the bFGF protein level by affecting both its transcriptional activity and its degradation.

Pharmacological inhibition of Gli1 suppresses tumor angiogenesis in vitro and in vivo
We further evaluated whether GANT-61 affects tumor angiogenesis-related processes. Transwell migration and invasion assays revealed that GANT-61 significantly reduced the migration and invasion abilities of HUVECs and HMEC-1 cells in a dose-dependent manner (Fig. 8A-B and Supporting Fig. 11A-B). Consistently, we observed that GANT-61 treatment obviously decreased the number of tube-like structures in the tube formation assay (Fig. 8C and Supporting Fig. 11C) and the microvessel density surrounding the aortic rings (Fig. 8D). GANT-61 also remarkably inhibited the growth of NCI-H1299 tumors, as indicated by the tumor volume and tumor weight (Fig. 8E-F). GANT-61 administration led to a larger necrotic area, fewer Ki67-positive and CD31-positive cells, and lower Gli1 and bFGF expression (Fig. 8G-H). Notably, GANT-61 also reduced pericyte coverage and dextran leakage, as indicated by the presence of α-SMA-positive cells surrounding CD31-positive cells and by the relative fluorescence intensity in the tumor tissues (Fig. 8I-L). Taken together, these findings indicate that GANT-61 significantly suppresses tumor angiogenesis.

Discussion
Suppressing tumor angiogenesis is a promising therapeutic strategy for NSCLC, and increasing numbers of AADs have been used for NSCLC therapy. However, the emergence of AAD resistance impedes their further development [23]. Gli1 is a critical regulator of tumorigenesis and invasiveness [24,25]. Our previous study showed that the Gli1 level was increased in NSCLC tissues and that Gli1 enhanced NSCLC metastasis by regulating Snail expression [18]. As angiogenesis is necessary for tumor metastasis, we speculated that Gli1 may be involved in the angiogenesis process in NSCLC. Here, we verified that Gli1 overexpression in NSCLC cells promoted angiogenesis, and bFGF was identified as a critical regulator of the Gli1-mediated proangiogenic effect. In addition, Gli1 positively regulated the bFGF protein level by promoting bFGF transcriptional activity and reducing bFGF protein degradation (Fig. 9).
Moreover, pharmacological inhibition of Gli1 obviously suppressed tumor angiogenesis. In summary, our study provides novel mechanistic insights.

Increasing evidence has confirmed that the crosstalk between NSCLC cells and blood vascular cells is one of the most important regulators of AAD resistance, and further elucidation of the underlying molecular mechanisms could aid in the development of therapeutic strategies to overcome AAD resistance [26,27]. Gli1 is crucial for the activation of the Hh/Gli1 signaling pathway. When Gli1 is activated, it translocates to the nucleus, where it drives the expression of various target genes and the activation of downstream signaling pathways [28]. Several studies have suggested that Gli1 contributes to angiogenesis: Gli1-positive cells promote type H vessel formation during the bone defect healing process [29], and blockade of the Shh/Gli1 signaling pathway significantly suppresses tumor angiogenesis [19]. However, the role of Gli1 in mediating the crosstalk between NSCLC cells and blood vascular cells, and whether this crosstalk is involved in tumor angiogenesis, were still unclear. Our study identified Gli1 as a critical regulator of the crosstalk between NSCLC cells and vascular cells. Gli1 in NSCLC cells promoted the endothelial cell and pericyte motility required for angiogenesis by promoting bFGF expression. In addition, suppressing the Gli1-bFGF axis obviously reduced tumor angiogenesis and growth. Taken together, these findings provide evidence that the crosstalk between NSCLC cells and vascular cells is crucial for tumor angiogenesis. Our study also suggests that targeting the crosstalk between lung cancer cells and blood vascular cells might be a potential therapeutic strategy to overcome AAD resistance in NSCLC.
bFGF is one of the most important angiogenic factors in the heparin-binding FGF family of growth factors [30,31]. It exerts its proangiogenic effect by activating FGF receptors (FGFRs), including FGFR1, FGFR2, FGFR3 and FGFR4. bFGF has been demonstrated to play dual roles in angiogenesis: bFGF-FGFR1 signaling promotes endothelial cell proliferation and sprouting, while bFGF-FGFR2 signaling regulates the perivascular compartment and increases pericyte coverage, leading to the maturation of blood vessels [32,33]. In addition, the bFGF-FGFR signaling pathway contributes to resistance to anti-VEGF therapy. Intercellular FGF2 signaling between endothelial cells and pericytes has been demonstrated to be necessary for resistance to anti-VEGF therapy, and anti-FGF therapy is a potential strategy to compensate for this resistance [34,35]. In our study, we found that bFGF plays a prominent role in the communication between NSCLC cells and vascular cells: bFGF is critical for the Gli1-mediated enhancement of the endothelial cell and pericyte motility that supports angiogenesis and blood vessel maturation. Anti-bFGF antibody treatment obviously abolished the Gli1-mediated enhancement of this motility, whereas bFGF rescue experiments showed that bFGF attenuated the Gli1 silencing-mediated inhibition of these processes. Gli1 upregulated bFGF expression by promoting its transcriptional activity and suppressing its protein degradation. Taken together, the findings of our study provide strong evidence for the role of the Gli1-bFGF axis in regulating NSCLC angiogenesis and suggest that the intercellular communication axis between NSCLC cells and vascular cells is critical for tumor angiogenesis. In addition, considering the role of bFGF in resistance to anti-VEGF therapy, we speculate that targeting the Gli1-bFGF signaling pathway might be a potential therapeutic strategy for overcoming resistance to antiangiogenic and anti-VEGF therapies in NSCLC. We will further explore this possibility in future studies.

GANT-61 is an inhibitor of Gli1 and Gli2. It inhibits the proliferation of many types of cancer cells, including glioblastoma, breast cancer and pancreatic cancer cells [36-39]. GANT-61 also inhibits A549 cell proliferation [40], and our recent study showed that GANT-61 administration suppressed NSCLC metastasis [18]. However, whether GANT-61 suppresses angiogenesis in NSCLC remained unclear. Herein, we showed that GANT-61 obviously suppressed the migration, invasion and tube formation of endothelial cells. Further experiments proved that GANT-61 treatment obviously suppressed tumor growth and reduced the tumor vascular density and the pericyte coverage of blood vessels in NCI-H1299 cell xenografts. GANT-61 treatment also reduced dextran leakage in tumor tissues. These results indicate that GANT-61 is a potential antiangiogenic agent for treating NSCLC.

Fig. 9 A schema showing that Gli1-mediated tumor cell-derived bFGF promotes tumor angiogenesis by regulating bFGF expression. bFGF binds FGFR1 on endothelial cells and FGFR2 on pericytes, thereby promoting the endothelial cell- and pericyte-mediated motility required for angiogenesis and leading to tumor angiogenesis and growth in NSCLC.
Altogether, our study demonstrates that the Gli1-bFGF axis is crucial for the crosstalk between NSCLC cells and vascular cells. Our study suggests that Gli1 may be a promising therapeutic target for suppressing angiogenesis in NSCLC. Our findings also provide new mechanistic insights into the crosstalk between NSCLC cells and vascular cells.

Fig. 2 Gli1 overexpression promoted tumor angiogenesis and increased the pericyte coverage of blood vessels. A Flowchart of the investigation of the effect of Gli1 on tumor angiogenesis and pericyte coverage. B, C Gli1 overexpression in A549 cells promoted tumor growth. A549 NC vector and A549 Gli1 vector cells were subcutaneously injected to establish a xenograft model. The tumor volumes were measured every two days. At the end of the experiment, the tumors were removed, weighed, photographed and then subjected to further pathological analysis. The tumor volume curve and tumor images are presented in (B), and the tumor weight statistics are shown in (C). D, E The results of H&E staining and IHC staining of Ki67, CD31 and Gli1. Representative images are displayed in (D), and the statistical data are shown in (E). F IF assay of the pericyte coverage of blood vessels. CD31 and α-SMA antibodies were used to recognize endothelial cells and pericytes, respectively. G The results of the dextran leakage assay, n = 8. H-N Gli1 overexpression promoted tumor growth (H-J) and tumor angiogenesis (K-L) and increased the pericyte coverage of the vasculature (M) and dextran leakage (N) in NCI-H460 cell xenografts. All the data are shown as mean ± SD, n = 6. * p < 0.05; *** p < 0.001 vs A549 NC vector or NCI-H460 NC vector cells.

Fig. 4 Genetic inhibition of Gli1 attenuated tumor growth and tumor angiogenesis. A Flowchart of the protocol to evaluate whether Gli1 knockdown in NCI-H1299 or NCI-H1703 cells affects tumor angiogenesis and pericyte coverage. B, C Gli1 knockdown in NCI-H1299 cells attenuated tumor growth. The tumor volume (B) and tumor weight statistics (C) are shown. D, E Gli1 silencing in NCI-H1299 cells induced a larger necrotic area, fewer Ki67-positive cells and a reduced microvessel density in xenografts. Representative IHC images and quantitative data are shown in (D) and (E). F, G Genetic inhibition of Gli1 in NCI-H1299 cells reduced pericyte coverage and dextran leakage in xenograft tissues, n = 7. H-N Gli1 knockdown in NCI-H1703 cells suppressed tumor growth (H-J), decreased microvessel density (K and L), and reduced the pericyte coverage of blood vessels (M) and dextran leakage (N). All the data are shown as mean ± SD, n = 6. ** p < 0.01; *** p < 0.001 vs NCI-H1299 shNC or NCI-H1703 shNC cells.

Fig. 7 Gli1 regulated bFGF expression by enhancing bFGF transcriptional activity and reducing bFGF protein degradation. A Gli1 positively regulated the bFGF mRNA level. *** p < 0.001 vs NC vector group or shNC group. B Gli1 enhanced the luciferase activity of the bFGF promoter. *** p < 0.001 vs NC vector group. C The dual-luciferase assay showed that Gli1 regulated bFGF expression by binding to its promoter. ** p < 0.01, *** p < 0.001 vs P0WT group. D The result of the ChIP-PCR assay. ** p < 0.01, *** p < 0.001 vs IgG group. E The effect of Gli1 on bFGF protein stability. Cells were treated with CHX (2 µM) for 0-6 h, then collected and subjected to Western blotting. *** p < 0.001 vs NC vector or shNC group. F The effect of Gli1 on the degradation of bFGF. G The effect of Gli1 on the ubiquitination of bFGF. All the data are shown as mean ± SD, n = 3.
Fig. 8 GANT-61 suppressed tumor angiogenesis. A-C The effect of GANT-61 on the migration (A), invasion (B) and tube formation (C) abilities of endothelial cells. D The effect of GANT-61 on vessel formation measured by the aortic ring assay, n = 3. E-H The effect of GANT-61 on tumor growth and angiogenesis. When the tumor volume reached approximately 50 mm³, the tumor-bearing mice were randomized and intraperitoneally injected with vehicle or GANT-61 (25 mg/kg) every other day for a total of 24 days. At the end of the experiment, the mice were sacrificed, and the tumors were removed, weighed and photographed. The tumor volume curve and the images of the tumors are shown in (E), the tumor weight statistics are presented in (F), and the representative IHC images and the quantitative data are displayed in (G) and (H). I, J The effect of GANT-61 on the pericyte coverage of blood vessels in NCI-H1299 xenografts. K, L The effect of GANT-61 on the dextran leakage of the tumor vasculature. All the data are shown as mean ± SD, n = 6. * P < 0.05, ** P < 0.01 and *** P < 0.001 compared with the vehicle group.
Reliability of Certain Class of Transportation Networks
The reliability of a special class of technical objects, tree-like transportation networks, is considered. It is argued that the indicators traditionally used to assess reliability are not very informative for such objects and are ambiguously interpreted from a physical point of view. As a quantitative measure, an indicator of operational reliability is proposed, and an engineering method for its calculation is developed. The method exploits the features of the object, is based on an analogue of the Y-shaped structure-forming fragment of the network, and reduces to a repeating step-by-step computational procedure in which, at each step, the results obtained at the previous stage are used as initial data. Relationships are derived that extend this approach to the case when not two but, in general, n elements are joined at a junction point of the network. The advantages of the introduced indicator of the operational reliability of tree-like transportation networks are discussed, and ways of its further generalization and use are outlined.

Introduction
Transportation networks are widely used in many branches of industrial production, transport, housing and communal services, and similar fields. In general, a transportation network is understood as a set of functionally interconnected transport routes designed for the purposeful movement of a certain "product" in space. Regardless of the physical content of the concept of "product", transportation networks can be classified according to many criteria, one of which is their configuration. Here, consideration is limited to tree-like networks, a characteristic feature of which is the "collective" function, i.e., the movement of the "product" from several inputs, where it enters, to the only output, the place of delivery [1]. It is customary to represent such networks in the form of a simply connected acyclic directed graph; from the point of view of graph theory, a "tree". As with any technical facility, one of the most important qualities of a transportation network is its reliability. During operation, network elements can fail, be repaired and be recommissioned. The consequence is that, for some time T, part of the product arriving at the inputs of the network does not arrive at its output. This circumstance should be taken into account when forming the discipline of object maintenance: determining the frequency of preventive and overhaul repairs, the range and volumes of required spare parts, the timing of replacement or modernization of equipment, the staffing of maintenance personnel, and other similar measures. Thus, the study of the reliability of transportation networks is not only of theoretical but also of direct practical interest. The effectiveness of measures to improve the reliability of a transportation network is largely determined by what is meant by this term, i.e., by what is considered an indicator of the reliability of objects of this class. In the "classical" theory, the reliability of a repaired (restored) object is usually assessed by a number of formal indicators (probability of no-failure operation, mean time between failures, availability factor, etc.) [2,3], which, as a rule, are easily interpreted physically and do not contradict one another.
When it comes to the reliability of a tree-like transportation network, such indicators, while remaining "correct" from the point of view of the classical theory, turn out to be uninformative and, as a rule, require additional explanations and clarifications. This motivates the search for an indicator reflecting the constructive and functional reliability of such networks [4] and the development of methods for its engineering calculation.

Calculation method
The indicator of operational reliability is proposed as such a measure. It was originally introduced in [5] to assess the reliability of the drainage system of a large city, which is a typical example of a tree-like transportation network, and methods for its generalization and engineering calculation were developed in a number of subsequent publications [6-9]. The indicator of operational reliability is understood as the ratio of the "product volume" not delivered to the network output due to failures of its elements to the "product volume" received at the network inputs over some time (formula (1)). The calculation of the indicator is carried out using a specially developed method of decomposition and equivalence (MDE) [10]. The MDE uses a characteristic feature of transportation networks of the class under consideration: their tree structure. The network is represented as a composition of the Y-shaped fragments that form its structure. Each such structure-forming fragment is virtually replaced by one fictitious element, the reliability parameters of which are determined from additional considerations. This equivalence procedure is illustrated in Figure 1, where each network element i is characterized by a dimensionless parameter describing its "recovery after failure" and by the product flows at the inputs of the Y-shaped fragment. Applying the MDE to the network shown in Figure 1 gives the parameter of the fictitious element 123 (formula (2)) [10]. The calculation of the operational reliability of the transportation network as a whole reduces to a recurrent sequential procedure, at each step of which the structure-forming fragments of the network are extracted (decomposition) and each of them is replaced by one virtual element (equivalence). As a result, the entire network is "folded" into one element, the parameter of which, as shown in [6], is numerically equal to the value given by formula (1). The indicator is normalized and takes into account both the reliability parameters of the network elements and the technological variables, namely the product flows at all of the inputs. Worked examples of this approach can be found in [1,6].

Generalization of the methodology
In the MDE, a Y-shaped, i.e. three-component, fragment of the network is considered as the structure-forming element. Meanwhile, there are cases when not two elements but more are joined at a junction point, as in Figure 1. The dimensionless parameter of the fictitious element replacing such a fragment can no longer be computed from (2); an expression is needed that covers such cases. First, consider the fragment of the network shown in Figure 2a), where three elements (1, 2 and 3), each with its own parameter and product flow, are joined at point A. Suppose that element 3 is connected to the network not at point A but at point B, separated from A by a virtual element characterized by its own parameter (Figure 2b)).
In accordance with the MDE, we now select the Y-shaped fragment I, circled in Figure 2b) with a dashed line, and, using (2), find the parameter of the element that replaces it (formula (3)). This makes it possible to pass to the subsystem shown in Figure 2c), which is itself a Y-shaped fragment of the network. Repeating the equivalence procedure for this fragment gives formula (4). Now, to return to the original subsystem, it suffices to substitute into (4) the value corresponding to the zero length of the virtual element connecting points A and B (see Figure 2b)). This yields the dimensionless parameter of the equivalent element replacing the network fragment shown in Figure 2a) (Figure 2d); formula (5)). Applying this procedure of sequential equivalence and performing elementary transformations of the resulting expressions, for the general case when n network elements are joined at one point we obtain formula (6). Expression (6) makes it possible to apply the decomposition and equivalence method to practically any tree-like transportation network.

Conclusion
Further development of the MDE is associated with its application to non-stationary cases, including "aging" elements [11,12] and elements with seasonally changing failure rates [13-15].
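To make the step-by-step folding procedure of the MDE concrete, here is a minimal sketch. Because the explicit equivalence relations (2) and (6) are not reproduced above, the combine rule below (a flow-weighted average of the joined elements' parameters) is a hypothetical placeholder, not the paper's formula; only the tree structure and the recurrent replacement of fragments by a single fictitious element follow the description in the text:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A network element: dimensionless reliability parameter q and
    product flow v entering through this element."""
    q: float
    v: float
    children: list = field(default_factory=list)

def combine(elems):
    # Hypothetical stand-in for relations (2)/(6): a flow-weighted
    # average of the parameters of the n elements joined at one point.
    v_total = sum(e.v for e in elems)
    q_eq = sum(e.q * e.v for e in elems) / v_total
    return q_eq, v_total

def fold(e: Element) -> Element:
    """Recurrently replace each structure-forming fragment with one
    fictitious element until the network is folded into one element."""
    if not e.children:
        return e
    folded = [fold(c) for c in e.children]
    q_in, v_in = combine(folded)
    # The outgoing element's own parameter would enter through the
    # real relation (2); a simple average stands in for it here.
    return Element(q=(e.q + q_in) / 2.0, v=v_in)

# Two inputs joined at a junction that feeds the output element
root = Element(q=0.02, v=0.0,
               children=[Element(q=0.05, v=3.0), Element(q=0.01, v=7.0)])
print(fold(root).q)  # parameter of the single equivalent element
```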
High-Order Hybrid WCNS-CPR Scheme for Shock Capturing of Conservation Laws
By introducing a hybrid technique into the high-order CPR (correction procedure via reconstruction) scheme, a novel hybrid WCNS-CPR scheme is developed for efficient supersonic simulations. Firstly, a shock detector based on nonlinear weights is used to identify grid cells with high gradients or discontinuities throughout the whole flow field. Then, WCNS (weighted compact nonlinear scheme) is adopted to capture shocks in these areas, while the smooth area is calculated by CPR. A strategy to treat the interfaces of the two schemes is developed, which maintains high-order accuracy. The convergent order of accuracy and the shock-capturing ability are tested in several numerical experiments, the results of which show that this hybrid scheme achieves the expected high-order accuracy and high resolution, is robust in shock capturing, and has a lower computational cost than the WCNS.

Introduction
In computational fluid dynamics (CFD), numerical schemes greatly influence the computational accuracy, efficiency, and robustness. Despite the extra complexity of high-order schemes [1], they can provide more accurate results in delicate simulations compared with traditional second-order schemes. With the increasing demand for delicate simulations, such as large-eddy simulation (LES) of turbulent flows, computational aeroacoustics (CAA), and shock-induced separation flows, high-order schemes have attracted remarkable research attention in recent years [1-6]. Among these high-order schemes, the CPR scheme has several good properties: it is compact, efficient, and applicable to complex unstructured meshes [1,6,7]. It was first proposed by Huynh [8] and was generalized to unstructured meshes by Wang and Gao [9], and it is now widely applied in scientific research and engineering predictions [6,10-12]. With specific correction functions, the CPR methods are equivalent to specific discontinuous Galerkin (DG) methods, while their computational cost is relatively lower because they are based on a differential formulation and avoid expensive integration calculations [8]. However, the shock-capturing techniques of high-order finite element methods, including CPR and DG methods, still cannot meet the needs of many simulations [1,13]. Shu et al. [14] employed a WENO nonlinear limiter in the CPR scheme, which can maintain high-order accuracy in smooth regions and control spurious numerical oscillations near discontinuities. A parameter-free gradient-based limiter for the CPR scheme on mixed unstructured meshes was developed by Lu et al. [15]. By proposing a p-weighted procedure, new smoothness indicators were developed for the DG methods, which obviously improve the shock-capturing ability [16]. However, obvious oscillations can still be observed near discontinuities. Artificial viscosity methods with subcell shock capturing and cross-cell smoothness have been proposed for shock capturing [17,18]; however, they depend on user-defined parameters. Great progress has been made; nevertheless, the compact polynomial approximations of the solution in a high-order finite element cell are by design not a good approximation of discontinuities, which easily leads to spurious numerical oscillations near them.
A way around this problem is to develop a hybrid scheme of high-order finite element methods (CPR or DG) and finite difference (FD) or finite volume (FV) methods, which provide better shock-capturing abilities locally near shocks. In this way, high-order finite element methods are adopted in smooth regions, which maintains compactness and high resolution, while FD or FV schemes provide robust shock-capturing abilities. Dumbser et al. [19] developed hybrid DG-FV methods and utilized second-order FV to capture shocks. Cheng et al. developed multidomain hybrid RKDG and WENO methods [20,21] with good geometric flexibility and low computational costs. These methods show good shock-capturing ability because they avoid using high-order finite element methods to capture shocks directly and instead utilize schemes with better shock-capturing abilities.

Here, a hybrid shock-capturing scheme is developed based on the zonal hybrid WCNS-CPR scheme proposed in [7]. In [7], some fundamental problems of the WCNS-CPR schemes, such as spatial accuracy and geometric conservation laws, were studied, which makes the scheme efficient and applicable to complex curved meshes. In this paper, a shock detector and a nonlinear weighted interpolation are introduced into the hybrid WCNS-CPR scheme for simulating problems with discontinuities. The nonlinear weighted approach of WCNS is introduced into the hybrid scheme for both shock detection and shock capturing. Considering the robust shock-capturing ability of WCNS, we intend to develop a hybrid shock-capturing scheme that combines the merits of the two schemes. Different from the zonal hybrid strategy in [7], the two schemes are switched dynamically based on the smoothness of the local flow field. The main contributions of this paper are as follows: (1) nonlinear mechanisms, including shock detection and shock capturing, are introduced into the hybrid scheme, and a hybrid scheme with good shock-capturing ability is developed; (2) a hybrid WCNS-CPR scheme with a dynamic switching strategy between the two schemes is proposed, in which the strategy of dynamic scheme switching and the interface treatment methods are addressed; (3) numerical experiments are conducted to show the good properties of the hybrid WCNS-CPR scheme in accuracy, efficiency, and shock capturing.

The rest of this paper is organized as follows. In Section 2, the high-order WCNS and CPR schemes are briefly reviewed. The hybrid WCNS-CPR scheme is constructed in Section 3, where the interface treatments are discussed in detail. Section 4 presents several numerical tests to show the performance of the proposed hybrid method. Finally, concluding remarks are given in Section 5.

Brief Review of WCNS and CPR
In this section, a brief review of WCNS and CPR is presented based on the two-dimensional hyperbolic conservation laws, where U is a conservative variable vector and F and G are the flux vectors.

2.1. Brief Review of Third-Order WCNS. Equation (1) is discretized by a finite difference scheme at every node (x_i, y_j), where E′_{i,j} is the approximation of the spatial derivatives. Fourth-order flux difference operators are used, where h is the grid spacing. Here, F̂_{i±1/2,j} denote the Riemann fluxes at the cell edges, as shown in Figure 1. Riemann solvers can be used to compute the numerical fluxes, such as Lax-Friedrichs, Roe, Osher, AUSM, HLL, and their modifications; we refer to the papers [3,4,22] and references therein.
To capture discontinuities, the conservative variables at the cell edges, U^R_{i-1/2,j}, are obtained by the weighted nonlinear interpolation proposed in [23]. The third-order WCNS interpolation has the following steps:

Step 1. Divide the third-order three-point stencil into two two-point substencils S^(m)_i, m = 1, 2.

Step 2. Construct an interpolation polynomial p^(m)(x) in each substencil S^(m)_i, m = 1, 2, which satisfies p^(m)(x_i) = U_{i,j}.

Step 3. Calculate the nonlinear weights ω_k based on the linear weights d_m and a smoothness indicator IS_k for each substencil S^(m)_i, m = 1, 2. The improved Z-type nonlinear weights [24,25] use a small number ε = 10⁻⁶, the smoothness indicators IS_k, τ₅ = |IS₀ − IS₁|, and the parameters C_q and p_z. We use p_z = 2 and C_q = 4 to improve the performance of the nonlinear weights, and an improved smoothness indicator [25] is employed. The formulation of U^L_{i-1/2,j} is symmetric to that of U^R_{i-1/2,j} about i.

2.2. Review of Third-Order CPR. In the third-order CPR framework, there are 3² solution points (i, j, l, m), l = 1, 2, 3, m = 1, 2, 3, in the (i, j)-th cell, as shown in Figure 2. The nodal values of the conservative variable U at the solution points are updated by a differential equation in which the reconstructed fluxes F̃_{i,j}(ξ, η) and G̃_{i,j}(ξ, η) are expressed through the flux correction polynomials σ¹_{i,j}(ξ, η) and σ²_{i,j}(ξ, η). Here, the correction functions g_L(ξ), g_R(ξ), g_L(η), and g_R(η) are all cubic polynomials, and F̂ and Ĝ are the Riemann fluxes corresponding to F and G, respectively. Moreover, the solution polynomial is built from L_l(ξ) and L_m(η), the 1D Lagrange polynomials in the ξ and η directions, and the flux polynomial is constructed analogously.

Remark 1. In this paper, the solution points are the Gauss points, which are positioned at ξ₁ = −√15/5, ξ₂ = 0, ξ₃ = √15/5 for K = 3. The correction function g_DG of the DG scheme is used.

Hybrid Schemes Based on WCNS and CPR
3.1. Strategy of Scheme Switching. The hybrid scheme adopts the nonlinear WCNS with shock-capturing ability in nonsmooth regions and the CPR scheme with high computational efficiency in smooth regions. Thus, a shock detector is needed to judge the location of shocks. Recently, many shock detectors with good properties have been developed [26-28]. In this paper, we use the NI (nonlinear indicator) [28] to detect which cells probably contain shocks and mark them as troubled cells. In the NI shock detector, r + 1 represents the number of candidate stencils in WCNS, with r = 1 for the third-order WCNS. To ensure that discontinuities do not move out of the troubled cells during the time integration, buffer cells are introduced around the troubled cells, which are also calculated using WCNS. As shown in Figure 3, the direct neighbors of troubled cells are defined as buffer cells, and the other cells are called smooth cells. In summary, the strategy of scheme switching is as follows. When the NI shock detector detects shocks, the cells near the shocks (namely, the troubled cells and buffer cells) are switched from the CPR scheme to WCNS; when the shocks leave these cells, they are calculated by the CPR scheme again. That is to say, the shock detector distinguishes three types of cells: WCNS is used in the troubled cells and buffer cells, and the CPR scheme is used in the other cells.
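Because the weight and indicator formulas are not reproduced in full above, the following is a minimal sketch of a third-order Z-type weighting and of threshold-based troubled/buffer-cell marking. The Jiang-Shu-style smoothness indicators, the form α_k = d_k (1 + C_q (τ/(IS_k + ε))^p_z), the linear weights, and the use of max_k |ω_k − d_k| as the nonlinear indicator are assumptions for illustration, not necessarily the exact formulas of [24,25,28]:

```python
import numpy as np

EPS, P_Z, C_Q = 1e-6, 2, 4           # parameters quoted in the text
D = np.array([0.25, 0.75])           # assumed linear weights d_1, d_2

def z_weights(u_m1, u_0, u_p1):
    """Z-type nonlinear weights on the two two-point substencils."""
    IS = np.array([(u_0 - u_m1) ** 2, (u_p1 - u_0) ** 2])  # assumed indicators
    tau = abs(IS[0] - IS[1])
    alpha = D * (1.0 + C_Q * (tau / (IS + EPS)) ** P_Z)    # assumed form
    return alpha / alpha.sum()

def mark_cells(u, delta_ni):
    """Mark troubled cells where the weights deviate from the linear ones,
    then flag their direct neighbors as buffer cells."""
    n = len(u)
    troubled = np.zeros(n, dtype=bool)
    for i in range(1, n - 1):
        ni = np.max(np.abs(z_weights(u[i - 1], u[i], u[i + 1]) - D))
        troubled[i] = ni > delta_ni
    buffer_ = (np.roll(troubled, 1) | np.roll(troubled, -1)) & ~troubled
    return troubled, buffer_

u = np.where(np.linspace(0.0, 1.0, 50) < 0.5, 1.0, 0.125)  # step profile
troubled, buf = mark_cells(u, delta_ni=0.2)
print(np.flatnonzero(troubled), np.flatnonzero(buf))
```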
3.2. Interface Treatment. The interface treatment influences the accuracy and stability of the overall scheme and should be discussed carefully. The main difficulty is at the WCNS-CPR interface. We denote the solution points of the CPR scheme by c and the solution points of WCNS by w.

3.2.1. 1D Interface Treatment. First, WCNS in the (i+1)-th cell needs the information of the ghost points {g_{i,k}, k = 1, 2, 3} from the neighboring i-th CPR cell, as shown in Figure 4. The solution values at the ghost points of the WCNS cell are calculated via CPR interpolation using the data at {c_{i,k}, k = 1, 2, 3}. Second, we apply u^{-(CPR)}_{i+1/2} provided by the i-th CPR cell and u^{+(WCNS)}_{i+1/2} computed in the (i+1)-th WCNS cell with the values of the ghost points to calculate the numerical flux at the interface.

In addition, the information exchange of the switching strategy at different time steps should also be considered. When the i-th cell is a smooth cell at the n-th time step and is judged to be a buffer cell or a troubled cell at the next time step, the information at the solution points of WCNS can be obtained by Lagrange interpolation from the CPR data. Now consider the opposite situation: the cell is a buffer cell or a troubled cell at the n-th time step and is switched to a smooth cell at the next time step, so information must be transmitted from WCNS to CPR. This procedure involves nonlinear interpolation across cells. We take the (i, 1)-th solution point as an example.

Step 1. Divide the stencil S = {w_{i-1,3}, w_{i,1}, w_{i,2}} into two substencils S₁ and S₂.

Step 2. Construct interpolation polynomials u₁(c_{i,1}) and u₂(c_{i,1}) in S₁ and S₂, respectively, where l₁ and l₂ are linear Lagrange polynomials.

Step 3. Calculate the nonlinear weights based on the linear weights d₁ and d₂. The linear weights are obtained by solving a small linear system, and the nonlinear weights are then calculated by equation (6).

2D Interface Treatment. In this subsection, we consider the interface treatment between the (i, j)-th CPR cell and the (i+1, j)-th WCNS cell in the 2D computational domain in Figure 5(a). The solution values at the ghost points of WCNS in the CPR cell are calculated via CPR interpolation using the data at {c_{i,j,i',j'}, i' = 1, 2, 3; j' = 1, 2, 3}. Using a dimension-by-dimension interpolation strategy, we perform the Lagrange interpolation in the x direction first, as shown in Figure 5(b), and then in the y direction, as shown in Figure 5(c). Using the solution polynomial of CPR, we obtain the values at the ghost points {g_{i,j,i₁,j₁}, i₁ = 1, 2, 3; j₁ = 1, 2, 3}. Then, we calculate the numerical fluxes at the cell interface, as shown in Figure 6: the numerical flux at the flux points of the CPR cell and the numerical flux at the flux points of WCNS are evaluated with the interface states from both sides. The 2D information exchange of the switching strategy at different time steps is similar to the 1D situation: nonlinear interpolation is used to pass information from WCNS to CPR, while linear interpolation is used from CPR to WCNS. The details are not presented here for brevity.
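As an illustration of the linear CPR-to-WCNS transfer just described, the sketch below evaluates the quadratic Lagrange interpolant built on the three Gauss points ξ ∈ {−√15/5, 0, √15/5} at arbitrary locations in the reference cell. The nodal values and the ghost-point coordinates in the example are hypothetical; only the Gauss-point positions come from the text:

```python
import numpy as np

XI = np.array([-np.sqrt(15.0) / 5.0, 0.0, np.sqrt(15.0) / 5.0])  # Gauss points

def lagrange_eval(values, x):
    """Evaluate the quadratic Lagrange interpolant through (XI, values) at x."""
    total = 0.0
    for l in range(3):
        basis = 1.0
        for m in range(3):
            if m != l:
                basis *= (x - XI[m]) / (XI[l] - XI[m])
        total += values[l] * basis
    return total

u_cpr = np.array([0.9, 1.0, 1.2])        # hypothetical CPR nodal values
for xg in (-2.0 / 3.0, 0.0, 2.0 / 3.0):  # hypothetical ghost-point locations
    print(f"u({xg:+.3f}) = {lagrange_eval(u_cpr, xg):.4f}")
```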
Numerical Investigation
In this section, we test the spatial accuracy, shock-capturing performance, and computational efficiency of the hybrid WCNS-CPR scheme; comparisons with the WCNS are also presented. We first use the 1D linear wave equation to compare the numerical errors of the single schemes and of the hybrid scheme, and 1D shock-tube problems are solved to demonstrate the shock-capturing ability of the hybrid scheme. Then, the 2D isentropic vortex problem for the Euler equations is solved to test the spatial accuracy and computational efficiency of WCNS and CPR. Finally, a 2D Riemann problem and the double Mach reflection problem are simulated to test the shock-capturing performance of the hybrid scheme, with a comparison of computational efficiency for the double Mach reflection problem. In this paper, the Riemann fluxes are computed by the Lax-Friedrichs solver, and the explicit third-order TVD Runge-Kutta scheme is used for time integration. The L∞ error and the L₂ error are computed to measure the local and the global solution quality, respectively. The L₂ error is estimated by L₂ = sqrt((1/N) Σᵢ₌₁ᴺ (Uʰᵢ − U(rᵢ))²), where Uʰᵢ and U(rᵢ) are the numerical solution and the exact solution at the i-th solution point, and N is the total number of solution points.

1D Linear Wave Equation. Consider the 1D linear wave equation with a smooth initial condition and periodic boundary conditions. L = 6 and a time step ΔT = 0.0001 are used to solve the problem until T = 3.0 with the different schemes. First, we compare the numerical errors of WCNS and CPR on a uniform grid. Table 1 shows that both WCNS and CPR achieve the desired third-order accuracy; in addition, CPR has a smaller error than WCNS under the same number of degrees of freedom (DOFs). Second, the performance of the hybrid WCNS-CPR scheme is studied. To study the convergence accuracy of the hybrid scheme, a very small δ_NI = 2.0 × 10⁻⁸ is used so that the purely CPR scheme is not selected everywhere in this smooth problem. As shown in Figures 7(c)-7(e), the gray line with light blue dots represents the value of NI at each solution point, and the dark blue line gives the cell indicator, i.e., the scheme used in each cell: an indicator of 1 means WCNS is employed, while an indicator of 0 means the cell is smooth and is calculated by CPR. As the number of DOFs grows, the percentage of WCNS cells decreases; when the number of DOFs reaches 480, the solution is smooth enough that CPR is used in all cells. The numerical errors in Table 1 show that the hybrid scheme achieves the expected third-order accuracy.

1D Shock-Tube Problems. Consider the 1D Euler equations, where ρ is the density, v is the velocity, E is the total energy, and p is the pressure, which is related to the total energy by E = p/(γ − 1) + ρv²/2 with γ = 1.4. Three shock-tube problems are considered in this subsection.

Shock-Tube Problem of Sod. Consider the Sod problem with the initial conditions [29] (ρ, u, p) = (1, 0, 1) for −5 ≤ x < 0 and (ρ, u, p) = (0.125, 0, 0.1) for 0 ≤ x ≤ 5, and Dirichlet boundary conditions. A time step ΔT = 0.0001, DOFs = 210, and δ_NI = 0.2 are used to solve the problem until T = 3.0 with the hybrid WCNS-CPR scheme. The reference solution is obtained by the WCNS3 scheme with DOFs = 2100. As shown in Figure 8, the shock and the contact discontinuity are well captured. Near the contact discontinuity (x ≈ 3) and the shock (x ≈ 5), no obvious oscillation is observed, which shows the good shock-capturing ability of the hybrid scheme. In addition, the CPR scheme is adopted in the main part of the computational domain, which helps save computational cost, since the linear CPR scheme is more efficient than the nonlinear WCNS scheme, as will be shown in Section 4.3.1. For the next shock-tube case, the reference solution is obtained by the WCNS3 scheme with DOFs = 9000. As shown in Figure 9, CPR is employed in the smooth area, and the hybrid scheme captures the shocks and discontinuities well. The numerical result for the density obtained by the hybrid scheme is in good agreement with the reference, which illustrates the high-resolution property of the hybrid scheme.
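The error norms defined above are straightforward to compute. Here is a minimal sketch of the discrete L∞ and L₂ errors; the sine-wave exact solution and the synthetic error are illustrative choices only:

```python
import numpy as np

def error_norms(u_num, u_exact):
    """Discrete norms over all N solution points:
    L_inf = max |e_i|,  L2 = sqrt((1/N) * sum e_i**2)."""
    e = u_num - u_exact
    return np.max(np.abs(e)), np.sqrt(np.mean(e ** 2))

x = np.linspace(0.0, 6.0, 240, endpoint=False)
u_exact = np.sin(2.0 * np.pi * x / 6.0)
u_num = u_exact + 1e-4 * np.sin(10.0 * x)   # hypothetical numerical error
linf, l2 = error_norms(u_num, u_exact)
print(f"L_inf = {linf:.3e}, L2 = {l2:.3e}")
```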
For an ideal gas, γ = 1.4. Here, ρ, u, v, p, and E are the density, the x-direction velocity, the y-direction velocity, the pressure, and the total energy, respectively.

2D Isentropic Vortex Problem. In this subsection, the hybrid WCNS-CPR scheme is used to solve the isentropic vortex problem [32]. The initial condition is a mean flow, {ρ, u, v, p} = {1, 1, 1, 1}, with isentropic vortex perturbations added to the mean flow, where r = √(x² + y²) and the vortex strength is ε = 5. The computational domain is [−10, 10] × [−10, 10] with periodic boundary conditions. The problem is solved until T = 1 with a time step ΔT = 0.0001 for the accuracy test, and the parameter is δ_NI = 0.001 in this test. Figure 11 shows the distribution of WCNS cells and CPR cells, NI in the x direction (NI in the y direction is centrally symmetric with it), and the error distribution. When the DOFs are 120 and 240, the central part of the computational domain (shown in red) is calculated by WCNS; when DOFs = 480, all cells are calculated by CPR. This is because, as the number of DOFs grows, NI becomes smaller in this smooth problem; when the parameter δ_NI is lower than the smallest NI in the computational domain, the whole domain is judged to be smooth and only CPR is employed. From Table 2, we can see that the numerical errors of CPR alone are smaller than those of WCNS under the same DOFs. In addition, the WCNS scheme achieves third-order accuracy, while the order of accuracy of CPR is about 2.5; the hybrid WCNS-CPR scheme achieves third-order accuracy. From Table 3, we can see that the computational cost of WCNS with characteristic projection is more than twice that of CPR under the same DOFs. Considering that the errors of CPR are smaller than those of WCNS, as shown in Table 2, the efficiency of CPR under the same error is even higher than that of WCNS.

2D Riemann Problem. Compared with the reference solution in Figure 12, the results of the hybrid scheme in Figure 13(a) capture the flow field accurately with no obvious oscillation. In addition, small-scale structures are better captured by the hybrid scheme. Figure 13(b) shows that most of the computational domain is calculated by CPR, which contributes to the high efficiency of the hybrid scheme.

Double Mach Reflection Problem. This test was first described by Woodward and Colella [33] and is aimed at testing the robustness of high-resolution schemes. A Mach 10 oblique shock is initially set up at x = 1/6 on the lower boundary, and its direction is 60° from the x-axis. Inflow and slip-wall boundary conditions are specified at the bottom boundary for x ∈ [0, 1/6] and x > 1/6, respectively. For the upper boundary (y = 1), a time-dependent boundary condition based on the analytical propagation speed of the oblique shock is imposed. The simulation is run until T = 0.2 using a time step ΔT = 0.0001. As shown in Figure 14, the cells near the shocks are calculated by WCNS, while the remaining area is accurately calculated by the CPR method. Moreover, the total number of cells using CPR is much larger than that using WCNS; as a result, the computational efficiency of the hybrid scheme is higher than that of WCNS, as shown in Table 4. For DOFs = 600 × 150 and δ_NI = 0.9, the hybrid scheme is slightly more efficient than WCNS, while for DOFs = 1200 × 300 and δ_NI = 0.9, the hybrid scheme is about 1.6 times more efficient than the WCNS scheme.
Concluding Remarks

In this paper, a new hybrid WCNS-CPR scheme for simulating conservation laws with discontinuities is proposed by introducing the nonlinear weighted approach of WCNS into the linear hybrid scheme. A shock detector based on nonlinear weights is used to detect nonsmooth troubled cells and buffer cells. In these cells, WCNS is employed in order to capture shocks, while in smooth cells CPR is used to maintain high computational efficiency. By introducing buffer cells, the scheme interfaces are by design placed in relatively smooth regions, and a linear interpolation method is used for CPR to provide information to the WCNS scheme. Accuracy tests, shock-capturing tests, and computational efficiency tests are conducted. The accuracy tests in 1D and 2D show that the hybrid scheme achieves the designed high-order accuracy. The shock-capturing tests illustrate that the hybrid scheme captures discontinuities without obvious oscillations. The computational efficiency tests show that the CPU cost of the hybrid WCNS-CPR scheme is lower than that of WCNS. In the future, the hybrid scheme will be generalized to three-dimensional cases and to the Navier-Stokes equations.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
5,216.4
2020-10-14T00:00:00.000
[ "Computer Science" ]
PITCH IN ESOPHAGEAL SPEECH

Most reports on pitch in esophageal speech emphasize that it is low-pitched, with a measured fundamental frequency rarely higher than 100 cps. Our investigations show, however, that much esophageal 'phonation' lacks periodicity and, therefore, a fundamental frequency (i.e. pitch in the accepted sense). An auditory impression of pitch modulation can, nevertheless, be created by physical properties other than a varying harmonic structure. Our sample includes a rare case of truly high-pitched esophageal phonation with a fundamental frequency whose upper limit is an octave higher than the highest reported in the literature. High-pitched phonation apparently requires a vibratory source in a 'mode' different from that of low-pitched phonation and should therefore be distinguished from it in discussing pitch in esophageal voice.

While the current literature provides a fair consensus as to the limitations of pitch in esophageal speech, certain anomalies and contradictions are nevertheless apparent. Our evidence suggests that these arise from a failure to recognize that a measurable fundamental frequency (the physical correlate of pitch perception in normal voice) is often lacking, or only intermittently present, in the typical 'low-pitched' esophageal 'voice'. It is nevertheless necessary to account for the weak auditory impression of pitch variation in esophageal voice even without a measurable fundamental frequency. In distinguishing between pitchless 'phonation' and the esophageal voice with sustained, true pitch, we find in the data offered in the literature (in particular, Kytta 7) and in our own sample evidence of an upper limit to the pitch range in esophageal voice at least an octave higher than the highest frequency previously reported (see below). Such high pitch is only achievable by a small number of laryngectomees who have both a high-pitch and a low-pitch 'mode'. The suggestion that there are two apparently disjoint modes in esophageal phonation leads us to investigate the possibility of two correspondingly different states of the cricopharyngeus muscle in vibration.

METHODOLOGY

The Kay Sonagraph (Model 6061-B) was used to analyse the physical properties of esophageal voices in the manner described in Kerr and Lanham 6. As indicated there, narrow-band spectrograms show harmonic structure in narrow lines on the horizontal axis (see, for example, spectrogram C2, narrow, 12-18 in this paper); wide-band spectrograms show each release burst in a sequence of vibratory cycles by vertical lines whose regularity and timing can be measured on the horizontal axis (± 12.33 cm = 1 second). Fundamental frequency is measured as the interval between successive lower harmonics, using frequency-scale magnification for greater accuracy (the possible error is in the region of ± 5 Hz). All of our spectrograms are cuts from the flow of speech in normal conversation or, in one case, from an attempt at singing. Three of our cases are presented spectrographically here. One of them (Case C) provided still radiographs drawn from an extensive still and cineradiographic investigation of the site and mechanism of this case's two voices, 'low' and 'high-pitched' phonation. These will be found in Kerr and Lanham 6 (p. 100).
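The measurement described above relies on detecting periodicity (regularly repeated vibratory cycles) and reading the fundamental from the spacing of the lower harmonics. As a loose modern analogue, and not the Sonagraph procedure used in this study, the following Python sketch estimates a fundamental frequency from a digitized voiced segment by autocorrelation and flags aperiodic ('pitchless') stretches; the sample rate, thresholds, and function names are assumptions for illustration.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=450.0, periodicity_threshold=0.5):
    """Estimate the fundamental frequency of a short voiced segment by autocorrelation.

    Returns (f0_in_hz, periodicity), where periodicity is the normalised
    autocorrelation peak; if it falls below the threshold the segment is
    treated as aperiodic (pitchless) and None is returned for f0.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0] + 1e-12                      # normalise so that lag 0 == 1
    lag_min = int(sample_rate / fmax)        # shortest period of interest
    lag_max = int(sample_rate / fmin)        # longest period of interest
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    periodicity = float(ac[lag])
    f0 = sample_rate / lag if periodicity >= periodicity_threshold else None
    return f0, periodicity

# Example with a synthetic 220 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(0, 0.05, 1.0 / sr)
tone = np.sin(2 * np.pi * 220 * t)
print(estimate_f0(tone, sr))   # roughly 220 Hz with high periodicity
```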
Case A (female, aged 61) underwent laryngectomy in 1953, when more than the usual segment of trachea was removed. She received speech therapy over a period of 6 months, but no great improvement was recorded. Case B (female, aged 61) underwent laryngectomy in 1950; the infrahyoid muscles were preserved and overlapped to strengthen the pharyngeal repair. She received speech therapy shortly after the operation and was greatly helped by it. Case C (female, aged 62) underwent laryngectomy in 1951 with surgery similar to that of Case B. She received speech therapy after the operation, but believes that it was largely due to her own efforts that she 'found her voice'.

Reports on pitch in esophageal speech give measured fundamental frequencies and pitch ranges which are low in comparison with normal laryngeal speech (a full octave lower according to Snidecor and Curry 10), but the extent of the pitch range may actually be greater than that of normal speakers: a difference of 13.21 tones against 10.5 tones according to these authors. Obvious anomalies do, however, arise in reports of this kind. After reporting the abnormally wide frequency range of superior esophageal speakers in their sample, Curry and Snidecor 1 continue: '... the esophageal speakers were nonetheless considered to have a restricted pitch range ...' and 'The frequency measured indicated a considerably greater movement than was apparent in pitch to the listener.' In citing a statement by Van den Berg, 'The speech of a clever patient sometimes gives the illusion of agreeable changes of pitch which objectively are not present', these authors do not fully explore the obvious contradiction. Kytta 7 states: 'The auditory observation group did not in fact notice any appreciable frequency variation ... However, measurement of the fundamental frequency showed an unexpectedly large variation, 3-4 tones...'.

Anomaly and contradiction are, however, partly explained by the conclusions to which our studies have led us: (a) Much 'low-pitched' esophageal phonation is actually pitchless, but does have measurable rates of vibration without periodicity (i.e. repeated cycles), and it is these aperiodic vibrations which are often measured. (b) The voices of many speakers have pitchless stretches interspersed with stretches in which periodicity is achieved and lower harmonics are discernible. (c) An auditory impression of pitch modulation can be achieved by varying prosodic properties other than a harmonic pattern. (d) The most effective pitch modulation is achieved by a small number of laryngectomees who can produce 'high-pitched' phonation with pitch ranges considerably higher than those reported in the literature.
In this article the main focus is on high-pitched phonation, which has properties sufficiently distinct to warrant separation from the common low-pitched esophageal phonation. These properties apparently correspond to a different state or mode of the vibratory source. The major differences are found in the pitch range and in a sustained harmonic structure with most of the energy concentrated in harmonics rather than in concomitant aperiodic components. Spectrographic evidence of high-pitched phonation is found in the spectrograms below: C2 (narrow) and, less distinctly, C1 (narrow). These demonstrate a clear harmonic structure up to 2 kHz, a measurable fundamental varying between 215 and 317 cps and, therefore, pitch in the accepted sense of the term. (To our knowledge, the highest reported frequencies in esophageal speech are an estimate of 185 cps by Damste 3 and a measured 135.5 cps by Curry and Snidecor 1.)

Our data on low-pitched esophageal voices show that the majority, including some highly intelligible ones, either totally lack a measurable fundamental frequency or intermittently, even spasmodically, present short stretches of a syllable or less with a few lower harmonics and considerable aperiodic energy between the discrete harmonic energies. A minority are capable of sustaining a discernible harmonic structure over several syllables, and all these vibrations occur at rates which are within the 'pitch ranges' reported for esophageal speech, e.g. 32 to 72 in Kytta 7, 17 to 135 in Curry and Snidecor 1, 50 to 100 in Van den Berg and Moolenaar-Bijl 13. The output of these slow rates of vibration of the pseudoglottis is, therefore, aperiodic more often than periodic, but auditory impressions of pitch variation in esophageal speech are not necessarily dependent on periodicity; our evidence suggests that differences in intensity, duration, etc., can compensate in a limited way in giving an impression of pitch modulation (see our discussion below on Case B's attempt at singing). Plomp 8 reports that in normal laryngeal speech pitch is determined by the lower harmonics; Fry 5 states that it is the fundamental frequency which determines pitch, but that the ear interprets the interval between lower harmonics as the pitch if there is no energy present in the first harmonic. As true pitch is a potential property of esophageal speech (particularly when it is 'high-pitched'), we propose that esophageal phonation be recognized as pitchless where there is no evidence of lower harmonics and a fundamental frequency, or where these are only spasmodically present over very short stretches.

Characteristics of low-pitched esophageal speech

In attempting to characterise low-pitched esophageal phonation we use Kytta's data and our own. Kytta's discussion reveals the basis for discrepancies between his interpretation of pitch properties and ours: in the spectrogram in Fig. 3, p. 31, labelled 'Spectrographic analysis of the fundamental frequency of the fundamental', we see a vibratory pattern producing releases which are bursts of noise irregular in time and amplitude, and varying in complexity. Our spectrogram B1 (narrow) shows a very similar pattern, which we recognize as common in low-pitched esophageal speech (all Kytta's spectrograms on pp. 57-63 are of the same type). But
Fig. 3 and our spectrogram B1 (narrow) above both show a vibratory pattern lacking periodicity and devoid of harmonic structure and of a measurable fundamental; they do, however, show measurable rates of vibration, and it is apparently this which Kytta measures in stating the limits of the pitch range of his subjects. Vibratory rates in this mode can be considerably faster (varying around 92 at B1 1-2), but do not necessarily acquire periodicity, nor a higher pitch because of this. Fast vibration in this mode (up to roughly 110 in our data) tends to lose release bursts in continuous turbulence (see B2, wide and narrow, 1-3 and Kytta's HIIPI 0-0.25 on p. 63). In some, possibly a minority of, esophageal speakers phonating in this mode, discrete lower harmonics (usually below 1 kHz) do indeed emerge intermittently over short stretches, often shorter than a syllable, but a good deal of energy in random aperiodic components partly obscures them. (In our data these intermittent harmonics occur in the range of 60 to 130 cps.) In respect of the quantity of periodicity in low-pitched esophageal phonation yielding a measurable first harmonic, we find it difficult to accept Curry and Snidecor's 2 finding that 59.5% out of 61.4% total phonation is periodic.

Characteristics of high-pitched esophageal phonation

High-pitched esophageal phonation is characterized by a sustained harmonic structure over more than one syllable; a high concentration of energy in the lower harmonics as distinct from accompanying noise; and the pitch range, the lowest limit of which is probably in the region of 150 cps and the highest close to 400 cps (the latter frequency is seen in spectrogram DVI in Kerr and Lanham 6, p. 92). In addition to our Case C, Kytta provides spectrographic evidence of high-pitched esophageal phonation similar to Case C (but lacking the vibrato in the voice of the latter) on p. 50 (the utterance VAARA) and p. 54 (TLODCO). Fundamental frequency in the former is roughly 170 cps at 0.25 and 162 cps at the peak of the first syllable in the latter.

AN ANALYSIS OF THE THREE VOICES

In briefly reviewing our three cases presented here we note that radiography locates the vibratory source at the cricopharyngeus for all. (The postulation of an upper pharyngeal vibrator for Case C's high-pitched phonation, made in Kerr and Lanham 6, has been refuted by recent cineradiography.) The voices of Cases A and B are confined to low-pitched phonation, with no evidence of discernible harmonics in the voice of the former and virtually none in the latter. Case A's voice is croaky, low in power and intelligibility, and gives no auditory impression of pitch modulation. There is believed to be some loss of muscular function in the inferior pharyngeal constrictor. Spectrogram A1 shows very slow, highly irregular rates of vibration; at A1 1-3 there are 9 vibrations over some 17 csecs at rates ranging between 30 and 61. Case B, on the other hand, has a highly efficient voice of considerable power and high intelligibility, although unpleasantly harsh. Vibrations are faster and more regular: a variation of ± 11 around a mean rate of 92 at B1 1-2 and ± 2 around a mean rate of 45 at B1 11-12.
Case B's voice is capable of weak impressions of pitch modulation, and the sung and spoken versions of the same utterance are shown in spectrograms B2 and B1, respectively, as evidence of the physical properties of 'pitch' in aperiodic low-pitched esophageal speech. The sung version differs in the following respects: (a) Where the highest pitch is attempted (make at B2 5-8) the syllable duration is more than doubled. (b) The rate of vibration increases by nearly 40% at B2 5-8 and approximately 30% at B2 11 (the vowel nucleus of her). (c) The intensity of the prominent syllables is greater by 3.3 db for the vowel nucleus of make and 1.3 db for the nucleus of home. (d) More energy is located in the higher frequencies (above 3 kHz); an impression of this difference can be gained at B2 1-3. Apart from (b), which does not create discernible harmonics to any extent, the other differences, probably collectively, contribute to the auditory impression of higher pitch.

Case C has two 'voices' (low- and high-pitched esophageal phonation), and spectrograms C2 are produced in order to show how they alternate in the continuum of speech and to reveal the basis of the strong impression of pitch modulation which, over the telephone, can deceive even the most experienced ear as to the true nature of the voice. As seen at C2 3-8, where the second, high-pitched voice breaks and the first voice takes over, Case C's first voice gives no evidence of a harmonic structure. The first voice is heard as low-pitched, and it is the alternation between the two voices which contributes to the strong impression of wide pitch variation. Case C reports that her second voice is the product of strenuous, sustained effort in the months after laryngectomy and obviously requires a much higher degree of neuromuscular control; the vibrato effect, best seen in C1 (narrow), is evidence of this. With declining morale in recent years Case C tends to lapse into her first voice for longer and longer stretches.

The general impression of Case C's voice is that it is a high-pitched woman's voice (spectrographically, at C2 13-17, it is strikingly similar to female laryngeal phonation), but extreme, often abrupt, variation in power is a feature. Pitch change is often sharp, sudden and somewhat unpredictable. The abrupt changes usually correspond to a change from one voice to another; spectrogram C2 shows this, with C2 at 1 and 12-21 being the second voice with a one-syllable break into the first voice at 3-8.

STATE AND FUNCTION OF THE ESOPHAGEAL SPHINCTER IN VIBRATION

The site and mechanism of the two modes of phonation in Case C's voice may be examined by reference to the radiographs on
p. 100 in Kerr and Lanham 6. In that article they are numbered D1-D4. D2 and D3 represent phonation in the first (low-pitched) and second (high-pitched) modes respectively, and D4 outlines the cricopharyngeus immediately after phonation has ceased. In D3 the shadow over the lower three-fourths of a club-shaped head of the cricopharyngeus outlines a channel of air which rises and tapers abruptly to a point of occlusion in the upper quarter of the clubhead. The constriction ring within the esophageal sphincter is apparently located here. In D2 the constriction ring is less easily identifiable, but an air channel is seen above the protruding fold (marked a) and forward of the lower third of the inferior anterior edge of the clubhead. Blurring at the upper half of this anterior edge suggests a comparatively large mass of the inner margin of the sphincter involved in vibration, which is not the case in D3. This latter state is consistent with high-frequency vibration in which only a short stretch of a tightly constricted inner margin is involved.

In an attempt at simulating the second mode of vibration in the form of a labial trill, L.W.L. produced, in the manner described below, a spectrogram fairly similar to C1 (narrow) 6-10, which showed a fundamental around a mean of 400 cps with undulating harmonics. It was necessary to make a very tight labial closure with strong intra-oral pressure bursting through a very short stretch (about 0.5 cm), which produced a high-frequency labial trill. Essential requirements are that each lip present a stiff, tense edge which is relatively thin. With more flaccid, thicker edges labial trilling becomes slower and less regular, a longer stretch is involved in vibration and periodicity is easily lost. For high-frequency trilling the level of effort required to achieve the balance between air pressure, muscle tension and the configuration of the vibrating edge appears to match that required for Case C's second voice and is as difficult to maintain at an even level.

In comparison with radiograph D4 (Kerr and Lanham 6), in which there is no vibration, phonation in D2 and D3 is seen to involve a shortening of the 'neck' behind the clubhead of the cricopharyngeus and a lifting and flattening of the head (contrast the drooping head in D4). The head is flattened in the vertical plane more in D3 than in D2, suggesting tighter constriction.

Taking account of the differences shown in the radiographs referred to above, we suggest that first-mode and second-mode vibrations differ in mechanism in the following ways. In the former, a relatively thick, fairly relaxed inner margin of the sphincter is drawn into closure or close approximation in a muscular movement not much different from that required for esophageal burping in a normal speaker; the consequent vibrations involve a comparatively large mass of the cricopharyngeus. Second-mode vibration, however, requires a much more highly controlled movement which produces greater tension along the inner margin and presents a stiffer, firmer edge at the point where vibration takes place. Much less of the margin is involved in high-frequency, low-amplitude vibration, with only small quantities of esophageal air released in the process. This could account for the significantly longer 'duration of air charge' in this type of phonation. The distended upper end of the esophagus suggests high intra-esophageal pressure.
CONCLUSION

In the discussion above it has not been our intention to present a case for two mutually exclusive categories of esophageal phonation, but to highlight a type of esophageal phonation of which only a limited number of laryngectomees is capable. This 'second mode' of pseudoglottal vibration is the product of a relatively intact musculature and considerable effort both in acquiring and in maintaining it. We suggest, however, that in one or more of the parameters of esophageal phonation the two modes are discontinuous states and not merely different points on the same scales. There is, for example, no evidence that over any 'voiced' stretch in esophageal speech a gradational move from one to the other mode is possible (by progressively increasing muscle tension along the margin of the esophageal sphincter). Spectrogram C2 (wide) shows how Case C moves from one mode to another by a discrete, abrupt change; in fact, we have no evidence at all of harmonic energies in Case C's first, low-pitched voice and, therefore, no evidence of an ability to shift gradually from low to high pitch. Other investigators have suggested that one 'setting' of the pseudoglottis in phonation cannot be phased gradually into another. Van den Berg et al. 12 state: '... he always separated the section with a high pitch from that with a low pitch by a new breath and a new injection of air ...'. (Van den Berg's 'high pitch' is not equatable to our second mode in esophageal phonation; his upper limit of the fundamental frequency is 85 cps and is said to coincide with high intensity and stronger air flow.) There is evidence, therefore, that for stretches of, possibly, the duration of an air charge, the esophageal speaker is locked into a particular mode of phonation and, if a harmonic structure is present, into maintaining more or less the same pitch.

[Table: Comparison of findings in studies dealing with pitch in esophageal speech: fundamental frequencies and pitch ranges.]
4,578
1975-12-31T00:00:00.000
[ "Physics" ]
Two Pairs of 7,7′-Cyclolignan Enantiomers with Anti-Inflammatory Activities from Perilla frutescens

Perilla frutescens (L.) Britt. (Labiatae), a medicinal plant, has been widely used for the therapy of multiple diseases for about 1800 years. It has been demonstrated that extracts of P. frutescens exert significant anti-inflammatory effects. In this research, two pairs of 7,7′-cyclolignan enantiomers possessing a cyclobutane moiety, (+)/(−)-perfrancin [(+)/(−)-1] and (+)/(−)-magnosalin [(+)/(−)-2], were separated from P. frutescens leaves. The present study achieved the chiral separation and determined the absolute configurations of (±)-1 and (±)-2. Compounds (+)-1 and (−)-1 have notable anti-inflammatory effects, reducing the secretion of pro-inflammatory factors (NO, TNF-α and IL-6) and the expression of pro-inflammatory mediators (iNOS and COX-2). These findings indicate that cyclolignans are effective anti-inflammatory constituents of P. frutescens. The present study partially elucidates the mechanisms underlying the effects of P. frutescens.

Introduction

Chirality is very important in pharmacological research and drug development [1]. Drugs are mainly derived from natural products or by chemical synthesis, and, hence, mixtures of several isomers are often produced. Racemates are pairs of enantiomers that exhibit identical physical and chemical properties, except for optical activity. As chiral separation techniques advance rapidly, people have come to realize that some enantiomers may possess different properties with respect to pharmacological effects, toxicity and pharmacokinetics [2][3][4]. Some chiral medicines even exert opposite effects. For instance, (+)-picenadol is a potent opiate agonist, while (−)-picenadol acts as an opioid antagonist [5]. Therefore, it is important to study the enantiomers of chiral substances. Perilla frutescens (L.) Britton (Labiatae) is a perennial herb that is extensively dispersed and cultivated not only in China but also in other Asian countries [6]. As a well-known traditional edible and medicinal herb, P. frutescens has been used to treat common colds with cough and nausea, as well as food poisoning caused by fish or crab [7]. Recent studies have demonstrated that about 90 compounds have been obtained from P. frutescens, including terpenoids, lignans and flavonoids [8]. Most of them have various pharmacological effects, including antimicrobial [9], anti-allergic [10], anti-inflammatory [11], antioxidant [12] and anti-depressive activities [13]. As part of our systematic study of bioactive products from traditional Chinese medicines, the isolation, structural elucidation and anti-inflammatory activity evaluation of compounds from P. frutescens were explored in the current study.

Further analysis of the HSQC, 1H,1H-COSY, HMBC and NOESY data confirms the planar structure and relative configuration of compound 1. Interestingly, no Cotton effects are observed in its ECD spectrum, which indicates that 1 might have been isolated as a racemic mixture. Subsequently, compound 1 was further enantioseparated on a chiral column to afford (+)-1 and (−)-1. Compound 2 was identified as magnosalin, which has previously been reported from P. frutescens [11]; however, magnosalin's absolute configuration had not been ascertained. Interestingly, X-ray diffraction analysis of compound 2 (Figure 3) suggests that it is a racemate containing the two enantiomers. The (+)-2 and (−)-2 enantiomers were successfully obtained by chiral HPLC separation.
Interestingly, as shown in Figure 2, the absolute configuration of (+)-2 is identified as 7R,8S,7′R,8′S based on the good agreement with the calculated ECD spectrum of (7R,8S,7′R,8′S)-2, which in turn elucidates the absolute configuration of (−)-2 as (7S,8R,7′S,8′R)-2. Consequently, the latter configuration (7S,8R,7′S,8′R) corresponds to the reported compound (−)-magnosalin [11], whereas the former one (7R,8S,7′R,8′S) corresponds to an unreported compound labeled (+)-magnosalin.

Physicochemical Properties and Spectroscopic Data of Compounds 1 and 2. Perfrancin (1): Table 1 shows the 1H and 13C NMR data.

Compounds (+)-1 and (−)-1 did not obviously affect the viability of RAW 264.7 macrophages at 3.13, 6.25, 12.5, 25 and 50 µM. Therefore, compounds (+)-1 and (−)-1 at 3.13-50 µM were used in subsequent experiments.

Effects of Compounds (+)-1 and (−)-1 on NO Production in RAW 264.7 Macrophages. NO has been considered an important mediator whose production increases in the case of inflammation [15]. Therefore, inhibition of excessive NO production is commonly used to assess the anti-inflammatory effects of compounds [16]. Levels of NO production in the supernatant of RAW 264.7 macrophages activated by LPS were investigated using Griess reagent kits. Figure 5 shows that, in comparison with the control group, NO production was higher in the LPS-treated group. Administration of (+)-1 or (−)-1 inhibited NO synthesis in LPS-activated RAW 264.7 macrophages in a dose-dependent manner. The IC50 values of (+)-1 and (−)-1 are 21.61 ± 2.35 and 28.02 ± 1.93 µM, respectively. In addition, we used curcumin as a positive control (IC50 = 20.37 ± 0.77 µM).
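IC50 values such as those quoted above are typically obtained by fitting a dose-response curve to the measured inhibition of NO production. The following Python sketch shows one common way to do this with a four-parameter logistic model; the inhibition values, starting guesses and function names are invented placeholders, not the study's data or analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (percent inhibition vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical mean percent inhibition of NO production at the tested concentrations (µM)
conc = np.array([3.13, 6.25, 12.5, 25.0, 50.0])
inhibition = np.array([12.0, 22.0, 38.0, 55.0, 71.0])

params, _ = curve_fit(
    four_param_logistic, conc, inhibition,
    p0=[0.0, 100.0, 20.0, 1.0], maxfev=10000
)
print(f"estimated IC50 ≈ {params[2]:.1f} µM")
```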
Effects of Compound (+)-1 on TNF-α and IL-6 Production in RAW 264.7 Macrophages. Macrophages were cultured in the presence of compound (+)-1 and LPS to further characterize the anti-inflammatory effects of compound (+)-1. The concentrations of TNF-α (p < 0.01, Figure 6A) and IL-6 (p < 0.01, Figure 6B) rose to high levels following LPS treatment, and these increases were markedly suppressed by compound (+)-1.

Effects of Compound (+)-1 on iNOS and COX-2 Protein Expression in RAW 264.7 Macrophages. Many critical enzymes are related to the establishment and progression of inflammation, including iNOS and COX-2 [17,18]. We carried out Western blot analysis on LPS-treated RAW 264.7 macrophages with or without compound (+)-1 to determine protein expression levels. LPS treatment at 1 µg/mL for 24 h markedly increased iNOS (p < 0.01, Figure 7A and Table S3) and COX-2 (p < 0.01, Figure 7B and Table S3) protein expression in comparison with the control group. Compound (+)-1 treatment significantly reversed the increased expression of iNOS (p < 0.01) and COX-2 (p < 0.01) at 50 µM in RAW 264.7 macrophages activated by LPS.

Discussion. As lignans, no more than 200 7,7′-cyclolignans have been identified in nature, and they are mainly reported from the Labiatae [19][20][21], Asteraceae [22,23], Piperaceae [24,25] and Ginkgoaceae families [26].
In total, the Labiatae account for fewer than 30 of these compounds. Two pairs of 7,7′-cyclolignan enantiomers, (±)-perfrancin (1) and (±)-magnosalin (2), were separated from the leaves of P. frutescens. Perfrancin has been reported only once before, in organic synthesis [14], and this is the first report of perfrancin from a natural plant source. Magnosalin has been isolated from Magnolia salicifolia [27], Piper sumatranum [24] and P. frutescens [11]. However, the absolute configurations of these compounds had not been determined. In the current study, the two pairs of lignan enantiomers were further separated through chiral HPLC, and the compounds' absolute configurations were successfully determined by X-ray diffraction analyses and ECD calculations.

As an essential inflammatory mediator, NO is overproduced in macrophages stimulated by LPS [28]. As reported before, compound 2 is regarded as an inhibitor of NO synthase [11]. In our present study, the 7,7′-cyclolignan enantiomers (+)-1 and (−)-1 were assessed for their anti-inflammatory effect against the release of NO in RAW 264.7 macrophages stimulated by LPS. They have similar inhibitory effects on NO production. In acute and chronic diseases, TNF-α is an essential factor in the inflammatory reaction [29]. Another cytokine, IL-6, also plays a critical role in inflammation [30,31]. Thus, a therapeutic strategy to reduce inflammation could be to suppress the production of inflammatory mediators and to inhibit the release of pro-inflammatory cytokines. In the present study, LPS treatment significantly increased the secretion of TNF-α and IL-6, which was markedly suppressed by compound (+)-1. Inflammatory processes are initiated and sustained by iNOS and COX-2, which play key roles in the production of NO [32,33]. Many studies have confirmed that the expression of iNOS and COX-2 is highly upregulated during infection [34][35][36]. Our Western blot results revealed that the excessive expression of iNOS and COX-2 induced by LPS in RAW 264.7 macrophages was significantly inhibited by compound (+)-1. Thus, our findings provide further evidence that compound (+)-1 has anti-inflammatory activity.

Extraction and Isolation. All chemical materials are described in the Supplementary Materials. After air-drying the P. frutescens leaves (10 kg), we extracted them with 95% EtOH (3 × 50 L) for 3 × 2 h under reflux. A yellow residue (2.1 kg) was obtained after evaporating the EtOH extract under reduced pressure, and this residue was suspended in H2O.

Statistical Analysis. Each experiment was conducted three times. Data are presented as the mean ± SEM. Multiple groups were analyzed by ANOVA combined with Tukey's post hoc test. Statistical significance was defined as p < 0.05.
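As a minimal sketch of the statistical workflow just described (one-way ANOVA followed by Tukey's post hoc test on triplicate measurements, with p < 0.05 taken as significant), the following Python snippet could be used; the group names and numbers are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder triplicate measurements (e.g., an NO readout) for three groups
groups = {
    "control":      [5.1, 4.8, 5.3],
    "LPS":          [25.2, 24.1, 26.0],
    "LPS + (+)-1":  [14.8, 15.5, 13.9],
}

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test for pairwise comparisons (alpha = 0.05, i.e. p < 0.05 is significant)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```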
2,836.4
2022-09-01T00:00:00.000
[ "Biology" ]
Reassessment of the role of plasma membrane domains in the regulation of vesicular traffic in yeast The Saccharomyces cerevisiae plasma membrane has been proposed to contain two stably distributed domains. One of these domains, known as MCC (membrane compartment of Can1) or eisosomes, consists of furrow-like membrane invaginations and associated proteins. The other domain, called MCP (membrane compartment of Pma1), consists of the rest of the membrane area surrounding the MCC patches. The role of this plasma membrane domain organization in endocytosis is under debate. Here we show by live-cell imaging that vesicular traffic is restricted to the MCP and the distribution of endocytic and exocytic sites within the MCP is independent of the MCC patch positions. Photobleaching experiments indicated that Can1 and Tat2, two MCC-enriched permeases, exchange quickly between the two domains. Total internal reflection fluorescence and epi-fluorescence microscopy showed that the enrichment of Can1 at the MCC persisted after addition of its substrate, whereas the enrichment of Tat2 disappeared within 90 seconds. The rates of stimulated endocytosis of Can1 as well as Tat2 were similar in both wild-type cells and pil1Δ cells, which lack the MCC. Thus, our data suggest that the enrichment of certain plasma membrane proteins in the MCC does not regulate the rate of their endocytosis. Introduction The yeast plasma membrane contains two membrane domains, which have a stable distribution. One of the domains consists of patches distributed throughout the cell surface and the other is formed by the network-like membrane area surrounding the patches (Malinska et al., 2003;Young et al., 2002). A recent electron microscopy study showed that the patches defining the first domain correspond to furrow-like membrane invaginations which are ~300 nm long, ~150-250 nm deep and ~50 nm wide (Stradalova et al., 2009). This membrane domain has been called eisosomes (Walther et al., 2006) or MCC (membrane compartment of Can1) because the arginine permease Can1 is enriched in the patches . Later reports distinguished the MCC from the underlying cytosolic protein clusters or eisosomes (Fröhlich et al., 2009;Grossmann et al., 2008). For simplicity, we will use here the term MCC to collectively refer to the membrane furrows and all associated transmembrane and cytosolic proteins. In total 22 proteins (9 transmembrane proteins and 13 cytosolic proteins) have been shown to localize to this domain (Deng et al., 2009;Grossmann et al., 2008). A key protein in the maintenance of MCC organization is Pil1. Deletion of the PIL1 gene prevents the formation of the membrane furrows and the concentration of the associated proteins (Grossmann et al., 2008;Stradalova et al., 2009;Walther et al., 2006). The H + -ATPase Pma1 is excluded from the MCC and localizes to the remainder of the plasma membrane surrounding the MCC patches (Malinska et al., 2003). This domain was thus termed MCP (membrane compartment of Pma1) . The hexose transporter Hxt1 and the general amino acid permease Gap1 are evenly distributed to both MCC and MCP domains (Lauwers et al., 2007;Malinska et al., 2003). The functional significance of the plasma membrane domain organization in yeast is not well understood. However, it has been suggested that the MCC has a direct role in regulating endocytosis (Grossmann et al., 2008;Walther et al., 2006). 
Walther and colleagues (Walther et al., 2006) proposed that eisosomes mark static sites of endocytosis, thus regulating the localization of endocytic events. However, Grossmann and co-workers (Grossmann et al., 2008) suggested that endocytosis takes place exclusively outside the MCC, in the MCP area, and that endocytosis of certain transmembrane proteins is regulated by sequestering them in the MCC, where they are protected from endocytosis. Consequently, these suggested links between the MCC and endocytosis contradict each other. We have used quantitative live-cell imaging to study the relationship between the domain organization of the yeast plasma membrane and vesicle trafficking processes in detail. We show that both endocytosis and exocytosis are excluded from the MCC and distributed within the MCP area independently of the MCC pattern. Using total internal reflection fluorescence (TIRF) microscopy and fluorescence recovery after photobleaching (FRAP) experiments, we show that the absence of vesicle traffic in the MCC is unlikely to be a mechanism that significantly protects transmembrane proteins enriched in the MCC from endocytosis. Consequently, our results suggest that the MCC and MCP membrane organization is not important for the regulation of vesicular traffic in yeast.

Endocytic sites are localized within the MCP area and their distribution is independent of the MCC pattern

To analyze the spatial relationship between endocytic sites and the plasma membrane domains, we imaged cells that expressed Abp1-GFP, a marker for sites of clathrin-mediated endocytosis (Kaksonen et al., 2003), and Pil1-mCherry, a component of eisosomes, as a marker for the MCC (Walther et al., 2006; Grossmann et al., 2007). In this study, we consider the MCC area to be the membrane areas labeled by Pil1-mCherry and the MCP area to be the remainder of the plasma membrane devoid of Pil1 signal. To obtain optimal spatial separation of the plasma membrane domains, we focused on the bottom surfaces of cells. In small-budded cells, most of the endocytic events are concentrated in the bud, preventing the detection of individual endocytic sites by light microscopy. Furthermore, MCC patches are practically absent from small buds. We therefore limited our analysis to large-budded or unbudded cells. The acquired movies showed that endocytic events were excluded from the MCC (Fig. 1A; supplementary material Movie 1), which is in agreement with observations by Grossmann and colleagues (Grossmann et al., 2008). To test whether MCC patches could regulate the localization of endocytic events within the MCP area, we measured the distance from the centroid position of each Abp1-GFP patch to the centroid position of the respective closest MCC patch. In total, 383 endocytic events were detected in 12 cells. Distances smaller than 100 nm were absent, which confirms that none of the considered endocytic events colocalized with a MCC patch (Fig. 1B). We then wanted to compare the distribution of endocytic events to a distribution of points whose positions are independent of the MCC pattern. We therefore calculated the distances from the center of each pixel within the cell surface area to the centroid position of the respective closest Pil1-mCherry patch (Fig. 1B,C).
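The nearest-patch distance measurement described above can be illustrated with a short Python sketch based on a k-d tree; the coordinates below are invented example values in nanometres, not measured data, and the function name is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_patch_distances(points_nm, mcc_centroids_nm):
    """Distance from each query point (endocytic-site centroid or pixel centre)
    to the centroid of the nearest MCC patch, all coordinates in nanometres."""
    tree = cKDTree(np.asarray(mcc_centroids_nm, dtype=float))
    distances, _ = tree.query(np.asarray(points_nm, dtype=float))
    return distances

# Invented example: three MCC patch centroids and four endocytic-site centroids
mcc = [(0, 0), (900, 300), (400, 1200)]
sites = [(150, 80), (700, 250), (500, 900), (1200, 1200)]
d = nearest_patch_distances(sites, mcc)
print(np.round(d))                                               # per-site distance to the closest MCC patch
print(np.histogram(d, bins=[0, 100, 200, 400, 800, 1600])[0])    # binned distance distribution
```

The same routine can be run on the pixel centres of the cell surface area to obtain the reference distribution that the endocytic-site distances are compared against.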
Interestingly, for distances longer than 100 nm, the distance distribution of endocytic sites closely followed the distance distribution of cell pixels, which shows that the number of endocytic events that took place within the MCP area at a certain distance from a MCC patch was tightly connected to the number of pixels in this MCP area, i.e., the area size. Our quantification thus indicates that endocytic events are distributed within the MCP independently of the MCC pattern. In other words, the likelihood that an endocytic site forms at a certain position within the MCP area is independent of the distance to the closest MCC patch. To control for the accuracy of the positional measurements, we imaged two proteins, Sla1 and Abp1, which transiently localize to endocytic sites (Kaksonen et al., 2003). We acquired two-color movies, determined the centroids of Sla1-GFP and Abp1-mCherry patches at the time points when they reached their respective maxima, and measured their distances. 79% of the Sla1-GFP patch centroids were within 100 nm of the centroid of the respective Abp1-mCherry patch that corresponded to the same endocytic event (Fig. 1C).

We aimed to further test the exclusion of endocytic events from the MCC. It was recently shown that the protein kinases Pkh1 and Pkh2 (Pkh1/2) are involved in eisosome organization via phosphorylation of Pil1 and Lsp1, the main protein components of eisosomes, and that inhibition of Pkh1/2 kinase activity can lead to the formation of extended net-like eisosome structures, thus reducing the MCP area (Fröhlich et al., 2009; Luo et al., 2008; Walther et al., 2007). We therefore created pkh1Δ pkh2 depletion strains to test whether endocytosis is still limited to the reduced MCP area. In these strains, PKH1 is deleted and PKH2 is controlled by a tetO2-regulatable promoter (Belli et al., 1998). After depletion of the Pkh1/2 kinases, we found only a minor fraction of cells (less than 20%) that exhibited stable net-like Pil1-mCherry structures, whereas about 80% of the cells had Pil1-mCherry clusters that were motile at the plasma membrane (supplementary material Movie 2, right panel). We observed similar frequencies of motile and stable net-like Pil1-mCherry structures in a previously characterized pkh1ts pkh2Δ strain, which has PKH2 deleted and PKH1 replaced with a temperature-sensitive allele (pkh1D398G) (Friant et al., 2001; Inagaki et al., 1999), both at the permissive (25°C) and the non-permissive temperature (37°C) (data not shown). Importantly, in the cells with stable net-like Pil1-mCherry structures, endocytosis was still excluded from the area marked by these structures (Fig. 1D; supplementary material Movie 2, left panel). We also visualized the localization of Can1-GFP after depletion of the Pkh1/2 kinases. In wild-type cells, the arginine permease Can1-GFP is enriched in the MCC patches (Malinska et al., 2003). Interestingly, neither the cells with net-like Pil1-mCherry structures nor the cells with motile Pil1-mCherry clusters exhibited any clear enrichment of Can1-GFP in the area marked by the Pil1-mCherry structures. Instead, after the depletion, the Can1-GFP distribution appeared dispersed over the plasma membrane, similarly to what has been shown in pil1Δ cells (supplementary material Fig. S1, Movie 3).
Hence, the Pil1-mCherry structures observed after Pkh2 depletion are probably not fully functional eisosomes, but they still exclude endocytic events, perhaps as a result of simple steric hindrance of vesicle coat assembly by the membrane-associated eisosome proteins.

Exocytic sites are localized within the MCP area and their distribution is independent of the MCC pattern

Endo- and exocytosis are spatially closely coupled in yeast cells. The majority of exocytic events take place in the bud to support polarized cell growth. Similarly, endocytic events are concentrated in the growing bud to support membrane recycling and polarity maintenance. To analyze the spatial distribution of exocytosis with regard to the MCC and MCP pattern, we imaged cells expressing Pil1-mCherry and Sec3-GFP or Exo70-GFP. Sec3 and Exo70 are components of an exocytic docking complex, the exocyst, and are thought to serve as landmarks for exocytic vesicle docking at the plasma membrane (Finger et al., 1998; He et al., 2007). In small-budded cells, exocytosis, similarly to endocytosis, is too concentrated in buds to resolve individual Sec3-GFP or Exo70-GFP patches by light microscopy. However, in large-budded or unbudded cells Sec3-GFP and Exo70-GFP formed transient diffraction-limited patches at the plasma membrane that are likely to be individual sites of exocytosis (supplementary material Movie 4). Similarly to endocytic sites, they formed exclusively in the MCP area of the plasma membrane (Fig. 2A). In analogy to the analysis of endocytic events described above, we measured the centroid positions of the Sec3-GFP and Pil1-mCherry patches and calculated the distance to the closest MCC patch for 165 exocytic events (Fig. 2B). Very small distances that would indicate colocalization with a MCC patch were absent from the distance distribution of exocytic sites. Similarly to the distribution of endocytic sites, the distribution of exocytic events resembled the distance distribution of cell pixels for distances longer than 100 nm. We then imaged cells that expressed Exo70-GFP and Pil1-mCherry and determined the distance to the closest MCC patch for 262 exocytic events in 16 cells. The distance distribution was very similar to that obtained with Sec3-GFP (Fig. 2C). In summary, our analyses indicate that exocytosis, similarly to endocytosis, is restricted to the MCP area and that the likelihood of an exocytic event occurring at a certain position within the MCP is independent of the distance to the closest MCC patch.

Vesicular cargo exchanges between the MCC and MCP

Three plasma membrane permeases, Can1, Tat2 and Fur4, have been shown to concentrate in the MCC (Malinska et al., 2003; Malinska et al., 2004; Grossmann et al., 2008). These permeases are trafficked to the plasma membrane by exocytosis and are removed in a regulated manner from the plasma membrane by endocytosis (Beck et al., 1999; Galan et al., 1996; Grossmann et al., 2008; Lin et al., 2008). As both trafficking events are confined to the MCP area, we tested whether the permeases can move between the domains. To study this, we imaged cells expressing Can1-GFP and Pil1-mCherry. Consistent with earlier studies, Can1-GFP was concentrated in most MCC patches in wild-type cells (Malinska et al., 2003). However, the permease was also present in the MCP area. To estimate the fraction of the total plasma membrane Can1 molecules in the MCC, we analyzed confocal images of cells expressing Can1-GFP and Pil1-mCherry.
The analysis showed that only about 28% of the total plasma membrane Can1 molecules are located in the MCC (Fig. 3).

We then performed FRAP experiments to test whether the Can1-GFP molecules concentrated in MCC patches exchange with the MCP pool. To study Can1 dynamics in the MCP, we first bleached areas within the MCP area. Can1-GFP fluorescence recovered in the MCP with a half time of 73 ± 5 seconds (n = 8; Fig. 4A-C,G; supplementary material Movie 5). We then photobleached Can1-GFP patches that colocalized with Pil1-mCherry-labeled MCC patches. Owing to the size of our bleach spot (diameter ~600 nm), some of the surrounding MCP area was also bleached. The Can1-GFP signal measured in the MCC patch area recovered on average with a half time of 344 ± 17 seconds (n = 10; Fig. 4D-G; supplementary material Movie 5). Because the 488 nm wavelength used for photobleaching also photobleached the Pil1-mCherry signal, we could show that more than 10 minutes after the recovery of the Can1-GFP MCC signal there was still almost no recovery of the Pil1-mCherry signal (Fig. 4D-F), which demonstrates the high stability of Pil1 in eisosomes (Walther et al., 2006). Thus, the FRAP experiments indicate that Can1 molecules continuously exchange between the MCC and the MCP in cells that are not induced to endocytose Can1. Similar recoveries were observed for Tat2-GFP (supplementary material Movie 6). The fast recovery rates indicate that the Can1 and Tat2 molecules in the MCC exchange with the MCP pool on average in about 5 minutes. This suggests that the accumulation of these cargoes in the MCC is unlikely to provide significant protection against endocytic turnover.
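Half-times such as those quoted above can be extracted by fitting a single-exponential recovery model to the bleached-region intensity trace. The following Python sketch illustrates this with synthetic data; the model choice, numbers and function names are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, f0, f_plateau, k):
    """Single-exponential FRAP recovery: F(t) = f_plateau - (f_plateau - f0) * exp(-k t)."""
    return f_plateau - (f_plateau - f0) * np.exp(-k * t)

# Synthetic normalised intensity trace of a bleached MCC patch (time in seconds)
t = np.linspace(0, 1200, 60)
rng = np.random.default_rng(0)
data = frap_recovery(t, 0.2, 0.9, np.log(2) / 344.0) + rng.normal(0, 0.02, t.size)

(f0, f_plateau, k), _ = curve_fit(frap_recovery, t, data, p0=[0.2, 1.0, 0.01])
half_time = np.log(2) / k
print(f"recovery half-time ≈ {half_time:.0f} s")   # close to the 344 s used to generate the synthetic data
```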
[Figure 2 legend, continued:] (B) Distribution of distances between the patch centroid positions of the analyzed Sec3-GFP-labeled exocytic sites and the centroid position of the respective closest MCC patch marked by Pil1-mCherry (red bars). 165 exocytic events were analyzed (minimum distance, 71 nm; mean ± s.d., 364 ± 154 nm). The black bars show the distribution of distances between the centers of the pixels within the cells and the centroid position of the respective closest MCC patch marked by Pil1-mCherry. 13,718 pixels were analyzed (minimum distance, 1 nm; mean ± s.d., 394 ± 190 nm). In total, eight cells were evaluated. (C) Distribution of distances between the patch centroid positions of the analyzed Exo70-GFP-labeled exocytic sites and the centroid position of the respective closest MCC patch marked by Pil1-mCherry (red bars). 262 exocytic events were analyzed (minimum distance, 77 nm; mean ± s.d., 407 ± 176 nm). The black bars show the distribution of distances between the centers of the pixels within the cells and the centroid position of the respective closest MCC patch marked by Pil1-mCherry. 27,797 pixels were analyzed (minimum distance, 1.5 nm; mean ± s.d., 431 ± 223 nm). In total, 16 cells were evaluated.

[Figure 3 legend:] Localization of Can1-GFP to both the MCC and the MCP. Representative example of the confocal images that were used to calculate the fraction of the total plasma membrane Can1 molecules in the MCC area to be ~28%. Note that Can1-GFP is enriched at MCC patches, but there is also a prominent signal in the MCP area. Scale bar: 2 µm.

Endocytosis of Can1 and Tat2 is regulated normally in the absence of the MCC

To test whether the plasma membrane domain organization regulates endocytosis of the MCC-enriched permeases, as suggested by Grossmann and colleagues (Grossmann et al., 2008), we analyzed the endocytosis rates of Can1 and Tat2 in wild-type cells and in cells where the MCC pattern was disrupted by PIL1 deletion. Few large clusters of the MCC proteins Lsp1-GFP and Can1-GFP were reported to form at the plasma membrane in some pil1Δ cells (Walther et al., 2006). However, we observed these clusters only in cells from cultures that had reached a cell density of OD600 > 0.8. At lower culture densities, the MCC patches were completely absent and we only observed diffuse cytosolic or plasma membrane distributions of Lsp1-GFP and Can1-GFP, respectively. We used addition of arginine to induce endocytosis of Can1-GFP in cells from exponentially growing cultures (OD600 0.4) and quantified the progress of endocytosis over time by determining the ratio of GFP fluorescence signal between the plasma membrane and the cell interior (Fig. 5A,B; supplementary material Fig. S2). Endocytosis of Can1-GFP was dependent on the arginine stimulus in both wild-type and pil1Δ cells, and the rate of endocytic uptake was also indistinguishable in the two strains. Interestingly, we observed that when the cultures were allowed to continue growing, Can1-GFP endocytosis was initiated without addition of arginine when the culture reached a certain density (OD600 0.8). Quantification of the rate of this spontaneous endocytosis in wild-type and pil1Δ cells did not reveal any significant differences between the strains (Fig. 5C). We additionally analyzed the rate of Can1 endocytosis in wild-type and pil1Δ cells after triggering endocytosis with cycloheximide (Grossmann et al., 2008; Lin et al., 2008). The rate of endocytosis was slower in this case, but again, we determined similar rates for wild-type and pil1Δ cells (supplementary material Fig. S3). We then analyzed the endocytosis rate of Tat2-GFP in wild-type and pil1Δ cells. Endocytosis of Tat2-GFP was triggered by addition of tryptophan to the growth medium (Fig. 5D). As with Can1-GFP, Tat2-GFP endocytosis was dependent on the substrate trigger in both strains and we did not detect significant differences in the endocytosis rates between the strains (Fig. 5E). These results imply that the regulation of Can1 and Tat2 endocytosis is not significantly affected by the enrichment of the permeases into the MCC.

MCC enrichment of permeases does not correlate with their endocytosis

We then tested whether the MCC enrichment of Can1 and Tat2 is influenced by addition of their substrates. We triggered Can1-GFP endocytosis by addition of arginine and monitored its distribution on the plasma membrane. Notably, for up to 1 hour after the addition of arginine, we could still observe a clear enrichment of Can1 in the MCC by epifluorescence microscopy (Fig. 6A). Owing to the increasing intracellular signal and the decreasing plasma membrane signal, the enrichment of Can1-GFP in MCC patches became difficult to visualize with epifluorescence microscopy at later time points. However, using TIRF microscopy, we could show that the MCC enrichment of Can1-GFP persisted and was still evident even in cells that had already trafficked the vast majority of Can1-GFP molecules into the vacuole (Fig. 6B). The TIRF images in Fig. 6B illustrate that the Can1-GFP signal at the plasma membrane decreased significantly, correlating with the progress of endocytosis, whereas the enrichment in the MCC persisted. To quantify this observation, we determined the ratio of the Can1-GFP fluorescence signal in the MCC and the MCP from TIRF images at different time points after induction of endocytosis.
To quantify this observation, we determined the ratio of the Can1-GFP fluorescence signal in the MCC and the MCP from TIRF images at different time points after induction of endocytosis. Vesicular traffic and membrane domains The average plasma membrane signal of Can1-GFP decreased as expected, whereas the ratio between the MCC and the MCP remained unchanged (Fig. 6C). These results imply that the Can1-GFP molecules are lost from both membrane domains at similar rates during endocytosis. Interestingly, in contrast to Can1, the enrichment of Tat2-GFP in MCC patches was lost when its substrate tryptophan was added into the medium (Fig. 6D). To determine how fast the MCC enrichment is lost, we continuously monitored the surface of cells expressing Tat2-GFP and the MCC marker Lsp1-mCherry, and added tryptophan to these cells. The MCC enrichment was lost within ~90 seconds after addition of tryptophan ( Fig. 6E; supplementary material Movie 7). Identical results were obtained by using Pil1-mCherry as a MCC marker (data not shown). The MCC enrichment was restored within 5 minutes after washing out the added tryptophan (data not shown). After the loss of MCC (D)Representative Tat2-GFP images of wild-type and pil1D cells before and ~180 minutes after Tat2 endocytosis was induced by addition of tryptophan. (E)The ratio of mean Tat2-GFP fluorescence intensity between the plasma membrane and the cell interior (mean ± s.d., n≥12) is shown for induced wild-type cells, induced pil1D cells, non-induced wild-type cells and non-induced pil1D cells. The times relate to the time of tryptophan addition (0 minutes). Tryptophan was added to cell cultures at a cell density of OD 600 0.4. Scale bars: 5mm. In each row the Can1-GFP epifluorescence image at the cell center (Epi-Center) is followed by Can1-GFP surface image (Epi-Surface), Can1-GFP TIRF image (TIRF, all images in the column with same contrast settings), Can1-GFP TIRF image [TIRF(CA), images in the column have individually adjusted contrast], Pil1-mCherry surface image (Epi-Surface), overlay Can1-GFP TIRF image (green) and Pil1-mCherry surface image (red) (Overlay). The first row shows a cell before addition of arginine, the second row shows a cell 50 minutes after the addition of arginine, the third row shows a cell that had already endocytosed a significant portion of Can1-GFP (350 minutes after the addition of arginine), the bottom row shows a cell that had already endocytosed most Can1-GFP molecules (410 minutes after the addition of arginine). The MCC enrichment persisted, as can be seen in the TIRF and overlay images. Scale bar: 2mm. (C)The mean fluorescence intensity at the plasma membrane of Can1-GFP-expressing cells was quantified from TIRF images (compare third column in 6B) at different times after Can1 endocytosis had been induced by addition of arginine (mean ± s.d., n (number of cells) ≥13). As a comparison, the ratio of mean Can1-GFP fluorescence signal between MCC patches and the corresponding local MCP areas that was quantified from the same TIRF images is shown [mean ± s.d., n (number of MCC patches) ≥18]. (D)Representative surface-view images of wild-type cells expressing Tat2-GFP and Lsp1-mCherry are shown before and 44 minutes after Tat2 endocytosis had been induced by addition of tryptophan. Scale bar: 5mm. 
(E)Surface-view images of a wild-type cell expressing Tat2-GFP and Lsp1-mCherry are shown at different time points that refer to the time of addition of tryptophan to a concentration of 0.35 mM (see supplementary material Movie 7). Scale bar: 2mm. enrichment, the distribution of the Tat2-GFP signal was highly dynamic and not fully homogeneous (supplementary material Movie 7). However, distribution of the Tat2-GFP signal appeared uncorrelated with the MCC patches after addition of tryptophan. In summary, the addition of substrates induces the uptake of both Can1 and Tat2, but affects their MCC enrichment in different ways, suggesting that the two processes are not functionally connected. Discussion Walther and co-workers (Walther et al., 2006) suggested that eisosomes mark sites of endocytosis. This proposal was mainly based on experiments done in pil1D cells. In these cells, the other main eisosome protein Lsp1 formed few large clusters, which were reported to colocalize with endocytic markers. However, our quantitative analysis in wild-type cells showed that endocytic events are excluded from the MCC (Fig. 1). This result is consistent with the observation of Grossmann and colleagues (Grossmann et al., 2008) that the endocytic proteins Rvs161 and Ede1 do not colocalize with a MCC marker. Furthermore, we showed that exocytosis is also excluded from the MCC. Hence, the MCP area is the domain of vesicular traffic at the yeast plasma membrane. Importantly, our distance distribution analyses showed that the likelihood that an endocytic or exocytic site forms at a certain position within the MCP area is independent of the distance to the closest MCC patch (Fig. 1B,C and Fig. 2B,C). Therefore, the MCC does not seem to have an influence, either positive or negative, on the selection of sites of vesicular traffic within the MCP area. Grossmann and colleagues (Grossmann et al., 2008) suggested that MCC patches negatively regulate the endocytic uptake of MCC-enriched transmembrane transporters by sequestering them away from the MCP area where endocytosis takes place. So far, three plasma membrane permeases, Can1, Tat2 and Fur4, have been shown to be enriched in the MCC Malinska et al., 2003;Malinska et al., 2004;Grossmann et al., 2008). In support of the negative regulation by the MCC, faster Can1-GFP turnover rates were reported in cells lacking MCC patches than in wild-type cells (Grossmann et al., 2008). In this study, we made several observations that do not support the idea that the MCC enrichment regulates the endocytosis of the permeases. First, our quantitative analyses of Can1 and Tat2 endocytosis rates did not show any significant differences between wild-type and pil1⌬ cells, which lack MCC patches and have dispersed plasma membrane distributions of Can1 and Tat2. Notably, similarly to wild-type cells, endocytosis of the permeases in pil1⌬ cells was still dependent on a specific trigger (a threshold culture density or addition of substrate) (Fig. 5). Consequently, a mechanism other than the enrichment in the MCC has to protect the permeases from endocytosis in the absence of endocytosis trigger. Furthermore, we estimated that only about 28% of the total plasma membrane Can1 molecules are located in the MCC in noninduced wild-type cells. Protection from endocytosis due to localization within the MCC is thus limited to the minority of the permease molecules. 
In addition, our FRAP experiments in noninduced cells showed that Can1 and Tat2 molecules exchange between the MCC and the MCP and stay in a MCC patch on average for only about 5 minutes. Consequently, despite the enrichment in MCC patches, the permease molecules frequently diffuse to the MCP area, where they would be available for endocytosis. Finally, we observed that the release kinetics of the permeases from the MCC do not correlate with their endocytosis rates. For Can1, we showed by TIRF microscopy that its MCC enrichment was retained when endocytosis was induced by arginine. Instead, the plasma membrane signal for Can1-GFP decreased uniformly in both the MCC and the MCP. However, the MCC enrichment of Tat2 was lost within 90 seconds after addition of tryptophan, so that the permease became distributed more evenly between the two domains. The loss of MCC enrichment of Tat2 was thus much faster than the uptake of the Tat2 pool from the plasma membrane by endocytosis. In summary, our results indicate that the enrichment of the permeases in the MCC does not have a major role in regulating their endocytosis. Therefore, we consider it more likely that endocytosis is regulated by mechanisms that are independent of the plasma membrane domain organization. Interestingly, for all three permeases, Can1, Tat2 and Fur4, ubiquitylation has been demonstrated to be a major regulator of their endocytic turnover (Beck et al., 1999; Galan et al., 1996; Lin et al., 2008; Nikko et al., 2008). As a further test for the exclusion of endocytic events from the MCC, we created Pkh1/2 depletion strains. Pil1 is known to be phosphorylated by Pkh1/2 (Luo et al., 2008; Walther et al., 2007). Inactivation of Pkh1/2 kinases has been reported to result in enlarged net-like eisosome structures, which were suggested to be due to a defect in either eisosome assembly (Luo et al., 2008) or disassembly (Fröhlich et al., 2009; Walther et al., 2007). We observed that most of the Pkh1/2-depleted cells exhibited normally sized but highly motile Pil1-mCherry structures at the plasma membrane. Only a subpopulation of the cells (<20%) exhibited enlarged, stable net-like Pil1-mCherry structures (supplementary material Movie 2). Importantly, the enlarged net-like Pil1-mCherry structures always excluded endocytic events (Fig. 1D, supplementary material Movie 2). However, neither the motile nor the net-like Pil1-mCherry structures were able to enrich the Can1-GFP signal, which was homogeneously distributed at the plasma membrane after depletion of Pkh1/2 (supplementary material Fig. S1, Movie 3), reminiscent of the normally homogeneous plasma membrane distribution of Hxt1 and Gap2 (Lauwers et al., 2007; Malinska et al., 2003). Based on these observations, we hypothesize that the lack of Pkh1/2 activity leads to abnormal eisosomes that might be defective in bending the plasma membrane into furrows. This could explain the motility of the small Pil1 structures and the lack of permease enrichment. Our finding in Pkh1/2-depleted cells that the Pil1-mCherry structures still excluded endocytosis, even though they were not functional eisosomes as judged by the absence of Can1 permease enrichment, supports the idea that the local exclusion of vesicular traffic events in the MCC is due to a simple steric hindrance effect of the abundant membrane-bound eisosome proteins, which might prevent the assembly of the endocytic coat or the docking of an exocytic vesicle.
Strains and media

Yeast strains (supplementary material Table S1) were grown in standard rich medium (YPD) or in synthetic medium (SD) supplemented with adenine, uracil, histidine, leucine, lysine and methionine. All yeast strains were cultured at 25°C with shaking at 220 r.p.m. EGFP or mCherry tags were integrated chromosomally at the C-terminus of each gene, and gene deletions were generated by replacing the entire gene ORF with a selection cassette (Janke et al., 2004). Strains carrying several fusions or deletions were created by mating. Correct chromosomal integration was checked by PCR. Wild-type and mutant cells expressing EGFP- or mCherry-tagged proteins had growth properties indistinguishable from those of untagged cells.

Induction of Can1 and Tat2 endocytosis and depletion of Pkh2

Endocytosis of Can1 was triggered by addition of either arginine to a concentration of 5 mM or cycloheximide to a concentration of 50 µg/ml to yeast cultures at OD600 0.4 (unless another OD is stated). Endocytosis of Tat2 was triggered by addition of tryptophan to a concentration of 5 mM (unless another concentration is stated). For the depletion experiments with pkh1Δ pkh2 cells, in which PKH2 is controlled by a tetO2-regulatable promoter (Belli et al., 1998), a liquid culture was diluted (to ~OD600 0.3) and split into two samples. To one sample, doxycycline was added to a concentration of 25 µg/ml. After 12 hours of shaking at 30°C, the depleted and control cells were imaged.

Live-cell imaging

For microscopy, cells were grown in SD at 25°C until early log phase. Cells were adhered to concanavalin-A-coated (0.1 mg/ml) circular coverslips (25 mm diameter, Menzel-Gläser) and imaged in SD in a custom-made holder. Imaging was done at room temperature using an Olympus IX81 microscope equipped with a 100× NA 1.45 objective, an Orca-ER camera (Hamamatsu), and electronic shutters and filter wheels (Sutter Instrument). TIRF microscopy was carried out using a 60×/1.49 objective and a 488 nm solid-state laser (Coherent). The FRAP experiments were done with a custom-built set-up that focuses the 488 nm laser beam at the sample plane. The CCD camera, the filter wheels and the shutters were controlled by Metamorph software (Universal Imaging). Confocal imaging was done with a Zeiss LSM 710 microscope using a 63×/1.40 objective. GFP fluorescence was excited with the 488 nm line of the argon laser, and mCherry fluorescence was excited with a 561 nm solid-state laser.

Image analysis

Image analysis was carried out using custom-written MATLAB programs. Images were background subtracted and all movies were corrected for photobleaching. To segment the endocytic and exocytic sites and MCC patches from the background, the images were first smoothed by 3×3 pixel averaging. Global thresholding was used to identify the cell area, followed by adaptive thresholding with a 7×7 pixel window to segment the endocytic and exocytic sites and the MCC patches within the cell area. The centroids of the segmented patches were determined, and the centroids from consecutive frames were connected into tracks. The automatic segmentation and tracking was confirmed by visual inspection of the movies. The position of an endocytic or exocytic event was considered to be the centroid from the frame at which the fluorescence intensity of the marker was the brightest.
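As a concrete illustration of the segmentation-and-tracking step just described, a minimal Python/scikit-image sketch might look as follows. The original analysis used custom MATLAB code; here the choice of Otsu for the global threshold and the `max_step` linking distance are assumptions for illustration (only the 3×3 smoothing and 7×7 adaptive window come from the text), and new patches appearing mid-movie are ignored:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu, threshold_local
from skimage.measure import label, regionprops

def segment_patch_centroids(frame):
    """Return (row, col) centroids of bright patches within the cell area."""
    img = uniform_filter(frame.astype(float), size=3)          # 3x3 pixel averaging
    cell_area = img > threshold_otsu(img)                      # global threshold -> cell area
    patches = (img > threshold_local(img, block_size=7)) & cell_area  # adaptive 7x7 window
    return np.array([p.centroid for p in regionprops(label(patches))])

def link_tracks(centroids_per_frame, max_step=3.0):
    """Greedy nearest-neighbour linking of centroids in consecutive frames."""
    tracks = [[c] for c in centroids_per_frame[0]]
    for frame in centroids_per_frame[1:]:
        for track in tracks:
            if len(frame) == 0:
                break
            d = np.linalg.norm(frame - track[-1], axis=1)
            if d.min() <= max_step:                            # connect the closest centroid
                j = d.argmin()
                track.append(frame[j])
                frame = np.delete(frame, j, axis=0)            # each centroid used once
    return tracks
```

The distance analyses described next would then operate on these per-frame centroid lists.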
The position of an endocytic or exocytic event was then compared with the centroid positions of the Pil1-mCherry patches in the corresponding frame to determine the distance to the closest MCC patch. To obtain the distance distribution of Sla1-GFP patches to Abp1-mCherry patches, the position of each Sla1-GFP patch was compared with all Abp1-mCherry patches found in the same frame, or in one of the following five frames of the time-lapse movies. For the distribution of distances between the centers of cell pixels and the respective closest MCC patch, the cell area in focus was marked by hand for each cell. The center of each pixel within this cell area was then compared with the centroid positions of the Pil1-mCherry patches in the first movie frame to determine the shortest distance. A small shift in the alignment of the channels due to chromatic aberration was corrected based on images of immobilized microbeads that fluoresce at both green and red wavelengths. To calculate the mean fluorescence intensity ratios between the MCC and the MCP for Can1-GFP and Pil1-mCherry, confocal surface-view images of 15 cells were analyzed. For each cell, the cell area in focus was marked by hand using ImageJ software. The MCC area was detected by segmentation of the Pil1-mCherry image; the cell area outside the MCC area was considered the MCP area. The mean fluorescence intensity values were corrected for the autofluorescence background, determined by imaging cells that expressed only Pil1-mCherry or only Can1-GFP. The mean fluorescence intensity ratio between the MCC and the MCP was 5.3 for Pil1-mCherry and only 1.37 for Can1-GFP. On average, the MCC area was 20.42% of the total cell area. This value is probably an overestimate caused by optical blurring of the small MCC patches as a result of the resolution limit of light microscopy. To further correct for potential additional background signal due to scattering, we imaged Pil1-GFP. We assumed that 100% of Pil1-GFP would localize to the MCC patches and considered the signal measured in the MCP as background. The ratio between the MCC and the MCP for Pil1-GFP was 5.3, as in the case of Pil1-mCherry. We then used this ratio to correct the Can1-GFP signal, yielding a final intensity ratio of 1.85 for the Can1-GFP distribution between the MCC and the MCP. Note that this correction might lead to an overestimation of the concentration of Can1-GFP in the MCC. The final fraction of total plasma membrane Can1 molecules in the MCC area was then calculated as: Fraction = (intensity ratio × MCC area)/(1 + intensity ratio × MCC area). In the FRAP experiments, the photobleached area was marked by hand with a custom-written MATLAB program. In the case of MCC patch bleaches, a circle with a radius of 4 pixels around the patch centroid position was used as the bleach area. The curves of several individual experiments were averaged and the averaged curve was fitted to I(t) = 1 − C_eq e^(−k_off·t), where I is intensity, C_eq is the equilibrium content, k_off is the off-rate and t is time, to calculate the recovery half-times (t_1/2 = ln 2/k_off) (Sprague and McNally, 2005; Sprague et al., 2004). To quantify the progress of Can1-GFP and Tat2-GFP endocytosis (Fig. 5B,C,E; supplementary material Fig. S3), the ratio of mean fluorescence intensity between the plasma membrane and the cell interior was determined for several cells from GFP images focused to the center of the cells and taken at the indicated time points after the induction of endocytosis.
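Returning to the FRAP fit above, a minimal sketch of this step could look as follows; it assumes the reaction-dominant recovery form consistent with the parameters named in the text (C_eq, k_off, t_1/2 = ln 2/k_off) and assumes the averaged curve has been normalized to its pre-bleach plateau:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, c_eq, k_off):
    # normalized fluorescence recovery: 1 - C_eq * exp(-k_off * t)
    return 1.0 - c_eq * np.exp(-k_off * t)

def fit_frap(t, intensity):
    """Fit an averaged, normalized FRAP curve; return (C_eq, k_off, t_half)."""
    (c_eq, k_off), _ = curve_fit(recovery, t, intensity, p0=(0.5, 0.1),
                                 bounds=([0.0, 0.0], [1.0, np.inf]))
    return c_eq, k_off, np.log(2) / k_off
```

For orientation, the average MCC residence time of ~5 minutes reported in the Discussion corresponds, via t_1/2 = ln 2/k_off, to k_off ≈ ln 2/5 ≈ 0.14 min⁻¹.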
For this plasma-membrane/interior quantification, the image area containing the signal from the plasma membrane was manually marked using ImageJ, and the area enclosed by the plasma-membrane region was defined as the cell interior. To determine the ratio of the Can1-GFP fluorescence signal between MCC patches and the local MCP area from TIRF images at different time points during the progress of arginine-induced Can1 endocytosis (Fig. 6C), the MCC patches were determined by segmentation of the Can1-GFP TIRF images. The determined MCC patches were confirmed by comparison with Pil1-mCherry epifluorescence images. Based on the centroid positions of the MCC patches, the ratio was calculated with a custom-written MATLAB program. The MCC patch area was defined as a circle with a 3-pixel radius around the pixel containing the centroid position of the considered MCC patch. The local MCP area was defined as a circle of 6-pixel radius around the same central pixel, excluding the MCC patch area. ImageJ software (http://rsb.info.nih.gov/ij/) was used for general manipulation of images and movies.
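The quoted numbers can be reproduced with a few lines of arithmetic. Note that the scattering correction shown here (treating the apparent MCP signal of a marker assumed to be 100% MCC-localized as pure optical spillover) is one plausible reading of the correction described in the text rather than a documented formula; the final fraction formula is taken verbatim from the text:

```python
raw_ratio  = 1.37    # measured Can1-GFP MCC/MCP intensity ratio
pil1_ratio = 5.3     # Pil1 ratio, assumed to reflect 100% MCC localization
mcc_area   = 0.2042  # MCC fraction of the total cell area

# assumed spillover correction: subtract the MCP "background" implied by Pil1
corrected = 1.0 / (1.0 / raw_ratio - 1.0 / pil1_ratio)          # ~1.85
# fraction formula as stated in the text
fraction = corrected * mcc_area / (1.0 + corrected * mcc_area)  # ~0.27
print(round(corrected, 2), round(100 * fraction, 1))            # 1.85, ~27.4
```

The result, ~27–28% of the plasma-membrane Can1 pool in the MCC, matches the ~28% quoted above.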
8,388.8
2011-02-01T00:00:00.000
[ "Biology" ]
Residual Predictive Information Flow in the Tight Coupling Limit: Analytic Insights from a Minimalistic Model In a coupled system, predictive information flows from the causing to the caused variable. The amount of transferred predictive information can be quantified through the use of transfer entropy or, for Gaussian variables, equivalently via Granger causality. It is natural to expect, and has been repeatedly observed, that a tight coupling does not permit the reconstruction of a causal connection between causing and caused variables. Here, we show that for a model of interacting social groups, carried from the master equation to the Fokker–Planck level, a residual predictive information flow can remain for a pair of uni-directionally coupled variables even in the limit of infinite coupling strength. We trace this phenomenon back to the question of how the synchronizing force and the noise strength scale with the coupling strength. A simplified model description allows us to derive analytic expressions that fully elucidate the interplay between deterministic and stochastic model parts.

Introduction

In many scientific problems, causal interactions of different processes or process components are clear from a mechanistic understanding of the system dynamics and are reflected by the model formulation. In other situations, the interaction structure of a large and/or complex system, e.g., the assembly dynamics of a chemical system [1,2], of an ecological [3,4] or social community [5,6] or of neuronal populations [7,8], may not be clear in advance. Then, the empirical reconstruction of causal interactions from multivariate time series can be attempted using a suitable concept of causality. Since the seminal works of Wiener [9] and Granger [10], this reconstruction has been based on the rationale that a causal interaction can be inferred from an increased prediction error when excluding the causing process component from the prediction of the caused component. Operationalizing this idea leads to Granger causality (GC), a measure commonly used to compare causal interactions of different process components. The quantification of a predictive information flow is also behind the definition of transfer entropy [11], which quantifies the predictive information transfer from the causing to the caused component. As shown by Barnett et al. [12], Granger causality and transfer entropy are equivalent for Gaussian variables. From the outlined definition, it is plausible to expect that a stronger coupling will enhance the predictive information flow. On the other hand, it can be expected that an increasing coupling strength will eventually synchronize the caused with the causing variable. The approach to the synchronization manifold can be monitored by the synchronization index (phase-locking value) or, for a linear dynamics, by the cross-correlation coefficient. Once the dynamics is tied to the synchronization manifold, a reconstruction is no longer possible because in a synchronous mode of motion leader and follower cannot be told apart; early reports of this insight [13,14] were followed by repeated observations of the phenomenon in the context of various directionality indices and synchronization measures (e.g., [15][16][17][18][19]), which points to a generic phenomenon.
Combining the synchronization-triggered decline of directionality measures with the initial increase in the small-coupling regime immediately suggests a unimodal curve describing the relation between directionality and coupling strength, where any directionality index vanishes for zero (uncoupled) and infinite (tight-coupling limit) coupling strength. We note that, to the best of our knowledge, all reports of this phenomenon are solely based on numerical or empirical data and the subsequent estimation of directionality indices. Moreover, directionality indices are mostly constructed as differences (asymmetries) between directed coupling indices, thus quantifying a two-way net directionality. Quite often, estimators of the one-way directionality indices are plagued by a considerable bias and, in particular, non-zero values for uncoupled variables. When computing the net directionality index, this bias may (or may not) be removed by forming differences. Therefore, an analytic treatment of a one-way directionality index vs. coupling strength relation for a tractable but generic dynamics seems desirable. In the following, we address such a system dynamics that follows from a socio-dynamic model originally formulated in the framework of a master equation [20]. In the thermodynamic limit, the description is shifted to the level of a Fokker–Planck equation [21] with related drift and diffusion coefficients. The resulting expressions show that the coupling strength enters the diffusion matrix, which means that random fluctuations scale up together with increasing coupling strength. Our analytic derivation leads to the interesting result that even in the tight-coupling limit a residual directionality remains. Since the resultant dynamics can effectively be mapped to a vector-autoregressive process (a linear stochastic process with Gaussian variables), the residual Granger causality can be interpreted as a residual predictive information flow remaining for tight coupling. Although we use a specific model at the outset of our presentation, we argue that the finding of a residual GC is a generic effect.

Granger Causality in a Nutshell

To set the stage, we briefly recall how GC is defined in the framework of a (two-dimensional) vector-autoregressive process VAR[∞] (of order infinity). In this case, two time series are fitted by the full model $X_{1,t} = \sum_{j\ge 1} a_j X_{1,t-j} + \sum_{j\ge 1} b_j X_{2,t-j} + \epsilon_t$, $X_{2,t} = \sum_{j\ge 1} c_j X_{1,t-j} + \sum_{j\ge 1} d_j X_{2,t-j} + \eta_t$, with white-noise covariance matrix $\Sigma$. In describing stable (non-divergent) time series by a VAR, the dynamical coefficients $a_j$, $b_j$, $c_j$ and $d_j$ will obey a stationarity condition [22]. The prediction error of $X_2$, using both its own past and the past of $X_1$, is quantified by $\Sigma_{22}$, the variance of $\eta_t$. Predicting $X_2$ only from its own past means using a reduced, univariate autoregressive process description for the second time series, $X_{2,t} = \sum_{j\ge 1} \tilde d_j X_{2,t-j} + \tilde\eta_t$. In passing, we note that the parameters of this reduced model are not fitted independently but derived from Equation (1), for example through a spectral factorization [23]. Moreover, based on a state-space formulation of GC, a closed form involving partial covariances can be obtained for finite-order vector-autoregressive models [24]. Now, the prediction error of $X_2$ without inclusion of $X_1$ is quantified by $\Xi_2$, the variance of $\tilde\eta_t$. Since in this step no information is added (but rather removed), $\Xi_2$ will never be smaller than $\Sigma_{22}$, guaranteeing that the Granger causality (from $X_1 \to X_2$), defined as the quantity $\mathrm{GC} = \log(\Xi_2/\Sigma_{22})$, is non-negative. VAR processes belong to the model class of time-discrete linear stochastic processes.
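To make the definition concrete, a minimal numerical estimator is sketched below. For simplicity it fits the reduced model independently by least squares, which, as noted above, is not strictly the correct procedure and can introduce bias, but it suffices for illustration; the finite lag order p is an arbitrary choice standing in for the VAR[∞]:

```python
import numpy as np

def var_residual_var(y, regressors):
    """OLS residual variance of y given stacked regressor columns."""
    beta, *_ = np.linalg.lstsq(regressors, y, rcond=None)
    return (y - regressors @ beta).var()

def granger_causality(x1, x2, p=5):
    """GC(X1 -> X2) = log(Xi_2 / Sigma_22), using p past lags of each series."""
    past = lambda x, k: x[p - k:len(x) - k]          # lag-k values aligned with y
    y = x2[p:]
    own = np.column_stack([past(x2, k) for k in range(1, p + 1)])
    full = np.column_stack([own] + [past(x1, k) for k in range(1, p + 1)])
    return np.log(var_residual_var(y, own) / var_residual_var(y, full))
```

By construction the estimate is non-negative up to sampling fluctuations, mirroring the inequality Ξ₂ ≥ Σ₂₂ above.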
In a recent work, we extended the concept of GC to diffusion processes, which constitute a special class of time-continuous stochastic processes. Diffusion processes can be formulated either via a Langevin-type stochastic differential equation system or via the related Fokker–Planck equation (FPE) [25]. In the framework of a local linearization of the dynamics and a stroboscopic map (i.e., the temporal integration of the dynamic flow over a time increment ∆t), this allows establishing a connection between the drift and diffusion coefficients of the FPE and a local GC. Averaging the local GC with respect to the invariant measure yields a global GC for the original diffusion process. For details, we refer the reader to our recent publication [26].

Weidlich's Socio-Dynamic Model

Here, we apply this connection to a stochastic socio-dynamics model proposed by Weidlich [20] to describe opinion formation in a system with two groups, leaders (Group 1, comprising n₁ members) and followers (Group 2, with n₂ members). Individuals of each group i = 1, 2 can adopt one of two opinions j = 1, 2. Opinion diversity across the community is coded by variables n_{i,j} specifying the number of individuals of group i sharing opinion j. Since it is assumed that everybody has an opinion, n_{i,1} + n_{i,2} = n_i. Eventually, the system is considered in the "thermodynamic limit" n₁, n₂ → ∞ with the ratio ε = n₁/n₂ kept fixed, which suggests a change to the quasi-continuous variables y_{i,j} = n_{i,j}/n_i. Members of group i change their opinion from j to j′ with a transition rate p_{j→j′} that may depend on the present opinion configuration {y_{α,β}} (with α, β = 1, 2). In line with Weidlich, we employ transition rates with a straightforward interpretation: without coupling (µ = 0), leaders and followers have a diametrical preference for one opinion over the alternative one; while leaders flip opinion without reference to the population of followers (y_{2,j}), with increasing coupling followers enhance the rate of switching their opinion to the opinion populated by leaders (y_{1,j}). This means that followers are uni-directionally forced by leaders, with the coupling strength scaled by the parameter µ. These rates enter a master equation describing the evolution of a statistical ensemble of interacting social groups. After another change of variables, x_i = y_{i,2} − y_{i,1}, reflecting the directed imbalance and opinion diversity within group i, the related Fokker–Planck equation can be written down [20]. It is completely specified by its drift vector (Equation (6)) and its diffusion matrix (Equation (7)). Notice that the diffusion matrix is positive definite since −1 ≤ x₁, x₂ ≤ 1. The inverse scaling with n₁ and n₂, respectively, means that diffusion vanishes in the "thermodynamic limit" n₁, n₂ → ∞ (with ε = n₁/n₂ fixed). Moreover, since the parameter p (controlling the intrinsic opinion flip rate) scales both the drift vector and the diffusion matrix in the same way, it may well be absorbed in a reparameterization of time. Finally, we point out that in the step from the master equation to the FPE the coupling parameter µ does not only enter the drift vector but also the diffusion matrix.

Simulation Results

Being equipped with the drift vector and diffusion matrix, we can compute the GC of the socio-dynamic diffusion process (as outlined in [26]) for various values of the coupling parameter µ.
This means that we compute GCs locally for subsampled vector Ornstein–Uhlenbeck processes with local additive noise; the global GC is then obtained as the statistical average of the local GCs with respect to a stationary distribution estimated from the simulation (1,000,000 data points for each µ value). Additionally, we compute the GC of a global vector Ornstein–Uhlenbeck process with constant, average noise. The results are plotted with circles and crosses, respectively, in Figure 1.

Figure 1 legend: circles, GC of the socio-dynamic diffusion process (drift and diffusion from Equations (6) and (7), respectively); crosses, GC of the global VAR(1) (Equation (8)) with average noise; solid line, analytic GC (Equation (20)) of the minimalistic model.

As expected, for µ = 0 the GC is zero, because in this uncoupled situation inclusion of x₁ cannot reduce the prediction error of x₂. With increasing coupling parameter µ, GC initially increases up to a maximum around µ = 3 before it decreases again. Surprisingly, we find that a linearized dynamics with state-space-averaged additive noise (notice that the only non-linearity, ∼x₁x₂, resides in D₂₂) gives rise to the same values (plotted in Figure 1 with crosses). Therefore, the non-monotonic characteristic of GC requires neither a non-linear dynamics nor multiplicative noise.

Analytic Description

To explain the non-monotonic behavior of the GC, we connect the diffusion process specified by Equations (6) and (7) with a VAR(1), i.e., a vector-autoregressive process of first order. The general connection is elaborated in [27] and proceeds via the VAR(1) written as $x_{t+\Delta t} = A\,x_t + \theta_t$ (Equation (8)), where $A = e^{\Gamma \Delta t}$ and $\theta_t$ is a two-component vector of zero-mean Gaussian white-noise residuals with a covariance matrix Σ given by the matrix integral $\Sigma = \int_0^{\Delta t} e^{\Gamma s}\, D\, e^{\Gamma^{\top} s}\, ds$ (Equation (10)), where Γ is the Jacobian matrix of the diffusion process, i.e., of its drift part, and D is its diffusion matrix. The above connection is derived for the standard vector Ornstein–Uhlenbeck process with additive noise, which corresponds to a diffusion matrix $D^{(2)}_{ij}$ that is constant in state space. By contrast, in Weidlich's model we see that the diffusion coefficients $D^{(2)}_{ij}$ (Equation (7)) vary with x₁ and x₂. Since the gradients scale with the inverse of n₁ and n₂, respectively, for rather large n₁, n₂ the noise is weakly multiplicative [26]. Therefore, we may consider Σ(x₁, x₂) as a local covariance matrix. A rigorous derivation would now strive to compute local GC maps and perform a subsequent average over state space (with respect to the stationary probability distribution, essentially a bivariate Gaussian). This route seems feasible, but the resultant expressions appear far from handy. Therefore, we approximate the state-dependent diffusion coefficients by substituting mean values for the state variables, i.e., $D^{(2)}_{ij}(\langle x_1\rangle, \langle x_2\rangle)$ (cf. Equation (7)). The mean values can be read off from Equation (6), ⟨x₁⟩ = 1/2 together with the corresponding value for ⟨x₂⟩, with the consequence that we approximate the diffusion coefficients by the constants σ₁² and σ₂²(µ). In line with these simplifications, we now consider the case of a VAR(1) with the parameterization of Equation (14), where the parameters a, b, c are to be identified with the entries in Equation (9) and where we have used the abbreviation h = p∆t. The covariance matrix of residuals is obtained by inserting σ₁² and σ₂²(µ) of Equation (13) into Equation (10). The integration can be performed analytically and leads to explicit expressions. With $\eta := \frac{n_1/n_2}{2(1-e^{-2h})}$ being small (since n₁ ≪ n₂ and h ≪ 1), this matrix has eigenvalues (Equation (18)) which show that even in the tight-coupling limit the covariance ellipsoid of residuals possesses two finite semi-axes.
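The mechanism can also be seen numerically. The following sketch sweeps the coupling for a unidirectional VAR(1) with ad hoc scalings: the coupling term saturates, the driven component relaxes faster with µ, and the driven noise floor rises toward a level roughly two orders of magnitude above the driver's. These scalings are chosen only to mimic the competition described above; they are not Equations (13)-(16) of the model:

```python
import numpy as np

rng = np.random.default_rng(1)

def gc_lag1(x1, x2):
    """GC(X1 -> X2) with one past lag, from OLS residual variances."""
    y, own, drv = x2[1:], x2[:-1], x1[:-1]
    def rvar(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return (y - X @ beta).var()
    return np.log(rvar(own[:, None]) / rvar(np.column_stack([own, drv])))

h, n = 0.1, 100_000
a, s11 = np.exp(-h), 0.09                         # driver: AR coefficient, noise variance
for mu in (0.0, 0.3, 1.0, 3.0, 10.0, 1000.0):
    b = np.exp(-(1.0 + mu) * h)                   # driven component relaxes faster
    c = a * (1.0 - np.exp(-mu))                   # coupling term saturating toward a
    s22 = s11 * (1.0 + 100.0 * mu / (mu + 50.0))  # noise floor -> ~100x the driver level
    x1, x2 = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x1[t] = a * x1[t-1] + np.sqrt(s11) * rng.normal()
        x2[t] = c * x1[t-1] + b * x2[t-1] + np.sqrt(s22) * rng.normal()
    print(f"mu = {mu:7.1f}   GC = {gc_lag1(x1, x2):.3f}")
```

With these (illustrative) choices the estimated GC rises from zero, passes through a maximum at intermediate µ, and settles at a small but nonzero value, qualitatively matching Figure 1; the precise shape depends entirely on the assumed scalings.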
Equation (14) defines a VAR(1) that was recently considered by Barnett and Seth as a minimal model allowing the derivation of an analytic expression for GC; their formula can be found in [28] and was restricted to the special case that Σ is the identity matrix, i.e., θ_{1,t} and θ_{2,t} are uncorrelated (Σ₁₂ = Σ₂₁ = 0) and both have unit variance (Σ₁₁ = Σ₂₂ = 1). Relaxing these restrictions and following the same steps outlined by Barnett and Seth, we can derive an explicit formula for GC (Equation (20)) that is structurally identical to Barnett's formula, even for the most general bivariate VAR(1) process in Equation (8) with bidirectional coupling (A₁₂, A₂₁ ≠ 0). Through Equations (20) and (21), in conjunction with Equation (15), we have everything at hand to evaluate the dependence of GC on the coupling parameter µ. Note that the GC does not depend on the parameter a and, for a fixed covariance matrix, is a monotonically increasing function of c, growing to infinity; indeed, in a simple autoregressive model, the parameter c would naturally be considered (a monotonous function of) the coupling strength µ. However, in our example, the dependence of c on the coupling parameter µ is more complicated, as µ also affects the covariance matrix. Moreover, the dependence is such that there is a delicate balance, for infinite coupling strength µ, between the synchronizing influence of the driver and the diffusion term, leading to the finite residual GC. The result is plotted in Figure 1 (solid line) and shows that the analytic formula perfectly matches the numerical values (plotted with circles and crosses) extracted from the simulation. The matching of our analytic results, based on Equation (11), and the global GC is surprising but fortunate. The coincidence of the global GC and the average-noise GC, on the other hand, is not surprising since, for weakly multiplicative noise, the latter is in line with our analytic approach. The analytic formula allows considering the asymptotic limit µ → ∞, which is obtained by inserting the limiting values of the parameters into Barnett's formula (Equation (20)). Interestingly, n₁ and n₂ again enter this expression only through the ratio ε = n₁/n₂. For the chosen parameters, the resulting asymptotic value lim_{µ→∞} GC is approximately 0.027, which is larger than zero. As the scaling ∼η indicates, the asymptotic residual GC is a consequence of the residual variance in the direction of the second semi-axis of the covariance ellipsoid in Equation (18). Residual fluctuations remaining in the tight-coupling limit enable an improvement of prediction. These residual fluctuations reflect the fact that, even for infinite coupling strength, synchronization is not complete. To show this, we compute the cross-correlation function by solving the Yule–Walker equations [22]. As can be seen in Figure 2, the maxima of the curves rise with increasing coupling strength while the related lag approaches zero (in discrete steps), reflecting the instantaneous response of the caused variable. However, our analytic treatment reveals that even in the tight-coupling limit µ → ∞ the maximal cross-correlation is 0.995 and not one; the reason is, again, the balance between the synchronizing force (exponential relaxation) and the increasing noise strength σ₂²(µ). We note that this value is not plagued by estimation errors. The analytic approach also puts us in the position to sort out how the different system properties contribute to the formation of a maximum in the GC.
In particular, we can investigate the effect of coupling-dependent diffusivity by replacing σ₂²(µ) (cf. Equation (13)) with the constant value σ₂²(µ = 0) = σ₁². Through these investigations, we arrive at the following conclusions:
• For uncoupled systems (µ = 0), GC is zero because c = 0.
• The initial increase of GC is caused by the exponential increase of c towards b with increasing coupling µ (cf. Equation (15)). The residual variance of the process component X₂ is reduced via information transferred from the driver X₁.
• Simultaneously with the information transferred from the driver, intrinsic fluctuations, modeled by the increasing diffusivity σ₂²(µ), scale up (asymptotically ∼µ). Nevertheless, the residual variance Σ₂₂ approaches a constant because tighter coupling to X₁ (as expressed by the denominator µ + 1 in Equation (16)) compensates the increasing intrinsic noise. In the limit µ → ∞, the residual variance Σ₂₂ is the residual variance Σ₁₁ of the driver plus an extra term resulting from the balance between intrinsic noise and tight coupling, i.e., $\lim_{\mu\to\infty}\Sigma_{22} = \left(1 + \frac{n_1/n_2}{1-e^{-2h}}\right)\Sigma_{11}$.
• From the imbalance of diffusivities follows the ratio Σ₂₂/Σ₁₁, which means that for ε = 0.01 the residual variance Σ₂₂ increases roughly by a factor of 100. As can be read from Equation (21), this tends to reduce c² with increasing µ.
From the opposing trends (c² increases with µ, while 1/Σ₂₂ decreases with increasing µ), the peak results as the point where the dominance of the two trends changes. We note that further analytic expressions can be derived, since we have all the ingredients at hand; however, the explicit formulas are quite lengthy and do not provide deeper insights. In all cases considered, we found that √det Σ varies only mildly with µ and that the variation of b² is of minor importance for Barnett's formula (Equation (20)).

Discussion

The increase of GC with µ is a natural consequence of enhanced information transfer with tighter coupling to the driver. The asymptotic decrease and the formation of a maximum, on the other hand, follow from the fact that the caused variable X₂ asymptotically adopts a residual variance that is two orders of magnitude larger than the driver's, because n₂/n₁ = ε⁻¹ = 100, meaning that "every leader has one hundred followers". This allows predicting that no such maximum can be found if n₁ ≈ n₂ or for n₁ > n₂, i.e., if the leader group is of comparable or even larger size than the follower group, a statement that we can confirm through our analytic analysis. It is interesting to note that the asymptotic residual GC, i.e., an improvement of prediction in the tight-coupling limit, remains even in the thermodynamic limit n₁, n₂ → ∞, i.e., when the diffusion constants approach zero, provided ε = n₁/n₂ > 0. This results from the scaling behavior of the variance Σ₂₂. The question whether the found residual GC is a property of the specific socio-dynamic model or a generic property of a larger class of systems can be answered in the following way: our results follow from a minimalistic linear system (a vector Ornstein–Uhlenbeck process or VAR(1)), which means it is not a fancy effect of non-linear dynamics. In this setup, the residual GC results from the linear asymptotic scaling behavior of σ₂²(µ) (Equation (13)), which balances the µ in the denominator of the first term in Equation (16), resulting from integrating the exponentials (in Equation (10)) and reflecting the faster relaxation with increasing coupling strength.
The question as to which other systems might give rise to a residual GC thus leads back to the linear scaling of the diffusion coefficient $D^{(2)}_{22}(\mu)$. As the derivation in Weidlich's original paper [20] shows, this is a consequence of the ansatz of coupling the groups via a coupling parameter that enters the flipping rates (Equation (5)) additively and linearly. Therefore, we conclude that a residual GC is not restricted to the specific details of Weidlich's model but is a quite generic aspect of any stochastic dynamics described by a similar master equation. The observed non-monotonic behavior of GC with coupling strength does not rely on non-linear dynamics or multiplicative noise. A peak of the information flow, as well as a finite value of the information flow at large couplings, was also reported in [29]. However, it is interesting to consider how our insights can be transferred to a general diffusion process. Our conclusions drawn from the minimalistic system (see Equation (14) with Equation (15)), i.e., a linear dynamics with additive noise, can be carried over to the class of nonlinear diffusion processes with weakly multiplicative noise in the framework of a local linearization generating a GC map in state space. Subsequent averaging with respect to the invariant measure on the attractor leads to the global GC of the system, a concept that we put forward in a recent publication [26]. In this way, the global GC vs. coupling relation becomes computable, and the resultant curve will depend on the details of the considered system. The question whether a non-monotonic GC vs. coupling relation can result for a system with diffusion constants that do not vary with coupling strength remains open and will be dealt with in a forthcoming publication. Finally, one may ask whether the non-monotonic GC curve plays any role in natural systems. We speculate that coupling-enhanced predictability might be useful for the performance of cognitive functions and that certain brain regions are tuned to an optimal synaptic coupling. In this context, a scale-up of effects through the coupling of large neuronal ensembles can also be imagined.

Summary

We analytically investigated the non-monotonic behavior of Granger causality, defined on a socio-dynamic model of leaders and followers, with respect to coupling strength. Our derivations substantiate frequent observations of this phenomenon for various directionality indices and agree with the common-sense expectation that synchronization impedes a reconstruction of causal relationships. To make this point, it was necessary to extend the Granger causality of Barnett's minimal model to a general vector-autoregressive process of order one. Fortunately, we were able to demonstrate that a tight coupling in the master equation can map not only to the dynamics but also to the noise, and therefore does not automatically imply synchronization and a vanishing Granger causality.
5,381.6
2019-10-01T00:00:00.000
[ "Physics" ]
Conformal Invariance of (0,2) Sigma Models on Calabi-Yau Manifolds Long ago, Nemeschansky and Sen demonstrated that the Ricci-flat metric on a Calabi-Yau manifold could be corrected, order by order in perturbation theory, to produce a conformally invariant (2,2) nonlinear sigma model. Here we extend this result to (0,2) sigma models for stable holomorphic vector bundles over Calabi-Yaus.

1 Introduction

The study of strings propagating on Calabi-Yau backgrounds has a long and rich history. While it was known from early on that Ricci-flat Kähler manifolds provided supersymmetric solutions of the classical supergravity equations, there was a period of confusion regarding their fate once α′-corrections were taken into account. On the one hand, [1] seemed to prove that the Ricci-flat metric of a Calabi-Yau led to a finite nonlinear sigma model (NLSM), with (2,2) supersymmetry, to all orders in worldsheet perturbation theory. However, explicit computation of the (2,2) NLSM beta function showed that it had a non-zero contribution at four-loop order [2]. The resolution of this seeming paradox was presented in [3], and boiled down to the freedom of adding finite local counterterms at each order in perturbation theory. In more detail, the authors of [3] showed that one could add corrections to the Ricci-flat metric at each loop order, so that the full, all-order beta function is satisfied. This can be interpreted as a (non-local) field redefinition of the metric, or as a modification of the subtraction scheme, so that Ricci-flatness amounts to finiteness to all orders. The discussion of the previous paragraph was entirely in the context of (2,2) NLSMs on Calabi-Yau backgrounds, but it is very natural to ask how it carries over to the heterotic setting with only (0,2) supersymmetry. In fact, given the intense focus on phenomenological heterotic models at the time of [3], one easily wonders why such an investigation did not occur decades ago. One likely explanation³ is that around the same time it was shown that generic (0,2) models suffered from non-perturbative instabilities [4], and so investigating their perturbative renormalization may have seemed inconsequential.⁴ However, we now know that the superpotential contributions from individual worldsheet instantons can actually sum to zero in many (0,2) models [7][8][9],⁵ thereby eliminating those instabilities. While this provides renewed motivation for studying the issue of finiteness of (0,2) models on Calabi-Yau backgrounds⁶, during the interim powerful spacetime arguments emerged that answered the question in the affirmative to all orders in perturbation theory [13]. In this note, we revisit this question and confirm the expected result directly using worldsheet techniques. In section 2 we review (0,2) models and their beta functions, and set up our conventions. Section 3 is the main body of this work, where we prove our claim. We summarize our results and outline some open questions in section 4.

³ We thank Jacques Distler for explaining some of this historical context and providing his insights.
⁴ Investigations early on, however, did not reveal any obstructions at the first few orders [5,6].
⁵ See, however, [10] for some important limitations on the extent of these claims.
⁶ There has also been renewed interest in heterotic geometry from the target space perspective, and the corresponding moduli problem; see e.g. [11,12].
2 Review of the (0, 2) NLSM and beta functionals

2.1 (0, 2) superspace and superfields

We will work in Euclidean (0, 2) superspace with coordinates (z, z̄, θ, θ̄). We assign θ a U(1)_R charge q_R = +1 and θ̄ a charge q_R = −1, and take the fermionic integration measure such that $\int d^2\theta\,\theta\bar\theta = 1$. The covariant derivatives and supercharges take the standard form, with the non-trivial anti-commutators pairing D with D̄ (and Q with Q̄) into the anti-holomorphic translation $\partial_{\bar z}$. Chiral fields are annihilated by D̄; the scalar chiral field Φ and the Fermi chiral field Γ have the component fields (φ, ψ) and (γ, F), respectively. Anti-chiral fields, on the other hand, are annihilated by D, and they have the corresponding conjugate component expansions. We assign q_R = 0 to all the (lowest components of the) chiral fields. To simplify the details, we will demand the existence of a non-anomalous global U(1)_L symmetry, under which Φ, Φ̄, Γ, Γ̄ have charges q_L = 0, 0, +1, −1, respectively. It is straightforward to generalize to cases without the U(1)_L symmetry.

2.2 The classical (0, 2) NLSM

The most general, renormalizable, U(1)_R × U(1)_L invariant, (0, 2) supersymmetric action is a superspace functional (Equation (2.6)) built from two couplings. The (1, 0)-form K = K_i dφ^i (with complex conjugate K* = K_ī dφ^ī) is the (0, 2) analog of the (2, 2) Kähler potential, and it is only defined up to shifts by holomorphic (1, 0)-forms. The action is also invariant under K → K + i∂f, for any real-valued function f. Holomorphic redefinitions of the Fermi fields, Γ^a → M^a_b(Φ)Γ^b, imply that the Hermitian metric h_{ab̄} is only defined up to transformations of the form h → M†hM.⁷ In particular, rescaling h_{ab̄} by a constant factor leads to an equivalent theory; this will be relevant at a later stage. Expanding the supersymmetric sigma-model action (2.6) in components,⁸ we obtain a component action whose couplings are determined by K_i and h_{ab̄}: the metric g and B-field are given by first derivatives of K (along with their complex conjugates, the remaining components vanishing), and the connection Γ− is defined by Γ± = Γ ± ½H, where Γ is the usual Christoffel connection for the metric g and H = dB is the tree-level torsion.

⁷ Non-Hermitian metrics, with components h_ab and h_āb̄, are forbidden by the U(1)_L symmetry.
⁸ The auxiliary fields appear in the action as $L_{\rm aux} = (F^a + A^a{}_{ib}\psi^i\gamma^b)\,h_{a\bar b}\,(\bar F^{\bar b} - \bar A^{\bar b}{}_{\bar\jmath\bar e}\bar\psi^{\bar\jmath}\bar\gamma^{\bar e})$, which vanishes on shell.

Geometrically, the sigma-model action describes maps of a two-dimensional worldsheet Σ into a target manifold M, equipped with a metric g and a B-field. (0, 2) supersymmetry guarantees that M is a complex manifold [14,15], with fundamental form $\omega = i g_{i\bar\jmath}\, d\phi^i \wedge d\phi^{\bar\jmath}$, related to H by the familiar relation $H = i(\partial - \bar\partial)\omega$. The right-moving fermions, ψ, transform as sections of (the pullback of) T_M, coupled to the H-twisted connection Γ−. The left-moving fermions, γ, transform as sections of a (stable) holomorphic bundle E → M, where E is equipped with a Hermitian metric h_{αβ̄} and a holomorphic connection A_i with curvature F_ij̄. To simplify our analysis we will assume that E is stable, as opposed to just poly-stable or semi-stable. The fact that U(1)_L is anomaly-free translates into the statement that c₁(E) = 0. Similarly, if U(1)_R is anomaly-free then c₁(M) = 0 and the theory flows to a non-trivial (0, 2) SCFT in the infrared.

2.3 Beta functionals of the (0, 2) NLSM

The one-loop beta functionals for (0, 2) NLSMs were derived in [16] by demanding (0, 2) super-Weyl invariance of the quantum effective action.
Crucially, they found three beta functions for the theory, which are in one-to-one correspondence with the spacetime BPS equations of the heterotic string. Given the relation between (0, 2) worldsheet and N = 1 spacetime supersymmetries, it is rather natural that manifestly (0, 2) supersymmetric beta functions are related to the spacetime BPS equations. We will restrict ourselves to flat worldsheets, Σ = C, and therefore will not be sensitive to the coupling of the dilaton or its corresponding beta functional. In other words, we will only study the conditions for finiteness of the SCFTs that the NLSMs flow to, rather than tackle the more involved question of (super-)Weyl invariance of the full heterotic-string worldsheet. Therefore, the (0, 2) NLSMs we consider are determined by only two coupling functionals, K_i and h_{ab̄}, each with its corresponding beta functional β^K_i and β^h_{ab̄}. Let us write β^(1) for the one-loop contribution to the beta functionals and ∆β for the sum of all higher-loop contributions: β = β^(1) + ∆β.⁹ We will compare our approach to [16] further in what follows. For our starting point, we will take the one-loop NLSM beta functionals to be given by Equations (2.13) and (2.14), where c and c′ are some known constants whose precise values we do not require. Our β^h coincides with that of [16], and its vanishing clearly implies the spacetime gaugino equation (2.12). Our β^K is a little more subtle, because it corresponds to a linear combination of the two remaining beta functionals of [16].¹⁰ One member of that pair (Equation (2.15)) involves the dilaton field ϕ, and it is equivalent to the dilatino equation (2.11). Since our interest is in (0, 2) SCFTs defined on the plane, it should come as no surprise that we are not sensitive to (2.15). Instead, we will treat (2.15) as a constraint that defines ϕ for our models (at least at leading order in α′). We remark that (2.15) can always be solved locally, and so it is indeed a valid definition of ϕ in a CFT. If we wish to promote such a solution to string theory, we must further ensure that ϕ is globally defined, and this may obstruct such a lift.¹¹ Fortunately, when including worldsheet supergravity the field ϕ is intrinsically defined by its coupling to the worldsheet curvature, and this issue is avoided. Using the constraint (2.15), it is straightforward to show that our choice of β^K reproduces the correct beta functionals for the physical couplings g and B. We take this result as support of our starting point (2.13)-(2.14). We now wish to state the (0, 2) generalization of the key lemma from [3].

Lemma 2.18. Let E → M be a stable holomorphic vector bundle with data (g̃, B̃, Ã), derived from K̃ and h̃, with vanishing one-loop beta functionals. Then, there exist corrected couplings K = K̃ − δK and h = h̃ − δh, with corresponding data (g, B, A), such that the conditions (2.19) and (2.20) hold. In particular, the full beta functionals for (g, B, A) can be made to vanish.

⁹ While the ∆β are scheme dependent, once we fix a renormalization scheme their expressions are unique.
¹⁰ It bears pointing out that iβ_i is the induced connection on the canonical bundle [17]. When Γ−_i is flat, then ∇^(−) has SU(n) holonomy, as required by the gravitino equation (2.10).
¹¹ For example, this occurs for torsional NLSMs on S³ × S¹. See Appendix C of [18] for more details.

3 Proving the lemma

3.1 Restriction on H

The validity of a perturbative loop/α′ expansion for a general (0, 2) NLSM is doubtful at best. It has been shown by many authors that the existence of non-vanishing H-flux in the tree-level action leads to string-scale cycles in the geometry, and therefore a breakdown of a large-volume expansion [19][20][21][22][23].
Therefore, we will content ourselves with the more conservative goal of proving the above lemma in cases where H vanishes classically. More precisely, we will assume the validity of α′ perturbation theory, where the α′ → 0 limit is smooth¹² and H → 0 in this limit. In such situations, it is well known that M is Calabi-Yau to leading order in α′. However, except in the case of the standard embedding with A = Γ, H-flux will be generated by loop effects such as the Bianchi identity, and (2.9) will ensure that the full metric, g, is no longer Kähler.

3.2 Proving the lemma locally

In the simplified setting where H = 0 classically, it is straightforward to prove the lemma. The general approach will be within the framework of α′ perturbation theory. We will first show that the lemma holds to lowest order in α′, and then extend this to higher orders inductively by using the results of the earlier steps. In this way, we will prove the lemma to all orders in α′. We will first tackle β^K. First, note that (2.9) allows us to write H in terms of derivatives of the metric. Next, we can rewrite (2.19) as Equation (3.3), where we have expanded g = g̃ − δg and used the fact that g̃ is Kähler. As in [3], this equation may be solved iteratively for δK, order by order in α′, in terms of the input data c⁻¹∆β^K.¹³ To lowest order, this equation is simply (3.4), where we defined δB_ij = ∂_[j δK_i]. We can see that the right-hand side is the sum of ∂-exact and co-exact pieces, where ∂† = ∗∂∗ is the adjoint of ∂. Because the target space M is simply connected, the Hodge decomposition theorem tells us that every 1-form has such a decomposition. In particular, the left-hand side can be expressed in terms of some scalar function f and a (2, 0)-form χ (Equations (3.5) and (3.6)); the factor of 2 appearing there is merely used for convenience. Note that f is only defined up to the addition of a constant, while χ is only well defined modulo a co-closed 2-form γ. Applying the Hodge decomposition once again to γ, we see that γ must be co-exact, and so χ ∼ χ + ∂†ρ for ρ a (3, 0)-form. Clearly, the equations (3.5) and (3.6) are equivalent to the pair $\partial^\dagger(\delta K) = 2f$ and $\partial(\delta K) = \chi$ (Equation (3.7)), where again f ∼ f + const. and χ ∼ χ + ∂†ρ. On a simply connected manifold, any 1-form is completely determined by its divergence and exterior derivative; therefore δK is in principle fixed to lowest order. To see this in detail, we first decompose δK itself as $\delta K = \partial k + \partial^\dagger \kappa$, with k a scalar and κ a (2, 0)-form. We can assume k is a real scalar function, since any imaginary component only adds a total derivative to the action, as noted below (2.6). Then we have $\partial^\dagger(\delta K) = \partial^\dagger\partial k = \Delta_\partial k$, because g̃ is Kähler and on a Kähler manifold ∆_∂ = ∆_∂̄ = ½∆_d. Here we have introduced the Laplace–Beltrami operator ∆_∂ = ∂∂† + ∂†∂, with similar expressions for the other Laplacians. Next, we can use the ambiguity κ ∼ κ + ∂†α to remove the co-exact piece of κ from its Hodge decomposition, ensuring that κ is ∂-exact and therefore closed. In other words, $\partial(\delta K) = \partial\partial^\dagger\kappa = \Delta_\partial\kappa$, and so (3.7) can be written as $\Delta_\partial k = 2f$ and $\Delta_\partial\kappa = \chi$. These equations can always be solved locally by inverting the Laplacian, provided f and χ contain no zero-modes of ∆_∂.

¹² This rules out the only known class of truly torsional backgrounds, based on the total space T² → K3, discovered in [24], along with their related generalizations.
¹³ For instance, the second-order correction will depend on the tree-level data and quadratically on the one-loop correction.
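Making the inversion explicit, a schematic restatement under the decomposition just described (with $\Delta_\partial^{-1}$ denoting the Green operator on the orthogonal complement of the harmonic modes) reads:

$$
k \;=\; 2\,\Delta_\partial^{-1} f\,, \qquad \kappa \;=\; \Delta_\partial^{-1}\chi\,, \qquad \delta K \;=\; \partial k + \partial^{\dagger}\kappa\,,
$$

so that each order of the α′ expansion is obtained by feeding the previous order's corrected data back into f and χ.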
For χ this is trivial, as there are no harmonic (2, 0)-forms on M, while for the function f we can use the ambiguity f ∼ f + const. to ensure that f has no constant term. We will discuss the existence of global solutions in the following section. Thus we obtain the lowest-order solution for δK, with f and χ determined from the input data c⁻¹∆β^K via (3.6). Plugging this back into (3.3), we can iteratively solve for δK order by order in the expansion. Turning now to β^h, in a similar fashion we rewrite (2.20) as Equation (3.12). To lowest order in δh and δg, this can be written as Equation (3.13), where D²_A is the gauge-covariant Laplacian, and we have used $\beta^{h(1)}_{a\bar b}(\tilde g, \tilde A) = \tilde g^{i\bar\jmath}\tilde F_{i\bar\jmath a\bar b} = 0$. Since we have assumed the bundle E is stable, we have dim H⁰(End E) = 1 (see, for example, Cor. 1.2.8 of [25]), so D²_A has a unique zero-mode, corresponding to the uniform rescaling of h_{ab̄}. However, as mentioned in section 2.2, such a scaling of the bundle metric can be absorbed into the Fermi fields. So, up to this unphysical ambiguity, we can solve (3.13) (locally) to find δh uniquely in terms of the lowest-order solution for δg from the previous paragraphs and the input ∆β^h. As before, the lowest-order solution can be plugged back into (3.12) to get an iterative solution for δh. In summary, we have demonstrated how to construct local, perturbative solutions for δK and δh, order by order in the α′/loop expansion, in terms of ∆β^K and ∆β^h. In the following section we will show that these local solutions actually patch together into well-defined global ones. This will complete the proof that the corrected sigma-model data, K = K̃ − δK and h = h̃ − δh, have vanishing beta functionals.

3.3 Proving the lemma globally

With the existence of a local solution to the set of equations defining K_i and h_{ab̄} established, we will now argue that this solution is globally defined. The key point to notice is that the solutions for δK_i and δh_{ab̄} are determined entirely in terms of the higher-loop beta functionals ∆β^K and ∆β^h. We will now show that the ∆β are globally defined, so that δK and δh are as well. The arguments presented in this section closely parallel the original ones of [3] for the simpler (2, 2) case. We will begin with ∆β^K. First, recall that β^g must have a covariant form, since it provides the metric equations of motion, and so defines a global tensor on M. On the other hand, β^K itself need not be globally defined, as a shift by a holomorphic one-form is allowed. So when we go from one patch to another, ∆β^K must satisfy a matching condition of the form (3.15), for a set of (anti-)holomorphic functions f_i and g. Furthermore, we must consider the possibility of gauge transformations between the patches. Since β^g_{ij̄} is a globally defined singlet, ∂_(j̄ ∆β^K_i) must also be a globally defined singlet. So the relationship between the two patches under a gauge transformation takes the same form, for some other (anti-)holomorphic functions f̃ and g̃. Let us focus on ∆β^K_i, though similar arguments apply for ∆β^K_ī. On general grounds, we expect ∆β^K to be expressible as a product of derivatives of K_i, K_ī and h_{ab̄}, with indices contracted by the metrics g_ij̄ and h_{ab̄}. There are four classes of factors that could make up our beta function. To simplify our analysis, we will make one further reasonable assumption: the B-field enters the beta functions only through the invariant three-form H. Although the B-field and the connections have anomalous transformation properties, because of the Green-Schwarz mechanism these effects will cancel out of the full beta functions.
Thus we will not need to examine anomalous transformations of the above factors, and we can restrict our attention to conventional diffeomorphisms and gauge transformations only. When we consider a spacetime transformation between patches of one of the factors listed above, two types of terms can arise. We will follow [3] and refer to these types as homogeneous and inhomogeneous. Homogeneous terms are the standard tensor transformations, with each spacetime index contracted with a Jacobian matrix, while the inhomogeneous terms involve higher derivatives between the coordinates on the patches. Since we contract all the indices except i using g^{jk̄}, the homogeneous terms will all cancel, save for a single Jacobian factor associated with the uncontracted i index. In other words, the homogeneous terms satisfy the transformation law (3.15) with f_i = 0. What about the inhomogeneous terms: can they generate a non-zero f_i? Note that an inhomogeneous term with a factor of the form

$\frac{\partial^r z'^k}{\partial z^{j_1}\cdots\partial z^{j_r}}$  (3.23)

will have more indices on the bottom than on the top. Since we must contract all indices except i, this implies that the inhomogeneous terms must have factors involving the inverse metric. This is a function of both z and z̄, which cannot appear on the right-hand side of (3.15). However, we could consider inhomogeneous terms with a factor of the form

$\frac{\partial^r z'^k}{\partial z^i\,\partial z^{j_1}\cdots\partial z^{j_{r-1}}}$.  (3.24)

If r > 1, then the same argument from above applies. However, we could have a term that involves

$\frac{\partial^2 z'^k}{\partial z^i\,\partial z^j}$.  (3.25)

Since this has a balanced set of indices, we can imagine contracting it with a function with only holomorphic dependence. The only term available would be another Jacobian factor, giving us two possibilities, each a contraction of (3.25) with a further Jacobian factor; inspection shows that neither generates a contribution beyond the allowed transformation law (3.15). The same argument as before applies for diffeomorphisms; in fact it is even simpler, as all of the spacetime indices must be contracted, and the argument is very similar to the original one found in [3]. Gauge transformations will be a bit different. Since the factors (3.32) and (3.33) do not have any gauge dependence, they are invariant under gauge transformations. So we will only need to worry about (3.34). However, we can see rather directly that there are no constant matrices C_{ab} that transform under the correct representation, except in the trivial case C_{ab} = 0. In summary, we have shown that ∆β^K and ∆β^h are globally defined tensors, under both spacetime diffeomorphisms and gauge transformations. Our proof required assuming that the full beta functions β^K and β^h depend on the B-field only through the invariant three-form H. We do not expect this assumption to affect our result. Since the ∆β are the data that determine δK and δh, we conclude that these corrections are also globally defined. Thus, the full beta functions for the corrected couplings K = K̃ − δK and h = h̃ − δh can be made to vanish.

4 Summary and outlook

In this short note, we have shown that (0, 2) NLSMs on Calabi-Yau backgrounds, equipped with a stable holomorphic bundle, can be made finite to all orders in perturbation theory. We have closely paralleled the approach of [3] in the (2, 2) setting by demonstrating the existence of a set of counterterms to the sigma-model couplings that make the beta functions vanish to all orders. There are a few unresolved issues that we leave as open problems for future study. The first question is how our results carry over when the (0, 2) NLSMs are coupled to worldsheet supergravity.
This will require studying the full Weyl symmetry, along with the coupling to the dilaton field and its associated beta function. This also appears to be a prerequisite for going beyond the restriction of demanding H = 0 at the classical level and only allowing H to be generated by loop effects. While tree-level H-flux is problematic for a controlled α′-expansion, it may be possible to avoid these complications, since we are interested only in the general structure of the theory and not necessarily in perturbing about some particular torsional background. Finally, we have assumed that anomalous transformations do not appear in the beta functions. While we believe this to be true, a cautious reader might be wary: since the anomaly first shows up at one loop, anomalous contributions could in principle affect the two-loop beta functions, and this should be checked directly.
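For reference, the counterterm construction argued for above can be restated schematically as an order-by-order recursion in α′ (equivalently, in loops). The display below is only a compressed restatement of the iteration described in the text, with the explicit operators of (3.3), (3.6), (3.12) and (3.13) suppressed; it is not an additional result, and the arrows simply mean "determined from".

```latex
% Schematic form of the order-by-order construction (notation as in the text;
% the explicit differential operators and normalizations are suppressed).
\begin{align*}
  \delta K &= \sum_{n\ge 1} (\alpha')^{\,n}\,\delta K^{(n)}, &
  \delta h &= \sum_{n\ge 1} (\alpha')^{\,n}\,\delta h^{(n)},\\[2pt]
  \delta K^{(n)} &\;\leftarrow\; \Delta\beta^{K}\big[\delta K^{(<n)}\big], &
  \delta h^{(n)} &\;\leftarrow\; \Delta\beta^{h}\big[\delta K^{(\le n)},\,\delta h^{(<n)}\big],
\end{align*}
\text{so that } K=\tilde K-\delta K \text{ and } h=\tilde h-\delta h \text{ satisfy }
\beta^{K}(K,h)=0,\ \beta^{h}(K,h)=0 \text{ order by order.}
```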
5,408.2
2018-01-12T00:00:00.000
[ "Mathematics" ]
Epidermal Growth Factor in Healing Diabetic Foot Ulcers: From Gene Expression to Tissue Healing and Systemic Biomarker Circulation Lower-extremity diabetic ulcers are responsible for 80% of annual worldwide nontraumatic amputations. Epidermal growth factor (EGF) reduction is one of the molecular pillars of diabetic ulcer chronicity, thus EGF administration may be considered a type of replacement therapy. Topical EGF administration to improve and speed wound healing began in 1989 on burn patients as part of an acute-healing therapy. Further clinical studies based on topically administering EGF to diff erent chronic wounds resulted in disappointing outcomes. An analysis of the literature on unsuccessful clinical trials identifi ed a lack of knowledge concerning: (I) molecular and cellular foundations of wound chronicity and (II) the pharmacodynamic requisites governing EGF interaction with its receptor to promote cell response. Yet, EGF intraand perilesional infi ltration were shown to circumvent the pharmacodynamic limitations of topical application. Since the fi rst studies, the following decades of basic and clinical research on EGF therapy for problem wounds have shed light on potential uses of growth factors in regenerative medicine. EGF’s molecular and biochemical eff ects at both local and systemic levels are diverse: (1) downregulation of genes encoding infl ammation mediators and increased expression of genes involved in cell proliferation, angiogenesis and matrix secretion; (2) EGF intervention positively impacts both mesenchymal and epithelial cells, reducing infl ammation and stimulating the recruitment of precursor circulating cells that promote the formation of new blood vessels; (3) at the subcellular level, upregulation of the EGF receptor with subsequent intracellular traffi cking, including mitochondrial allocation along with restored morphology of multiple organelles; and (4) local EGF infi ltration resulting in a systemic, organismal repercussion, thus contributing to attenuation of circulating infl ammatory and catabolic reactants, restored reduction-oxidation balance, and decreased toxic glycation products and soluble apoptogenic eff ectors. It is likely that EGF treatment may rearrange critical epigenetic drivers of diabetic metabolic memory. INTRODUCTION Diabetic foot ulcers (DFU) are one of the most feared complications of diabetes. It is a common cause of nontraumatic amputation, resulting in signifi cant disability, morbidity and mortality. [1] An ulcer is the distal expression of an impaired healing process with a high rate of recurrence, so that patients who have temporarily achieved wound closure are considered to be remission rather than healed. [1] The glycemic imbalance and other diabetes-related factors contribute to sculpt an epigenetic blueprint that results in a sort of "stagnant transcriptome" [2,3] in which precocious senescence, proliferative refractoriness, and apoptosis appear to be critical drivers resulting in wound chronicity. [4] These biological deterrents have been related to a substantial reduction in availability and activity of several growth factors, as major players of internal and peripheral tissue repair. [5,6] The diabetic wound microenvironment is hostile to the chemical integrity and bioavailability of local growth factors (GF) and ultimately, to their role in the healing process. 
Examples of these growth factors include EGF, Platelet-Derived Growth Factor (PDGF), Transforming Growth Factor beta-1 (TGF-β 1), and Insulin-Like Growth Factor I (IGF-1). [7][8][9] The expression and transduction signaling of EGF and PDGF receptors are also impaired within the diabetic environment. [9] Accordingly, as described for the molecular mechanisms operating in peripheral insulin resistance, it may be that diabetic wound cells exhibit reduced tyrosine kinase activity, accounting for loss of function of the growth factor receptor, which predisposes cells to proliferative arrest and senescence. [10] In 1962, Stanley Cohen announced EGF isolation and purifi cation from salivary glands. EGF was shown to induce precocious development and maturation of epidermal tissue and its appendages when injected into newborn mice. In other words, EGF induced maturational reprogramming of chronologically imprinted events. This is the most studied growth factor in wound healing, given its ability to promote epithelial and mesenchymal cell proliferation. [11] Yet, circulating EGF levels are reduced by IMPORTANCE This article describes the molecular mechanisms of epidermal growth factor (EGF) pharmacological activity, and links gene response to organ system homeostasis. The depicted cascade of events may underlie the clinical effi cacy of locally-infi ltrated EGF in restoring the healing response of high-grade diabetic foot ulcers. To our knowledge, this is the fi rst comprehensive description of how EGF reverts the chronic wound phenotype in a meaningful clinical scenario when properly delivered to responsive cells. Special Article diabetes, [12] contributing to development of local and systemic complications. [13,14] Consequently, EGF and other defi cient growth factors are exogenously administered as a replacement therapy in diabetes, as an attempt to restore physiological healing processes. [13,14] Topical administration of recombinant human EGF dates back more than 30 years. Initially, it was thought to be an encouraging alternative to combat the torpid healing of problem wounds. [15] However, the history of GF pharmacology in wound healing suggests that EGF's clinical introduction was rather precocious, at a time when basic knowledge on the biology of chronic wounds remained elusive. Initial clinical trials proved disappointing, as topical EGF administration failed to enhance a healing response in chronic wounds, [16] even in acute, experimentally induced wounds in healthy volunteers. [17] In line with the notion that EGF reverses the proliferative arrest that characterizes chronic wounds, [18,19] we introduced EGF administration through local infi ltration to treat high grade DFU (for review see [18]). It was our hypothesis that intralesional infi ltration could circumvent the limitations confronted during years of topical EGF administration. The infi ltration protocol calls for an EGF liquid formulation to be injected locally in the wound, at a depth of 6 mm to 10 mm, 3 times a week for 5 to 8 weeks, targeting the wound bottom and dermo-epidermal junction. The decision to use this delivery mode resulted from insights accumulated from animal models and ex vivo and in vitro experiments, further enriched by valuable conclusions obtained by others. [18,[20][21][22] These studies were possible given the availability of high-purity recombinant human EGF manufactured at the Genetic Engineering and Biotechnology Center, Havana, Cuba. 
[23] A nationwide clinical development program started in Cuba in 2001, [24] which ultimately included pharmacovigilance studies that confi rmed the safety and effi cacy of EGF delivery by intralesional infi ltration. Almost 20 years of clinical practice have shown a 75% probability of complete granulation response, 61% of complete healing; 16% absolute and 71% relative reduction of amputation risk. Furthermore, recurrences were reported as an exceptional event upon a 12-month followup period. [25,26] Despite years of international research, GF prescription for healing problem wounds remains controversial. [27] Although GF therapy is not yet included in International Working Group on the Diabetic Foot (IWGDF) recommendations, (www.iwgdfguidelines.org), EGF intralesional infi ltration has nevertheless been internationally validated and recommended as adjuvant therapy for high-grade DFU, considering its benefi ts in resuming a normal healing process with reduction of amputation rates. [28][29][30][31][32] This article summarizes the major molecular, cellular and biochemical fi ndings supporting the clinical effi cacy of EGF intralesional infi ltration for DFU in the commercially available pharmaceutical formulation Heberprot-P. The drug is included in the Cuban national medication registry since June 2008 and has off ered the only pharmacological alternative for the treatment of high-grade, complex diabetic ulcers. Gene transcriptional response in granulation cells Although not found on hematopoietic cells, the EGF receptor is widely expressed in mammals and has been implicated in the expression of a myriad of genes during various stages of embryonic development of both epithelial and mesenchymal tissues. [33][34][35][36][37] Accordingly, EGF administration modifi es the course of the cutaneous healing process by promoting migration and proliferation of both epithelial and mesenchymal skin cells where its receptor expression is enhanced. [15,38] Camacho and colleagues [39] described changes in the expression of several genes encoding proteins involved in wound healing. The investigation was part of a clinical trial (IG/FCEI/ PD/0911 in the Cuban Public Registry of Clinical Trials, http:// registroclinico.sld.cu/en/trials/RPCEC00000117-En) and included paired granulation tissue biopsies from 29 patients meeting the following criteria: Wagner grade 3-4 lesions, clinical responders with complete re-epithelialization at the end of treatment, and high-quality RNA samples for diff erential expression studies. Of the 29 patients, 10 were randomly chosen as the minimum sample size able to detect a 1.5-fold RNA expression diff erence relative to the basal constitutive value (paired control) just before treatment (biopsy identifi ed as T0). EGF (75 μg dose) was infi ltrated intralesionally 3 times/week. A second biopsy (T1) was collected at the end of treatment week 2. Paired comparisons between T1 and T0 biopsies revealed a signifi cant increase in cell proliferation modulators Cyclin-Dependent Kinase 4 (CDK4), P21 and TP53, in collagen synthesis and Extracellular Matrix remodeling gene products (Collagen type I, alpha 1 chain, Matrix Metalloproteinase 2 and TIMP2), and a concomitant reduction of some infl ammation markers, including NFKB, Tumor-Necrosis Factor-alpha (TNF-α) and interleukin 1 alpha (IL-1α). Local cell proliferation, synthesis and secretion of wound matrix proteins, and downregulation of infl ammation mediators such as TNF-α, are critical events for physiological healing. 
[10] The authors concluded that the observed increase in P21 and TP53 is a cellular feedback mechanism limiting the intensity and duration of the EGF-induced proliferative signal. A molecular action mechanism was postulated from these fi ndings ( Figure 1). [39] Irrespective of the diff erences between samples collected from diabetic ulcers and neonatal keratinocytes cultured from healthy donors, the data from Blumenberg [40] on EGF eff ects on transcriptomes validate the induction of keratinocyte proliferation and motility associated with feedback mechanisms controlling EGF eff ects. In concurrence with Blumenberg's study, our data indicate that EGF eff ects are modular and multifaceted rather than all-or-nothing events. This is the fi rst clinical study addressing the transcriptomic eff ect of EGF in a model of human diabetic ischemic ulcers. EGF intervention to ameliorate the histological aspect of neuropathic and ischemic lesions Ischemic diabetic lesions are characterized by a hyaline aspect matrix and paucity of functional neovessels, as well as angiogenesis defects (Figure 2A). In sharp contrast, neuropathic lesions appear to granulate earlier, exhibiting a poor collagen matrix deposition, an image similar to a spider web of thin collagen fi bers, which react weakly to Mallory Special Article staining. Additionally, reduced density of extracellular matrixproducing cells has also been noted ( Figure 2B). As opposed to ischemic ulcers, small capillaries are observed, often surrounded by peripheral fi brin cuff s, suggesting hyperpermeability. [41] Following 9 to 12 EGF infi ltration sessions (third/fourth week of treatment), granulation tissue of both ischemic and neuropathic origins exhibited substantial clinical amendment, with a consistent increase of functional small-caliber vessels across the ischemic tissue ( Figure 2C). The granulation tissue matrix of neuropathic lesions becomes densely indurated by thicker and compact collagen bundles accompanied by increased productive cellularity ( Figure 2D). In both scenarios, the infl ammatory infi ltrate appears substantially reduced. [42] Therefore, EGF positively impacts the local microenvironment of both pathogenic classifi cations of DFU. It is of relevant therapeutic signifi cance that EGF infi ltration changes the biology of ischemic ulcers. Given its angiogenic eff ect, in addition to creating de novo vessels, [43] EGF acts as a cytoprotective agent, enhancing cell and tissue survival in otherwise lethal episodes like ischemia/reperfusion and hypoxia. [21,[44][45][46][47] This drives a hypothesis that agonistic stimulation of EGF receptor (EGFR) triggers survival signals that may depend on translational modifi cations, with tyrosine phosphorylation being the most common. [48,49] Zhang and colleagues recently conducted a thorough characterization of molecular mechanisms underlying EGF's eff ect on diabetic wounds. [50] The authors implemented a full-thickness wound model in type-2 diabetic rabbits. The EGF-induced eff ect after one month of daily dermal delivery is reminiscent of the microscopic outcomes identifi ed in patient biopsies: (1) increased granulation tissue with elevation of clustered fi broblasts, (2) abundant extracellular matrix, indurated by dense and ordered collagen bundles, (3) increased active vessels and (4) attenuation of the infl ammatory infi ltrate. Interestingly, and aside from the histological fi ndings, EGF treatment induced the transcription of its own gene with an increased EGF-mRNA accumulation. 
[50] The EGF-induced modifi cations in problem wounds with diff erent pathogenic ingredients suggest that locally-infi ltrated EGF stimulates both mesenchymal and ectodermal cell responses, expressed by proliferation, migration, secretion, angiogenesis and survival. Accordingly, EGF infi ltration is a DFU-specifi c therapy that may synchronize local cellular behaviors, thus reversing the chronicity phenotype. [51] EGFR intracellular traffi cking: EGF induces its own receptor expression in granulation tissue fi broblasts By means of immunoelectron microscopy of ulcer fi broblasts, Falcón-Cama [52] characterized EGFR time-point kinetic intracellular traffi cking. EGF locally infi ltrated into Wagner's 3 and 4 neuropathic ulcers translated into: (a) Signifi cant increase of EGFR membrane expression 15 minutes after EGF infi ltration as compared to T0; (b) Immediate EGFR endocytosis; (c) Translocation and biodistribution to diff erent cytoplasmic organelles from 15 minutes to 24 hours after infi ltration; d) Nuclear translocation of EGFR and its binding to DNA, which appeared to last from minute 45 to 24 hours after treatment; (e) Concomitant activation of proliferating cell nuclear antigen (PCNA) gene transcription which appeared to last for about 24 hours after treatment; (f) Substantial EGFR accumulation in mitochondria, which peaked between hours 6 and 24 after infi ltration; and (g) EGFR accumulation bound to extracellular matrix-secreted collagen fi bers, along with abundant appearance of exosomal extracellular vesicles. Patients who responded to EGF intralesional infi ltrations exhibited gene expression changes that assisted in resuming and sustaining wound healing, and a reversion of the chronic phenotype. The clinical response was mediated by an elevation of angiogenesis, cell proliferation, and matrix fi brogenic ingredients coding genes and a reduction in infl ammation related genes, stimulating the tissue repair process. (Reproduced from [39] under CC4 license). Special Article Most importantly, ultrastructural characterization of the fi broblastlike cells 24 hours after EGF exposure revealed signifi cant changes, suggesting organelle repair as compared to T0. [52] Figures 3A and 3B refl ect how EGF resulted in eff ective treatment for control of the rough endoplasmic reticulum (RER) dilation. At 24 hours after EGF intervention, RER tubules and cisternae appeared far less dilated as compared to T0. Similarly, mitochondria were also a target of EGF eff ect (Figures 3C and 3D). The latter show a far less dilated organelle in which matrix cristae are observed. The presence of two adjacent organelles may suggest an active process of mitochondrial fi ssion. Although prior evidence had indicated that EGF can induce the expression of its own receptor, [53] current research provides the fi rst evidence concerning EGFR transcriptional induction, internalization and intracellular traffi cking kinetics in response to a therapeutic intervention with an EGFR ligand in a clinical setting. [52] This intense EGF-induced cellular response is consistent with its broad biological activity. In vitro models have documented that EGFR activation upon EGF binding induces the phosphorylation of 2244 proteins at 6600 catalytic sites, [54] the expression of 3172 genes and 596 proteins which are signifi cantly altered in epithelial cells. [55] In vitro evidence shows that full length EGFR translocates to the cell nucleus after ligand binding, [29,56] where several functions are performed. 
[57] First, EGFR operates as a co-transcription factor regulating the expression of cyclin D1, a proximal driver of cell proliferation. [58,59] EGFR interacts with DNA-dependent protein kinase, leading to the repair of DNA double-strand breaks. [60] Furthermore, nuclear EGFR phosphorylates chromatin-bound PCNA, thus increasing its stability and eventually enhancing cell proliferation. [61] Intracellular PCNA is related to antiapoptotic activity, which may act as one of the multiple mechanisms mediating EGF pro-survival eff ects in a variety of cell populations. [62] Supporting this notion is the identifi cation of the mitochondrion as another EGFR translocation compartment. Mitochondria are the hub of cellular metabolism, survival and death; they modulate not only apoptosis, but also autophagy. EGFR translocates to mitochondria where it phosphorylates cytochrome c oxidase subunit II , resulting in decreased cyclooxygenase activity, thus eventually preventing apoptosis. [63] EGF is also involved in mitochondrial fission, [64] fusion [65] and ultimately, in control of cellular response to stress, where it plays a pro-survival role. [37] EGF infi ltration sequentially activates EGFR in dormant ulcers, fi broblasts, and in its intracellular traffi cking, promotes fi broblast proliferation, migration and survival. [59,66,67] The fact that EGF may reduce RER dilation, ameliorate mitochondrial damages, and stimulate proliferation of fi broblasts in DFU drives speculation that EGFR stimulation may mitigate senescence-related traits. Although this hypothesis has yet to be experimentally verifi ed, evidence from our group and others support this possibility. [68,69] Locally infi ltrated EGF reduces diabetic dyshomeostasis Oxidative stress not only promotes the onset of diabetes but also exacerbates the disease and its complications. Brownlee [70] proposed oxidative stress as a major operator in the pathophysiology of diabetes and its complications. [71] Hyperglycemia has been invoked to promote oxidative stress through free radical generation and ensuing deterioration of antioxidant defense systems. [71] Chronic wounds are considered a prooxidative organ superimposed upon a preexisting dysmetabolic host (the diabetic patient). [18,72] In a small cohort of diabetic neuropathic ulcers, García-Ojalvo and colleagues addressed whether an improved systemic reduction-oxidation (redox) balance is associated with healing response in patients infi ltrated with EGF. [72] The rationale for the above study was supported by previous experiments demonstrating that EGF reduced levels of oxidative stress biomarkers, ultimately attenuating cytotoxic damage. [73][74][75][76] After 3 to 4 weeks of EGF treatment (9 to 12 infi ltration sessions), 4 circulating biomarkers (erythrocyte sedimentation rate, IL-6, soluble FAS and pentosidine) were signifi cantly reduced, while antioxidant parameters increased. Histological images of granulation tissue biopsies collected prior to the initial EGF infi ltrative intervention and after the 9th intervention. Images are representative of the two major etiopathogenic forms of diabetic lower extremity disease: ischemic and neuropathic. 2A: Representative of a clean, ischemic diabetic granulation tissue bed before the fi rst local EGF infi ltration. Granulation tissue exhibits a "hardened" hyaline matrix with a general scarceness of functional neovessels. Nonfunctional capillaries are seen since early stages (enclosed). 
2B: Representative of an early granulation tissue matrix, collected from a neuropathic lesion exhibiting poor extracellular matrix accumulation, scarce collagen deposition and a limited productive cellularity before EGF treatment. These are all histological hallmarks of protracted, poor healing of neuropathic wounds. 2C: Image showing the transformation of the wound matrix composition, with substantial angiogenic response induced by the local EGF infi ltration with patent large vessels (arrows) across the microscopic fi eld of an ischemic lesion. 2D: Accumulation and organization of a substantial amount of new extracellular matrix material is conspicuous. There are functional vessels across the wound area after EGF infi ltration. Biopsies from 2C and 2D were collected upon the 9th EGF infi ltration session. Figure 2 conclusively denotes that EGF infi ltration may positively impact on the healing biology of both ischemic and neuropathic wounds. All samples are 5 μm sections and Mallory stained X 40. Original unpublished images. Special Article Notably, at least 50% of patients showed a favorable response for each evaluated marker. EGF's molecular eff ect was simultaneously associated with a positive clinical response in terms of granulation, contraction and re-epithelialization. This was the fi rst clinical validation of in vitro and animal data indicating that EGF's cytoprotective eff ect is at least partially mediated by correcting the redox balance. [18,73,[76][77][78] A more recent study by García-Ojalvo and colleagues [79] confi rmed previous observations concerning the systemic impact of locally-infi ltrated EGF on reestablishment of a physiological redox balance. Moreover, the new data indicates that EGF's eff ect extends to reduction of diabetic endovascular pro-infl ammatory markers. Within three weeks of treatment, patients showed signifi cant reduction of: erythrocyte sedimentation rate, IL-6 circulating levels, soluble FAS and the glycoxidation product pentosidine, as well as a signifi cant reduction of oxidative and nitrosilative stress markers ( Table 1). The fact that EGF infiltration reduced circulating levels of IL-6 is highly significant in diabetes. IL-6 is perhaps the bestreputed bona fide cytokine, pathogenically involved in the primary event of insulin resistance, in the morbidity caused by multiorgan complications, and in the onset of a poor healing response. In vitro studies by our group reproducibly show that DFU-derived fibroblasts exposed to lipopolysaccharides exhibit a highly significant increase of IL-6, which returned to basal levels, similar to those of untreated cells, after adding EGF (Yssel Mendoza-Marí, manuscript in preparation. April 2020). Simply said, dampening IL-6 circulating levels could contribute to restoration of metabolic homeostasis in diabetic patients. [80][81][82] Aside from IL-6, EGF intervention also reduced serum levels of soluble FAS and the chemokine Macrophage Inflammatory Protein (MIP1-α). Although further studies are clearly needed, collectively this evidence suggests that EGF may have assisted in reduction of insulin resistance, attenuation of endovascular inflammation and reduction of apoptotic rates; thus attenuating premature diabetic organ senescence. [83] Conclusively, EGF treatment exhibits broad systemic pharmacodynamics that go beyond the reestablishment of redox balance. CONCLUSIONS The discovery of growth factors initiated a new era in wound healing biology and held out hope for recalcitrant wound treatment. 
EGF, the prototypic and founding member of the EGFR ligand family, led to use of topical administration of growth factors for wound healing. Evidence suggests its role in tissue repair was already apparent in the early 1960s in Stanley Cohen's work subjecting rabbits to corneal burns followed by treatment with homemade natural EGF eye drops. [11] Despite the initial promise and years of research, growth factors have not garnered a defi nitive acceptance in the 3B. Eff ect of EGF intervention after reverting cisternae dilation and ameliorating the preexisting distortion after 24 hours of treatment. 3C. Electron microscopy image of a ballooned mitochondria showing its notable dilation and disappearance of internal cristae before the fi rst EGF intervention. A fi ne granulation is dispersed within the mitochondrial matrix. 3D. Twenty four hours after the EGF treatment, it is noticeable the substantial transformation of the mitochondria. Mitochondria are far less ballooned with matrix content and exhibiting some cristae. Special Article Peer Reviewed clinical toolbox for wound management. Lessons learned over the past decades reinforce the importance of growth factor stability, which allows for suffi cient residence time within the wound matrix to achieve the expected pharmacodynamic response. Cleverly engineered formulations are emerging that may yet vindicate growth factors' intrinsic biological potential. The intralesional infi ltrative procedure, despite its simplicity, safeguards EGF bioactivity for prolonged periods, thus emphasizing the concept that spatio-temporal control of EGF availability is fundamental for clinical success. This pioneer growth factor has proved to modify gene and protein expression, phosphorylate catalytic sites, modulate organelle homeostasis and, at an organismal level, reverse changes in infl ammatoxic markers involved in progression of diabetic complications. The latter may represent the systemic eff ects of EGF, accompanied by amelioration of the wound chronicity phenotype. Again, wound-host bidirectional communication is underscored. A research challenge is elucidation of the molecular foundations that may explain the unusual EGF trait of helping to prevent ulcer recurrence over the long term. [25,26] We hypothesize that infi ltrated EGF exerts a local 'rejuvenating' eff ect by replacing senescent cells or by dismounting or reversing the fi broblasts' epigenetic senescence program. Thus, EGF may potentially act as a senolytic agent for diabetic wounds, promoting neodermal resilience and tolerance to physical and mechanical stress. In conclusion, two decades of clinical and basic research on EGF therapy for problem wounds have shed light on the utility of growth factors with broad pharmacological potential in regenerative medicine; the time has come to focus on how, when and where to deliver their messages to their targets.
5,475.4
2020-07-01T00:00:00.000
[ "Medicine", "Biology" ]
Defining the Students' Need in Standardized Language Proficiency Tests The debate between prescriptivists and descriptivists continues to date, and it interestingly affects the way standardized language proficiency tests work (or should work). The notion of correctness applied by raters in such high-stakes tests attracts attention in relation to the fairness of using a specific criterion in the assessment. The present paper discusses the beliefs of prescriptivism and contrasts them with the descriptivist view, especially with respect to what actually occurs in Teaching English as a Foreign/Second Language. The paper therefore clarifies whether prescriptivist features are prominent in learner approximations and need to be taught explicitly, and whether learner errors encompass other elements, so that describing the target language to learners is more important. Four prescriptivist pronouncements are discussed – splitting the infinitive, stranding prepositions, the use of will and shall, and the use of who and whom. The study found two pronouncements that break the rule. Therefore, English practitioners – teachers – should 'open' themselves to both views and be able to explain explicitly to students both the historical background of each view and its standing today. As for assessors, a tendency to rely on exact, predictable, and stable rules is indeed significantly important; however, they also need to recognize the inevitable evolution of language, and in that regard descriptivist usage should not receive a false judgement, especially in high-stakes tests. Introduction Whenever a language is learned and used, we often find an ultimate question that addresses the state of the language: whether the language should be viewed descriptively, in terms of how it is used, or prescriptively, in terms of how it should be used. This debate does not only happen in the practical, everyday use of English; it also affects high-stakes tests such as IELTS, TOEFL, the Cambridge exams, and the like. For that reason, Uysal (2010) critically reviewed the IELTS writing task to examine the universality of the notion of correctness among IELTS raters, particularly addressing the reliability and validity of the test in light of IELTS's claim to be an international test of English. He found issues such as unfairness resulting from the use of a single prescriptive criterion in the assessment. Building on this point, I assume that further research is needed that addresses the issue of prescriptivism (and contrasts it with descriptivism) to see what happens in the practice of teaching English as a foreign/second language (henceforth TEFL/TESL) and how these issues relate to high-stakes assessment, especially IELTS writing. Therefore, the present paper discusses the notion of prescriptivism and contrasts it with the view of descriptivism, especially with respect to what actually occurs in TEFL/TESL. The focus of this paper is, then: first, to clarify whether prescriptivist features are important or prominent in learner approximations and need to be taught explicitly; and second, to clarify whether learner errors encompass other elements, such that describing the target language to the learners is more important.
Therefore, the structure of this paper would be: first about the overview of the notion of prescriptivism and descriptivism from several selected scholars which includes the origins of prescriptivist pronouncements, of which it is more on historical overview, and then relate it to how language users treat the language today.The second would be looking at the context of TEFL/TESL and its relation to high stakes assessment in which in this paper would focus on IELTS writing.The third would explain the methodology I used and then followed by a discussion of the findings.Finally, the conclusion would sum up the whole paper. Historical description of Prescriptivism vs. Descriptivism: and Selection of Academic Views Around 15th to the latest 16th centuries, English gained notoriety, but at the same time, there was a concern about English being imprecise and becoming an ambiguous communication because of making it linguistically rich (Cole, 2003).For that reason, there was an urge of making a set of rules as a reference of matter of correctness and incorrectness.However, the different usages inevitably occurred.These happened because every writer has their own individual judgement regarding what was correct and incorrect (Cole, 2003;Curzan, 2014).The debate then moved to the one similarities that English had a prior age, in which linguists agreed-upon English had a "pure time" and that English could be restored to that period.Still, this idea triggered even bigger differences because every writer claimed himself as the one who owns the pure period (Cole, 2003).Around several decades later, Robert Lowth (1710Lowth ( -1787) ) who was a strong prescriptivist, published "A Short Introduction to English Grammar" in 1762.Lowth wrote several books on English grammar of which the key reason was to "teach what is right" in which most of his judgements were reinforced by analogies to Latin grammar (Curzan, 2014).Leonard (1929) as cited in Drake (1977) stated that prescriptivist in England in 18 thcentury attempt to control and regulate the uniformity and conformity of language through an absolute standard, of which it implies authority; order, stability, predictability and reason.In his discussion, Drake (1977) emphasised that what prescriptivist was trying to enforce as the basic principle of correctness was conformity.Cole (2003) mentioned that prescriptivist claimed themselves 'as authorities with power to prescribe and proscribe English usage' (p.134).Moreover, she referred the prescriptivists as the one who codify and enforce the rules of English usage.Another scholar, Kroeger (2005) mentioned that when it comes to prescriptivism, the rules about using language are consciously learned, of which it is learned in school.These pronouncements define the standard form of the language through an explicit policy by an authority.Curzan (2014) and Cole (2003), furthermore, also mentioned that there are some language etiquettes in which it prescribes us to such usages.In this paper, there are five prescriptive rules are presented, they are: a. do not split infinitiveit is wrong to put adverb in between 'to' and 'bare infinitive, in which to must be followed by only infinitive and should not be interfered by any other words, i.e. adverb, e.g., I have to really help her now, (Borjars & Burridge, 2013;Curzan, 2014).b. do not end sentence in prepositions -'sentence-final prepositions are also condemned as improper', e.g., who are you talking to?, (Cole, 2003, p. 136).c. 
do not use double negationtwo negatives are equivalent to the rule of affirmative, e.g., I didn't do nothing, (Lowth, 1762as cited in Cole, 2003).d. shall vs will -'shall' is for first person (singular: I and plural: we), while 'will' is for second and third person, e.g., I shall do this, but he will do that again (Cole, 2003).e. who vs whomwho signifies subject (nominative) and whom is used for (indirect) object (dative), e.g., the chancellor gives a speech to the audience (Cole, 2003). On the other hand, Joseph Priestly (1733Priestly ( -1804)), who was more of a descriptivist, against most of Lowth's views.Priestly described thatfor example in the issue of will vs.shall; shall can only be used in a formal condition and limited to first person which make the frequency of usage is occasional, whereas will can be interchangeably used with shall, and suits in any circumstances (formal and informal), and that will is widely accepted among the society (Cole, 2003).Descriptivism according to Drake (1977) focused on analysing the functions of language in actual use.He argued that descriptivist concentrate to "change over stability, diversity over uniformity, usage over authority, and spoken over written" (p.1).Kroeger (2005) mentioned that when it comes to descriptivism, the rules about using language are naturally and unconsciously learned from the members of the speech community; e.g., parents, friends, teachers, and so on.In Kroeger's view, the emphasis of the argument is not on whether these rules are standardized or not, but it is more onto the observation, description, and analysis of what speakers actually say; of which it constitutes the grammar of the language. TEFL and TESL context and its relation to High Stakes Assessment (IELTS writing) Being successful in such high stakes test like IELTS is one of the big goals and has become an important phase of lives for EFL/ESL.This is because the standardized tests are often used as the prerequisite to being accepted into the universities especially overseas.To be successful, the students must have a good English command; listening, reading, speaking, and writing, of which they mostly learn that skills and knowledge in the English institutes (government or private sector).Therefore, it is crucial to revisit what really happened in the class where the learning process takes place.Some students may find an English teacher who teaches the usage of English descriptively by considering some changes over time, such as the interchangeable usage of will and shall, splitting infinitive, and the like.Whereas some others are often found to be strictly prescriptive in the class by presenting some pronouncements as have been explained above; no splitting infinitive, no stranding preposition, and so on.Drake (1977) argued that the notion of stability in prescriptive view of which emphasised that language is in the state of stable over time can help the students to understand the writings from the past.However, if usage preferences are conservative; prescription resistant to language change, thus, there is a tendency for prescription to lag behind the colloquial language (Curzan, 2014). 
Building on to this point, there is a correlation between what has been learned in the class as input and how the learners answer the question in such high-stakes test such as IELTS writing as output.What makes it problematic is that the issue of fairness in IELTS writing assessment (Uysal, 2010).He argued that the assessment is found to be vague as the band descriptors do not tell muchfor example grammatical range and accuracy is very general which is highly possible to be ambiguous.Another issue concerning both views is that a writing assessor for IELTS writing may consider the idea of "correctness and incorrectness" based on only one specific view; prescriptivism, in which a particular grammar may unacceptable in prescriptivism's pronouncements, but it is finely accepted in the descriptivist view.Therefore, it is important to re-evaluate: what actually the students need in the EFL/ESL context; does prescriptivism plays an important role; and what learners errors say concerning these both views. Methodology As for the methodology, among many high-stakes test; TOEFL, IELTS, Cambridge Exam, and the like, I choose IELTS because first IELTS is the most popular EFL/ESL test around the world, and second, I have my individual experience on IELTS test, and finally IELTS claims to be an international English test (Uysal, 2010).In this study, I use eight (8) samples of IELTS writing band five (5) from IELTS-Blog.The reason for that is because I have limited time to collect the data, while IELTS-blog has provided some samples along with the errors and the corrections.These samples of writing will be analysed by looking at four rules based on prescriptivist: a. splitting infinitive b. stranding preposition c. the use of will and shall d. the use of who and whom I posit these four pronouncements to be analysed because they are high frequency/more common occurred.Therefore, in the next part, I will discuss these with the findings to see if prescriptivism features play a role in any of the errors and to find out what the errors can generally tell regarding the prescriptivism and descriptivism. Findings and Discussion In this segment, I will discuss the findings related to four pronouncements above.They are as follows: In the table, the findings tell us that among eight samples, it is found two cases that break the prescriptivism rule while most them follow.One case of splitting infinitive in sample 1, and one case of the use of will vs shall in sample 8. Regarding splitting infinitive, the sentence found in sample 1 is "I agree with the fact that punishment is the way to *really avoid the crime …", whereas for the case of using will vs shall, the sentence found in sample 8 is "we will not need natural fuel".Now let me discuss each of those pronouncements in the following. 
Splitting Infinitive In the sample 1 data, I found there are 13 times use of 'to infinitive' (see appendix), one of which is split by the adverb really.Whereas the rest of samples; 2-8, I do not find any splitting infinitive occurs which means sample 2 to 8 are all obeying the to infinitives rule from prescriptivist -without any splitting happened in between.Therefore, this finding concludes that almost all the writing samples abide by the prescriptive rule of which "do not split infinitive".In sample 1, the test taker wrote: (1.a) I agree with the fact that punishment is the way to *really avoid the crime … According to prescriptivist, this sentence is wrong because of the splitting infinitive.Lowth (1762) argued that there should be no word in between to and bare infinitive, and that he suggested, adverb (in this case "really") should come before or after to infinitive.Therefore, the sentence should be like this: (1.b) I agree with the fact that punishment is (really) the way to avoid the crime … In my opinion (regardless both views) the adverb really is not necessary for the sentence, as I could say "… punishment is the way to avoid the crime…", and the meaning is still complete.However, regarding descriptivism, these sentences: (1.a) and (1.b), have a different meaning.In the sentence (1.a), the meaning could be that the writer believes 100% that punishment is the best way to avoid the crime, while in the sentence (1.b), the emphasize is not as strong as sentence (1.a).It could mean that punishment is one of many ways to avoid the crime. From this finding, it is found only one case of splitting infinitive in one sample out of eight in which it could mean that what the student has learned in school was about the rules of prescriptivism, albeit the teacher does not explicitly tell them that the rule they learn is based on prescriptivist.This has been explained by Kroeger (2005) that the prescriptivism is learned consciously in school whereas descriptivism is naturally and unconsciously learned from their surrounding speech community, like parents, friends, coach, and so on. Stranding Preposition Regarding the stranding preposition, I do not find a single case that breaks the rule of prescriptivism.All prepositions used in the samples abide by the rule of prescriptivism in which proscribe preposition ends the sentence.Lowth (1762) as cited in Cole (2003) argued that sentence that ends with preposition be considered as colloquial, improper, and inelegant.From this view, I can argue that since the (IELTS) writing is considered as formal, there should be no colloquial, improper, and inelegant, that refer to stranding prepositions.However, the question that may arise that "do the students know this rule that ends the sentence with preposition is considered as colloquial, improper, and inelegant?".According to some scholars, (see Sundbay et al. 1991;Cole, 2003;Curzan 2004), stranding preposition is often happened in the speaking because of it happens spontaneously while in writing, the writer often have plenty of time to recheck and restructure the sentence, and that it is occasional to find stranding prepositions in the writing piece. 
Regarding the EFL/ESL context which based on my experiences, the teacher does not allow to end/stop the sentence with preposition as he considered it as unfinished sentence.Borjars & Burridge (2013) added that to attest preposition, it should occur before noun phrases, such as at grandpa house; should take an object form of the pronoun, e.g., I threw a party for him; and preposition works to link its noun phrase to another part of the sentence. The use of Will vs Shall In the sentence, I found one case of simple future is expressed by will instead of shall in the first person (plural: we).For prescriptivists, they argued that the use of shall and will is intolerably interchangeable.Lowth (1762) mentioned that shall is specifically used for the first person (singular and plural, I and we, respectively).However, will is used with the second and third person (he, she, it, you, and they).And that referring to that view, the sentence (2.a) will should be changed to shall because the rule says we shall (see (2.b)), and not *we will. (2.a) …we *will not need natural fuel However, Priestly (1733Priestly ( -1804) ) as cited in Cole (2003) and Drake (1977) argued that language cannot be maintained (forever) to the state of stable (or stagnant), but they believe that there must be a change over stability because people do not stay in one area and interact with the same people in their whole life.In that regard, they emphasize the notion of diversity instead of uniformity. (2.a) …we shall not need natural fuel Regarding the specific rule for shall, Priestly scrutinized that shall is a simple future tense that was specifically used only with the first person pronoun where the society was welcome to this specific rule until around the 18 th century.Priestly added that shall only found in a formal situation while will can be used for both conditionsformal and informal.For that reason, the society today is more accepting will than shall, as will can be interchangeably used for all pronouns.Therefore, regarding what priestly has said, I would assume that the use of will with preceded by the first person (we) is not grammatically wrong. In the context of EFL/ESL, in my experience, I remembered when my English teacher explains that there is a differentiation between the use of shall and will.Moreover, he explained that shall is for first person only, but will can be used for all.This means that the teacher is introducing the specific usage of shall (which is prescriptivist), but at the same time he opened to some evolution of the language (which is more of descriptivist). The use of Who vs Whom Like in stranding preposition case, I did not find any case that breaks the rule of prescriptivism regarding the use of who and whom.It seems that all samples follow the rule of prescriptivist that who is for subject (nominative case) and whom is for (indirect) object (dative case) (Cole, 2003).While for a descriptivist, they consider that the use of who is not constrained to subject only but also to object.They do not give judgement (right or wrong) to this phenomenon but rather see it as how the society functions the language in actual use.In EFL/ESL context, these relative pronouns rules are quite clear in the class.In my experience, the teacher explained the grammatical use of relative pronouns and that we can differentiate each use of them. 
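The four pronouncements analysed above lend themselves to rough automated screening of writing samples. The sketch below is purely an illustration of how such rule-checks could be scripted; the regular expressions are crude heuristics written for this purpose (they will both miss and over-flag cases), and they are not the procedure used in this study or by IELTS raters.

```python
import re

# Crude heuristic patterns for the four prescriptivist pronouncements.
# Illustrative only: reliable detection would require real syntactic parsing.
PATTERNS = {
    "split_infinitive": re.compile(r"\bto\s+\w+ly\s+\w+", re.I),           # e.g. "to really avoid"
    "stranded_preposition": re.compile(
        r"\b(?:to|with|for|about|of|in|on|at|from)[.?!]", re.I),            # preposition at sentence end
    "will_with_first_person": re.compile(r"\b(?:I|we)\s+will\b", re.I),     # prescriptive rule prefers "shall"
    "whom_as_subject": re.compile(r"\bwhom\s+(?:is|are|was|were|has|have)\b", re.I),
}

def screen(text: str) -> dict:
    """Return the matches of each heuristic pattern found in a writing sample."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

if __name__ == "__main__":
    sample = ("I agree with the fact that punishment is the way to really avoid the crime. "
              "In the future we will not need natural fuel.")
    for rule, hits in screen(sample).items():
        print(f"{rule}: {hits}")
```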
Concluding remarks Four prescriptivist pronouncements were examined in this study, and only two cases were found that break the rules. IELTS writing is not only formal writing but also a high-stakes test, so there is a tendency to rely on exact, predictable, and stable rules (as Drake (1977) put it) rather than on usages that show more diversity. However, as Uysal (2010) argued in his critical review of assessment fairness, this may disadvantage students who cannot tell which forms are correct and which are only 'almost correct'. This is because an IELTS writing assessor may base his or her judgement on one specific view, prescriptivism, under which a particular construction may be unacceptable even though it is perfectly accepted in the descriptivist view. Therefore, English practitioners in TEFL/TESL should educate themselves and their students in prescriptive usage and explain it explicitly, including a brief historical overview, so that students understand what actually happens in the language today and how to respond on particular occasions such as a high-stakes test. Moreover, the standard conventions of the language – spelling, grammar and punctuation – informed by descriptive usage, are also essential for clear, up-to-date, and unambiguous communication with speakers of English around the world. Finally, those who uphold prescriptivism strictly, whoever they are, need to be more willing to accept natural changes to a language that is inevitably evolving.
4,632.6
2018-12-26T00:00:00.000
[ "Linguistics", "Education" ]
The Price Dynamics of Hand Sanitizers for COVID-19 in Indonesia: Exponential and Cobweb Forms This study aims to analyze the impact of the COVID-19 pandemic on changes in handsanitizer prices in Indonesia. This study is motivated by the increasing demand for handsanitiz-ers which has had an impact on price ratings. The method used is an explanatory survey with an exponential form approach for the demand function and Linear Cobweb Form for the supply function. The structure of the hand sanitizer market is assumed to be a perfectly competitive market. The re-sults of the study indicate that despite changes in prices, the numerical analysis shows that prices are stable. The entry and exit of companies in the market does not affect price stability. This finding implies that after the pandemic the prices of handsanitizers will return to normal, so producers and consumers must remain rational. INTRODUCTION Since the World Health Organization (WHO) announced on March 11, 2020 related to changing PHEIC status to a pandemic, after a significant increase in the number of case reports and the number of deaths due to Novel Coronavirus virus (COVID19) in various parts of the world (World Health Organization, 2020) has had a great impact on all life activities including the economy. This virus has flu-like symptoms and respiratory infections (De Groot-Mijnes et al., 2004). Before becoming a pandemic, this virus ini-tially occurred in Wuhan City of Hubei Province, China in December 2019.. There is a report stating that an outbreak of pneumonia is related to a virus called Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) (Huang et al., 2019;Zhu et al., 2020;Li et al., 2020). The number of confirmed Covid-19 cases from 22 January 2020 to 31 March 2020 experienced a very significant jump. The number of case reports worldwide was 802,639 with a total of 39,014 deaths and a total of 172,319 patients recovered (www.worldometers.info/coronavirus/). One of the impacts of the COVID-19 Pandemic on the economy is shown by the potential for shocks in the demand and supply of goods and services. The impact of this outbreak prompted people to buy hand sanitizers to prevent corona virus transmission. The prices of hand sanitizers increased significantly from the normal prices. A number of well-known brands of hand sanitizers have increased by 81.12 percent on several e-commerce platforms, such as Shoppe, Bukalapak and Tokopedia. This condition has encouraged producers to increase the volume of hand sanitizer production, which in turn has an impact on increasing demands for chemicals such as ethanol by hand sanitizer manufacturers, disinfectants and antiseptics. Based on the records of the Indonesian Chemical Industry Federation (FIKI), the increase in chemicals was recorded around 10% -15% compared to normal conditions. Price dynamics is a strategy, in which prices vary over time, consumers, and/or circumstances (Elmaghraby & Keskinocak, 2003;Gönsch et al., 2013;Faith & Agwu, 2018;Elmaghraby & Keskinocak, 2003) and it distinguishes between two dynamic pricing models, namely the pricing mechanism and the price discovery mechanism. With price-posted mechanisms, price changes are often offered as "take-it-or-leave-it" price, i.e., the company is still in charge of setting prices. With price discovery mechanisms, such as eBay, Priceline, or similar negotiation approaches, consumers get input to set the final price. 
The study of price dynamics in general is inseparable from the concepts of demand and supply and price theory. Price theories have been widely advanced by experts (Mills, 1959;Muth, 1961;Petruzzi & Dada, 1999;Bi & Liu, 2014). Meanwhile, Collery (1955) examined the price expectations associated with the Cobweb theorem. Cobweb's theorem has to do with the form of the supply function, where the supply de-cision considers the conditions of the past (Waugh, 1964;Morgenstern, 1948;Collery, 1955) They use the linear form request function. The cobweb model describes a dynamic price process in a market by adjusting goods that cannot be stored and supplies that are owned by producers. The cobweb framework can consider interactions between two interconnected markets. Interaction can occur when producers enter different markets. This process is carried out every specific time period depending on the profit generated by the producer. The decision of producers is influenced by the state of prices in different markets each period, so most producers choose markets that have greater profits. Cobweb model is one that is not linear. In principle, the cobweb model can be useful to investigate the elasticity of supply and demand elasticity due to price fluctuations and the amount of production. There are six assumptions of the cobweb model that must be fulfilled: 1) the price is determined by the competitive structure that occurs in the marketing process, 2) the price is determined in the short-term production process, 3) the production plan is based on new prices, 4) there is a grace period to provide production responses , 5) the price cycle that occurs depends on the actual production equation, and 6) demand and supply are static (Machmud, A 2016). The Cobweb model is one of the forecasting strategies that can be studied in detail (Waugh, 1964;Day & Tinney, 1969). Parameters used in this analysis illustrate the evolution of forecasting strategies. The Cobweb model also illustrates the transition between rational estimates and not, there is a wrong behavior when discovering the value of supply of goods and elasticity demand is large enough (Dawid & Kopel, 1998;Franke, 1998). Producers have assumptions about future price estimates heterogeneously (Hommes, 2011). Before determining price estimates, producers must describe produc- tion decisions and market needs. The results show that the cobweb theory is appropriate for discussing price fluctuations, supply quantities, demand for variable variables affect the demand and supply of agricultural products (Waugh, 1964;Day & Tinney, 1969;Pashigian, 1970). Exponential function is one of the important functions and is often used in mathematics (Clenshaw & Olver, 1980;Confrey & Smith, 1995;Rovira & Rovira, 2010). Exponential functions have been extensively studied by mathematicians (Bartlett, 1976;Lam & Goodman, 2000;Tegmark, 2008;Debnath, 2015;Salas et al., 2010). From the studies that have been conducted, several definitions have been produced in different forms of exponential functions, logarithms and limits. This study aims to analyze the impact of COVID-19 on the balance of the hand sanitizer market in Indonesia with the approach of the exponential demand function and the linear cobweb supply function. The study method used was exploratory survey with data analysis techniques sourced from articles that have been published in the mass media and national and international journals. 
The assumption used in the handsanitizer market structure is perfect competition with the characteristics of (i) Many buyers and sellers in the market, (ii) Goods offered by many companies are uniform, and each company follows the market price, and (iii) companies are free in and out. The results of the study are expected to be used as input for policy makers in dealing with covid-19. Demand and Supply Function Demand is the desire of consumers to buy an item at various price levels over a certain period of time. The amount of goods demanded by consumers in addition to depending on price is also determined by factors: (i). Prices of other related goods; (ii). Per capita opinion level; (ii). Taste or habit; (iv). Total population; (v). Estimated item in the future; (vi). Distribution of community income; and (vii). Producer's efforts to increase sales (Machmud et al., 2016;Machmud et al., 2018). Changes in these factors will determine the quantity demanded by consumers for the goods they want (Mojaveri & Moghimi, 2017). We assume that there are consumers and producers. and each producer produces and goods that consumers want to consume. We also assume that the number of consumers does not change over time. In general, the request function can be declared with the equation: Y i = f i (x 1 .x 2 , … ,x n ), I = 1, 2, …, m (1) which is assumed to be a continuous function. If x1 is the price, then according to the law of demand, ceteris paribus , 0 (2) Inequality (2) does not apply to goods that have elements of speculation, prestige, and gifen. The equation function (1) which fulfills (2) is called the individual demand function for an item, and is called the consumer behavior equation [7]. Furthermore, the market demand function is as follows: Furthermore, supply is the price of goods that producers want to offer (sell) at various price levels during a certain period. Factors affecting supply are: (i) the price of the goods themselves, (ii) the prices of other related goods, (iii) factor prices, (iv) production costs, (v) production technology, (vi) number of traders or sellers, (vii) corporate objectives, and (viii) government policies (Machmud et al., 2018). Changes in these factors will cause changes in the quantity of bids by the company. Furthermore, in general the bid function is stated as y j = g j (x 1 .x 2 , … ,x n ), j = 1,2,…k. (4) If X1 is the price, then according to the law of supply, ceteris paribus, then (4) complies Inequality > 0 (5) The function of equation (4) that satisfies (5) is called the function of the supply of goods by a company, and is called the equation of company behavior (Machmud et al., 2018). The market supply function is Changes in the variables y of equations (1) and (4) due to changes in x1.x2, ..., xn, graphically, cause the demand curve and supply curve to shift to the right or left; x1.x2, ..., xn is called a movement factor [9]. Market Price Dynamics We will use the exponential demand function as the consumer demand function of (1), which is: Deman Function from (7) we can state with y D = α , a > 0, δ > 0 (8) Supply Function is linear : The Market Supply Function from (9) can be expressed as follows: Value is the quantity of stock sold, while b is the tendency to sell by the company. From (8) and (10) a market price is obtained that meets the balance nature of yD = yS p(t) = ln (11) where p(t -1) > 0. If another company produces similar goods, and competes in the market, the company initially follows the price of p (0) to sell its production goods. 
However, after the company has operated for some time t, assuming demand does not change, the price p(t) will go down. This is due to the increase in the value of the coefficient b from equation (10), as a result of the entry of a new company into the market, whereby the quantity of goods sold in the market rises. If the driving factors x1, x2, ..., xn are also considered in model (11), then the price p(x1, x2, ..., xn) will fluctuate over time. The Development of Hand Sanitizer Prices in Indonesia during COVID-19. Based on a report on price comparisons, the prices of hand sanitizers have increased many times over. A number of well-known brands of hand sanitizer soared by around 81.12 percent on several e-commerce platforms, such as Shopee, Bukalapak and Tokopedia. The sample data were taken in the period from February 26 to March 4, 2020. The highest sales figures occurred on March 2, 2020, with a total value of Rp159,560,430 from 1,613 transactions. The average product purchased was a 50 to 500 ml hand sanitizer, with a price range of Rp25,000 to Rp155,000 per bottle. Seeing the surge in transactions, quite a few sellers set prices above normal. This can be seen more clearly in Figure 1. The high demand for these products led Tokopedia to provide a special page so that users can buy and search for health goods more easily. Tokopedia also provides cashback promotions for every product purchase through its platform. A number of preventive measures have also been taken by e-commerce platforms such as Amazon and eBay. Both companies imposed a ban on selling health products with claims that they can prevent or cure the coronavirus. Amazon will even explicitly delete every advertisement posted by its users if it is found to set an unreasonable price for health products such as masks, hand sanitizers and medicines, while eBay has reportedly removed 20,000 listings that were verified as coronavirus-related items. The momentary change of the price p(t) is its derivative p′(t), obtained by differentiating equation (11) with respect to time (equation (12)). If p′(0) = 0.07, δ = b and a = 50 are selected, then the behaviour of the price changes is shown in Figure 3. In addition, Figure 2 also shows that the change in price p′(t) converges to 0. The entry of a new company will cause the value of p′(t) in equation (12) to decrease, because the value of p(t) in equation (11) goes up. However, p′(t) remains convergent to 0. Vice versa, the exit of a company from the market causes p′(t) to fall, but it still converges to 0. In other words, the entry and exit of companies will not cause market price instability. CONCLUSION This study has provided an explanation of price dynamics in perfectly competitive markets, where the demand function is exponential and the cobweb supply function is linear. In this explanation, it is assumed that demand is fixed with respect to prices, ceteris paribus. The results of the numerical analysis show that the dynamics of market prices are stable. The entry and exit of companies in the market show that market prices have remained stable. AUTHORS' NOTE The author(s) declare(s) that there is no conflict of interest regarding the publication of this article. The authors confirm that the data and the paper are free of plagiarism.
3,176.6
2020-05-06T00:00:00.000
[ "Economics", "Medicine" ]
Evaluation procedure for blowing machine monitoring and predicting bearing SKFNU6322 failure by power spectral density. Javier Castilla-Gutiérrez b, Juan Carlos Fortes Garrido a,b, Jose Miguel Davila Martín a,b,*, Jose Antonio Grande Gil a,b. This work shows the results of a comparative study of characteristic frequencies, in terms of Power Spectral Density (PSD) or RMS, generated by a blower unit and the SKFNU322 bearing. Data are collected following ISO 10816, using Emonitor software and with speed values in RMS to avoid high- and low-frequency signal masking. Bearing failure is the main cause of operational shutdown in industrial sites. The difficulty of prediction lies in the type of breakage and the high number of variables involved. Monitoring and analysing all the variables of the SKFNU322 bearing and those of machine operation for 15 years made it possible to develop a new predictive maintenance protocol. This method makes it possible to reduce the number of control points from six to one, and to determine which of the 42 variables has the greatest incidence on correct operation, so equipment performance and efficiency are improved, contributing to increased economic profitability. The tests were carried out on a 500 kW unit, and it was shown that the rotation of the equipment itself was the variable generating the most vibrational energy. Introduction. Bearing failure is one of the most important causes of industrial equipment shutdown; in fact, it is estimated that 40% of shutdowns are generated by bearings [28,42]. Bearing failure is caused by many factors, such as lubrication, loads, handling failures and lack of preventive maintenance, among others [10,34,39]. ISO 10816-3 establishes that there are four stages of breakage, which are caused by unbalance or misalignment [24,32,8]. Determining the stage is one of the main objectives of predictive maintenance and of industries in general, which look to this new technology to reduce the problem of production stoppages and the cost associated with them [4,36]. Finding the breakage moment is the main objective, but the breakage pattern in the form of a ski curve makes this very difficult, because the transition from the third to the fourth stage takes place in a brief period of time involving corrective actions [18,27]. Another problem is the number of fault types [22]. New non-invasive techniques, such as acoustic studies, are trying to improve predictive maintenance [21,23], but involve environmental problems. The approach by spectral analysis of vibrations is the most widespread technique, but it requires several control points and daily monitoring, resulting in a large cost in terms of time and personnel [20]. Other approaches experiment with support vector machines (SVM) and Wavelet signal processing [1,5]. Current studies determine the type of defect in the laboratory. The problem arises when it is analysed under real working conditions [9,12,13]. In order to validate the method, it is necessary to have very large control samples, which allow external factors to be discriminated. Therefore, this research evaluates equipment monitoring over a 15-year period. Although there is some background of studies similar to the one done here, none of them is applied to equipment of such a large size and power as used in this research, and in no case have those studies been extended over such a long period of time. Thus, some of the studies found have been applied to small blowing machines for bottles [17,37], evaluating the software used, and others were carried out in wind turbines [15,38], in which data acquisition extended over less than three years. Equipment under study. An air blower and the SKFNU322 bearing, which supports the mechanical drive shaft between the blades and the motor, have been studied. The driving system is a motor, model Framesize 400, of 3190 kg and 1.9 x 0.9 metres. All this can be seen in Figure 1. The blade system measures 2.075 metres in diameter, and the connection between the two is made by a shaft 0.095 metres in diameter and 3.9 metres long. This equipment belongs to the Atlantic Copper company. Many studies on bearing vibration analysis have been carried out, since operational safety and efficiency, together with environmental protection, are among the challenges of the world's industry (McFadden and Smith 1984). Therefore, efforts are being made to meet this need through spectral vibration analysis [7,14].
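Since the protocol of the paper works with velocity signals expressed in RMS and with power spectral densities following ISO 10816, a minimal sketch of that post-processing step is given below, under assumptions: the synthetic signal, sampling rate and the 7.1 mm/s alarm level (the critical value cited later in the text) stand in for the real Emonitor acquisition chain, which is not reproduced here.

```python
import numpy as np
from scipy.signal import welch

def rms(x):
    """Overall RMS value of a vibration velocity signal (same units as x)."""
    return np.sqrt(np.mean(np.asarray(x) ** 2))

def velocity_psd(x, fs, nperseg=4096):
    """Welch estimate of the power spectral density of a velocity signal.
    Returns frequencies (Hz) and PSD ((mm/s)^2 / Hz) for sampling rate fs."""
    return welch(x, fs=fs, nperseg=nperseg, window="hann")

# --- illustrative synthetic signal (placeholder, not measured data) ---
fs = 2560.0                               # assumed sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
speed_hz = 1500.0 / 60.0                  # 1500 rpm shaft speed -> 25 Hz
signal = 1.2 * np.sin(2 * np.pi * speed_hz * t) + 0.1 * np.random.randn(t.size)

overall = rms(signal)
freqs, psd = velocity_psd(signal, fs)
print(f"overall velocity RMS = {overall:.2f} mm/s")
print("exceeds 7.1 mm/s critical level" if overall > 7.1 else "within operating range")
```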
The dynamic state of movement and the actual operating conditions are the major milestones in laboratory studies. Such studies analyse bearing failures [25,40], but their approximations are not valid for real operating conditions. Mechanical vibration is a set of impulses caused by moving elements hitting coinciding areas. These energy effects, under constant operating conditions, generate stable frequency disturbances, which can be determined and discriminated according to the element that generates them [11]. Each bearing is made up of a cage, balls and tracks, and the impact between them generates different periodic waves, each with its own spectral range [6,38]. The fault frequency of the components of bearing SKFNU322 is defined according to the contact points [19]. The service life of the different bearing components is determined by the amplitude of a specific disturbance. The existing types and the mathematical expressions that define them are the following. The rotation of the balls or rollers around their own axis defines the frequency of rotation of each of them (Eq. 1). To determine the frequency of a ball defect, the so-called rotational frequency is used, that is, twice the frequency of rotation of the balls or rollers (BSF), produced by the ball hitting both races (external and internal) at every turn (Eq. 2). The defect frequency of the ball support or rotating body (FTF) can be determined with equation (3). The defect frequency produced in the ball passage over the internal race (BPIR) and the defect frequency produced in the ball passage over the outer race (BPOR) are calculated with expressions (4) and (5), respectively. Here, N is the angular speed of the axis in revolutions per minute (rpm), D is the average bearing diameter (inches), d is the diameter of the rolling circumference of the balls or rollers (inches) and n is the number of rollers or balls forming the bearing. The blade frequency (Fpa) is the frequency generated by the blade passage and can be determined from the equipment rotational speed (RPM) and the number of blades (Na) using equation (6). The SKFNU322 bearing gives rise to five characteristic elements or frequencies, four of which are BSF, FTF, BPIR and BPOR. Another very important variable is 2BSF, which is generated by the second harmonic of the balls [30]. The equipment uses the SKFNU322 bearing as a support for the power transmission shaft between the motor and the blades. Table 1 shows the fundamental characteristics of the bearing. The guidelines of the ISO 10816-3 standard [43] have been followed to obtain the frequency spectra of the SKFNU322 bearing variables and the equipment fundamentals such as the blades and SPEED. The frequencies can be seen in the table below (Table 2). Vibration analysis. Predictive maintenance is largely based on advances in the processing of vibration signals, through the frequency spectra and the power density spectra generated by these signals. The mathematician Jean Baptiste Fourier formulated the relationship describing the generated waves as a function of frequency instead of time (Eq. 7) [16]. The Fast Fourier Transform (FFT) makes it possible to avoid signal masking, or aliasing, in signals of small amplitude. The PSD, or Power Spectral Density (e.g. applied in [26] and [33]), is the energy carried by each characteristic frequency; it follows the mathematical expression below (Eq.
8): The sum x(i) for ∆T is the mean power of the interval. One way of estimating this energy is to limit the integration interval. It has been used a window based on the sample. This window is determined by the period o gram and allows to analyse hidden frequencies, avoiding signal masking [35]. The Hilbert HT transform (Hilbert Transform) studies the signal envelope. It achieves an improved PSD, making it easier to sample signals at low frequencies [31]. The method follows the procedure of transforming two functions s(t) and 1/(ᴫt) into a third one, see the following mathematical expression (Eq. 9): The amplification of the signal envelope, for low frequency vibrations, provides improvements and disadvantages such as sensitivity to noise [29]. This was solved by the Wavelet Transform (WT), which provides information on signal processing such as time and frequency [41]. Data acquisition and processing The study covers a period of 15 years, using Entel IRD and Odyssey Emonitor software for data acquisition and analysis. The measurement equipment has 16 channels with a nominal voltage of 24 V and 3.6 mA current. The investigation was carried out on a 500 kW of power blowing machine with 3 m blades in a real operating conditions (constant regime) for 15 years and it was made by accredited technicians according to standard 18436-2, this fact makes this study unique in its category. There are 4 filters used through a multiplexer, which generates acceleration, velocity and displacement signals. The accelerometer used has a sensitivity level of 100 mV/g., with a frequency range of 10000 Hz. The analogue-digital converter captures up to 51.2 kHz, with a resolution of 16 bit. Monitoring is done through a magnetized anchor with quick release, which allows to reach faults with a frequency and sensitivity range between 0-300 Hz. A window of 3200 lines is used, with a range of 60000 to 300000 lines, depending on whether it is for speed or acceleration. The resolution of the Hanning window will be between 18 and 194 CPM, depending on the type of variable analysed (speed or acceleration). According to the ISO 10816 standard, there are four levels of functional risk. For an equipment with flexible shaft, with power over 300 kW and speed of 1500 rpm, there would be an operating level that ranges from 0.18 to 11 RMS (mm/s), and its critical value is 7.1 RMS (mm/s), where corrective maintenance must be applied. The ISO 10816 Standard, used for the diagnosis of vibration functionality in industrial equipment, determines that the evaluation should be done by the spectral level based on speed, as opposed to the use of acceleration or displacement, which only act properly at high or low frequency, respectively. Integration considers the vibration signals produced at low frequency, acting to a lesser extent on those of high frequency. This is due to the proportional invertibility of the speed V(f) as a function of frequency f, see the following expression (Eq. 10): The D(f) shift is also affected by the frequency in an exponential way (Eq. 11): where A(f) is the acceleration of frequency f and C 1 and C 2 are constants. The equipment has 2 sampling points for data acquisition, these are called POS3 and POS4. In each of them, data are collected in the three space coordinates, identified as POS3H, POS3V and POS3A for the third position and the horizontal, vertical and axial axes, respectively. The monitoring of point 4 follows the same rule. 
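The defect-frequency expressions (1)-(6) themselves are not reproduced in the text above, so the sketch below uses the standard textbook formulas for the cage (FTF), roller spin (BSF), outer-race (BPOR) and inner-race (BPIR) frequencies, together with the blade-pass frequency Fpa. Whether these coincide exactly with the authors' equations is an assumption, a contact angle of zero is assumed (as is usual for a cylindrical roller bearing), and the geometry numbers are placeholders rather than the SKFNU322 data of Table 1.

```python
import math

def bearing_fault_frequencies(rpm, n, d, D, contact_angle_deg=0.0):
    """Standard rolling-element bearing defect frequencies in Hz.
    rpm: shaft speed; n: number of rolling elements; d: element diameter;
    D: pitch diameter (same units as d); a contact angle of 0 degrees is
    assumed for a cylindrical roller bearing such as NU322."""
    fr = rpm / 60.0                                # shaft rotation frequency, Hz
    c = (d / D) * math.cos(math.radians(contact_angle_deg))
    return {
        "FTF":  0.5 * fr * (1.0 - c),              # cage (train) frequency
        "BPOR": 0.5 * n * fr * (1.0 - c),          # outer-race ball/roller pass
        "BPIR": 0.5 * n * fr * (1.0 + c),          # inner-race ball/roller pass
        "BSF":  0.5 * fr * (D / d) * (1.0 - c ** 2),  # roller spin frequency
    }

def blade_pass_frequency(rpm, n_blades):
    """Fpa: blade-pass frequency from rotational speed and blade count."""
    return (rpm / 60.0) * n_blades

# Placeholder geometry (illustrative only; see Table 1 of the paper for real values)
freqs = bearing_fault_frequencies(rpm=1500, n=14, d=28.0, D=172.5)
freqs["2BSF"] = 2.0 * freqs["BSF"]                 # second harmonic of the rollers
freqs["Fpa"] = blade_pass_frequency(rpm=1500, n_blades=12)
for name, f in freqs.items():
    print(f"{name}: {f:.1f} Hz")
```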
According to the indications of the standard 10816-3, there are three maximum levels that dictate the correct operation of the equipment. For this analysis, a total of 617 shots are obtained for each of the two positions and the three coordinated axes, resulting in a total of 3702 data. This data discriminates the variable, the position and the axis that are most decisive and influential in the operation of the equipment. As a summary, Figure 2 includes a diagram with the procedure used. Results and discussion In the first instance, the characteristic frequencies of the SKF-NU322 bearing in positions 3, 4 and their respective Cartesian axis are analysed. Within each position, the most critical axis is evaluated and compared with the other measurement position. After the analysis of each frequency separately, they are all compared in their most sensitive positions to determine three fundamental elements, which would be the most harmful variable for the machine, its position and its axis. Once the most sensitive variable of the bearing has been identified, it is related to the characteristic frequencies of the SPEED and Blades bearing in their most decisive control positions. The analysis of the FTF frequency, generated by the bearing cage, is obtained by calculating the sum of values of the entire spectrum, at each of the control points. It is also done with the mean and the maximum or peak value, all in RMS. For accumulated values and peak at position 3, the vertical axis is the most sensitive. When evaluating position 4, it has been obtained a value identical to the previous one. If the PO3V and POS4V points are compared, it can be seen how it is in this latter position where the effect of the FTF frequency is greater, both in maximum accumulated values and in peak values, see Table 3. Fig. 2. Diagram with the procedure In the analysis by values, the vertical axis of position 4 is the most important one to predict the failure of the frequency generated by the ball cage. To fully validate the method, the FTF result on POS3 and POS4 is graphically compared to the vertical axis. The aim is to see if there is concordance and symmetry between the two, which allows us to state with complete certainty that the failure can be predicted, just analysing one of them. Figure 3 shows that there is linearity between both and how it is in POS4V where the highest values are reached. This determines that it would be possible to predict the failure of this frequency, analysing only one of the six sampling positions. It is important to highlight the ski curve effect, going from perfect operation values to breakage values higher than 7.1 RMS on days. Once the FTF has been evaluated, BSF is analysed, starting from the third position and its axes, where the highest accumulated value and peak is obtained, which would be on the axial axis, with 47.7 RMS and 0.692 RMS, respectively. It has been obtained more incidence in the axial axis than in the vertical one, this indicates that position 3 is more sensitive to the vibration effect of this variable. In the case of position 4, the vertical axis is the most important, followed by the axial and finally the horizontal axis. When comparing both positions, it can be seen how the axial and vertical axes are the most important, but in absolute terms POS4V that is the most decisive. Table 4 shows everything. Fig. 3. 
FTF result in terms of RMS(mm/s) in positions 3 and 4 vertical The variable generated by the bearing balls follows a common pattern in positions 3 axial and 4 vertical, even though they have different axes. The vertical axis of position 4 is the highest. In terms of peak values, they are very equal in both positions. The graphical analysis indicates that both positions are important and should be monitored, because it is not always the vertical axis that produces the highest peak value. When this result is compared with that obtained by the BSF variable, it is determined that the peak value in the latter reaches 7.1 RMS. However, in the variable generated by the balls, the peak value of the vertical axis is 0.727 RMS, which is within the optimum working conditions. This indicates that this variable is not the cause of bearing failure or breakage. All this can be seen in Figure 4. The BPOR analysis, in position 3, is the first variable where the most important effect is found in POS3H, its accumulated value is slightly higher than POS3V and both are very distant in relation to the axial axis. Fig. 4. BSF result in terms of RMS (mm/s) at positions 3 axial and 4 vertical Peak values are the same as cumulative values. It should be noted that these do not reach the risk values, as determined by the applied standard. In position 4, the most sensitive axes are the horizontal and the axial, both in peak values and in accumulated values. It should be noted that in position 3 the values do not reach the risk values for the machine's operation. The analysis of the BPOR frequency in accumulated values determines that POS3V is the most sensitive in position 3, and in the case of position 4, it is obtained in POS4A. The comparison of both determines that position 4 is the most decisive. All the above is shown in Table 5. The variable generated by the balls in the outer track of the bearing follows a common pattern in POS3H and POS4A positions, even though they have different axes. When comparing the graph, It has been obtained linearity between both positions and axes, this determines that the most severe action is the one perceived by POS4A, as shown in Figure 5. The study of the BPIR variable on axis 3 determines that the most sensitive axis in cumulative values and peaks is the axial, followed by the vertical position. In the case of position 4, It has been obtained that the most important axis is the horizontal one, followed at great distance by the vertical one. It should be noted that the effect between the axial and vertical axes of position 3 is closer, in cumulative terms, than those obtained between the horizontal and vertical axes of position 4. Fig. 5. BPOR result in terms of RMS(mm/s) at positions 3 horizontal and 4 axial The analysis of the peak values determines that the axes 3 axial and 4 horizontal are the most important, but these do not reach values higher than 0.6 RMS, well below the critical 7.1, established by the existing legislation. When comparing both positions, the result is that the axial position 3 is the one that perceives most the vibration effect of this variable, followed very closely by the horizontal position 4, with a little more than 1 RMS of difference. The same result is obtained in terms of peak values. Table 6 shows everything. The variable generated by the balls in the inner track of the bearing follows a common pattern in POS3A and POS4H positions, even though they have different axes. 
This result is important because the linearity between both control points and determines that monitoring is possible by observing either of them, as shown in Figure 6. The study of the variable 2BSF on the 3rd axis determines that the most sensitive axis in cumulative values and peaks is the vertical one, followed by the horizontal position. In the case of position 4, the most important axis is the horizontal one, followed at great distance by the vertical one. In position 4, the highest value is obtained on the horizontal axis, followed at a great distance by the vertical and axial axis. It should be noted that the effect between the horizontal and vertical axes of position 3 and 4 is very close in cumulative terms. The analysis of the peak values determines that axis 3 vertical and 4 horizontal are the most important. These values do not exceed 1.6 RMS, being well below the critical value of 7.1 RMS. When comparing both positions, It has been obtained that the vertical position 3 is the one that most perceives the vibration effect of this variable, followed very closely by horizontal position 4, with little more than 2 RMS difference. In terms of peak values, It has been obtained the same result (see Table 7). The variable generated by the second harmonic of the balls follows a common pattern in positions POS3V and POS4H, even though they have different axes. This result is important and gives confidence to determine that it is possible to monitor the machine, observing the vertical position 3, as shown in Figure 7. After evaluating the bearing frequencies, it is observed that the most determining variable is that produced by the second ball harmonic, followed by that of the cages. The analysis of the peak values gives a more important result, obtaining in FTF 7.74 RMS a critical value of operation, making this frequency the most determinant, as shown in table number 8. After identifying that the FTF is the most determining frequency of the SKFNU322 bearing, it is then compared with the results of the other two machine operating variables, such as SPEED and Blades. Fig. 6. BPIR result in terms of RMS (mm/s) at positions 3 axial and 4 horizontal Other studies determine that the turning variable of the machine generates more incidence in POS4V. In the case of the blades, the axial position would be the most important [2,3]. In the Table 9 shows the comparison of the three frequencies. The graph above shows the result of SPEED, Blades and FTF, with no linearity. It is certain that this machine can be monitored through positions POS4V and POS3A, as shown in Figure 8. Conclusions After the analysis of the variables FTF, BSF, BPOR, BPIR and 2BSF of the bearing SKFNU322, using the approximation method by power density values in accumulated values and peaks, it can be said that most important values are found in position 4, and specifically in the vertical axis. In cumulative values, the actions created by 2BSF are the most important. However, in peak values, the most remarkable action is the one caused by FTF, with values higher than the critical ones, according to the application standard. When comparing the FTF variable with the most representative of the machine, SPEED and Blades, it can be seen that the turning variable of the machine generates more effect in the vertical position, and the one on the blades has more effect in the axial position. 
Finally, it can be concluded that, despite 42 variables having been studied at six control points, it is possible to predict the failure of the bearing and avoid the stoppage of this equipment by monitoring just POS4V and the SPEED variable.
5,684
2021-06-15T00:00:00.000
[ "Materials Science" ]
Gator: A Python-driven program for spectroscopy simulations using correlated wave functions The Gator program has been developed for computational spectroscopy and calculations of molecular properties using real and complex propagators at the correlated level of wave function theory. Currently, the focus lies on methods based | INTRODUCTION Quantum chemical calculations have become a corner stone for fundamental and applied research in molecular sciences. Simulations of spectroscopies and predictions of molecular properties are made available through various formulations of response theory and allow for the revelation of molecular structure-function relationships that are indispensable for experimental spectrum characterization and rational material design. 1,2 However, any attempt to compute molecular spectra and/or properties remains a compromise between accuracy and applicabilityparticularly so for large molecular systems embedded in complex environments. 3 While reliable quantum chemical methods exist to address the electronic ground state, electronic structure theory as a whole and efficient computer programs in particular are lagging behind the current needs in the field of computational spectroscopy. 4 A wide range of semiempirical and density functional-based methods exist, which are applicable to a wide range of large molecular systems, 5, 6 but they are not reliable and predictive without extensive benchmarking and/or comparisons with experiments. This turns out to be an increasingly difficult way forward with growing system complexity as suitable experiments are often missing and highly accurate methods are computationally unaffordable. 7 Therefore, the development of wave function-based ab initio methods and the provision of efficient computer program implementations are in high demand, not least because such methods are systematically improvable and usually possess predictive power. In contrast to semiempirical and density functional-based approximations, the physical effects contained in the ab initio Hamiltonian are a priori well defined, and from the outset, it is clear which properties and spectra can be described in a physically correct manner. The focus of the Gator program lies on single-reference methods, currently using the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to the third order in the fluctuation potential [8][9][10][11] as implemented in the ADC connect (adcc) toolkit. 12 It had been inspired by the adcman module contained in Q-Chem 13, 14 but evolved into an independent, open-source hybrid Python/C++ module. Based on the capabilities of adcc, the Gator program enables the calculation of molecular properties using linear real and complex response functions. In addition, Møller-Plesset (MP) perturbation theory and ADC (2) have been implemented with efficient exploitation of high-performance computing (HPC) resources. This implementation is accomplished using a distributed auxiliary Fock matrix-driven tensor contraction scheme with hybrid MPI/OpenMP parallelization. Overall, the Gator program is designed to provide a flexible platform for (i) high-performance ab initio quantum chemistry, (ii) scientific extensions (methods and algorithms) with relative ease and quick turnover time, and (iii) educational training with a high degree of student interactivity. The available features of Gator are described in more detail in the corresponding subsection. 
With partly different scientific objectives but an otherwise largely shared philosophy and intentions, similar program development efforts have recently been undertaken, for example: (i) the Psi4NumPy 15, 16 Python programming environment for facilitating the use of the self-consistent field (SCF) and post-Hartree-Fock (post-HF) kernels from the Psi4 program through NumPy 17 ; (ii) the PySCF program platform 18,19 for Python-based SCF and post-HF electronic structure theory calculations of finite and periodic systems; (iii) the molsturm 20 quantum chemistry framework, which provides a contraction-based and basis function-independent SCF, integrating readily with existing integral libraries and post-HF codes; and (iv) the Dalton Project 21 providing a platform written in Python for the interoperability of quantum chemical software projects. There exist several other efforts to provide massively parallel quantum chemistry programs, which do not necessarily exploit a Python-based modular program setup. For an overview, see Refs. 22, 23. | Algorithmic considerations The performance-determining tasks in post-HF spectroscopy calculations are the atomic-to-molecular orbital integral transformations followed by the tensor operations defined in the iterative solution of the underlying response equations. Conventional implementations suited for shared-memory computers typically store the full tensor of transformed antisymmetrized two-electron integrals in memory. While this approach works well for calculations of molecular systems up to about 700 one-particle basis functions on standard computers, it quickly becomes unfeasible for larger systems due to a memory bottleneck. As a complement to a conventional integral transformation, we have also implemented an MPI/OpenMP parallel auxiliary Fock matrix-driven transformation routine. The basic idea of this distributed integral transformation is to express quantities on the molecular orbital (MO) basis (such as integrals and tensor contractions) in terms of auxiliary Fock-like matrices. This has previously been exploited in an atomic orbitalbased formulation of the complete active space SCF method on graphical processing units. 24 For electron-repulsion integrals in the MO basis, we use the simple relation where Here, i, j, k, l and α, β, γ, δ refer to MO and atomic orbital (AO) indices, respectively, and N is the number of basis functions. The basic advantage is that it allows these quantities to be constructed incrementally, and by using a split MPI communicator, the work of constructing this set of auxiliary Fock matrices can be divided into the available cluster nodes. Any MO integral resulting from Equation (1) remains afterward on the node of its evaluation, being ready, for example, for the distributed calculation of the MP2 energy or ADC matrix-vector products. For the ADC part, Gator relies on contraction-based iterative algorithms for solving response and eigenvalue equations as has been described previously for adcc. 12 In such schemes, the ADC matrix M is never explicitly constructed, but instead, the secular and response equations are solved by calculating matrix-vector products with sets of trial vectors {b i }: Formally, the ADC(2) vectors consist of a single-excitation part and a much larger double-excitation part, which can be folded into the space of single excitations. 25 We implemented this algorithm within the HPC-QC module of Gator as vectors in the single excitation space can be communicated between nodes at a negligible cost. 
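A minimal, dense NumPy sketch of the auxiliary Fock matrix idea is given below: for a fixed MO pair (i, j), a density-like matrix built from the MO coefficients is contracted with the AO two-electron integrals into a Fock-like AO matrix, and transforming its two remaining indices yields the MO integrals (ij|kl). This is only a schematic illustration of the relations above; the actual implementation distributes the pair indices across nodes, screens the AO integrals, and never holds the full AO tensor in memory as done here.

```python
import numpy as np

def mo_integrals_pairwise(eri_ao, C):
    """Pairwise (auxiliary-Fock-like) AO->MO transformation of two-electron
    integrals: for each MO pair (i, j) build D^{ij}_{gd} = C[g,i]*C[d,j],
    contract with the AO integrals into an AO Fock-like matrix F^{ij}_{ab},
    then transform the remaining two indices to obtain (i j | k l)."""
    nao, nmo = C.shape
    eri_mo = np.empty((nmo, nmo, nmo, nmo))
    for i in range(nmo):
        for j in range(nmo):
            d_ij = np.outer(C[:, i], C[:, j])              # AO "density" for the pair
            f_ij = np.einsum("abgd,gd->ab", eri_ao, d_ij)  # Fock-like matrix, O(N^4) per pair
            eri_mo[i, j] = C.T @ f_ij @ C                  # remaining half-transformation
    return eri_mo

# Tiny random example with a pair-symmetric AO tensor, (ab|gd) = (gd|ab)
nao = 6
rng = np.random.default_rng(0)
eri_ao = rng.random((nao, nao, nao, nao))
eri_ao = 0.5 * (eri_ao + eri_ao.transpose(2, 3, 0, 1))
C = np.linalg.qr(rng.random((nao, nao)))[0]                # mock MO coefficients
ref = np.einsum("abgd,ai,bj,gk,dl->ijkl", eri_ao, C, C, C, C)
assert np.allclose(mo_integrals_pairwise(eri_ao, C), ref)
```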
The expressions for the ADC(2) matrix-vector product in Equation (3) have thus also been derived and implemented with the use of the auxiliary Fock matrix-driven MO transformation. For smaller systems, the computational cost of the auxiliary Fock matrix-driven approach is not beneficial, and hence, both variants of integral transformation and calculations of σ-vectors are made available in the Gator program by VeloxChem. 26 With AO integral screening based on a combination of the Cauchy-Schwarz inequality and the density, the calculation of auxiliary Fock matrices scales somewhat better than N 3 . As there are a total of N 2 pair indices (i, j), this results in a total scaling of less than N 5 for the auxiliary Fock matrix-driven integral transformation in Equation (1). | Modular structure of Gator The Gator program consists of three major modules as illustrated in Figure 1: (i) HPC-QC for MP2 and ADC(2) calculations in HPC environments, exploiting the Fock matrix-driven integral transformation outlined above; (ii) Respondo for the evaluation of real and complex linear response functions; and (iii) ADC connect (adcc) 12 for excited-state calculations based on the ADC scheme up to the third order. The Hartree-Fock reference state and the required one-and twoelectron integrals are provided by the VeloxChem 26 program. The modules of Gator are developed in a two-layered fashion. The upper layer is written in Python and includes the management of the hardware resources, input/output handling, result processing, and high-level parts of some algorithms. Numerical array operations are carried out using NumPy 17 and SciPy. 27 Parts of the lower program layer are written in C++, providing functionality for the evaluation of tensor contractions that, in adcc, are performed by libtensor. 28 The two layers communicate by means of the pybind11 library. 29 | Implementation and efficiency of HPC-QC In addition to the standard implementations of MP2 and ADC(2) available within the adcc module, we have implemented new and highly parallel versions of MP2 and ADC(2) that exploit the auxiliary Fock matrix-driven MO transformation within the HPC-QC module. VeloxChem is responsible for the construction of the auxiliary Fock matrices, including the underlying evaluation of two-electron integrals on the AO basis using a hybrid MPI/OpenMP parallelized module for efficient execution on HPC clusters. The module accepts, depending on memory limits, an arbitrary number of density matrices in a batch and handles those in parallel. With a single evaluation of the set of two-electron integrals, the corresponding complete batch of auxiliary Fock matrices is constructed. Such a multiauxiliary Fock matrix construction leads to additional costs for integral distribution. As a larger batch size comes at the cost of reduced OpenMP parallel efficiency in the integral evaluation, it may become necessary to balance the two forms of parallelism in the integral transformation depending on the size of the system and the adopted basis set. In the same way, HPC-QC is hybrid parallelized. With regard to MO integral storage, a split MPI communicator is used across cluster nodes, and any given integral naturally resides in the memory of the node executing Equation (1) to enable usage of the aggregated memory of the available cluster nodes, as well as to minimize the need for communication. For example, the storage of an integral block hoojjvvi can be distributed over the occupied-occupied (oo) or the virtual-virtual (vv) pair indices. 
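The contraction-based solution of Equation (3), in which the ADC matrix is never built and only products sigma = M*b with trial vectors are required, can be illustrated with a generic matrix-free Davidson-type solver. The toy version below is an assumption-laden sketch for a symmetric matrix and does not reproduce the preconditioning or subspace handling of the actual adcc/HPC-QC solvers.

```python
import numpy as np

def davidson_lowest(sigma, diag, n, tol=1e-8, max_iter=50):
    """Matrix-free Davidson solver for the lowest eigenpair of a symmetric
    matrix M of dimension n. `sigma(b)` must return M @ b for a trial vector b,
    and `diag` is the diagonal of M used as a preconditioner."""
    b = np.zeros(n)
    b[np.argmin(diag)] = 1.0            # start from the lowest diagonal element
    V = [b]                             # orthonormal subspace vectors
    S = [sigma(b)]                      # corresponding sigma vectors
    for _ in range(max_iter):
        Vm, Sm = np.column_stack(V), np.column_stack(S)
        H = Vm.T @ Sm                   # small subspace matrix
        theta, y = np.linalg.eigh(H)
        theta, y = theta[0], y[:, 0]    # lowest Ritz pair
        x = Vm @ y
        residual = Sm @ y - theta * x
        if np.linalg.norm(residual) < tol:
            return theta, x
        # Diagonal (Davidson) preconditioning of the residual, then re-orthogonalize
        denom = np.where(np.abs(diag - theta) > 1e-12, diag - theta, 1e-12)
        t = residual / denom
        t -= Vm @ (Vm.T @ t)
        t /= np.linalg.norm(t)
        V.append(t)
        S.append(sigma(t))
    raise RuntimeError("Davidson did not converge")

# Toy example: a diagonally dominant symmetric matrix accessed only via sigma()
rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1.0)) + 1e-2 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
val, vec = davidson_lowest(lambda b: A @ b, np.diag(A), n)
assert np.isclose(val, np.linalg.eigvalsh(A)[0], atol=1e-6)
```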
The distributed storage of MO integrals allows for a parallel implementation of ADC(2), where the tensor operations take place on the individual compute nodes that hold a list of distributed pair indices and the corresponding auxiliary Fock matrices on the MO basis. The construction of matrix-vector products (σ-vectors) in ADC(2) is evaluated by looping over the distributed pair indices and contracting the corresponding auxiliary Fock matrices on the MO basis with the trial vectors of single excitations. Storage of the large vectors of double excitations is avoided by packing them into the space of single excitations. 25 As such, parallel ADC(2) calculations can be carried out efficiently on cluster nodes with only moderate amounts of memory. In Figure 2, we show results for guanine oligomers. We investigated the scaling of the MO integral calculation compared to the construction of σ-vectors with respect to the number of contracted basis functions (gray curve in Figure 2). The MO integral calculation involves all integrals necessary at the level of ADC(2) theory. For guanine oligomers with the def2-SVP basis set, the MO integral calculation scales as N 4.9 , in line with the discussion above. The evaluation of MO integrals therefore uses the majority of the computational time (core-hours) in the parallel implementation of ADC(2), as the construction of σ-vectors shows a lower scaling of N 3.8 . | Capabilities Calculations using the ADC(2), 8 ADC(2)-x, 9 and ADC(3) 10 methods can be performed using the adcc module for shared-memory architectures. For spectrum calculations in the soft X-ray region, the core valence separation (CVS) is available. [30][31][32] (Figure 1 gives an overview of the Gator program capabilities and modules.) All excited state and transition properties implemented in adcc are also available in the Gator program. Absorption cross sections and C 6 dispersion coefficients can be obtained through the complex polarization propagator (CPP) approach, [33][34][35][36][37][38] implemented in the Respondo library. Calculations with explicit solvents are currently available using the polarizable embedding (PE) model [39][40][41] through the CPPE library 42 as interfaced with VeloxChem. MP2 energies and ADC(2) excitation energies can additionally be obtained by employing the HPC-QC module using the distributed aggregated memory on HPC clusters. | HOW TO USE GATOR To illustrate the modular, low-barrier nature of our Python-based environment, we outline the calculation of the X-ray absorption spectrum of water in Figure 3. This calculation was performed with the Gator program in a Jupyter notebook, 43 which is an open, web-based interactive environment of increasing popularity. By dividing the code into multiple blocks, Jupyter enables the calculation of individual code sections, keeping results in local memory. This simplifies rewriting and recomputing code snippets, such that the user does not need to recalculate all quantities after a modification is performed. This approach allows for an interactive path to programming, analysis, and education. In Gator, we take advantage of these concepts by combining ab initio computational spectroscopy with intuitive analysis and plotting routines through NumPy 17 and Matplotlib. 44 Molecular systems can be investigated in an iterative manner by varying parameters such as basis sets, convergence criteria, numerical solvers, and convolution/broadening functions. Gator also offers traditional job submission using input files.
An example input file is shown in Figure 4. | NUMERICAL EXAMPLES Within the electric dipole approximation, the absorption cross section of a randomly oriented molecular system is obtained from the isotropic average of the complex polarizability 33,35,37 : F I G U R E 3 Gator notebook mode: The notebook input for the calculation of the X-ray absorption spectrum of water (upper left) and program output (lower left), focusing only on information for the requested five excited states. The resulting spectrum plot is shown on the right F I G U R E 4 Gator program input file for performing the core valence separation-complex polarization propagator-algebraic diagrammatic construction (CVS-CPP-ADC(2)) spectrum calculation shown in Figure 3 allowing for the calculation of an absorption spectrum at arbitrary energy intervals by solving the associated linear response equations over a range of frequencies. To illustrate the capabilities of the Gator program, we computed the UV/vis and X-ray absorption cross sections, the static polarizability, and the polarizability at imaginary frequencies, yielding C 6 dispersion coefficients 38 for furan and thiophene as shown in Figure 5. A damping term γ = 0.24 eV was used for the calculation of absorption cross sections. For comparison, the excitation energies and intensities obtained from explicit calculations of 25 eigenstates are convoluted using a Lorentzian function with the same broadening parameter. All property calculations were performed using the 6-311++G** 45 basis set. The structures of the molecules were optimized at the B3LYP 46 /cc-pVTZ 47 level of theory using the Q-Chem 5.1 program. 14 With imaginary frequency arguments, we obtain a smooth monotonous polarizability as illustrated in the left panels of Figure 5, resulting in an isotropic static polarizability ( α 0 ) and a C 6 coefficient of 47.07 and 906.3 a.u. for furan and 62.34 and 1438 a.u. for thiophene. Experimental static polarizabilities are reported to be 48.59 and 65.18 a.u. for furan and thiophene, respectively-see Ref. 51 and references therein. For the calculation of valence excitation spectra, Gator offers the possibility to either solve explicitly for eigenstates or to scan over relevant frequency intervals using damped response theory. Using a Lorentzian broadening for the transitions in the first approach will result in a spectrum that is practically identical to that obtained in the second approach, as illustrated in the middle and right panels of Figure 5. The linear absorption cross section obtained from (CVS-)CPP-ADC and the broadened eigenvalue results agree perfectly for lower energies. However, for higher energies, differences between the two curves occur because the resolved 25 eigenstates are not sufficient to obtain the full spectrum window. The carbon K-edge spectra of furan and thiophene obtained with CVS-ADC(2) are given together with the experimental gas-phase spectra as insets in the right panels of Figure 5. Here, we include the first use of CVS-CPP-ADC, which has been developed and implemented in Gator. The use of the CVS approximation in addition to CPP here serves multiple purposes: (i) reduced matrix dimensions and a resulting decrease in computational effort; (ii) removal of spurious valence-continuum features that may contaminate the resolved spectrum window; and (iii) reducing difficulties in converging pure CPP-ADC calculations in the energy region of core excitations. 
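The comparison described above between explicitly resolved eigenstates and damped-response (CPP) spectra relies on convoluting a stick spectrum with a Lorentzian of the same broadening parameter; a minimal sketch of that broadening step is shown below. The stick data are placeholders and prefactors are omitted, so this is not the exact cross-section expression used by Gator or Respondo.

```python
import numpy as np

def lorentzian_spectrum(energies_ev, strengths, grid_ev, gamma=0.24):
    """Broaden a stick spectrum (excitation energies and intensities) with
    Lorentzian line shapes of half-width gamma (eV), mirroring the comparison
    between explicit eigenstates and damped-response (CPP) spectra.
    Prefactors are omitted, so the result is in arbitrary units."""
    energies = np.asarray(energies_ev)[:, None]
    strengths = np.asarray(strengths)[:, None]
    grid = np.asarray(grid_ev)[None, :]
    lineshape = (gamma / np.pi) / ((grid - energies) ** 2 + gamma ** 2)
    return np.sum(strengths * lineshape, axis=0)

# Hypothetical stick spectrum (placeholder values, not computed results)
exc = [5.9, 6.4, 7.1, 7.8]          # excitation energies, eV
osc = [0.02, 0.15, 0.08, 0.30]      # oscillator strengths
grid = np.linspace(5.0, 9.0, 801)
sigma = lorentzian_spectrum(exc, osc, grid, gamma=0.24)
print(grid[np.argmax(sigma)])       # position of the strongest broadened feature
```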
Convergence issues have also been observed for CPP calculations of core excitations in the context of coupled cluster theory, and we note that the use of the CVS scheme introduces only a minor scalar shift in absolute energies. The overall discrepancy with respect to experiment amounts to a blue-shift of 0.5 eV. 52 To demonstrate the capabilities of Gator in computing the absorption spectra of solvated molecules at the ADC (2) level of theory, the spectrum of noradrenaline has been computed in the gas phase and in the aqueous solution using the standard eigenvalue solver, as well as the CPP approach. The vacuum structure of noradrenaline was optimized at the CAM-B3LYP/def2-svp 53, 54 level of theory as implemented in ORCA 4, 55 which served as input for the subsequent calculation of the seven energetically lowest singlet excited states at the ADC(2)/cc-pVDZ level of theory. Then, CPP-ADC(2) calculations were carried out on the frequency range obtained via the regular ADC(2) approach with a spacing of 0.1 eV, yielding the one-photon absorption cross section for each frequency; see Figure 6. To model the UV/vis spectrum of noradrenaline in solution, it was placed in a box of size 48 × 44 × 43 Å 3 of water molecules using packmol, 56 and molecular dynamics (MD) simulations were carried out with NAMD 2.12 57 using the CHARMM 36 force field 58 and parameters for noradrenaline obtained with CGenFF. 59 The system was equilibrated for 5 ns at 310 K and a pressure of 1 atm (NPT ensemble). A subsequent hybrid QM/MM minimization was performed with NAMD, 60 where noradrenaline represented the QM region, and the MM region was assigned to the solvent. The QM computations were run at the CAM-B3LYP/def2-SVP level of theory with ORCA 4 using an electrostatic embedding scheme. The final snapshot of the quantum mechanics/molecular mechanics (QM/MM) minimization was extracted and parameterized using PyFramE 61 with a standard potential model. In the spectrum calculation, the solvent effects are modeled self-consistently with the PE model applied for the Hartree-Fock reference state. [39][40][41] The absorption spectra of noradrenaline in vacuum and solution are shown in Figure 6, where we note that energies in the latter calculation are not perturbatively corrected. 41 | GATOR FOR TRAINING AND EDUCATION The Gator program is a convenient platform for undergraduate training and education in computational and theoretical chemistry with students ranging from the introductory bachelor to the advanced master level. It offers insight into the underlying general theory of quantum chemistry and the adopted electronic structure theory methods with direct and concrete access to the intermediate quantities (tensor elements and wave functions) that are addressed in the workflow of quantum chemical calculations. This is accomplished by interfacing the Gator program with the Jupyter notebook as described above. This allows the student to steer, control, and analyze the course of a desired quantum chemical calculation in a modular manner. Students who are initially not familiar with programming are naturally motivated to overcome barriers involved with entering the field of theory and program development. With basic knowledge of Python programming, it is thus possible for students to design and implement key algorithms such as, for example, the SCF F I G U R E 6 UV/vis spectra of noradrenaline in vacuum and solution. Continuous lines show the convoluted stick spectra, obtained from a Lorentzian function with γ = 0.124 eV. 
Dots depict the one-photon absorption cross section σ(ω) computed with complex polarization propagator (CPP) optimization of electron densities or the response theory calculations of excitation energies. Following this philosophy, it is possible to incorporate such practical exercises into existing courses, which has been successfully demonstrated already by the Psi4 16 community for example. The deepened algorithmic understanding gained by the students will be advantageous when entering the research level in this field. | OUTLOOK The Gator platform will undergo development following several major lines: (i) The HPC-QC module will be extended to include implementations for the calculation of transition and excited-state properties, as well as the inclusion of external one-particle potentials exploiting the intermediate state representation. 62 (ii) The complex linear response function will be implemented in the HPC-QC module. (iii) The capabilities of the Respondo library will be further extended by including higher-order response functions to allow for simulations of advanced linear and nonlinear spectroscopies. (iv) Quantum chemical methods other than the ADC approaches will be included with the primary aim at approximate coupled cluster theory of second order (CC2) 63 and also equation-of-motion and linear-response coupled cluster methods with singles and doubles. [64][65][66] (v) Modular libraries for environment models other than the PE model are under development. (vi) The engine of the HPC-QC module, that is, the efficient construction of the large number of auxiliary Fock matrices on the AO basis, will be adapted for execution on heterogeneous cluster nodes with varying combinations of central processing units and graphics processing units. These extensions will be realized as developments of the already existing modules (i)-(iii), while (iv) will require the development of a new module of its own. | SOFTWARE AVAILABILITY Gator is freely available under the GPLv3 license. The source code can be found on GitHub (https://github.com/gatorprogram/gator). Input files and notebook examples can be found on the GitHub website. Furthermore, the package can be conveniently installed using the conda package manager (https://anaconda.com) by running the command conda install gator -c gator, which will also automatically install all other dependencies. RELATEDWIREs ARTICLES The ORCA program system Molpro: A general-purpose quantum chemistry program package VeloxChem: A Python-driven density-functional theory program for spectroscopy simulations in high-performance computing environments
4,947.6
2021-03-17T00:00:00.000
[ "Computer Science", "Chemistry" ]
Charge-state effects on charge transfer of helium ions in gold nanosheet We investigate the charge-state effects on charge transfer of helium ions in gold nanosheet using the time-dependent density functional theory non-adiabatically coupled to the molecular dynamics. In order to characterize and extract the charge-state information of incident particles inside the nanosheet, we develop two novel computational methods. It is found that the charge transfer behavior of He ion in gold nanosheet is sensitive to its charge state at the time of incident. Analysis of these results allows us to gain new insights in the interaction between He ions and gold nanosheet. This work validates the ability of current methodology in dealing with ion collisions in materials. Introduction When an incident particle moves through a material, it is angularly dispersed and loses its kinetic energy due to collisions with nuclei and the electrons that compose the material. A detailed theoretical understanding of the interaction of particle radiation with materials is highly crucial for addressing a number of important problems in many fields of fundamental science and practical technology [1,2], such as nuclear industry [3][4][5][6] and space technology [7] applications, high-energy density physics [8], radio-therapy in medicine [9][10][11], fundamental-research laboratories, as well as manufacturing of integrated circuits. On average, the ability of a material to slow down projectile particles traveling in its interior is characterized by the stopping power [12][13][14][15][16]. Conceptually, the stopping power of a material for a given type of projectile can be divided into two categories depending on the type of particles that compose the material: one the nuclear stopping [12,16,17] at low projectile velocities, in which the projectile particle mainly transfers energy to nuclei without electronic excitations, and another the electronic stopping [13,[17][18][19][20] at high projectile velocities, in which the projectile particle mainly transfers energy to electrons. Although for bulk materials under ion irradiation, stopping power has been fairly well-studied experimentally [15,16,[21][22][23][24], and tabulated for a diverse range of target materials and incident projectiles, there is a fair amount of information of detailed stopping process that can be extracted from a fully atomistic first-principles simulations which is not available in the form of experimentally tabulated data. Indeed, several modern first-principles simulations [13,17,[25][26][27][28][29][30] have provided a detailed description of the energy and charge dynamics in bulk, as it offers greater spatial and temporal resolution than currently achievable experimentally. In general, the interaction of incident particles with the matter may trigger numerous processes, involving two degrees of freedom of nuclei and electrons. In light of the present findings of charge exchange/transfer effects, some interesting open points in electronic interaction of slow ions can be understood. For instance, the excessive stopping can be explained as a consequence of charge exchange, which represents an additional energy loss channel [12]. Overall, a good understanding of these intriguing charge exchange behaviors of the stopping process requires extensive microscopic knowledge of how an incident particle loses its kinetic energy and transfers energy to its surrounding atoms near the surface of the material. 
In addition, the physics of the charge transfer process itself deserves further attention, and it is important to take into account the projectile's charge state inside the material in order to identify the charge exchange effects that take part in the shopping process. For this reason, in this work, He ion in gold is taken as an example. Here, as a point of reference, the stopping power of He ions in bulk gold has been studied extensively both experimentally [23,24,[31][32][33][34] and theoretically [35] providing opportunities to validate our results for the bulk limit. Another point in this respect is that there is no model calculations made in similar gold nanosheet conditions covering the whole range from low to intermediate energies for helium ions. The purpose of this work is to make a comparative study of the charge transfer for helium ions in single crystal nanosheets of gold, considering the effects of the ion charge [36] on the stopping power. The rest of this paper is organized as follows. In section 2, this paper mainly introduces the method and model used in this calculation. In section 3 introduces the results of this simulation calculation and some analysis of these results. In section 4 summarizes the conclusions of this study. Atomic units (h = m e = e = 1) are used throughout unless explicitly stated. Computational methods In a recent study, we have used a model to compute the slowing down of ions in single crystal films, where we combine time-dependent density functional theory (TDDFT) calculations for electrons with molecular dynamics (MD) simulations for ions in order to obtain an unbiased microscopic insight into the ion-film interaction on the same footing in real-time [37,38]. This approach allows for a self-consistent coupling of electronic and nuclear motion. Here we briefly recall the theoretical framework and the mathematical relations. The motion of electrons is described by the time-dependent Kohn-Sham (TDKS) equation, where ψ i is the Kohn-Sham orbital (KSO), and the HamiltonianĤ KS is defined bŷ where m e is the mass of an electron, and the TDKS potential V KS is divided into the following terms: where V en is the electron-nuclei Coulomb potential using pseudopotential or all-electron approach in calculations with atomistic details, such as V en (r, R(t)) = − The Hartree potentials, V H , is given by and accounts for the electrostatic Coulomb interaction between electrons, where ρ(r, t) represents the electronic density given by: where N is the number of electrons in the system. And V XC is the exchange and correlation (XC) potential, which can be expressed as [39] where E hom XC is the XC energy density of a homogeneous electron gas of densitiesρ, which has an exact expression in [40]. Then the motion of the Ith nuclei is determined by Newton's equations of motion: where M I and Z I are the mass and charge of the Ith nucleus, respectively. R(t) ≡ {R 1 (t), . . . , R L (t), . . . } denote ionic coordinates. We will utilize density functional theory to initialize the electronic orbitals of the projectile/target system, while TDDFT will be employed to simulate the electron dynamics. In our model of bare He 2+ ion collisions, He 2+ ion is simply given an initial velocity v. However, in the case of He + ion with one bound electron, the preparation of the initial state must be carefully considered in order to avoid an unphysical initial decoupling of the projectile's nucleus from the bound electron [41]. 
In order to achieve this, we multiplied the KSO of the projectile's bound electron by a phase factor $e^{i\mathbf{k}\cdot\mathbf{r}}$ at time $t = 0$. This was done to ensure that the projectile's bound electron moves along with the projectile's nucleus [42], that is, $\psi_i(\mathbf{r},0) \longrightarrow e^{i\mathbf{k}\cdot\mathbf{r}}\,\psi_i(\mathbf{r},0)$, with $\mathbf{k} = m_e\mathbf{v}/\hbar$. This phase factor, commonly referred to as the electron translation factor, causes the He$^{+}$ ion KSO to carry an extra linear momentum along with its kinematic energy [43]. When a projectile collides with a target, charge transfer takes place alongside the creation and breaking of bonding or antibonding states in the immediate vicinity of the target [44]. Charge transfer involving the neutralization and reionization of ions is typically a localized process. The electron localization function (ELF) [44] is a tool used for the quantitative analysis of the complex bonding characteristics of electronic systems in their ground state, and it is widely used in the search for atomic shells, covalent bonds, lone pairs, hydrogen bonds, ionic bonds, metallic bonds and many others. Furthermore, the time-dependent ELF (TDELF) [45] serves as an extension of the ELF to time-dependent processes, enabling the observation of the time-evolving bonding properties of an electronic system while taking into account the dynamics of excited electrons. The specific expression for the TDELF is $\mathrm{ELF}(\mathbf{r},t) = \big[1 + \big(D(\mathbf{r},t)/D_0(\mathbf{r},t)\big)^2\big]^{-1}$; this expression compares the local value of $D(\mathbf{r},t)$ with that of the Hartree-Fock homogeneous electron gas of the same local density, $D_0(\mathbf{r},t) = \frac{3}{10}(3\pi^2)^{2/3}\rho^{5/3}(\mathbf{r},t)$ (a handy reference system against which to compare the actual value of $D(\mathbf{r},t)$). Here $D(\mathbf{r},t)$ is an effective local measure of localization, $D(\mathbf{r},t) = \frac{1}{2}\sum_{i=1}^{N}|\nabla\psi_i(\mathbf{r},t)|^2 - \frac{1}{8}\,\frac{|\nabla\rho(\mathbf{r},t)|^2}{\rho(\mathbf{r},t)} - \frac{1}{2}\,\frac{j^2(\mathbf{r},t)}{\rho(\mathbf{r},t)}$, with $j(\mathbf{r},t) = |\mathbf{j}(\mathbf{r},t)|$, where $\mathbf{j}(\mathbf{r},t)$ is the electric current density, given by $\mathbf{j}(\mathbf{r},t) = \frac{1}{2i}\sum_{i=1}^{N}\big[\psi_i^{*}(\mathbf{r},t)\nabla\psi_i(\mathbf{r},t) - \psi_i(\mathbf{r},t)\nabla\psi_i^{*}(\mathbf{r},t)\big]$. In order to look at the electronic localization of a system as a whole, the average ELF, weighted according to the density, has been defined [46] as $\overline{\mathrm{ELF}}_\Omega(t) = \int_\Omega \mathrm{ELF}(\mathbf{r},t)\,\rho(\mathbf{r},t)\,\mathrm{d}\mathbf{r} \,\big/ \int_\Omega \rho(\mathbf{r},t)\,\mathrm{d}\mathbf{r}$, where $\Omega$ denotes the volume of integration, a sphere of radius $r_\Omega$ centered at $O_\Omega$. Inspired by this, we define the average radial ELF as $\overline{\mathrm{ELF}}_\Theta(t) = \int_\Theta \mathrm{ELF}(\mathbf{r},t)\,\rho(\mathbf{r},t)\,\mathrm{d}\mathbf{r} \,\big/ \int_\Theta \rho(\mathbf{r},t)\,\mathrm{d}\mathbf{r}$, where $\Theta$ is the spherical shell of thickness $\Delta r_\Theta$ and radius $r_\Theta$, centered at $O_\Theta$. The ELF is a dimensionless quantity confined to the range [0, 1]. A large ELF value means that the electrons are highly localized, indicating the presence of a covalent bond, a lone pair or an inner atomic shell. The ELF sketch of the unperturbed gold nanosheet is shown in figure 1(b). The calculations were done using the OCTOPUS code [47,48], with the numerical details and basic aspects summarized below. The wave function, density, and potential are discretized on a real-space grid in a rectangular simulation cell under periodic boundary conditions in the x and y directions. The simulation cell size in the x and y directions was chosen so as to minimize the spurious effects of the periodic repetition while maintaining manageable computational costs. The atoms of the system can be thought of as consisting of valence electrons and ionic cores, the latter comprising the nuclei and core electrons. The interactions between valence electrons and ionic cores are described by means of norm-conserving pseudopotentials of the Troullier-Martins [49] type. In this work, two pseudopotential models are employed to substitute for the ionic cores of the system: one for the nucleus and the core electrons [Xe]4f 14 of the gold atom, and the other for the nucleus of the He atom.
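Before turning to the numerical details of the time propagation, the ELF machinery above can be made concrete with a short sketch. The following is a minimal, hypothetical NumPy illustration (not the OCTOPUS implementation) of how the TDELF and the density-weighted average radial ELF defined above could be evaluated on a uniform real-space grid; all array names, the grid layout and the small regularization constant are our own assumptions.

```python
import numpy as np

def elf(rho, tau, grad_rho_sq, j_sq, eps=1e-12):
    """Time-dependent ELF on a grid (closed-shell form, atomic units).

    rho         : electron density rho(r, t)
    tau         : kinetic energy density (1/2) sum_i |grad psi_i|^2
    grad_rho_sq : |grad rho(r, t)|^2
    j_sq        : |j(r, t)|^2, squared current density
    """
    d = tau - grad_rho_sq / (8.0 * rho + eps) - j_sq / (2.0 * rho + eps)
    d0 = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)
    return 1.0 / (1.0 + (d / (d0 + eps)) ** 2)

def average_radial_elf(elf_grid, rho, grid_coords, center, r_shell, dr):
    """Density-weighted ELF average over a spherical shell of radius
    r_shell and thickness dr centered at `center` (the ELF_Theta above)."""
    r = np.linalg.norm(grid_coords - center, axis=-1)
    shell = (r >= r_shell - dr / 2) & (r < r_shell + dr / 2)
    w = rho[shell]
    return np.sum(elf_grid[shell] * w) / np.sum(w)
```

In practice the kinetic energy density, density gradient and current would come from the propagated KSOs, and the eps guard only protects the vacuum region where the density vanishes.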
In order to numerically propagate the equations with a discretized time, we use the velocity Verlet algorithm for time-stepping Newton's equations of motion and an enforced time-reversal symmetry algorithm [50] for the TDKS equations, with the instantaneous local density approximation [51][52][53] for the XC potential. Since the time step is controlled by the mesh spacing, we use an optimized time step of 0.01 a.u. for a selected mesh spacing of 0.45 a.u., under the condition that the time propagation remains unitary. The mesh spacing corresponds to a cutoff energy of 47.91 Ry in a plane-wave expansion. We take the coordinate system shown in figure 1(a). A cubic unit cell of the gold crystal, adopted from [54] and containing four atoms with a lattice parameter of 7.705 a.u., is expanded into a supercell placed within a simulation box of size 15.41 a.u. × 15.41 a.u. × 61.65 a.u. The He ions begin at a distance of 15 a.u. from the gold nanosheet and travel through it along a channeling trajectory. A 2 × 2 × 1 grid of Monkhorst-Pack k points is used to sample the Brillouin zone. Results and discussion The stopping power S measures a material's ability to stop incoming projectiles and is defined as the energy loss per unit path length of the charged ion [55], S = −dE/dx [23,29,[56][57][58][59][60]. In the simulation of a gold nanosheet, S is approximated as ∆E/∆x, where ∆E is the difference in the projectile's kinetic energy before and after it penetrates the nanosheet, and ∆x is the displacement of the projectile [23,56,61]. The magnitude of the stopping capability is expressed as the stopping cross section (SCS), calculated as ε = S/n, where n is the atomic number density of the target [23,56,61]. In this study, we calculated the ε values of He + and He 2+ particles in the gold nanosheet under channeling and off-channeling conditions. Figure 2 presents the simulation results along with reference values. The reference data comprise the SRIM-2013 curve (represented by a green solid line) [62], TDDFT calculations conducted by Zeb et al [35], and experimental values obtained from Martínez-Tamayo et al [24], Semrad et al [63], and Markin et al [23]. These references were selected because they cover both the high-velocity and low-velocity regions. In figure 2, the black ▼ represents the experimental values of Martínez-Tamayo et al [24], the experimental values of Semrad et al [63] are represented by the gray ◀, and the experimental value of Markin et al [23] is depicted as a gray ▶; it should be noted that in all of these experiments, the ε value refers to the electronic stopping cross section. It can be seen that the trend of the ε values of the two ions in the channeling and off-channeling cases is consistent with the SRIM-2013 curve [62]. In the velocity range of 0.1-0.3 a.u., it is also consistent with the experimental values of Markin et al [23] and the theoretically calculated values of Zeb et al [35]. All of this indicates that the model parameters used in this simulation are reasonable. However, when the velocity is greater than 1.2 a.u., the calculated value of this simulation is lower than the reference value, which is caused by the sampling position of the incident point. The inset in figure 2 shows the incident points used in the simulation, representing typical channeling and off-channeling situations. Here, the off-channeling ε value is obtained by averaging over three off-channeling incident points.
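As a concrete illustration of how ε is obtained from a single simulated trajectory, the following minimal Python sketch applies the finite-difference definitions S ≈ ∆E/∆x and ε = S/n quoted above; the kinetic energies and path lengths are hypothetical placeholders, while the atomic number density follows from the stated four-atom cubic cell with a 7.705 a.u. lattice parameter.

```python
import numpy as np

# Atomic number density of the gold target: 4 atoms per cubic cell
# with lattice parameter 7.705 a.u., as specified in the text.
N_AU = 4.0 / 7.705**3  # ~8.75e-3 atoms per a.u.^3

def stopping_cross_section(e_in, e_out, dx, n=N_AU):
    """SCS eps = S/n, with S approximated as (E_in - E_out)/dx (a.u.)."""
    return (e_in - e_out) / dx / n

# Off-channeling value: average over three incident points
# (hypothetical E_in, E_out and displacement per trajectory).
trajectories = [(2.00, 1.82, 20.0), (2.00, 1.79, 20.0), (2.00, 1.84, 20.0)]
eps_off = np.mean([stopping_cross_section(*t) for t in trajectories])
```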
It can be seen that for He + , the channeling and off-channeling values follow the trend of the SRIM-2013 curve [62], and the off-channeling value is greater than the channeling one. However, for He 2+ , the energy loss flattens and decreases within the velocity interval of 0.6-1.0 a.u. due to resonant charge transfer, regardless of channeling or off-channeling conditions. When the velocity is 0.6 a.u., the off-channeling value is lower than the channeling one. To summarize, there is a notable disparity in the energy loss values between He + and He 2+ in the gold nanosheet, which is partly due to the different charge transfer mechanisms exhibited by He ions with different charge states in the material. In order to further describe the charge transfer process of the projectile inside the material, the probability $P_i^{1s}$ that the $i$th occupied KSO $\psi_i$ projects onto the 1s orbital of the He + ion is defined here as $P_i^{1s}(t) = \big|\big\langle e^{i\mathbf{k}(t)\cdot\mathbf{r}}\,\varphi_{1s}(\mathbf{r}-\mathbf{R}_{\mathrm{He}}(t))\,\big|\,\psi_i(\mathbf{r},t)\big\rangle\big|^2$, where $\varphi_{1s}$ represents the static wave function of He + at the position $\mathbf{R}_{\mathrm{He}}$. It is emphasized that $\varphi_{1s}$ is computed at each time $t$, because the position of the He + ion differs from its initial one. The wave vector $\mathbf{k}$ can be expressed as $\mathbf{k}(t) = m_e\mathbf{v}(t)/\hbar$, where $\mathbf{v}(t)$ represents the velocity of the projectile ion at time $t$. The probability $\bar{P}_i^{1s}$ that the $i$th occupied KSO does not project onto the 1s orbital of the He + ion is then $\bar{P}_i^{1s}(t) = 1 - P_i^{1s}(t)$. Hence the probability that there is exactly one electron in the 1s orbital of the He + ion during the projectile's travel is $P^{1s}(t) = \sum_i P_i^{1s}(t)\,\prod_{j\neq i}\bar{P}_j^{1s}(t)$. It is worth noting that the higher the excited state, the larger the volume it occupies. In the gold nanosheet, the space available to accommodate electrons captured by the projectile is limited, so only 1s ground-state electrons are most likely to survive the collision process. Generally, upon entering the material, the charge state of the projectile initially undergoes a pre-equilibrium process [55,64,65] before equilibration occurs. In the pre-equilibrium regime, the response of materials to projectile irradiation has been shown to depend on the charge state of the projectile. According to figure 3, the charge transfer process can be categorized into five distinct stages, as outlined below: (a) the pre-entering stage, where the He ion is located far from the front surface of the gold nanosheet; (b) the entering stage, in which the He ion enters the front surface of the gold nanosheet; (c) the internal stage, where the He ion resides within the gold nanosheet; (d) the leaving stage, in which the He ion leaves the rear surface of the gold nanosheet; (e) the post-leaving stage, where the He ion is located far from the rear surface of the gold nanosheet. Figure 3(a) shows the results of He + traveling through the gold nanosheet. It can be seen that P 1s remains 1 throughout stage (a), indicating that no charge transfer occurs around He + during this process. In stage (b), the probability of He + being in the 1s state decreases sharply with increasing velocity. In stage (c), the probability of He + being in the 1s state changes slowly. In stage (d), there is a significant perturbation in the probability of He + being in the 1s state. In stage (e), the probability of He + being in the 1s state remains unchanged. Figure 3(b) shows the results of He 2+ traveling through the gold nanosheet. It is evident that, in stage (a), the probability of He 2+ capturing an electron into the 1s state remains 0. In stage (b), the probability of He 2+ capturing an electron into the 1s state increases sharply with increasing velocity.
In stage (c), the probability of He 2+ capturing an electron into the 1s state undergoes gradual changes over time. In stage (d), there is a significant perturbation in the probability of He 2+ capturing an electron into the 1s state. In stage (e), the probability of He 2+ capturing an electron into the 1s state remains unchanged. Comparing the collision processes of He + and He 2+ projectiles yields several interesting findings: (1) during the collision process at a lower velocity of 0.3 a.u., the 1s electrons of He + projectile ions are almost never lost, while He 2+ projectile ions hardly capture electrons into the 1s state. (2) At a higher velocity of 0.9 a.u., the probabilities that the He ions formed after the collision hold one electron in the 1s state become nearly equal. (3) The charge transfer behavior of a He ion in the gold nanosheet is sensitive to its charge state at the time of incidence. It is a well-established fact that metal crystals contain free electrons that are not confined to individual metal atoms but are shared across the entire crystal structure. Therefore, the outer valence electrons of metallic crystals are in a state of low localization (see figure 1(b)). In figure 4, ELF diagrams of the two He ions at different velocities at z = 0 a.u. are given. During the entire collision process, the He ions are surrounded by a cluster of electrons that move with the projectile; these electrons have been captured by the He ions. The electron capture region is surrounded by a region of low localization and low density (an electron hole region), resulting in the presence of electron holes along the trajectory. This clearly shows that the electron excitation regime is located around the He ion. The distribution of holes is concentrated along the trajectory of the projectile. The ELF value indicates the degree of localization of an electron, with higher values indicating a more tightly bound state. In the case of He ions, 1s-state electrons exhibit the highest localization. Therefore, a higher ELF value for electrons captured by He ions implies a greater probability of the electron being in the 1s state. By comparing the cases of He + and He 2+ projectiles under identical conditions (velocity and position), it becomes evident that the probability of a bound electron of He + remaining in the 1s state is higher than the probability of He 2+ capturing an electron into the 1s state. To depict the charge transfer behavior in more detail, we calculated the average radial ELF distribution at various positions for He 2+ projectiles with different velocities (all with He 2+ as the center of integration, r Θ = 5 a.u., ∆r Θ = 0.3 a.u.), which can be used to describe the distribution of electrons captured by the projectile ions. The corresponding results can be observed in figure 5. Several interesting findings are as follows: (1) the charge transfer process occurs in a sequential manner. Initially, electrons are captured into a distant, higher excited state, away from He 2+ . As the collision progresses, the number of trapped electrons in the higher excited state steadily rises, accompanied by the transfer of charge to the lower excited state, which is in closer proximity to He 2+ .
(2) The electronic state distributions of the He ions formed after collisions of He 2+ at the three velocities of 0.3, 0.6, and 0.9 a.u. can be observed in figure 5(d) and reveal significant differences. Figure 6 illustrates the distribution characteristics of the electronic states in He ions generated by He + and He 2+ projectiles at incident velocities of 0.3, 0.6, and 0.9 a.u., respectively, when the projectile and target are widely separated after the collision. It can be seen that the ELF distributions of the He ions fall into two categories: high localization and low localization. High localization means that the electron is mainly in a lower excited state, while low localization means that the electron is mainly in a higher excited state. Interestingly, for a He 2+ projectile with a velocity of 0.3 a.u., a hollow He ion can be formed. Figure 7 shows the average number of electrons (n e ) of the He ions generated by He + and He 2+ projectiles under channeling and off-channeling incidence when the projectile and target are widely separated after the collision. Several interesting findings can be observed from figure 7(a): (1) The n e for the He + projectile is almost insensitive to the velocity v within the range 0.1 ⩽ v ⩽ 0.7 a.u. However, as the velocity increases within the range 0.7 < v ⩽ 2.0 a.u., n e decreases. (2) For the He 2+ projectile, n e exhibits an obvious resonance peak in the range 0.1 ⩽ v ⩽ 2.0 a.u. (3) When 0.1 ⩽ v < 1.0 a.u., there is a significant difference in the variation trend of n e between He + and He 2+ projectiles, indicating that their charge transfer mechanisms differ substantially. Figure 7(b) demonstrates that the change in n e with velocity remains consistent across various integration radii. This finding suggests that the integration radius employed in this simulation is indeed appropriate. The charge transfer mechanism may be understood on the basis of a phenomenological classical over-barrier model [66], in which, as an electron moves from the target to the projectile, it tends to occupy the projectile orbital whose single-particle energy $E_p = \varepsilon_i^{p} + \Delta$ is closest to the energy $E_T = \varepsilon_j^{T}$ of the target, with $\varepsilon_i^{p}$/$\varepsilon_j^{T}$ denoting the energy of the $i$th/$j$th single-particle orbital of the projectile/target at rest, and $\Delta$ denoting the translational kinetic energy of the transferred electron traveling with the projectile. In this model picture, for the low/high incident energy case, the velocity of the projectile is rather low/high and the electron around the Fermi level of the target prefers to transfer into the weakly/strongly bound orbital of the projectile, leading to the formation of a highly/lowly excited projectile after the collision. Clearly, electrons in metals can be divided into two categories: non-localized valence electrons that can move between atoms, and localized core electrons bound near the atoms. When the velocity of the projectile ion is low, the core electrons cannot be excited, and only the valence electrons can be excited. Therefore, the energy loss is mainly contributed by valence electron excitation. In this case, the energy loss is not sensitive to the track of the projectile ion passing through the metal (i.e. not sensitive to channeling versus off-channeling) and, of course, not sensitive to whether the core-electron degrees of freedom are opened in the model. When the velocity of the projectile ions is high, both core electrons and valence electrons can be excited and contribute to the energy loss.
The energy loss in this case is certainly sensitive to the trajectory of the projectile ions passing through the metal (i.e. sensitive to channeling and off-channeling conditions) and is also sensitive to whether the core-electron degrees of freedom are opened. Summary In this paper, the charge-state effects on charge transfer of helium ions in a gold nanosheet have been investigated using TDDFT non-adiabatically coupled to MD. In order to extract the probability of having one electron in the 1s orbital of He + ions during the projectile's travel, we developed a state-selective projection probability method. In order to visually characterize the charge transfer behavior during the projectile's travel, we developed a density-weighted average radial ELF. These two methods can be mutually validated and can supplement each other. The charge transfer process of He ion projectiles within the gold nanosheet is investigated in detail, yielding a combination of interesting findings: (1) The energy loss for the He 2+ projectile does not change linearly with velocity, as resonant charge transfer occurs within the velocity range of 0.6-1.0 a.u. (2) The charge transfer behavior of a He ion in the gold nanosheet is sensitive to its charge state at the time of incidence. (3) The electron capture region is surrounded by a region of low localization and low density (an electron hole region), resulting in the presence of electron holes along the trajectory. (4) The charge transfer process occurs in a sequential manner. (5) For the He 2+ projectile, n e exhibits an obvious resonance peak in the range 0.1 ⩽ v ⩽ 2.0 a.u. When 0.1 ⩽ v < 1.0 a.u., there is a significant difference in the variation trend of n e between He + and He 2+ projectiles, indicating that their charge transfer mechanisms differ substantially. Analysis of these results provides the new insight that charge transfer is an important reason for the difference in stopping power between He ions with different charge states in the gold nanosheet. This work validates the ability of the present methodology to deal with ion collisions in nanosheets, which may have implications for the study of stopping power in more complex systems in the future. This is still an active field, and many new results are expected in the future. Data availability statement The data that support the findings of this study are available upon reasonable request from the authors.
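As a self-contained illustration of the state-selective projection probability method summarized above, the following minimal NumPy sketch evaluates the per-orbital projections $P_i^{1s}$ and the exclusive one-electron probability $P^{1s}$ defined in section 3; the orbital arrays, grid coordinates and volume element are hypothetical placeholders, not data from the actual calculation.

```python
import numpy as np

def p1s_per_orbital(psi, phi_1s, k, coords, dv):
    """P_i^{1s}(t) = |<e^{i k.r} phi_1s | psi_i>|^2 for each occupied KSO.

    psi    : complex array (n_orbitals, n_points), occupied KSOs on the grid
    phi_1s : complex array (n_points,), static He+ 1s orbital at R_He(t)
    k      : electron translation wave vector, k = m_e * v(t) in a.u.
    coords : (n_points, 3) grid coordinates; dv : grid volume element
    """
    boosted = np.exp(1j * coords @ k) * phi_1s      # translation-factor boost
    overlaps = (np.conj(boosted)[None, :] * psi).sum(axis=1) * dv
    return np.abs(overlaps) ** 2

def prob_exactly_one_1s(p):
    """Exclusive probability of exactly one electron in the 1s orbital,
    assuming independent Kohn-Sham orbitals:
    P^{1s} = sum_i p_i * prod_{j != i} (1 - p_j)."""
    q = 1.0 - p
    return sum(p[i] * np.prod(np.delete(q, i)) for i in range(len(p)))
```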
6,258
2023-08-09T00:00:00.000
[ "Physics" ]
Uteroplacental insufficiency down regulates insulin receptor and affects expression of key enzymes of long-chain fatty acid (LCFA) metabolism in skeletal muscle at birth Background Epidemiological studies have revealed a relationship between early growth restriction and the subsequent development of insulin resistance and type 2 diabetes. Ligation of the uterine arteries in rats mimics uteroplacental insufficiency and serves as a model of intrauterine growth restriction (IUGR) and subsequent developmental programming of impaired glucose tolerance, hyperinsulinemia and adiposity in the offspring. The objective of this study was to investigate the effects of uterine artery ligation on the skeletal muscle expression of insulin receptor and key enzymes of LCFA metabolism. Methods Bilateral uterine artery ligation was performed on day 19 of gestation in Sprague-Dawley pregnant rats. Muscle of the posterior limb was dissected at birth and processed by real-time RT-PCR to analyze the expression of insulin receptor, ACCα, ACCβ (acetyl-CoA carboxylase alpha and beta subunits), ACS (acyl-CoA synthase), AMPK (AMP-activated protein kinase, alpha2 catalytic subunit), CPT1B (carnitine palmitoyltransferase-1 beta subunit), MCD (malonyl-CoA decarboxylase) in 14 sham and 8 IUGR pups. Muscle tissue was treated with lysis buffer and Western immunoblotting was performed to assay the protein content of insulin receptor and ACC. Results A significant down regulation of insulin receptor protein (p < 0.05) and reduced expression of ACS and ACCα mRNA (p < 0.05) were observed in skeletal muscle of IUGR newborns. Immunoblotting showed no significant change in ACCα content. Conclusion Our data suggest that uteroplacental insufficiency may affect skeletal muscle metabolism by down regulating the insulin receptor and reducing the expression of key enzymes involved in LCFA formation and oxidation. Impaired fetal growth has been associated with an increased incidence of insulin resistance, type 2 diabetes, and cardiovascular disease in the adult [1][2][3][4][5][6]. These observations have led to the hypothesis that metabolic and cardiovascular disease in adulthood arises in utero, in part, as a result of changes in the development of key endocrine and metabolic pathways during suboptimal intrauterine conditions associated with impaired fetal growth. This hypothesis has been tested experimentally in a number of species, using a range of techniques to impair fetal growth. Inducing intrauterine growth retardation (IUGR) by placental insufficiency or by undernutrition, stress, or hormone treatment of the mother leads to endocrine and metabolic alterations in the adult offspring in several species [7]. However, the mechanisms by which an abnormal uterine milieu leads to the development of diabetes in adulthood are not known. To investigate potential sequelae of IUGR and the underlying mechanisms, ligation of the uterine arteries in rats has been used as an animal model of uteroplacental insufficiency leading to growth retarded fetuses with a metabolic profile very similar to that of IUGR human fetuses [8,9]. We have recently shown that uteroplacental insufficiency, obtained by uterine artery ligation, which leads to IUGR pups, affects the expression of specific hypothalamic lipid sensing genes such as the CPT1 isoform C, and acetyl-CoA carboxylase (ACC) isoforms alpha and beta [10]. Insulin resistance and type 2 diabetes are characterized by hyperglycemia with hyperinsulinemia, a reduced ability to oxidize fat, and an accumulation of fat within skeletal muscle [11,12].
This increase in muscle fat content is highly associated with insulin resistance [13,14]. Recently, perturbed skeletal muscle insulin signaling has been reported in adult growth restricted rats [15]. In this study we focused on the hypothesis that uteroplacental insufficiency may affect skeletal muscle metabolic pathways by altering the expression of the insulin receptor and of key enzymes of intramuscular lipid metabolism. Animal model Time-dated Sprague-Dawley pregnant rats (Harlan Sprague Dawley, Inc.) were individually housed under standard conditions and allowed free access to standard chow and water. On day 19 of gestation (term is 22 days) the maternal rats were anesthetized with intramuscular injections of xylazine (8 mg/kg) and ketamine (40 mg/kg) (Sigma-Aldrich, St. Louis, MO), and the abdomen was opened along the midline. Sutures were placed around both uterine arteries and then either tied or withdrawn before closing the abdomen [8,9]. Dams recovered quickly from the uterine artery ligation (n = 4) and sham procedures (n = 4), and resumed feeding the same day. After recovery, rats had ad libitum access to food and water. The pregnant rats were allowed to deliver spontaneously, and at birth the pups were weighed and killed by cervical dislocation. Posterior limb skeletal muscle tissue was immediately harvested, frozen in liquid nitrogen and stored at -80°C. In total, 14 sham and 8 IUGR rats from different litters were tested. All procedures complied with Italian regulations for laboratory animal care, according to the guidelines and under the supervision of the Animal Technology Station, Interdepartmental Service Center, Tor Vergata University, Rome, Italy. Plasma assays At birth, 14 SHAM and 8 IUGR pups were decapitated, blood was collected and centrifuged at 1900 × g at 4°C for 10 min, and plasma was stored at -80°C. Glucose and insulin concentrations were measured. Glucose was determined using a colorimetric commercial kit (Sigma Chemical Co.). Plasma insulin concentrations were measured in duplicate by a rat/mouse insulin ELISA kit, using rat insulin as the standard (Linco Research, St. Charles, MO), according to the manufacturer's instructions. The intraassay CV was 1.2-8.4%, the interassay CV was 6.0-17.9%, and the sensitivity limit was 0.2 ng/mL. RNA isolation and cDNA synthesis Total RNA was extracted using TriPure (Roche Applied Science) according to the manufacturer's instructions and quantified in duplicate using ultraviolet absorbance at 260 nm. Gel electrophoresis confirmed the integrity of the samples. 1 μg RNA, pretreated with RNase-free DNase (Invitrogen Co.), was transcribed into complementary DNA using the High-Capacity cDNA Archive Kit (Applied Biosystems) in a final volume of 50 μl following the manufacturer's protocol. To minimize variation in the reverse transcription reaction, all RNA samples from a single experimental setup were reverse transcribed simultaneously. Real-time RT-PCR We explored the expression of key enzymes that regulate fatty acid metabolism in the muscle. Real-time RT-PCR was performed on an ABI PRISM 7300 Sequence Detector (Applied Biosystems). PCR primers and TaqMan probes to amplify and detect ACCα, ACCβ, ACS, AMPK, CPT1B, MCD, insulin receptor and the housekeeping gene 18S were commercially available as inventoried assays (Assay-on-Demand Gene Expression Products; Applied Biosystems). Prior to performing real-time PCR, primer and probe concentrations were determined to demonstrate their specificity and optimal reaction conditions.
18S was used as an internal control for differences in cDNA loading. Before the use of 18S as a control, parallel serial dilutions of cDNA were quantified to prove the validity of using 18S as an internal control. The real-time RT-PCR amplification was performed in skeletal muscle tissues from fourteen SHAM and eight IUGR rats. Experiments were performed in triplicate using 96-well trays and optical adhesive covers (Applied Biosystems) in a final reaction mixture of 20 μl containing 3 μl of undiluted cDNA. Real-time PCR was performed using Platinum Quantitative PCR SuperMix-UDG with ROX (Invitrogen Co.). The cycling consisted of 2 min at 50°C and 2 min at 95°C, followed by 40 cycles of 95°C for 15 sec and 60°C for 45 sec. Determination of the reaction efficiency was routinely used as an internal quality control for adequate assay performance. Threshold cycle (Ct) values obtained for the target gene were normalized against each individual 18S value, which was run in the same well of the real-time RT-PCR run. Relative quantification of PCR products was performed using the Relative Quantification Study software (Applied Biosystems). Results are expressed as raw relative quantification (RQ) ± standard errors. Western immunoblotting Tissues were homogenized on ice with lysis buffer (50 mM Hepes pH 7.4, 150 mM NaCl, 10 mM NaF, 1 mM Na 3 VO 4 , 10% glycerol, 0.5% Triton X-100, 5 mM EDTA, 10 μl/ml protease inhibitor cocktail). Lysates were clarified by centrifugation at 13,000 × g (30 minutes, 4°C), and the protein concentration in the supernatant was determined by the Bradford assay (Bio-Rad Laboratories, CA, USA) using bovine serum albumin as a standard. Eighty micrograms of the extracted proteins were separated by SDS-PAGE on a 3-8% Tricine gel (Bio-Rad Laboratories, CA, USA) and blotted onto an ECL nitrocellulose membrane (Amersham Biosciences UK, Ltd., Little Chalfont, Buckinghamshire, UK). The filter was blocked with 5% non-fat dry milk in TBS-0.1% Tween 20 and then incubated with an ACC rabbit polyclonal antibody (Cell Signaling Technology Inc., Danvers, MA) and an insulin-Rβ rabbit polyclonal antibody (Santa Cruz). After several washes in PBS-0.1% Tween 20, horseradish peroxidase-conjugated secondary antibody (1:5,000) (Amersham) was added for 1 hour at RT. The labelled bands were detected using the Amersham ECL western blotting system according to the manufacturer's specifications. After protein detection, membranes were stripped with Restore Western Blot Stripping Buffer (Pierce, Rockford, IL) and re-blotted with a rabbit HRP-conjugated actin antibody (1:1000) (Santa Cruz). Densitometric analysis of the bands was performed using Image Quant 5 software (Molecular Dynamics). Statistical analysis Statistical analysis was performed using Sigma Plot for Windows Version 13.0 (SPSS, Inc., Chicago, IL, USA). Differences in gene expression between sham and IUGR rats were analyzed with one-way ANOVA. Differences between means from plasma assays and densitometric analyses were assessed by unpaired two-tailed t test. Differences were considered statistically significant at p < 0.05. Insulin Receptor Although a tendency toward a decrease of insulin receptor mRNA expression was noted (20% less), these changes did not achieve statistical significance (Figure 1). Immunoblot analysis indicated a significant down regulation of the insulin receptor in skeletal muscle of IUGR animals when compared with SHAM (control) animals (p < 0.05, Figure 2). Expression of LCFA metabolism regulatory enzymes LCFAs act as nutrient abundance signals in the muscle.
The expression of key enzymes that regulate muscle LCFA metabolism was assessed. The expression of ACCα and ACS mRNA was significantly reduced (by 54% and 66%, respectively, p < 0.05) in skeletal muscle of IUGR rats at birth (Figure 3), whereas no significant differences in the expression of ACCβ, AMPK, and MCD were observed. Figure 1 shows insulin receptor mRNA expression in skeletal muscle of SHAM (n = 14) and IUGR rats (n = 8); transcripts were measured by real-time RT-PCR using appropriate primers and normalized to 18S mRNA, data are expressed as relative quantification vs. the SHAM group (RQ = 1), and bars represent standard errors. Expression of muscle-specific CPT1B was measured by quantitative RT-PCR, but no significant difference between IUGR and SHAM pups was found. ACCα immunoblotting analysis showed no differences between IUGR and SHAM animals, thus suggesting post-transcriptional regulation (Figure 4). ACS immunoblotting analysis could not be performed owing to the unavailability of specific antibodies. Discussion Uteroplacental insufficiency limits the availability of substrates to the fetus and retards growth during gestation, ultimately leading to IUGR. Alterations in the intrauterine milieu have a profound impact on glucose homeostasis in the offspring, culminating in the development of insulin resistance, glucose intolerance and type 2 diabetes in adulthood. The etiology of insulin resistance within skeletal muscle in humans with type 2 diabetes is multifactorial, involving impairments in hormonal signaling, enzyme and transporter activity, and substrate availability. Previous studies also implicated decreased oxidative capacities of skeletal muscle of human diabetics as contributory to insulin resistance [16]. To investigate the effects of uteroplacental insufficiency on the expression of the insulin receptor in muscle, we studied an animal model of intrauterine growth retardation obtained by bilateral uterine artery ligation. In this model, the resulting uteroplacental insufficiency leads to growth retarded fetuses with a metabolic profile very similar to that of IUGR human fetuses [8,9]. These animals exhibit impaired oxidative phosphorylation in skeletal muscle [17], mild peripheral insulin resistance and β-cell secretory defects very early in life, but have adequate compensatory insulin secretion for several weeks [18]. Eventually, however, β-cell compensation fails, and overt diabetes occurs at age 3-6 months [9]. More recently, it has been described in the same animal model that ligated offspring showed impaired glucose tolerance from the age of 15 weeks as well as elevated glycosylated hemoglobin and corticosterone levels [19]. Our data show for the first time reduced protein levels of the insulin receptor in skeletal muscle of IUGR animals. The lack of statistical significance in insulin receptor mRNA expression is probably due to the relatively low number of examined samples. Most previous studies focused on downstream effectors of insulin actions in peripheral tissues without determining insulin receptor expression. In a rat model, Ozanne et al. [20] showed that maternal protein restriction leads to muscle insulin resistance.
Soleus muscle from growth restricted offspring had similar basal glucose uptake compared with the control group, but whilst insulin stimulated glucose uptake into control muscle, it had no effect on growth restricted offspring muscle. Figure 3 shows the expression of acetyl-CoA carboxylase isoenzymes alpha (ACCα) and beta (ACCβ), acyl-CoA synthase (ACS), AMP-activated protein kinase (AMPK), carnitine palmitoyltransferase-1 isoenzyme B (CPT1B) and malonyl-CoA decarboxylase (MCD) in IUGR rat skeletal muscle. This impaired insulin action was not related to changes in the expression of either the insulin receptor or glucose transporter 4 (GLUT4). However, growth restricted offspring muscle expressed significantly less of the zeta isoform of protein kinase C (PKC ζ) compared with controls. This PKC isoform has been shown to be positively involved in GLUT4-mediated glucose transport. We have used a different model of intrauterine growth restriction, based upon uterine artery ligation, in which blood flow to the fetus is not ablated but reduced to a degree similar to that observed in human pregnancies complicated by uteroplacental insufficiency. Furthermore, whilst Ozanne et al. [20] studied 15-month-old animals, we investigated rats at birth. The use of only hind limb skeletal muscle might represent a limiting factor, since an important difference between type 1 (slow) and type 2 (fast) fibers could exist. However, at birth it is practically impossible to select muscle fibers. In humans, Jaquet et al. [21] demonstrated that insulin resistance is associated with an impaired regulation of GLUT4 gene expression by insulin in IUGR-born subjects in both skeletal muscle and adipose tissue. Impairment of muscle fat metabolism is highly associated with insulin resistance. Our study shows for the first time that uteroplacental insufficiency leads to reduced ACS and ACCα mRNA expression in skeletal muscle at birth. In intramuscular lipid metabolism, ACS is the key enzyme for converting free fatty acids into LCFA-CoA. ACC catalyzes the formation of malonyl-CoA, an essential substrate for fatty acid synthesis in lipogenic tissues and a key regulatory molecule in muscle, brain, and other tissues [22]. Three CPT1 isoforms with various tissue distributions and encoded by distinct genes have been identified: liver (CPT1A) [23], muscle (CPT1B) [24], and brain (CPT1C) [25]. Cellular levels of malonyl-CoA repress CPT1 activity and decrease LCFA-CoA oxidation. Therefore, the final effect of reduced expression of both ACS and ACCα would be a decrease of intramuscular LCFAs. This finding is consistent with our recent study in the hypothalamus of IUGR rats, showing significantly decreased ACCα and ACCβ expression at birth [10]. Taken together, these findings suggest that intrauterine programming may affect key enzymes of lipid metabolism at multiple levels. In muscle, however, ACCα protein content was not affected, thus suggesting post-transcriptional regulation. Lipids are implicated in the development of insulin resistance in skeletal muscle. This seems to be linked to an imbalance between lipid supply and lipid oxidation, the latter being related to decreased mitochondrial oxidative capacity in states of insulin resistance [26].
In humans, it has been described that during physiological hyperglycemia with hyperinsulinemia and maintained FFA concentrations (i.e., a condition that mimics the insulin-resistant state), human skeletal muscle malonyl-CoA concentrations are significantly increased and are directly associated with a reduction in LCFA oxidation and functional CPT-1 activity [27]. As LCFA oxidative capacity in mitochondria is low in subjects with insulin resistance [20,[28][29][30][31][32], mitochondrial dysfunction and thereby decreased lipid oxidation has been considered to play a key role in the development of insulin resistance. Lane et al. reported an increase in triglyceride levels in IUGR rats [33]. Accordingly, a connection between mitochondrial dysfunction, increased intramuscular triacylglycerol levels, and insulin resistance has been described in insulin-resistant offspring of patients with type 2 diabetes [34]. LCFAs may influence glucose metabolism by multiple mechanisms. LCFAs inhibit hexokinase activity [35], thus reducing glucose metabolism. They are also a substrate for the synthesis of ceramide, which is increased in muscle of insulin-resistant rats [36]. LCFAs may interfere with insulin signaling by activating protein kinase C. PKC in turn inhibits insulin signaling by phosphorylation of the serine residues on the insulin receptor [37,38] and insulin receptor substrate-1 (IRS-1) [39], thus inhibiting the tyrosine phosphorylation of IRS-1. Patients with type 2 diabetes have increased PKC protein levels in the rectus abdominis muscle [40] and decreased muscle insulin receptor tyrosine kinase activity, which could be restored by phosphatase treatment in vitro, possibly suggesting increased serine phosphorylation of the insulin receptor due to increased PKC activity [37]. Figure 4 shows acetyl-CoA carboxylase isoenzyme alpha (ACCα) protein expression in skeletal muscle of SHAM (n = 14) and IUGR rats (n = 8) on day 0: A, western immunoblotting analysis; B, densitometric analysis. We speculate that the intrauterine limited supply of substrates secondary to uteroplacental insufficiency may lead to down regulation of the insulin receptor in skeletal muscle. We hypothesize that reduced intramuscular content of LCFAs, secondary to reduced expression of both ACS and ACC, may represent a transient compensatory mechanism to counteract insulin resistance during early postnatal life. Conclusion Uteroplacental insufficiency may affect skeletal muscle metabolism by down regulating the insulin receptor and reducing the expression of key enzymes involved in LCFA formation and oxidation. We speculate that decreased intramuscular lipid accumulation may represent a transient compensatory mechanism to counteract insulin resistance.
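As an illustration of the relative quantification step described in the Methods, the sketch below assumes the standard comparative 2^-ΔΔCt scheme with 18S normalization and a SHAM calibrator (RQ = 1); the exact algorithm of the Relative Quantification Study software is not specified in the text, and all Ct numbers shown are hypothetical.

```python
import numpy as np

def relative_quantification(ct_target, ct_18s, ct_target_cal, ct_18s_cal):
    """Comparative 2^-ddCt estimate of relative expression.

    ct_target, ct_18s         : Ct values for the gene of interest and the
                                18S control in an IUGR sample
    ct_target_cal, ct_18s_cal : the same Cts in the SHAM calibrator
    Returns RQ; the calibrator gives RQ = 1 by construction.
    """
    d_ct_sample = ct_target - ct_18s     # normalize target Ct to 18S
    d_ct_cal = ct_target_cal - ct_18s_cal
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Hypothetical triplicate Cts for ACC-alpha in one IUGR pup vs. SHAM mean
rq = relative_quantification(np.mean([26.1, 26.3, 26.2]), 12.0, 25.0, 12.1)
```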
4,081
2008-05-18T00:00:00.000
[ "Biology", "Medicine" ]
USZEGED: Correction Type-sensitive Normalization of English Tweets Using Efficiently Indexed n-gram Statistics This paper describes the framework applied by team USZEGED at the "Lexical Normalisation for English Tweets" shared task. Our approach first employs a CRF-based sequence labeling framework to decide the kind of corrections the individual tokens require, then performs the necessary modifications relying on external lexicons and a massive collection of efficiently indexed n-gram statistics from English tweets. Our solution is based on the assumption that from the context of an OOV word, it is possible to reconstruct its IV equivalent, as there are users who use the standard English form of the OOV word within the same context. Our approach achieved an F-score of 0.8052, the second best among the unconstrained submissions, the category our submission also belongs to. Introduction Social media is a rich source of information which has been proven to be useful for a variety of applications, such as event extraction (Sakaki et al., 2010; Ritter et al., 2012) or trend detection, including the tracking of epidemics (Lamb et al., 2013). Analyzing tweets in general, however, can pose several difficulties. From an engineering point of view, the streaming nature of tweets requires that special attention is paid to the scalability of the algorithms applied, and from an NLP point of view, the often substandard characteristics of social media utterances have to be addressed. The fact that tweets are often written on mobile devices and are informal makes misspellings, abbreviations of words and expressions, as well as the use of creative informal language prevalent, giving rise to a higher number of out-of-vocabulary (OOV) words than in other genres. Related Work The informal language of social media, including Twitter, is extremely heterogeneous, making its grammatical analysis more difficult compared to standard genres such as newswire. It has been shown previously that the performance of linguistic analyzers trained on standard text types degrades severely once they are applied to texts found in social media, especially tweets (Ritter et al., 2011; Derczynski et al., 2013). In order to build taggers that perform more reliably on social media texts, one possible way is to augment the training data by including texts originating from social media (Derczynski et al., 2013). Such approaches, however, require considerable human effort, so one possible alternative is to normalize the social media texts first and then apply standard analyzers to these normalized texts. Recently, a number of approaches have been proposed for the lexical normalization of informal (mostly social media and SMS) texts (Liu et al., 2011; Liu et al., 2012; Han et al., 2013; Yang and Eisenstein, 2013). Han and Baldwin (2011) rely on the identification of the words that require correction, then define a confusion set containing the candidate IV correction forms for such words. Finally, a ranking scheme taking multiple factors into consideration is applied, which selects the most likely correction for an OOV word. In their subsequent work, Han et al. (2012) propose an automated method to construct accurate normalization dictionaries. Liu et al. (2011) propose a character-level sequence model to predict insertions, deletions and substitutions. They first collect a large set of noisy (OOV, IV) training pairs from the Web.
These pairs are then aligned at the character level and provided as training data for a CRF classifier. The authors also released their 3,802-element normalization dictionary, which our work also relies on. Yang and Eisenstein (2013) introduce an unsupervised log-linear model for the task of text normalization. Besides the features that can be derived from pairs of words (e.g. edit distance), features considering the context are also employed in their model. As the number of class labels in that model is equal to the number of IV words an OOV word could possibly be corrected to (typically on the order of 10^4-10^5, which is far beyond the typical label size of classification tasks), the authors propose the use of a Sequential Monte Carlo training approach for learning the appropriate feature weights. The Task of Lexical Normalization Formally, given an m-long sequence of words in the i-th tweet, T_i = [t_{i,1}, t_{i,2}, . . . , t_{i,m}], participants of the shared task had to return a sequence of normalized in-vocabulary (IV) words, i.e. S_i = [s_{i,1}, s_{i,2}, . . . , s_{i,m}]. The training set of the shared task consisted of 2,950 tweets comprising 44,385 tokens, while the test set had 1,967 tweets which included a total of 29,421 tokens. According to the dataset, most of the words did not require any kind of correction, i.e. the proportion of unmodified words was 91.12% and 90.57% for the training and test set, respectively. Further details with respect to the shared task can be found in the task description paper (Baldwin et al., 2015). As a consequence, we first built a sequence model to decide which tokens need to be corrected and in what way. A typical distinction of the correction types would be based on the number of tokens a noisy token and its corrected form comprise. According to this approach, one could distinguish between one-to-one, one-to-many and many-to-one corrections on a per-token basis. However, instead of applying the above types of corrections, we identified a more detailed categorization of the correction types and trained a linear chain CRF utilizing CRFsuite (Okazaki, 2007). The correction types a token could be classified as were the following: • MissingApos, standing for tokens that only differ from their corrected version in the absence of an apostrophe (e.g. youll → you'll), • MissingWS, standing for tokens that only differ from their corrected version in the omission of whitespace characters (e.g. whataburger → what a burger), • 1to1_ED≤2, standing for corrections where no whitespace characters had to be inserted and the augmented edit distance (introduced in Section 4.2) between the noisy token and its normalized form was at most 2 (e.g. tmrw → tomorrow), • 1to1_ED≥3, standing for corrections where no whitespace characters had to be inserted and the augmented edit distance was at least 3 (e.g. plz → please), • 1toM_ABB, standing for corrections where both whitespace and alphanumeric characters had to be inserted to obtain a token's corrected variant (e.g. lol → laugh out loud). For the sake of completeness, we should add that a further class label (O) was employed. This, however, corresponded to the case when no correction was required for a token. As mentioned above, more than 90% of the words in both the training and test sets belonged to this category. Table 1 shows the distribution of the correction types on both the training and test sets. Proposed Approach Our approach consists of a sequence labeling module and relies on lookups from an efficiently indexed n-gram corpus of English tweets. Subsequently, we describe the details of these modules.
Sequence Labeling for Determining Correction Types As already mentioned in Section 3, the first component in our pipeline was a linear chain CRF (Lafferty et al., 2001). Besides the common word surface features, such as the capitalization pattern, the first letter or character suffixes, we relied on the following dictionary resources when determining the features for the individual words: • the SCOWL dictionary, part of the aspell spell checker project, containing canonical English dictionary entries, • the normalization dictionaries of Han et al. (2012) and Liu et al. (2012), • the 5,307-element normalization dictionary derived from the portal noslang.com, which maps common social media abbreviations to their complete forms. For each token, word type features were generated along with the word types of its neighboring tokens. The POS tags assigned to each token and its neighboring tokens by the Twitter POS tagger (Gimpel et al., 2011) were also utilized as features in the CRF model. The Twitter POS tag set was useful to us, as it contains a separate tag (G) for multi-word abbreviations (e.g. ily for I love you), which was expected to be highly indicative of the correction type 1toM_ABB. In order to be able to discriminate the MissingWS class, we introduced a feature which indicates for a token t originating from a tweet whether the relation max_{s∈split(t)} freq_1T(s) ≥ τ holds, where τ is a threshold calibrated to 10^6 based on the training set, freq_1T(s) is a function which returns the frequency value associated with a string s according to the Google 1T 5-gram corpus, and the function split(t) returns the set of all the possible splits of token t such that its components are all contained in the SCOWL dictionary. For instance, split("whataburger") returns a set of splits including "what a burger", "what a burg er" and "what ab urger". As there is a split (i.e. "what a burger") that is sufficiently frequent according to the n-gram corpus, we take it as an indication that the original token omitted some whitespace characters that needed to be inserted. A CRF model with the above feature set was trained using the L-BFGS training method and L1 regularization in CRFsuite (Okazaki, 2007). The overall token accuracy achieved by this model, together with its per-class performance on the training and test sets, is reported in Table 2 and Table 3. These tables reveal that the most difficult error type to identify was the one where a word missed some whitespace characters (row MissingWS). This class happens to be the least frequent and one of the most heterogeneous classes as well, which might be an explanation for the lower results on that class. Augmented Edit Distance When determining a set of candidate IV words that an OOV word might be rewritten to, it is common practice to place an upper bound on the edit distance between the IV candidates and the OOV word. In order to measure the edit distance between tokens originating from tweets and their corrected forms, we implemented a modification of the standard edit distance algorithm that is especially tailored to measuring the difference of OOV tokens originating from social media to IV ones. The edit distance we employed is asymmetric, as insertions of characters into the OOV token have no cost. For instance, for the words tmrw and tomorrow, the edit distance is regarded as 0 if the former is considered to be the substandard OOV token and the latter the standard IV one. Note, however, that if the roles of the two tokens were changed (i.e. if tmrw was treated as IV and tomorrow as OOV), their edit distance would become 4.
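A minimal sketch of the asymmetric distance just described is given below (the phonetically motivated zero-cost rewrites introduced next are omitted for brevity); the function name and implementation details are ours, not the authors' released code.

```python
def augmented_edit_distance(oov, iv):
    """Asymmetric edit distance from a noisy OOV token to an IV word:
    inserting characters into the OOV token costs 0, so e.g.
    d('tmrw', 'tomorrow') == 0 while d('tomorrow', 'tmrw') == 4."""
    m, n = len(oov), len(iv)
    # d[i][j]: cost of turning oov[:i] into iv[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i                       # deletions from the OOV token cost 1
    # d[0][j] stays 0: insertions into the OOV token are free
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if oov[i - 1] == iv[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete from OOV
                          d[i][j - 1],            # insert into OOV: free
                          d[i - 1][j - 1] + sub)  # (mis)match
    return d[m][n]

assert augmented_edit_distance("tmrw", "tomorrow") == 0
assert augmented_edit_distance("tomorrow", "tmrw") == 4
```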
A further relaxation to the standard edit distance is that we assign 0 cost to the following kinds of phonetically motivated transcriptions: • z → s located at the end of words (e.g. in catz → cats), • a → er located at the end of words (e.g. in bigga → bigger). By making the above relaxations to the definition of the standard edit distance, we could obtain larger candidate sets with higher recall for a given edit distance threshold, as we could reduce the edit distance between the OOV words and their appropriate IV equivalents in many cases. Obviously, as the candidate set grows, it might get increasingly difficult to choose the correct normalization from it. However, at this stage of our pipeline, we were more interested in having the correct IV word in the set of candidate normalizations than in reducing its size. Making Use of Twitter n-gram Statistics Our basic assumption was that from the context of an OOV word, it is possible to reconstruct its IV equivalent, as there are users who use the correct IV English form of the OOV word within the same context, e.g. see you tomorrow instead of see u tmrw. The Twitter n-gram frequencies we made use of were the ones that we aggregated over the Twitter n-gram corpus augmented with demographic metadata described in (Herdadelen, 2013). For a given token t_i at position i in a tweet, we chose the most probable corrected form according to the formula s_i = argmax_{c ∈ C(t_i, ct(t_i))} freq(t_{i−1} c t_{i+1}), (1) where the function C(t_i, ct(t_i)) returns a set of IV candidates for the token t_i, according to ct(t_i), which is the correction type determined for that token by the sequence model introduced in Section 4.1, and freq returns the frequency of a trigram in the Twitter n-gram corpus. We indexed the Twitter n-gram corpus with the highly effective LIT indexer (Ceylan and Mihalcea, 2011), which made fast queries of the form t_{i−1} * t_{i+1} possible, the symbol * being a wildcard. The only case in which we did not choose the normalization of an OOV word according to (1) was when there was a unique suggestion for an IV word in the normalization dictionaries listed in Section 4.1. The performance of the normalization on the training and test sets, broken down by the correction types we defined, can be found in Table 4 and Table 5, respectively. From these tables, one can see that the worst results were obtained for the correction type where spaces were required to be inserted into an OOV word. This is in accordance with the fact that our sequence model obtained the lowest scores on exactly this kind of correction. However, due to the fact that this error category is the least frequent, the lower scores on that category do not harm our overall performance that much, as can be seen in Table 6 for both the training and test corpora.

Table 6: Overall performance of our system on the training and test sets

            Training   Test
precision   0.8703     0.8606
recall      0.7673     0.7564
F1          0.8156     0.8052

The results shown in Table 6 also illustrate that our approach seems to generalize well, as there is only a small gap between the performances observed on the training and test sets of the shared task. Conclusion In this paper, we introduced our approach to the lexical normalization of English tweets, which ranked second at the shared task among the unconstrained submissions. Our framework first performs sequence labeling over the tokens of a tweet to predict which tokens need to be corrected and in what way.
This step is followed by correction type-sensitive candidate set generation, from which the most likely IV normalization of an OOV word is selected by querying an efficiently indexed, large n-gram dataset of English tweets.
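To make this selection step concrete, here is a minimal, hypothetical sketch of the context-frequency rule of eq. (1) together with the dictionary-override exception described in Section 4.3; the LIT index is abstracted as a plain Python dictionary of trigram counts, and all names and counts are illustrative only.

```python
def select_normalization(tokens, i, candidates, trigram_freq, norm_dict):
    """Pick the IV normalization of tokens[i] from its candidate set by
    maximizing the frequency of the context trigram t_{i-1} * t_{i+1},
    per eq. (1); a unique dictionary suggestion overrides the n-gram vote.

    trigram_freq : dict mapping (left, word, right) -> count in the
                   Twitter n-gram corpus (stands in for the LIT index)
    norm_dict    : dict mapping an OOV token -> list of IV suggestions
    """
    if len(norm_dict.get(tokens[i], [])) == 1:
        return norm_dict[tokens[i]][0]
    left = tokens[i - 1] if i > 0 else "<s>"
    right = tokens[i + 1] if i + 1 < len(tokens) else "</s>"
    return max(candidates,
               key=lambda c: trigram_freq.get((left, c, right), 0))

tokens = ["see", "u", "tmrw"]
freqs = {("see", "you", "tmrw"): 120, ("see", "us", "tmrw"): 3}
print(select_normalization(tokens, 1, ["you", "us"], freqs, {}))  # -> "you"
```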
3,315.2
2015-07-01T00:00:00.000
[ "Computer Science" ]
Platinum Group Metal Based Nanocatalysts for Environmental Decontamination Research and development in chemical engineering is currently focused on the design of highly active and selective catalytic systems for process intensification. In recent years, there has been growing interest in the use of catalysts based on nanosized metal particles to improve catalytic processes. Among the many metal catalysts, platinum group metals (PGMs) have received greater attention because of their physical and catalytic properties. They have found applications in a wide range of chemical conversion and environmental decontamination reactions due to their chemical stability and enhanced catalytic reactivity in the nano range. This chapter reviews some of the major innovative applications of PGM nanocatalysts for catalytic environmental decontamination. Introduction Industrialization and rapid population growth have resulted in energy shortages and environmental contamination, raising concern about a potential global crisis. For the sustainable development of human society, technologies for environmental decontamination need urgent attention. Among the numerous available technologies, catalysis has gained considerable attention because of its diverse potential in energy and environmental applications. Generally, catalysts for environmental applications are based on less expensive materials that will not cause secondary environmental pollution [1]. The major advantage of environmental catalysis is the chemical conversion of pollutants into non-hazardous and less toxic products. Pollutants can be degraded and transformed efficiently through homogeneous or heterogeneous oxidation and reduction processes, under ambient conditions or under conditions in which external energy such as light may be required [2]. Therefore, the present book chapter aims to provide an analysis of the use of PGM nanocatalysts in recent developments and to appraise their potential applications in environmental decontamination. Nanotechnology and the environment Nanotechnology refers to the research and development of materials at the atomic, molecular or macromolecular scale. Materials at the nanoscale find applications in a myriad of areas, such as the magnetic and optoelectronic, biomedical, pharmaceutical, cosmetic, energy, electronic, catalytic, and environmental domains. Because of the potential of nanotechnology, there has been a worldwide increase in investment in nanotechnology research and development [3]. The unique properties of materials and their stupendous performance at the nanoscale are the main reasons for the increased growth in this area. Controlled assembly of nanoparticles has been proposed as one of the most promising ways to achieve the target technologies [4]. Nanotechnology has immense potential in environmental decontamination through the use of materials such as adsorbents and nanocatalysts. Therefore, it is necessary to develop novel processes for the fabrication of nanomaterials that can be used as the basis for the development of highly efficient new technologies for solving environmental challenges. Platinum group metals Platinum group metals (PGMs), including iridium (Ir), osmium (Os), platinum (Pt), palladium (Pd), rhodium (Rh) and ruthenium (Ru), have high resistance to corrosion and oxidation in moist air, unlike most base metals [5]. Their precious nature derives from their rarity in Earth's crust.
It has been reported by Grabowska et al., that PGMs such as platinum and palladium allow the extension of light absorption of semiconductors such as TiO 2 into the visible region [6]. Recently, Dozzi et al. has investigated the catalytic effect of PGMs metal for the degradation of formic acid using Pt, Au doped TiO 2 [7]. On the other hand, Kisch et al. modified TiO 2 with chloride complexes of Pt for the photocatalytic degradation of 4-chlorophenol [8]. The presence of PGMs nanocatalysts gave rise to an enhanced charge carrier separation hence improving the photocatalytic performances of the modified photocatalyst systems. According to Yoon et al., the surface of PGMs serves as visible light absorbing sensitizers and centers of charge separation [9]. The possibility of designing and applying nanosized PGM catalysts has been demonstrated in a number of studies [9][10][11][12]. This has resulted in interest in considering their application in environmental remediation, water treatment, chemical transformation and microbial disinfection. A priority task of present day science is to solve the problem of environmental contamination of planetary resources which requires development of efficient technologies employing nanocatalysts. PGMs catalysts have already found applications in a number of industrially important R&D niche areas, including environmental cleanup. The effectiveness of the PGM nanocatalysts depends on a number of factors, including their size, morphology of the particles and their packaging into usable devices and systems. In most of these applications, the cost of PGMs cannot be ignored. Platinum has been reported to be a very good catalyst but a trade-off has to be established between efficacy and cost. In the nano-range the amount of the catalyst is usually kept very low while huge improvements in efficiency are realized. However, there is still need for more effort in design of viable systems for environmental decontamination using PGM based nanocatalysts. Potential up-scaling of these devices and systems is currently an area of increasing research interest. Synthesis of PGM nanocatalysts Designing a new class of highly selective and active catalytic systems with the use of recent developments in chemistry has become one of the main concerns faced in contemporary engineering. As compared to bulk material, nanoparticles have an increased surface area and high dispersity and, thus provide high reactivity and allow fabrication of efficient catalysts with lower noble metal loading. Controlling the growth, size, and monodispersity of metal nanoparticles is a subject of interest in designing of nanosized catalytic systems. Therefore, different methods have been used for the synthesis of nanocatalysts. For instance, in 2005, Wong et al., successfully synthesized a bimetallic catalyst using Pd-on-Au NPs through the Turkevich-Frens (citrate reduction) method and obtained particles with an average diameter of about 20 nm [10]. Paula and co-workers synthesized Pd/C-catalyst using one-pot method for synthesis of secondary amines by hydrogenation of nitrocompounds as single starting materials [11]. Additionally, Coleman et al. synthesized and Pt/TiO 2 catalyst by the photodeposition method in the presence of a sacrificial organic hole scavenger [12]. Barakat et al., synthesized Pt doped TiO 2 catalyst by immobilizing colloidal Pt nanoparticles onto titanium dioxide (rutile) [13]. Kuvarega et al., used a modified sol gel method for the synthesis of nitrogen, PGM co doped TiO 2 . 
In these studies, spherical particles with an average size between 2 and 5 nm were reported [14]. Characterization of PGM-based nanocatalysts Many different physico-chemical techniques are used to characterize nanocatalysts, among them scanning electron microscopy (SEM), which is used to examine the morphology of the materials. Transmission electron microscopy (TEM) is also used to determine particle size and morphology. Chun-Hua et al. used TEM to analyze 3% Pt nanoparticles supported on carbon nanotube (CNT) particles, as shown in Figure 1. In another study, a Pt catalyst containing 3% metal supported on activated carbon (AC) was prepared, and the particle size varied from 8 to 10 nm [15]. In addition, techniques such as X-ray photoelectron spectroscopy (XPS) have been used to give information on the oxidation states of the PGM and the nature of bonding between the metals and the supports. Raman spectroscopy (RS) and X-ray diffraction (XRD) are conducted to identify the crystalline phases and estimate particle sizes of nanocatalysts. For instance, Renfeng et al. used XRD to identify the Pt in Pt/RGO [16]. The Pt peak was conspicuous in the samples containing Pt (Figure 2). Xie and co-workers used Fourier transform infrared spectroscopy (FTIR) to verify the bond vibrations related to functionalities on the surfaces of their synthesized materials (Figure 3) [17]. Environmental decontamination PGM nanocatalysts have become a new class of environmental remediation materials that could provide affordable solutions to some of the environmental challenges. At the nanoscale, PGM particles have high surface areas and surface reactivity. As such, they provide more flexibility for in situ applications. PGM nanocatalysts have proven to be an excellent choice for the transformation and decontamination of a wide variety of common environmental contaminants, such as organochlorine pesticides and chlorinated organic compounds. The major consumer of PGMs is the automobile industry. PGMs such as Pt, Rh and Pd are used as catalysts in the automobile industry in order to reduce the levels of unburnt hydrocarbons, carbon monoxide (CO) and nitrogen oxides present in the exhaust gases. Generally, a typical automobile converter contains about 0.04% Pd, together with 0.005-0.007% and 0.08% of Pt and Rh, supported on a base [18]. Iridium has become the new entrant in this application area; a typical example is the introduction by Mitsubishi of Japan of iridium-containing catalytic converters in its direct injection engines [19]. Another typical application is the reduction of nitrogen oxides to nitrogen in car exhaust systems for the abatement of emissions from petrol/rich-burn engines. There is a large number of reports on the use of rhodium, as it is the most effective element for the conversion of nitrogen oxides to nitrogen [20,21]. Jianbing et al., in their study, investigated the catalytic ozonation of dimethyl phthalate (DMP) in aqueous solution and of DBP precursors in natural water using Ru/AC. These two kinds of organics are both recalcitrant to biodegradation and pose severe hazards to human health. Ru/AC was an active nanocatalyst in the catalytic ozonation of dimethyl phthalate and was able to completely mineralize the DMP in a semi-batch experiment. Moreover, the total organic carbon (TOC) removal was stable at around 75% for a duration of 42 h, and no trace of Ru leaching from the reactor was observed in the continuous experiments on Ru/AC-catalyzed ozonation of DMP.
Consequently, Ru/AC-catalyzed ozonation was found to be more efficient than ozonation alone for TOC removal in natural water treatment [22]. Water treatment Oxyanions such as BrO3−, ClO3−, NO3− and ClO4− are toxic and pervasive in drinking water. These oxyanions originate from both anthropogenic and natural sources. Furthermore, they are also produced during water treatment processes such as ozonation, desalination, electrochemical treatment and chlorination [23][24][25][26]. These ions have mutagenic, endocrine-disrupting and carcinogenic effects [27,28]. Ion exchange and reverse osmosis cannot completely degrade these oxyanions [28,29]. Thus, it would be preferable to apply destructive treatment technologies based on nanocatalysts for sustainable drinking water treatment processes [30]. Pd-based heterogeneous catalysis has garnered momentous attention as a potential solution for the reduction of these oxyanions and of other highly oxidized contaminants such as halogenated and nitro organics [31]. Chen et al. have used PGMs for the reduction of BrO3− [32]. Five activated-carbon-supported metal catalysts (5 wt% loading for Pd, Pt, Rh and Ru, and 1 wt% for Ir) at a catalyst (M/C) loading of 0.1 g L−1 were used for the catalytic reduction of bromate (BrO3−), as shown in Figure 4. From Figure 4, it is noticeable that Rh/C was significantly more active than the other PGM/C catalysts used for the reduction of BrO3−. When a catalyst loading of 0.1 g L−1 was used, the reduction of 1 mM BrO3− was achieved in approximately 5 minutes. Rh/C had a much higher activity than most supported metal catalysts reported in the literature when compared on a metal mass-normalized basis. Since the metal dispersion differs from catalyst to catalyst, it was important to compare the hydrogenation activity of the metals using the initial turnover frequency values (TOF0) [33][34][35]. Additionally, Rh/C had the best performance compared to the other four catalysts at pH 7.2. On the other hand, Ir/C had a slightly lower apparent reactivity with BrO3− than that of Pd/C but showed the second-highest performance. Therefore, the two PGM catalysts Rh and Ir provided the highest activity for the catalytic reduction of BrO3− under conditions that are suitable for water treatment systems. Antimicrobial Microbial contamination and growth on surfaces are risks to human health. The chemicals used to tackle microbes, such as detergents, alcohols and chlorine, are very aggressive and hence not environmentally friendly, besides being ineffective for long-term disinfection. Therefore, a myriad of studies are being conducted in order to tackle these challenges. Arcan et al. doped SnO2 and TiO2 with Pd for the microbial inactivation of E. coli, S. aureus and S. cerevisiae [36]. The addition of 1% Pd led to an enhancement of the photocatalytic efficiency observed for the degradation of the microorganisms. In addition, UV-Vis measurements showed an extension of the absorption edge into the visible range without affecting the phase of the catalysts (Figures 5 and 6). Chemical transformation PGM nanoparticles have proven to be efficient heterogeneous and homogeneous catalysts, with advantages such as a high specific surface area due to their small size, resulting in a high number of potential catalytic sites [37]. Owing to these properties, there has been an increase in the use of metal nanoparticles in catalysis over the past two decades [38,39].
Rong et al. reported the synthesis of supported Pt NPs and their use as catalysts in the partial hydrogenation of nitroarenes to arylhydroxylamines [40]. The particles were prepared by reduction of H2PtCl6 with NaBH4 in the presence of the carbon support. The hydrogenation of several substituted nitroarenes was performed under mild conditions (10.15°C, 1 bar H2) to favor the formation of hydroxylamines, showing excellent activity and selectivity in this transformation. Pd supported on C has also been successfully used as a catalyst for the conversion of nitrobenzenes to secondary amines (Figure 7). The proposed mechanism for the hydrogenation of nitrobenzene is shown in Figure 8. During this process, intermediates such as hydroxylamines and azo and azoxy derivatives are generated. For example, in the case of m-dinitrobenzene, a catalyst containing 2 wt% Pt/C yielded 92.3% of the corresponding hydroxylamine after 190 min of reaction in THF (Table 1). In a study, Zeming et al. used carbon as a support for colloidal Pt for the partial hydrogenation of nitroaromatics to arylhydroxylamines. The Pt colloid supported on carbon was an active and selective catalyst for the partial hydrogenation of nitroaromatics with electron-withdrawing substituents to the corresponding N-arylhydroxylamine, indicating an additive-free green catalytic approach for arylhydroxylamine synthesis. Very encouraging results were obtained with N-arylhydroxylamines bearing electron-withdrawing substituents. Since N-arylhydroxylamine can be further converted to highly valuable compounds through several reactions, such as the Bamberger rearrangement, this result contributes to a simpler and greener synthetic methodology for N-arylhydroxylamine derivatives. Future perspectives Progress has been reported in the application of nano-PGMs for heterogeneous catalysis reactions. While most of the applications have centered on organic transformations, there is potential for extending the catalytic potential of these metals to other fields such as pollutant degradation and microbial inactivation in water treatment processes. On their own, PGMs at the nanoscale tend to aggregate, and thus their application has been limited. However, the use of supports has greatly enhanced the activity of PGM nanocatalysts in various fields. Both mono- and bimetallic systems have been reported. In environmental decontamination processes, there is still a need to find the most suitable supports and application devices. While encouraging findings have started appearing in the literature, more work still needs to be done in environmental catalysis for water treatment. Conclusion Several efforts have been devoted to the preparation of PGM nanocatalysts for application in environmental decontamination. The high number of literature reports highlights the interest in this family of catalysts for catalytic transformation, both in terms of reactivity and selectivity. There is potential for the application of PGM nanocatalysts in environmental decontamination, water treatment, antimicrobial action and chemical transformation. Further studies are necessary to better understand the parameters influencing the reactivity as well as to enhance the conversion rates and efficiencies.
3,567.2
2019-04-03T00:00:00.000
[ "Chemistry" ]
Human gesture recognition under degraded environments using 3D-integral imaging and deep learning : In this paper, we propose a spatio-temporal human gesture recognition algorithm under degraded conditions using three-dimensional integral imaging and deep learning. The proposed algorithm leverages the advantages of integral imaging with deep learning to provide an efficient human gesture recognition system under degraded environments such as occlusion and low illumination conditions. The 3D data captured using integral imaging serves as the input to a convolutional neural network (CNN). The spatial features extracted by the convolutional and pooling layers of the neural network are fed into a bi-directional long short-term memory (BiLSTM) network. The BiLSTM network is designed to capture the temporal variation in the input data. We have compared the proposed approach with conventional 2D imaging and with the previously reported approaches using spatio-temporal interest points with support vector machines (STIP-SVMs) and distortion invariant non-linear correlation-based filters. Our experimental results suggest that the proposed approach is promising, especially in degraded environments. Using the proposed approach, we find a substantial improvement over previously published methods and find 3D integral imaging to provide superior performance over the conventional 2D imaging system. To the best of our knowledge, this is the first report that examines deep learning algorithms based on 3D integral imaging for human activity recognition in degraded environments. Introduction Human gesture recognition involves deriving meaningful inference from human motions and has a wide range of applications in human-computer interaction, patient monitoring, surveillance, robotics, sign language recognition, etc. [1].In recent years, human gesture and action recognition have attracted wide interest among the computer science and computer vision communities.Numerous approaches have been proposed for gesture recognition, including the use of mathematical models such as Hidden Markov Models (HMM) [2], spatio-temporal interest points-based detectors (STIPs) [3], correlation filter-based approaches [4], etc.Recently, deep learning-based models for gesture recognition have gained wide acceptance due to their generalization capabilities and high accuracy in detecting and classifying gestures [5,6].While these methods have been shown to work well for clean datasets, gesture recognition under degraded conditions remains a challenge, especially in cases where gestures are partially occluded, or in low illumination conditions.In degraded environments, the features of the gestures may not be fully recorded during the camera pickup process, which can make gesture recognition under such conditions more challenging. 
Most of the state-of-the-art vision-based gesture recognition methodologies consider nonoccluded cases and typically only contain a single gesture present in the scene [1].However, in real-world scenarios, multiple gestures may be present, and in many cases, the gestures may be partially occluded, and the illumination conditions may not be perfect.In multi-gesture scenarios, hand-to-hand occlusions may occur, which can make gesture detection difficult.The three-dimensional (3D) reconstruction algorithm based on integral imaging provides an efficient way of removing occlusions and detecting gestures [7,8].In addition, our experiments suggest that the variable focusing and depth-sectioning capabilities of 3D integral imaging aid in the gesture detection capabilities when multiple gestures are present.Therefore, in this paper, we leverage the advantages of passive 3D integral imaging with deep learning to propose an efficient human multi-gesture recognition system under degraded conditions.The degraded conditions considered in this paper include partial occlusions in low illumination conditions. Convolutional neural networks (CNNs) are a class of deep neural networks, widely used for tasks such as object recognition and image classification [5,9].As the name suggests, CNNs consist of stacked convolutional and pooling layers followed by one or more fully connected layers and a classification layer.The convolutional layers use a series of convolutional kernels for feature learning.The output of the convolutional layers will be feature vectors, which represent the spatial features representing an image.The pooling layer combines semantically similar features into a single feature [9].In the case of gesture recognition, the temporal dependency of feature vectors extracted from adjacent frames are also important, which CNN alone cannot capture.Therefore, we have used a cascaded network comprising of CNN and bi-directional long short-term memory (BiLSTM) network.The convolutional layers of the CNN produce the feature vectors, which are used by the BiLSTM network for gesture classification.Furthermore, we compare the results of the proposed deep learning-based method with respect to the previously reported 3D integral imaging-based spatio-temporal interest point with support vector machines (STIP-SVM) [8] and distortion invariant non-linear correlation filter-based approaches [7]. This paper is organized into four different sections: Section 1 provides the introduction and a brief review of gesture recognition and the various approaches by which it may be achieved.Section 2 discusses the proposed approach and its details.Section 3 deals with the experimental results, including the performance of the proposed system, and comparison with previously reported methodologies.Finally, section 4 provides the conclusions of the paper. 
Computational volumetric reconstruction using integral imaging (InIm) Integral imaging is a passive three-dimensional (3D) imaging technique that captures both the intensity and directional information of a scene using an array of cameras, a lenslet array, or a moving camera system [10][11][12][13][14][15][16]. Originally proposed by Lippmann, integral imaging-based techniques have proved to be useful for human action and gesture recognition [7,8]. In the original work, in which the method was defined as integral photography, Lippmann used a microlens array (MLA) in front of photographic film in order to capture multiple 2D images, each having a different perspective of the scene [16,17]. The individual 2D images are usually called elemental images (EIs). Following the advancements in digital sensors to replace photographic film, this technique is now generally referred to as integral imaging. As in the original conception, integral imaging may still be performed using a lenslet array in front of an imaging sensor. However, another interesting approach for implementing the concept of integral imaging to capture the 3D information is to use an array of digital cameras [18]. The advantage of this approach is that the captured 3D scenes can have higher parallax and better resolution at a longer distance. Another possibility is using a single camera on a moving translation stage. This has been named synthetic-aperture integral imaging [17], but this method is not suitable for dynamic scenes. By capturing the scene from multiple viewing perspectives, integral imaging helps in reducing noise due to partial occlusions [19], low-light illumination [20,21], scattering media [22][23][24], etc. The computational reconstruction algorithm is an inverse mapping procedure, which integrates the elemental images by back-projecting them through a virtual pinhole array into the object space at the desired reconstruction depth [10]. It has been shown that this process is optimal in a maximum likelihood sense [17,25]. The integral imaging pickup process and the computational reconstruction process are shown in Fig. 1(a) and Fig. 1(b), respectively. The spatio-temporal volume video data is reconstructed according to Eq. (1) [7], where r(x, y, z; t) is the integral imaging reconstructed video. The reconstructed video is obtained by shifting and overlapping the K × L elemental images at the reconstruction depth. Here, x, y represent the pixel indices and t is the frame index, and EI_i,j represents the (i, j)th elemental image. The r_x, r_y and d_x, d_y in Eq. (1) represent the resolution and physical size of the image sensor, respectively. The p_x and p_y indicate the pitch of adjacent image sensors on the camera array in the x and y directions, respectively. The O(x, y; t) matrix contains information regarding the number of overlapping pixels, and the magnification factor is M = z/f, where f represents the focal length.
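Equation (1) itself is not reproduced in the extracted text; the sketch below implements the standard shift-and-sum (back-projection) reconstruction that the surrounding definitions describe, under the assumption that Eq. (1) takes its usual form with each elemental image shifted in proportion to i·r_x·p_x/(d_x·M) and j·r_y·p_y/(d_y·M). The function name and the scaled-down example parameters are illustrative only, not the authors' code.

```python
# Minimal sketch (assumed form of Eq. (1)): shift-and-sum integral-imaging
# reconstruction of a single video frame at depth z.
import numpy as np

def reconstruct_frame(eis, z, f, rx, ry, dx, dy, px, py):
    """eis: (K, L, H, W) array of elemental images for one time instant t."""
    K, L, H, W = eis.shape
    M = z / f                                    # magnification factor M = z/f
    # Output canvas large enough to hold all shifted elemental images.
    max_sx = int(round((K - 1) * rx * px / (dx * M)))
    max_sy = int(round((L - 1) * ry * py / (dy * M)))
    acc = np.zeros((H + max_sy, W + max_sx))
    overlap = np.zeros_like(acc)                 # O(x, y; t): number of overlaps
    for i in range(K):
        for j in range(L):
            sx = int(round(i * rx * px / (dx * M)))   # shift from resolution, size, pitch
            sy = int(round(j * ry * py / (dy * M)))
            acc[sy:sy + H, sx:sx + W] += eis[i, j]
            overlap[sy:sy + H, sx:sx + W] += 1.0
    return acc / np.maximum(overlap, 1.0)        # normalize by the number of overlaps

# Illustrative usage with random data and a scaled-down sensor (pixel size kept at 4.5 um).
eis = np.random.rand(3, 3, 300, 400)
frame = reconstruct_frame(eis, z=3.0, f=0.015, rx=400, ry=300,
                          dx=0.0018, dy=0.00135, px=0.08, py=0.08)
```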
Video encoding using convolutional neural network (CNN) The 3D reconstructed video obtained using Eq. (1) is encoded using a CNN. The CNN consists of a series of convolutional and pooling layers, which spatially filter the input data to yield relevant features. The output of the last pooling layer produces the feature vector representing the spatial information of the video. This feature vector can be used for high-level interpretation tasks such as object detection and classification. In order to mitigate the effect of limited training data, we have used the deep GoogLeNet network pretrained on the well-known ImageNet [26] dataset for feature extraction. GoogLeNet uses an inception module. The inception module architecture is similar to that proposed by Serre et al. [27], which uses a series of Gabor filters to handle multiple scales. The filters in the inception module are learned, and the inception layers are repeated multiple times in order to obtain a deep neural network model [28]. The inception module consists of multiple filters in parallel, 1×1, 3×3 and 5×5 convolutions, and max-pooling operations [28]. The resulting filter outputs are concatenated. The input size of the network is 224×224, and it takes the RGB color channels with the mean subtracted. The CNN architecture we used is 22 layers deep, excluding the pooling layers. As we go deeper, the feature representations become more abstract and informative. We have used a total of 9 inception modules and global average pooling in order to generate the feature estimates. A series of max-pooling layers have been stacked in between the inception modules in order to reduce the dimensionality of the features learned at each stage. The convolution layers, including those present in the inception module, use the rectified linear unit (ReLU) as the activation function [28]. Assuming the 3D reconstructed video consists of N frames, for each frame I_i, i = 1, 2, . . ., N, the convolutional and pooling layers of the CNN produce a K-dimensional feature vector x_i ∈ R^K, i = 1, 2, . . ., N. Therefore, when also considering the temporal dimension of the data, the input can be represented by a feature matrix X ∈ R^(K×N), i.e., X = [x_1, x_2, . . ., x_N], where [·] represents the concatenation operator. Thus, for each input video, the CNN produces the corresponding feature matrix X_j ∈ R^(K×N), j = 1, 2, . . ., P, where P represents the number of input videos. The rows and columns of X_j ∈ R^(K×N) encode the spatial and temporal information of each input video, respectively.
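As an illustration of this per-frame feature extraction stage (a minimal sketch, not the authors' code), the snippet below uses a pretrained GoogLeNet from torchvision with its classification head removed, so that each frame yields a 1024-dimensional feature vector and a video of N frames yields a K×N feature matrix; the preprocessing constants are the usual ImageNet values and are assumptions here.

```python
# Minimal sketch (assumptions noted above): per-frame GoogLeNet features -> K x N matrix.
import torch
import torchvision

cnn = torchvision.models.googlenet(weights="IMAGENET1K_V1")
cnn.fc = torch.nn.Identity()          # drop the classifier; keep 1024-d pooled features
cnn.eval()

normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                              std=[0.229, 0.224, 0.225])

@torch.no_grad()
def video_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, 224, 224) tensor in [0, 1]; returns feature matrix X of shape (K, N)."""
    x = torch.stack([normalize(f) for f in frames])
    feats = cnn(x)                    # (N, 1024): one feature vector per frame
    return feats.T                    # (K, N) = (1024, N), columns indexed by time

# Illustrative usage on a random "video" of 16 frames.
X = video_features(torch.rand(16, 3, 224, 224))
print(X.shape)                        # torch.Size([1024, 16])
```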
LSTM network For applications involving sequential inputs, such as in gesture recognition, recurrent neural networks (RNNs) have been shown to be useful for capturing the temporal dependency of the data. Recurrent neural networks are a very powerful dynamical system, but training them has been a difficult task due to the problem of vanishing and exploding gradients [9]. In order to deal with the problems present in conventional RNNs, Hochreiter and Schmidhuber [29] proposed long short-term memory networks (LSTMs). These networks have improved the "memory" of the cell by introducing a "gate" into it. Consider an input sequence of T time steps, x = [x_1, x_2, x_3, . . ., x_T], which is fed to a recurrent neural network; a standard RNN computes the hidden vector h = [h_1, h_2, h_3, . . ., h_T] and the corresponding output sequence by applying its recurrence relations for time instants t = 1, 2, 3, . . ., T [30]. In these relations, σ(x) = 1/(1 + e^(−x)) represents the element-wise logistic sigmoid function, and the weight matrices W and bias terms b with subscripts k ∈ {o, c, f, i, x, h} represent the corresponding parameters of the network. Unlike standard RNNs, LSTMs [29] use memory cells to extract the long-term temporal relationships hidden inside the sequential input. The memory cells have a multiplicatively gated self-connection with unity weight, which copies their state and accumulates the external signal. This multiplicative gating is controlled by another unit that learns to decide when to clear the memory [9]. A standard LSTM network computes the hidden vector using the relationships given in [29,30], where i, f, o are respectively the input, forget and output gates, and c is the cell state vector. The bi-directional long short-term memory (BiLSTM) network [31] is a variant of the long short-term memory (LSTM) network, where two separate networks are used to extract all the available input information. It has been previously reported that BiLSTM networks outperform unidirectional LSTM networks [32]. In a BiLSTM network, one network is responsible for learning in the forward time direction and outputs h_forward, while the other learns in the reverse time direction and outputs h_reverse. The outputs of these two layers are merged, i.e., h = f(h_forward, h_reverse). We have chosen the merging operator f to be a concatenation operator; therefore, h = [h_forward, h_reverse]. The structure of an LSTM cell and the BiLSTM architecture are shown in Fig. 2(a) and Fig. 2(b), respectively. Finally, the merged output is fed to a fully connected layer and a Softmax layer for classification purposes. We used a BiLSTM layer with 100 hidden units to learn the temporal dependency of the feature vectors. The recurrent weights of the network are randomly initialized from a unit normal distribution. We have used the hyperbolic tangent (tanh) and sigmoid functions for the state activation function and the gate activation function, respectively. The forward and backward layers of the BiLSTM network do not interact with each other. This allows us to use the standard training algorithms used for RNNs to train our network. The network was optimized using the Adam optimizer with a gradient decay factor of 0.9 and a learning rate of 10^−4. To mitigate overfitting, we have used dropout. The block diagram of the proposed system is shown in Fig. 3. The human gesture data has been recorded using the integral imaging technique. The 3D computational reconstruction algorithm has been used to reconstruct the data at the correct depth. The reconstructed data is fed into a convolutional neural network for feature extraction. The convolution and pooling layers of the network extract a set of spatial feature vectors that are input into a bi-directional LSTM network (BiLSTM) that is designed to capture the temporal variation between the feature vectors. Finally, the Softmax and classification layers categorize the different gestures.
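A minimal sketch of the sequence-classification stage described above, written in PyTorch rather than the authors' framework: a single bidirectional LSTM layer with 100 hidden units consumes the per-frame CNN feature vectors, the merged forward/backward output is passed to a fully connected layer, and a softmax (folded into the loss here) produces the gesture class scores. The 100 hidden units, the two-class output and the Adam beta of 0.9 echo the text; everything else is an assumption.

```python
# Minimal sketch (not the authors' code): BiLSTM gesture classifier over CNN features.
import torch
import torch.nn as nn

class GestureBiLSTM(nn.Module):
    def __init__(self, feat_dim=1024, hidden=100, num_classes=2):
        super().__init__()
        # tanh state activation and sigmoid gate activation are the LSTM defaults.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)   # concatenated fwd/bwd states

    def forward(self, x):                # x: (batch, N_frames, feat_dim)
        out, _ = self.bilstm(x)          # (batch, N_frames, 2*hidden)
        merged = out[:, -1, :]           # h = [h_forward, h_reverse] at the last step
        return self.fc(merged)           # class scores; softmax applied inside the loss

model = GestureBiLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
criterion = nn.CrossEntropyLoss()        # combines log-softmax and negative log-likelihood

# One illustrative training step on random data (batch of 4 videos, 16 frames each).
features = torch.rand(4, 16, 1024)
labels = torch.randint(0, 2, (4,))
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```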
Experimental results and discussions In this section, we discuss the performance of the proposed approach on experimental data. To that end, a 3×3 camera array was used for integral imaging, as shown in Fig. 4(a). In our experiments, we used Mako G192C machine vision cameras. All the cameras have identical intrinsic parameters. The pitch of the camera array was designed to be 80 mm in both the x and y directions. The pixel size is 4.5 µm × 4.5 µm, the resolution of each camera is 1200 × 1600, and the focal length was set to 15 mm. The quantum efficiency of the camera is 0.44 at a wavelength of 525 nm, and the sensor read noise is 20.47 electrons rms/pixel. All the cameras in the array are synchronized to record the data with a frame rate of 10 frames per second (fps). The data is recorded with a camera lens F-number of 1.8 and an exposure time of 30 milliseconds (ms). In our experiments, we considered two classes of gestures in order to demonstrate the potential benefits of the proposed approach. The gesture motions are depicted in Fig. 4(b). The data were collected from 4 participants with 5 different backgrounds, as shown in Fig. 4(c). The gestures are captured at a distance of about 3 meters along the axial direction from the camera array. The participants were asked to repeat each gesture twice to capture both fast and slow variations of the same gesture. The low illumination effects considered in the experiments have been simulated by computational models applied to the experimentally captured elemental images in order to generate a large amount of low-light data for testing. We have conducted three different experiments and collected data under three different conditions: 1) single gesture without any occlusion, 2) single gesture with partial occlusion, 3) multiple gestures in the scene at different depths without added occlusion, plus low-light effects. For all the experiments, we assume the depth of the gesture of interest is known a priori. This assumption is for convenience, and it is not necessary, as integral imaging can reconstruct the in-focus object of interest in the 3D scene. In the first experiment, we collected data without occlusion, as shown in Fig. 5(a), and in the second experiment, we recorded data with partial occlusion, as shown in Fig. 5(b). The integral imaging-based computational reconstruction algorithm, as outlined in section 2.1, has been used to reconstruct the 3D data at the correct depth, where the gesture of interest is located. As shown in Fig. 5(c), the integral imaging technique helps to remove the effect of partial occlusion, thereby improving the visibility and enhancing the classification performance. To study the performance of the proposed system in low illumination conditions, we have computationally simulated the low illumination conditions using the degradation model I_degraded = α × I_original + n, where α is the attenuation factor and n represents the camera noise, modeled as Gaussian noise with mean µ and variance σ². For our analysis, we considered the attenuation factor α = 0.2. The Gaussian noise with mean µ equal to zero and variance equal to 0.05 has been added after attenuating the videos. In 2D imaging, a single frame representing the low illumination condition for gesture 1 and gesture 2 is shown in Fig. 5(d). The signal-to-noise ratio (SNR) for the corresponding simulated low illumination images is calculated using [21,33] SNR = (µ_s − µ_n)/√(σ²_s + σ²_n), where µ_s, µ_n correspond to the means of the signal (the object of interest) area and the background area, respectively, and σ²_s, σ²_n denote their corresponding variances. Thus, the SNR calculated for the degraded image shown in Fig.
5(d) is 0.0438 and 0.0643 for gesture 1 and gesture 2, respectively.The number of photons per pixel (N photons ) for the degraded image can be calculated as [7,21]: where, n r represents the camera read noise, and QE represents the quantum efficiency of the camera.Using Eq. ( 10), the estimated photons per pixel for the degraded 2D frame is 2.0376 and 2.99 photons per pixel for gesture 1 and gesture 2, respectively.The integral imaging computational reconstruction algorithm is used to reconstruct the degraded image at the depth of the object of interest, as shown in Fig. 5(e).The SNR of the 3D reconstructed frame for gesture 1 and gesture 2 are 0.1404 and 0.1699, respectively.In the next experiment, we considered a multiple-gesture scenario.In this case, we considered two gestures present in the scene.One of the gestures is considered as the gesture of interest (true class), and the other is considered a random false class gesture happening elsewhere in the scene.The two gestures are assumed to be happening at different depths.Since a random gesture can happen anywhere in the scene, for our experiments, we have chosen three different locations for the background gesture, as shown in Fig. 6.To study the effect of low illumination degradation, we have computationally simulated the low light degradation condition for the multi-gesture scenarios.The degradation has been simulated using the same degradation model as described in the above paragraph, by attenuating the recorded video and adding additive Gaussian noise.Figure 7 depicts one such scenario where both gestures are clearly visible in the case of conventional 2D imaging with normal illumination, as shown in Fig. 7(a), whereas the 3D integral imaging in-focus reconstruction allows us to extract the gesture of interest from the other out-of-focus objects in the scene as shown in Fig. 7(b).The low light 2D frame is shown in Fig. 7(c) with SNR equal to 0.1092 and 0.1413 for gesture 1 and gesture 2, respectively.Using Eq. ( 10), the photons per pixel (N photons ) has been calculated and equals to 5.0802 and 6.5735 photons per pixel, for gesture 1 and gesture 2, respectively.As shown in Fig. 7(d), the integral imaging reconstruction algorithm reduces noise and improves the visibility in low illumination.The SNR of the 3D reconstructed frames for gesture 1 and gesture 2 are 0.2957 and 0.3875, respectively. 
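As a small worked illustration of the degradation model and SNR definition used above (a sketch with illustrative region masks, not the authors' code), the snippet below attenuates a frame by α = 0.2, adds zero-mean Gaussian noise with variance 0.05, and evaluates SNR = (µ_s − µ_n)/√(σ_s² + σ_n²) from user-supplied signal and background regions.

```python
# Minimal sketch of the low-illumination degradation model and the SNR estimate.
import numpy as np

def degrade(frame, alpha=0.2, var=0.05, rng=None):
    """I_degraded = alpha * I_original + n, with n ~ N(0, var)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, np.sqrt(var), size=frame.shape)
    return alpha * frame + noise

def snr(frame, signal_mask, background_mask):
    """SNR = (mu_s - mu_n) / sqrt(sigma_s^2 + sigma_n^2)."""
    s, n = frame[signal_mask], frame[background_mask]
    return (s.mean() - n.mean()) / np.sqrt(s.var() + n.var())

# Illustrative usage: a bright square (the "gesture") on a dark background.
frame = np.zeros((100, 100))
frame[40:60, 40:60] = 1.0
sig = np.zeros_like(frame, dtype=bool)
sig[40:60, 40:60] = True
low_light = degrade(frame)
print(snr(low_light, sig, ~sig))
```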
In total, 240 gesture videos were recorded, 80 videos (2 gestures x 4 participants x 5 backgrounds x 2 repetitions) corresponding to each of the three experiments as follows: (1) no occlusion, (2) with partial occlusion, and (3) multi-gesture. Out of the 80 videos, 40 videos belong to the first gesture and 40 videos belong to the second gesture. The data collected without occlusion has been used for training the system, since it is assumed that only clean datasets may be available for training the network, as one should not expect to know the occlusion a priori in a real-world scenario. Degraded datasets, such as those corresponding to the degraded conditions considered in this report, appear unpredictably over time and may be unknown to the system. To improve the performance and the generalization capabilities of the neural network model, we have used data augmentation to increase the size of the training dataset. We have used six different data augmentation techniques, including affine transformation, flipping, resizing, inversion, blurring, and noising. Thus, after data augmentation, we generated a total of 560 videos for both gesture classes, which we used for training the model. We have tested our model using the data with occlusion, the occlusion data with simulated low illumination, the multi-gesture data, and the multi-gesture data with simulated low illumination. We have obtained the receiver operating characteristic (ROC) curves for comparing the proposed technique with 1) a conventional 2D imaging-based deep learning technique, 2) the previously reported STIP-SVM approach [8], and 3) the distortion invariant non-linear correlation-based approach (using k = 0.3) [7,34,35]. For comparing the performance of different classifiers using ROC analysis, we use the area under the ROC curve (AUC), which is widely used for assessing the overall performance of classifiers based on their ROC curves [36,37]. For data with occlusion, we achieved an area under the curve (AUC) of 0.958 using the proposed 3D integral imaging-based CNN-bi-directional long short-term memory (CNN-BiLSTM) technique. For the 2D imaging-based CNN-BiLSTM and 3D integral imaging-based STIP-SVM approaches, we obtained AUC values of 0.696 and 0.572, respectively. In the case of occlusion in simulated low illumination conditions, the proposed 3D technique yields an AUC of 0.941, while the 2D imaging-based CNN-BiLSTM and 3D integral imaging-based STIP-SVM approaches yielded AUC values of 0.574 and 0.476, respectively. For the cases of environmental degradation, such as occlusion and occlusion with low light illumination, the distortion invariant non-linear correlation approach produces an AUC of 0.471 and 0.477, respectively. These eight ROC curves for the scene with occlusion data are illustrated for comparison in Fig. 8. The effect of low illumination on the performance of the different methodologies has been studied based on the percent reduction in the area under the curve (AUC). In our experiments, the introduction of low illumination reduces the AUC by 1.77% for the proposed 3D integral imaging-based CNN approach, while for the 2D imaging and STIP-based approaches, the AUC decreases by 17.53% and 16.78%, respectively. In the case of the distortion invariant non-linear correlation approach, the percentage change in AUC is 1.27%. The ROC analysis for the multi-gesture scenario is presented in Fig.
9.We have presented eight ROC curves for comparison.For the multi-gesture case, the proposed 3D CNN-BiLSTM approach achieves an AUC of 0.998 and in the multi-gesture with the low-illumination scenario, the proposed approach yields an AUC of 0.964.The 2D imaging-based CNN-BiLSTM and 3D integral imaging-based STIP-SVM approaches give AUC values of 0.958, and 0.426 respectively, for the multi-gesture scenario, and 0.705, and 0.519, respectively, for the multi-gesture data with simulated low-illumination conditions.The non-linear correlation approach produces an AUC of 0.467 and 0.444 for multi-gesture scenarios in high illumination and low illumination, respectively.As in the case of occlusion, the effect of degradation on the multi-gestures case has been analyzed by the percentage reduction in AUC due to degradation.The percentage reduction in AUC for the proposed approach is 3.41%, while the percentage change for 2D imaging and STIP-based SVM technique is 26.41% and 21.83%, respectively.For the case of distortion invariant non-linear correlation approach, the percentage change in AUC is 4.93%. In the multi-gesture scenario, we have only considered experimentally the cases wherein the false gesture takes place either in front of or behind the gesture of interest.However, the approach may also be effective when both gestures occur at the same plane, especially in degraded conditions.The integral imaging reconstruction acts as a depth-based filter and removes noise from the depths outside the depth of reconstruction.This depth-sectioning may improve performance over conventional 2D imaging strategies regardless of the depth of the false gesture.In particular, under degraded environments such as low-illumination or partial occlusion, the 3D integral imaging helps to mitigate the effects of degradation.Thus, the proposed approach provides better performance as compared to the 2D imaging-based CNN-BiLSTM and the other approaches used for comparison. The STIPs-based feature extraction is defined based on the variations of local gradients in spatial and temporal dimensions.Thus, the STIPs are suitable for detecting moving objects in a video, including for human gesture recognition tasks [8].In our experiments, this approach achieves better performance for the single gesture case, as shown in Fig. 8 compared to the multi-gesture scenario, as shown in Fig. 9.In low light illumination conditions, the number of STIPs detected will be greatly reduced.Thus, the performance decreases in low illumination for the single gesture scenario, as shown in Fig. 
8.Under the multi-gesture scenario, even though the integral imaging helps to mitigate the effects of the false gesture to a certain extent, there will still be local gradient variations due to the false gesture, especially when it is moving.The STIPs detected from the false gesture may still be prominent, especially in cases where the false gesture is present in front of the gesture of interest.Thus, the false classification may increase when the STIPs are detected from false gesture, in addition to the STIPs detected from the gesture of interest.In low light illumination, the number of STIPs detected will be much less, so the recognition performance is similar to that of a random classifier.As shown in Table 1, the STIP-based SVM recognition methods did not achieve an accuracy above 55.00% or an AUC above 0.572 in any case.These results suggest that the STIP-based SVM recognition method may not be an effective gesture recognition strategy for the experimental conditions considered in this work. From Fig. 8 and Fig. 9, our experimental comparison shows that while the previous works demonstrated that the correlation-based approach outperformed the STIP-based recognition method, under more challenging experimental conditions including occlusion and low light illumination, both STIP-based and correlation-based gesture recognition strategies may be less effective than the proposed CNN-BiLSTM approach.The correlation-based classification is particularly useful for narrower tasks when limited datasets are available for classification, and the training and testing datasets are more consistent with each other.However, the correlation filters may have limitations [4], and their performance decreases when the intra-class variations are high, and the testing and training data are not consistent.The CNN-based feature extraction technique consist of a series of multi-scale non-linear filters, which increases its generalization capability as compared to a single trained correlation filter and provides a more robust strategy for gesture recognition, especially in challenging conditions.In our experiments, we have considered multiple backgrounds with fast and slow gesture movements as well as multi-gesture scenarios, which increase the intra-class variability in the training and testing datasets, thus degrading the performance of the non-linear correlation-based method proposed in [7]. The detection performance of the proposed approach has been compared using several performance metrics, which are summarized in Table 1.In addition to accuracy, we have also considered the F1 score and Mathew's correlation coefficient (MCC) for better evaluation of the performance of the proposed approach [38,39].The F1 score represents the harmonic mean between precision and recall.Its value varies between 0 and 1, with 1 indicating perfect classification performance.The MCC indicates the correlation between the observed and predicted classification, and its value ranges between -1 to 1.An MCC value of 1 indicates a perfect classifier, and -1 indicates complete disagreement between true and predicted classes.The results have been summarized in Table 1. 
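The evaluation metrics named here are standard; as a brief, generic illustration (not tied to the paper's data), they can be computed from predicted scores and labels as follows, e.g., with scikit-learn.

```python
# Minimal sketch: AUC, F1 score and Matthews correlation coefficient on toy predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, matthews_corrcoef

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # ground-truth gesture labels
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])   # classifier scores
y_pred = (y_score >= 0.5).astype(int)                           # thresholded class decisions

print("AUC:", roc_auc_score(y_true, y_score))    # area under the ROC curve
print("F1 :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
print("MCC:", matthews_corrcoef(y_true, y_pred)) # ranges from -1 to 1
```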
From Table 1, we observe that in our experiments, the proposed system has a better performance in terms of all the metrics considered. In addition, the percentage reduction in AUC due to the effect of low illumination for the various conditions considered in our experiments is significantly lower for the proposed approach as compared to the 2D imaging-based approach. In the case of the occlusion and multi-gesture scenarios, for all the metrics considered (as shown in Table 1), the proposed approach under low illumination performs better than the 2D approach and the other methodologies under both high and low illumination conditions, which shows the effectiveness of the proposed approach as compared to the other methodologies considered. Thus, the proposed 3D integral imaging-based deep learning approach is promising for gesture recognition, especially under degraded environments. Conclusion In this paper, we have presented a system for human gesture classification using integral imaging and deep learning. Our experimental results show that using the proposed 3D integral imaging-based deep learning approach improves the performance under degraded environments such as occlusion and low light in comparison to the conventional 2D imaging approach and the other methodologies considered. In addition, the percent reduction in AUC for the 3D imaging-based CNN-BiLSTM network due to degradation (simulated low illumination) is significantly lower than that of the 2D imaging-based approach. The proposed approach could be further extended for detecting human activities, sentiment analysis, etc., especially under more challenging conditions. Future work may consider other integral imaging approaches [40], comparison with time-of-flight sensing, and increased scene complexity. Fig. 4. Figures showing (a) the 3 × 3 camera array for the integral imaging capture stage used in our experiments, (b) the two different gesture motions considered in this paper, and (c) a single gesture with the different scene backgrounds used in our experiments. Fig. 6. Scene for the multiple-gesture experiments. Three different locations for the background gesture (false class) were considered in our experiments: (a) behind and to the left of the true-class gesture of interest, (b) behind and to the right of the true-class gesture of interest, and (c) in front and to the left of the true-class gesture of interest. Table 1. Comparison of gesture recognition performance under various experimental conditions. (a) Abbreviations: CNN - convolutional neural network, STIP - spatio-temporal interest points, SVM - support vector machine, AUC - area under the ROC curve, MCC - Matthews correlation coefficient.
6,998.6
2020-06-19T00:00:00.000
[ "Computer Science" ]
Quantum dot on a plasma-facing microparticle surface: Requirement on thermal contact for optical charge measurement Semiconductor nanocrystals, quantum dots (QDs), are known to exhibit the quantum-confined Stark effect which reveals itself in the shift of their photoluminescence spectra in response to the external electric field. It was, therefore, proposed to use QDs deposited on the microparticle surface for the optical measurement of the charge acquired by the microparticles in low-temperature plasmas. Another physical process leading to the shift of the photoluminescence spectra of the QDs is heating. Charging of plasma-facing surfaces is always accompanied by their heating. Thermal balance of a quantum dot residing on the surface of a microparticle immersed in a plasma is considered in this work. It is shown that under periodically pulsed plasma conditions, the spectral shift of the photoluminescence of the quantum dot caused by the oscillations of its temperature becomes undetectable if the effective thermal flux characterizing the thermal contact between the quantum dot and the microparticle exceeds a value ranging from 10^8 to 10^10 s^-1 depending on the plasma parameters. If this effective thermal flux exceeds the above-mentioned values, the entire spectral shift observed during the period of plasma pulsing should be attributed to the quantum-confined Stark effect due to the microparticle charge. A lower-boundary estimate for the effective thermal flux for the direct contact between the quantum dot and the microparticle is ∼10^12 s^-1. Estimations based on the heat conductivity of ligand-stabilized nanocrystal arrays yield even higher values of the effective thermal flux, ∼10^13 s^-1. INTRODUCTION environment, [6] in the usage of microparticles as small optically manipulated plasma probes [7] as well as in the investigations of triboelectric charging. [8][9][10] Traditionally, the charge of dust particles is measured using dynamical methods [11][12][13][14][15][16] which have known disadvantages such as the necessity for (often not easily verifiable) assumptions on the forces acting on dust particles and limited spatiotemporal resolution. Optical methods of dust particle charge measurement are therefore of great interest. Significant theoretical efforts have been undertaken to explore the possibilities of optical detection of the dust charge. The surplus electrons modify the dielectric permittivity of the dust particle material or the surface conductivity of the microparticle and therefore affect some of the spectral features of light scattering. Possibilities of using the transverse optical phonon resonance [17,18] were considered. Very recently, the possibility of the measurement of charges of silica nanoparticles in plasmas using Fourier-transform infrared spectroscopy of this resonance was experimentally demonstrated. [19] In addition, very recently, it was shown that semiconductor nanocrystals, quantum dots (QDs), [20][21][22] deposited on a large flat plasma-facing surface are sensitive to the charge on this surface [23] due to the quantum-confined Stark effect [24]: the spectrum of their photoluminescence experiences a red shift in reaction to the local electric field. It was therefore proposed [25] to design a charge microsensor by attaching the QDs to the surface of a micrometre-sized spherical particle.
The calculations showed that under typical conditions of dusty plasma experiments, such a microsensor can exhibit Stark shifts of the order of fractions of a nanometre. The advantage of using the QDs compared to excitonic resonance is that the measurement can be performed in the visible spectral range. Experiments [23] have shown that exposure of QDs to the fluxes of charged particles is connected not only with the surplus-charge-induced Stark shift, but also with the thermal shift of the photoluminescence spectrum. [26][27][28] Charged particles bring their kinetic as well as potential energy to the plasma-facing surface, which is cooled by neutral gas as well as by thermal radiation. [29][30][31][32] Heating is, therefore, along with charging, an unavoidable consequence of the exposure of the surface to any ionized medium. The spectral shifts caused by these two phenomena have to be therefore distinguished. Charging of a plasma-facing surface usually occurs much faster than heating. On this basis, the experimentally observed 'fast' red shift [23] was attributed to the Stark effect. In accord with that, periodic pulsing of the plasma (on the timescales between those of charging and heating) was suggested [25] to distinguish between the thermal and electrostatic effects on the QD photoluminescence spectrum. The weak point of this approach is that this is, in fact, not the surface temperature, but rather the temperature of the QDs which determines the thermal spectral shift. QDs on the plasma-facing surface would not necessarily acquire the same equilibrium temperature as the substrate surface itself. In addition, their thermal inertia is much less than that of the substrate they are sitting on. These two issues raise the question about the thermal contact between the QD and the surface it is attached to: Is it sufficient to suppose the QD being thermally bound to the surface? This is the question we are addressing in the present paper. Analysing the thermal balance of the QD residing at the surface of a micrometre-sized particle, we arrive to the quantitative requirements for the thermal contact between the QD and the microparticle. We compare the obtained results with the estimations of the thermal contact between QD and the microparticle in case of the QD residing on the microparticle surface without any chemical bonding and in case of the QD attached to the microparticle surface with the help of chemical ligands. MODEL We model the thermal balance for both the microparticle and the QD residing on a microparticle surface considering the heat fluxes from the plasma to their surfaces and vice versa as well as heat exchange between them. The fluxes included into the model are schematically shown in Figure 1. Along with earlier works, [29,30,33] we suppose that energy is brought to plasma-facing surfaces by charged species and is taken away by neutral gas and thermal radiation. If the plasma is periodically pulsed, it periodically goes through reignition and afterglow phases, which are very hard to properly model. Afterglow is quite a long process. [34] Even milliseconds after switching off the electrical power, significant densities of the charged species remain in the reactor volume. Therefore, in a real afterglow, the microparticle and QD surfaces will continue collecting heat. On the other hand, after the plasma reignition, the densities of the charged species and, consequently, the heat fluxes to the microparticle and QD surfaces also do not immediately reach the steady-state values. 
Exact calculation of the heat balance of a microparticle and of a QD residing on the surface of a microparticle under these transient conditions is impossible. To simplify the situation, we suppose that (a) the charged species in the plasma acquire their steady-state densities immediately after the plasma electrical power is switched on and (b) the density of the charged species in the plasma drops to zero immediately after the plasma electrical power is switched off. Since the plasma pulsing period should be much longer than the microparticle charging time [25], we additionally assume that (c) the microparticle acquires the steady-state electrostatic potential immediately after the plasma electrical power is switched on and (d) discharges to zero electrostatic potential immediately after the plasma electrical power is switched off. Then, the heat fluxes from the plasma to the surfaces will also bounce between their steady-state values in the reignition and zero in the afterglow. This simplification will increase the thermal contrast between the reignition and the afterglow and, therefore, strengthen the requirement on the thermal contact. The frequency of plasma pulsing should be selected in such a way that the oscillation of the temperature of the microparticle is negligibly small. Then, we are only interested in the variations of the QD temperature, which will, in the worst case, oscillate between the temperature of the microparticle (in the afterglow) and the QD temperature in the steady-state plasma (in the reignition). The thermal balance equations for the former and the latter temperatures, Equations (1) and (2), respectively, are written in terms of the following quantities: the duty cycle of plasma pulsing; J^MP,QD_e,i,g, the electron, ion and neutral gas fluxes to the microparticle and QD surfaces, respectively; T_e,g,MP,QD, the electron, neutral gas, microparticle and QD temperatures, respectively; a_MP,QD, the radii of the microparticle and QD, respectively; ε_MP,QD, the thermal emissivities of the microparticle and QD, respectively; φ_MP,QD, the electrostatic potentials of the microparticle and QD, respectively; α_MP,QD, the coefficients of thermal accommodation of the neutral gas atoms on the microparticle and QD surfaces, respectively; E_ion, the ionization potential of the plasma gas atoms; and J_QDMP, the effective heat flux characterizing the thermal contact between the microparticle and the QD, which we will for simplicity term the 'thermal contact flux'. First, we consider the isotropic fluxes to the microparticle surface. For the fluxes of charged particles, we follow the approach in which the orbit-motion-limited current is implied for electrons, whereas for ions, the collisions with neutrals in the vicinity of the microparticle have to be considered. [35] The electron flux is then expressed by Equation (3), where v_eth = √(8k_B·T_e/(π·m_e)) is the electron thermal velocity and φ_MP is the absolute value of the microparticle surface potential (which is normally negative). The expression for the flux of cold (ion temperature equal to T_g) ions is given by Equation (4), where R denotes the radius of a sphere around the microparticle, inside which the absolute value of the electrostatic potential is below T_g, v_ith = √(8k_B·T_g/(π·M)) (M is the ion mass) is the ion thermal velocity and l_i = k_B·T_g/(p·σ_ig) (σ_ig is the cross-section of ion-neutral collisions) is the ion mean free path. In the case of a Yukawa potential around the microparticle, R should satisfy Equation (5), where λ_D = √(ε_0·k_B·T_g/(n·e²)) is the Debye length.
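As a quick numerical illustration of the characteristic quantities entering these flux expressions (a sketch with assumed argon parameters, not values taken from the paper), the snippet below evaluates the electron and ion thermal velocities, the ion mean free path and the Debye length; the cross-section, temperatures, density and pressure used here are illustrative assumptions.

```python
# Minimal sketch: characteristic plasma quantities used in the flux expressions,
# evaluated for assumed illustrative argon parameters (not values from the paper).
import numpy as np

k_B = 1.380649e-23        # J/K
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m
m_e = 9.1093837015e-31    # kg
M_Ar = 39.948 * 1.66053906660e-27   # argon ion mass, kg

T_e = 3.0 * e / k_B       # assumed electron temperature of 3 eV, converted to K
T_g = 300.0               # assumed gas temperature, K
n = 1e15                  # plasma density, m^-3 (middle of the considered range)
p = 4.0                   # gas pressure, Pa
sigma_ig = 1e-18          # assumed ion-neutral collision cross-section, m^2

v_eth = np.sqrt(8 * k_B * T_e / (np.pi * m_e))      # electron thermal velocity
v_ith = np.sqrt(8 * k_B * T_g / (np.pi * M_Ar))     # ion thermal velocity
l_i = k_B * T_g / (p * sigma_ig)                    # ion mean free path
lambda_D = np.sqrt(eps0 * k_B * T_g / (n * e**2))   # Debye length

print(f"v_eth = {v_eth:.3g} m/s, v_ith = {v_ith:.3g} m/s")
print(f"l_i = {l_i:.3g} m, lambda_D = {lambda_D:.3g} m")
```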
For the neutral flux onto the microparticle, we use the kinetic expression [30] J_g^MP = πa_MP² (p/(k_B T_g)) v_gth, where p is the neutral gas pressure and v_gth ≡ v_ith is the thermal velocity of the neutral atoms. The microparticle potential φ_MP is determined by the balance of the electron and ion fluxes, J_e^MP = J_i^MP. The charge is then connected to the potential by the expression Z_MP e = 4πε_0 a_MP φ_MP. It was shown [25] that the typical distance between the electrons on the microparticle surface, l = √(4πa_MP²/Z_MP), significantly exceeds a_QD. Therefore, we can assume that the QD will most of the time stay uncharged and that φ_QD = φ_MP. In addition, for the microparticle, the fluxes of the plasma species are collected isotropically over its entire surface area, which will not be true for the QD. The respective QD fluxes then involve the fractions γ_e,i,g of the total QD surface area over which the electron, ion and neutral fluxes are respectively collected. Obviously, at arbitrary "surface" coefficients α_MP,QD, ε_MP,QD and γ_e,i,g, the temperatures T_MP,QD will be different. Our goal is to find the value of J_QDMP for which |T_QD − T_MP| < ΔT, where ΔT is the temperature difference corresponding to the spectral shift detection threshold, which was estimated as 0.02 nm. The thermal red shift of the CdSe QD photoluminescence is ∼0.1 nm K⁻¹, [26][27][28] which corresponds to ΔT = 0.2 K. Since the surface coefficients are, in general, unknown, we cannot directly solve Equations (1) and (2). We therefore undertake further simplifications. We note that usually E_ion > k_B T_e. The ions will therefore be the main heat carriers in our system. To significantly decrease the temperature difference between the microparticle and the QD, the heat exchange between these bodies has to dominate over the other cooling mechanisms. Therefore, if J_i^QD E_ion ≲ J_QDMP k_B ΔT, the temperature difference between the microparticle and the QD will be below the required threshold. This reduces our uncertainty to only one coefficient, γ_i, which can be constrained to lie between 1/4 and unity. Since γ_i cannot be very small, we can assume J_i^QD ∼ J_i^MP (a_QD/a_MP)², which allows us to define the threshold thermal contact flux J_QDMP^thr = J_i^MP (a_QD/a_MP)² E_ion/(k_B ΔT) (Equation (8)). RESULTS Calculations are performed for the parameters listed in Table 1. The microparticle size and electron temperature were taken from a previously published work [15] (these values were also used in our previous work [25] ), where the microparticle charge was measured using the phonon spectra of a 2D plasma crystal. We note that in that case, the microparticle levitated in the sheath of an rf discharge and therefore had about a factor of three larger charge than that given by the condition J_i^MP = J_e^MP. In addition, to comply with our previous work, [25] we took a_QD = 3.3 nm. The plasma density n was varied between 10¹⁴ and 10¹⁶ m⁻³, whereas the gas pressure p was varied from 0.4 to 40 Pa. This range of experimental conditions covers almost the entire parameter space available in the capacitively coupled rf discharge, which is often used in dusty plasma experiments. (Figure caption: The threshold thermal contact flux calculated using Equation (8). For an effective thermal contact flux J_QDMP ≳ J_QDMP^thr, the temperature difference between the microparticle and the QD during plasma pulsing always lies below the detection threshold ΔT = 0.2 K.) Figure 2 shows the pressure and plasma density dependence of the steady-state electrostatic potential of the microparticle.
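Before turning to the parameter dependences, the order of magnitude of the threshold defined above can be illustrated with a short sketch. The closed form used here follows from the condition stated in the text together with the scaling J_i^QD ∼ J_i^MP (a_QD/a_MP)²; the ion flux and microparticle radius below are assumed values, not the ones behind Figures 2 and 3.

```python
e, k_B = 1.602176634e-19, 1.380649e-23

J_i_MP = 1e9        # ion flux to the microparticle, s^-1 (assumed magnitude)
a_QD = 3.3e-9       # QD radius, m (value quoted in the text)
a_MP = 2.2e-6       # microparticle radius, m (assumed)
E_ion = 15.76 * e   # argon ionization energy, J (assumed working gas)
dT = 0.2            # detection threshold, K (value quoted in the text)

# Threshold thermal contact flux as reconstructed from the stated condition
J_thr = J_i_MP * (a_QD / a_MP)**2 * E_ion / (k_B * dT)
print(f"J_QDMP^thr ~ {J_thr:.1e} s^-1")   # ~2e9 s^-1, within the 1e8-1e10 s^-1 range quoted below
```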
Both the pressure and plasma density variations are associated with the collisional term in Equation (4): the ion mean free path decreases with pressure, causing the growth of collisionality and, consequently, the decrease of φ_MP, whereas the Debye length decreases with plasma density, causing the decrease of the ion trapping radius R and, consequently, an increase of φ_MP. In Figure 3, the plasma density and pressure dependence of the threshold thermal contact flux is shown. The main trend is the linear growth with plasma density, whereas a much weaker increasing trend with pressure, due to the increase of the collisional term in Equation (4), is also visible. In the considered parameter range, J_QDMP^thr varies between 10⁸ and 10¹⁰ s⁻¹. DISCUSSION AND CONCLUSION For simplicity, let us first assume that the thermal contact between the QD and the microparticle is provided by a pair of atoms, one of which belongs to the QD and the other to the microparticle. This approach may be considered relevant for the cases when the QDs reside on the substrate without any chemical bonding. [23] In this case, J_QDMP ∼ √(μ_1/μ_2) ν_E, where μ_1/μ_2 < 1 is the ratio of the average masses of the atoms of the microparticle and QD materials, and ν_E is the Einstein frequency of the hotter material. Supposing, in accord with previous works, [23,25] that the QD consists of CdSe, the Einstein temperature T_E is of the order of 100 K. [36] The frequency ν_E = k_B T_E/h, and J_QDMP ∼ 10¹² s⁻¹, which significantly exceeds the J_QDMP^thr values in Figure 3. Measurements of the heat conductivity of CdSe nanocrystal arrays stabilized by oleic acid [37] (one of the ligands used to attach CdSe quantum dots to silica surfaces [38] ) yield κ ≈ 0.2 W K⁻¹ m⁻¹. The thermal contact flux is then J_QDMP ∼ κh_l/k_B, where h_l is the ligand length. Assuming h_l ∼ 1 nm for oleic acid, we arrive at J_QDMP ∼ 10¹³ s⁻¹, which again substantially exceeds the required minimum J_QDMP^thr. We have therefore shown that the oscillations of the temperature of a QD residing on a microparticle surface exposed to pulsed plasmas are very unlikely to produce a measurable influence on the QD photoluminescence spectrum. This statement applies to a wide (two orders of magnitude) range of plasma densities and gas pressures. Other parameters, for example, the electron temperature or the microparticle size, can be varied within much narrower limits, and their effect is therefore much weaker. The possibility of measuring microparticle charges in plasmas using the quantum-confined Stark effect on QDs attached to the microparticle surfaces has already been demonstrated in our previous work. [25] In that work, we suggested using pulsed plasmas for the microparticle charge measurement to exclude the effect of heating of the microparticles in plasmas. By excluding the thermal issue in the present work, we can state that the entire red shift of the QD photoluminescence observed in pulsed plasmas should be attributed to the quantum-confined Stark effect originating from the surplus electrons residing on the microparticle. The proposed measurement technique can therefore be tested experimentally. ACKNOWLEDGMENTS The author thanks G. Klaassen, M. Hasani, Prof J. Beckers and Dr H. Thomas for careful reading of the manuscript and for helpful discussions. Open Access funding enabled and organized by Projekt DEAL.
DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
4,019.4
2022-10-11T00:00:00.000
[ "Physics" ]
What is Causal Specificity About, and What is it Good for in Philosophy of Biology? The concept of causal specificity is drawing considerable attention from philosophers of biology. It became the rationale for rejecting (and occasionally, accepting) a thesis of causal parity of developmental factors. This literature assumes that attributing specificity to causal relations is at least in principle a straightforward (if not systematic) task. However, the parity debate in philosophy of biology seems to be stuck at a point where it is not the biological details that will help move forward. In this paper, I take a step back to reexamine the very idea of causal specificity and its intended role in the parity dispute in philosophy of biology. I contend that the idea of causal specificity across variations as currently discussed in the literature is irreducibly twofold in nature: it is about two independent components that are not mutually entailed. I show this to be the source of prior complications with the notion of specificity itself that ultimately affect the purposes for which it is often invoked, notably to settle the parity dispute. Introduction Traditionally, the philosophy of causation has put the focus on the project of distinguishing causes from non-causes by clarifying what it is to be a cause. Recently, the complementary project of distinguishing among causes is receiving considerable attention as well (Woodward 2010). This latter task makes use of additional causal concepts that characterize differences in causal contributions, enabling comparisons among them. This is an important philosophical project for clear reasons. Often, one is not exclusively concerned with detecting causal relations, but with the characteristic features of particular causal contributions. Moreover, we often acknowledge a given effect to have multiple causes, so the question arises as to how they compare in terms of diverse characteristics. A theory of causality that merely tells us how to distinguish causes from non-causes would not help us answer that question. Analyses of the respects in which causal relations differ can be done on the basis of several properties (e.g. strength, proportionality, stability), but I will be concerned with one that has inspired interesting disputes in the philosophy of biology, namely causal specificity. The notion of causal specificity has been spelled out in a number of ways; for Waters (2007), for instance, specificity is about the possibility that many different changes in the cause would produce many different changes in the effect. Philosophers of biology have invoked causal specificity to make the case that genetic and nongenetic developmental factors are not causally on a par, in other words, to argue against the causal parity thesis. The general rationale of specificity-based arguments is that it does not follow from the fact that there are multiple causes for a given effect that they are the same, ontologically speaking (i.e. qua causes). Thus, opponents of causal parity have invoked the concept of causal specificity to make the case that DNA is ontologically different. However, it has been replied that non-genetic factors can be highly specific as well. Regardless of the stance taken with respect to the causal parity of developmental factors, there is a consensus view that specificity captures an interesting and distinct feature of causal relations. In this context, two things seem clear to me.
First, the debate around causal parity in philosophy of biology is a bit stuck and, at this point, it is not more biological details that will help fix the situation (as, in fact, opposing conclusions have been drawn from biological details). Second, philosophers seem to assume that attributing specificity to causal relations is at least in principle a straightforward (if not systematic) task, and that causal specificity is a perfectly coherent, intuitive, well understood notion. In this paper, I take a step back to reassess the value and limitations of the notion of specificity for the dispute. This is not carried out on the basis of further analysis of the much-discussed cases of DNA vs. the polymerase, or vs. alternative splicing. Rather, I point to prior difficulties with the notion of specificity itself and show that they ultimately affect the purposes for which specificity is often invoked, notably to settle the parity dispute. I argue that causal specificity, in all available accounts, is really about two distinct components, connectedness and repertoire, which, crucially, are not mutually entailed. Furthermore, I identify this dual nature as the source of the difficulties for attributions of causal specificity to any given causal relation. While my analysis has direct consequences for a particular debate in philosophy of biology that are made explicit in the paper, it will be of broader interest in the philosophy of causation. The paper is structured as follows. In Sect. 2, I present the causal parity thesis in philosophy of biology, clarifying the principles upon which it rests. I then outline the basic logic of the specificity arguments against causal parity, illustrating how specificity works there. In Sect. 3, I review the ways in which causal specificity has been spelled out in the literature. Next, I address the question as to whether specificity should be viewed as a quantitative or as a categorical property, and recommend the former option following some of the suggestions in the literature (Sect. 4). In Sect. 5, I raise a paradoxical reading of the so-called "switch-like" causal structures in order to reveal the dual nature of causal specificity and clarify its components and their mutual independence. This is identified as the source of the difficulties explained in Sect. 6 concerning the specific/nonspecific distinction and how to attribute causal specificity. The consequences of such difficulties for the parity dispute in philosophy of biology (the success of specificity-based arguments), and ways to deal with those difficulties, are examined in Sect. 7. Lastly, some conclusions are drawn in Sect. 8. Causal Parity and the Specificity Argument Parity claims in biology are mainly attributed to Developmental Systems Theory, a challenge to traditional dichotomous, gene-centric views of development and evolution (for a good overview, see Oyama et al. 2001). According to this tradition, biological phenomena can be accommodated along a series of dichotomous categories: nature/nurture, inherited/acquired, genetic/environmental; where the latter is spelled out in terms of a few others: information carriers/material support, replicators/interactors, controllers/controlled, instructive/permissive, etc. Developmental Systems Theory is an attempt to move beyond the traditional dichotomous view on the grounds that it fails to account for the complexity of biological phenomena. The talk of parity claims as such is not so widespread.
The literature typically mentions "causal parity" or "the causal parity thesis", and often fails to distinguish among sorts of parity (an exception is Stegmann 2012). At least three senses of parity are distinct enough to call for fairly independent analyses: causal, informational, and methodological (Ferreira Ruiz 2019). Here, we are only concerned with causal parity. Advocates of causal parity (sometimes also called "symmetry") reject any principled, metaphysical, objective distinction between genetic and nongenetic causes (e.g. Oyama 1985; Griffiths and Gray 1994; Griffiths and Knight 1998; Griffiths and Stotz 2013). Causal parity claims are motivated by the causal complexity of biological phenomena and emphasize that such phenomena are typically brought about by a multiplicity of (interacting) causes. Taking RNA synthesis as an example, causal parity (minimally) highlights the fact that DNA is not causally sufficient to bring about the synthesis of RNA molecules; rather, a particular cellular milieu and a variety of proteins and other molecules are needed for synthesis to take place. Typically, parity advocates commit to the following tenets:
T1: Multiplicity of causal factors: Biological phenomena are attributable to an assembly of multiple causal factors.
T2: Insufficiency of single causes: No developmental factor alone is sufficient for the effect.
T3: Metaphysical homogeneity of causal roles: However causal roles may differ, they do not differ metaphysically.
T1 and T2 are thoroughly addressed in (Oyama 1985). T3 is especially articulated in an attempt to counter objections that the idea of parity obscures valuable distinctions: "The 'strawman' parody of developmentalism says that all developmental causes are of equal importance. The real developmentalist position is that the empirical differences between the role of DNA and that of cytoplasmic gradients or host-imprinting events do not justify the metaphysical distinctions currently built upon them" (Griffiths and Knight 1998, p. 254; see also Griffiths and Gray 1994, p. 277). At first glance, it is unclear what such "metaphysical distinctions" would amount to, but we get a better grasp of the causal parity thesis in the light of its criticisms, which invoke causal specificity. Thus, let us now consider the rationale of anti-parity arguments. (For ease of exposition, the meaning of specificity here will be bracketed until the next section). Causal specificity appears in the causal parity dispute in biology as a result of the way this thesis has been interpreted by its main critics. While virtually everyone would accept T1-T2 above (as do critics of causal parity), T3, on the other hand, is contentious. Most replies to parity interpret T3 in connection with the problem of causal selection. This longstanding problem concerns the grounds for singling out one or a few factors as 'the cause(s)' of an effect while relegating others to 'background conditions' (Broadbent 2008; Franklin-Hall 2015; Ross 2018). Provided that causal explanation is typically selective in this sense, both in everyday causal judgement and in scientific practice (as we never cite every factor relevant to an effect), the question arises as to whether this practice of selecting causes responds to objective or only to pragmatic, interest-laden criteria. Here, some take it that the standard view in philosophy is that which can be traced back to John Stuart Mill (Mill 1974 [1843], book III, ch. V).
According to Mill, the way we select causes from among various conditions is interest-laden (p. 329). Thus, T3 would express a Millean view that the only objective distinction is between causes and noncauses. Critics of causal parity reject such a Millean view, and contend that it does not follow from the fact that there be multiple causes that they be ontologically the same ("the fallacy of parity arguments" in Waters 2007). Opponents of the causal parity thesis have invoked the concept of causal specificity to make the case that DNA is ontologically different. This means that the customary singling out of genetic factors does not merely follow from the needs and interests of particular investigations, but it (in addition) reflects some objective aspect of the world. Once an effect has been specified, the question about its causes is an ontological one. Similarly, once the causes of a given effect have been identified, the issue of their characteristics (here, whether they bear a specificity relation to the effect) is, too, an ontological one. Specificity-based arguments against the causal parity thesis posit an (objective) difference that is either categorical, that is, whereby some factors (DNA, and perhaps some others) are causally specific with respect to (for example) RNA synthesis but others are not, in a clear-cut way; or settle for a quantitative type, such that DNA is causally more specific than other types of factors. Having presented the wider context of the discussions, we can now turn to the available formulations of specificity. Specificity, a Few Ways First, some working distinctions are in order. Indeed, because specificity is a key and pervasive concept in biology, we need to avoid easy misunderstandings. Very roughly, we can identify a biological sense and a philosophical sense of specificity. In the first case, we are talking about specificity as used by biologists to refer to distinct biological phenomena. Even when the biological uses of specificity can, as it happens, be further analyzed philosophically, we can conceive of them as distinct, at least in the sense in which an explicandum and its explicatum are distinct (Carnap 1950). In the second case, we are talking about causal concepts, that is, concepts for features of causal relations that are the object of philosophical enquiry. This is the sense in which specificity features in the causal parity debate, and it has to do, in a nutshell, with the possibility that many different changes in the cause lead to many different changes in the effect. When causal relations have this property, they can be exploited in various ways, allowing a fine-grained control of what happens with the effect (this is why it is also referred to as "fine-grained influence"). In addressing the problem of causal selection, Kenneth Waters (2007) proposes that practices of causal selection in biology are often guided by the identification of the cause that actually made a difference, relative to a given population. However, he acknowledges that this alone fails to account for biologists' singling out of DNA in the context of protein synthesis, because factors other than DNA that are not similarly selected constitute actual difference makers too. The case requires, then, noting that not all actual difference makers are equal: some are causally specific.
By this, he means that different changes in the sequence of nucleotides in DNA would change the linear sequence in RNA molecules in many different and very specific ways. Generalizing, Specificity (Waters) Many different changes in the cause produce many different changes in the effect. This is contrasted with the role of the RNA polymerase, as another salient factor that participates in the same process and contributes to the same effect (RNA synthesis) as DNA. Waters contends that the polymerase lacks this specificity because it is not the case that many different changes in this enzyme will lead to many different changes in the RNA product. Rather, such changes would slow down or stop the RNA synthesis altogether. Waters acknowledges that the idea resembles David Lewis' notion of influence. In Lewis' (2000) view, roughly, influence is a matter of there being "not too distant" alterations of the cause and the effect, which are connected counterfactually: Specificity (Lewis) C influences E iff there is a substantial range C1, C2,… of not too distant alterations of C, and a range E1, E2… of alterations of E, at least some of which differ, such that if C1 had occurred, E1 would have occurred, and if C2 had occurred, E2 would have occurred, and so on. A fundamental difference between the two concepts here is that Waters takes specificity to characterize certain causal relations, while Lewis' influence was aimed at characterizing causal relations simpliciter (and fails to do so according to a broad consensus in the philosophy of causality literature, see e.g. Schaffer 2001). A different definition is provided by James Woodward (2010) within the interventionist framework. He claims to clarify Waters' notion of specificity by drawing on Lewis' notion of influence. The combination of both within the interventionist framework and terminology results in an idea of specificity as fine-grained control, which is presented intuitively with a radio analogy that compares its on/off switch to its tuning dial. There are many possible positions for the dial, many possible radio stations, and a relation holding between both that enables a fine-grained control over what is heard on the radio. I can intervene on the dial in many ways so as to tune various different stations. By contrast, the on/off switch, while causally relevant to whether a station is received, it has little influence on which one is received. One cannot intervene on the on/off button in order to tune various different stations, but only to turn the radio either on or off. Woodward defines specificity as follows (slightly simplified): Specificity (Woodward) There are a number of different possible states of C (c1… cn), a number of different possible states of E (e1… em) and a mapping F from C to E such that for many states of C each such state has a unique image under F in E (that is, F is a function or close to it, so that the same state of C is not associated with different states of E, either on the same or different occasions), not too many different states of C are mapped onto the same state of E and most states of E are the image under F of some state of C. F should describe patterns of interventionist counterfactual dependency. This definition is meant to allow for a better characterization of biological causes and, thus, a better comparison among them (Weber 2017a contains the most elaborated argument based on the notion above). 
Claims that DNA is a highly specific cause (or a dial-like cause) of protein synthesis mean that: "there are many possible states of the DNA sequence and many (although not all) variations in this sequence are systematically associated with different possible corresponding states of the linear sequences of the mRNA molecules. (…) Thus, varying the DNA sequence provides for a kind of fine-grained and specific control over which RNA molecules or proteins are synthesized" (p. 306). Claims that the polymerase is not specific mean that interventions on it do not provide fine-grained control. The role of the RNA polymerase is, by contrast, switch-like. In the case of maximal specificity ('ideal'), the mapping is a 1-1 function (every value of the effect variable maps onto one value of the cause variable, and vice versa). Woodward notes that this might not be the rule in real-life cases, as they are typically not bijective. In such cases, he contends, we have more specificity the closer the mapping gets to a bijection (i.e., a one-to-one mapping). This is an explicit proposal that specificity is a property that comes in degrees (but Weber 2006 first suggested that this could be the next step). Waters' treatment of DNA and nongenetic causes seems to attribute either specificity or lack of specificity, and it can be odd to regard Lewis' influence as admitting of degrees, if the notion characterizes causation simpliciter (perhaps leading to the contentious idea of graded causation). In any case, Woodward's quantitative notion was taken seriously enough to motivate a measure of causal specificity. In fact, Paul Griffiths et al. (2015) put forward a measure of causal specificity (in the sense of fine-grained control above) based on the formalism of Shannon's mathematical theory of information (1948). Their motivation is that the previous literature on causal specificity is mostly "qualitative", and that the notion had not been made "adequately precise". Thus, they claim, "a merely intuitive approach to causal specificity is unlikely to be helpful in settling disputes like this" (p. 531). The idea is that we can measure how much knowing the value set by an intervention on a causal variable reduces our uncertainty about the value of an effect variable. To know this, one needs to compare the entropy (or information) of the probability distribution of the values of the effect variable before and after intervention on the cause. The greater the difference in these entropies, the more uncertainty is reduced by intervening on the cause, and the more specific the relation is. Specificity (measure) The specificity of a causal variable is obtained by measuring how much mutual information interventions on that causal variable carry about the effect variable. Three quantities are key in this framework: the entropy of the probability distribution of the effect values, H(E); the conditional entropy of E having set the value of the cause by intervention, denoted with a hat, H(E|Ĉ); and the mutual information between cause and effect having intervened on the cause, which obtains as H(E) − H(E|Ĉ). In this way, causal specificity becomes identified with mutual information between interventions on the cause and effect: it tells us how much an intervention on a cause specifies an effect. This measure is used to show that alternative splicing factors bear a degree of specificity comparable to that of DNA with respect to RNA sequence.
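To make the measure concrete, the following minimal sketch in Python computes H(E) − H(E|Ĉ) for a two-value switch and a four-value dial; the helper name spec and the uniform distribution over intervened cause values are illustrative choices of ours, not part of the published definition.

```python
import numpy as np

def spec(p_c, p_e_given_c):
    # Mutual information I(C-hat; E) = H(E) - H(E|C-hat), in bits.
    # p_c: distribution over intervened values of the cause variable.
    # p_e_given_c[i][j]: probability of effect value j given cause value i.
    p_c = np.asarray(p_c, dtype=float)
    p_e_given_c = np.asarray(p_e_given_c, dtype=float)
    p_e = p_c @ p_e_given_c                                  # marginal distribution of the effect
    h = lambda p: -sum(x * np.log2(x) for x in p if x > 0)   # Shannon entropy in bits
    return h(p_e) - sum(p_c[i] * h(p_e_given_c[i]) for i in range(len(p_c)))

# A two-value 'switch' versus a four-value 'dial', both bijective,
# with interventions spread uniformly over the cause values.
print(spec([0.5, 0.5], [[1, 0], [0, 1]]))   # 1.0 bit
print(spec([0.25] * 4, np.eye(4)))          # 2.0 bits: the dial comes out as more specific
```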
Notably, the same measure has been used to show (without denying the specificity of alternative splicing factors) that DNA still scores higher in specificity in this sense (Weber 2017b). While the characterization of causal specificity has been refined and improved, we will see that crucial issues remain and have been overlooked. Having depicted the rationale of the specificity argument and reviewed formulations of specificity, a question arises as to what sort of distinctions the parity and anti-parity stances need or posit. This will prove relevant to the remainder of the paper, for reasons that will become clearer in the next section. What Kind of Distinctions does Specificity Enable? When we consider the question as to the sort of distinctions that are at play here (and as mentioned in passing in Sect. 3), two options come to mind: either specificity licenses categorical distinctions, or it enables distinctions in degrees. In the latter case, I will refer to specificity as a quantitative property, covering both views that yield comparative ascriptions ('A is more/less specific than B') and views that allow assigning a numerical value ('A scores 1 bit in specificity', e.g. following the measure above). This choice is generally relevant in the parity debate, as it bears on the strength of anti-parity claims. More importantly, it will prove relevant to this paper, as I will make the case that the difficulties with causal specificity that affect its use in parity arguments arise even for what seems to be the most plausible option, namely the quantitative distinctions. Thus, is the specific/nonspecific distinction meant to be quantitative or categorical? What sort of specificity-based distinction is rejected by the causal parity thesis and defended by opponents? One option is to consider the relevant distinction to be categorical, that is, where causal parity requires showing that there are no categorical differences between biological factors, and non-parity requires showing that categorical distinctions are possible. Griffiths and Gray's claim above about there being "nothing that divides the resources into two fundamental kinds" seems to go along these lines. There is no property of developmental resources on the basis of which we could draw a sharp distinction. The way Waters treats the DNA vs. polymerase situation seems to accept exactly these terms for the discussion: DNA is specific, the polymerase is not. This sounds like a categorical distinction. The radio analogy, in turn, reinforces this contrast: dials and switches are different kinds of things. However, more recent contributions to the debate (from both parties) seem to have shifted towards a quantitative point of view. Note that quantitative here should cover both the measure and a comparative concept ("more/less specific than…"). Woodward's notion of specificity (and hence specificity-based distinctions) is explicitly quantitative at least in a comparative sense: in real-life cases where the mapping will typically not be bijective, we have more specificity the closer the function gets to a bijection. (I have doubts that the radio analogy, introduced by Woodward himself, is the best way to capture this notion of degrees of specificity. I turn to these issues in Sect. 7). The information-theoretic measure of causal specificity is another straightforward case where specificity is conceived as coming in degrees.
In this case, specificity is not merely a comparative concept: we obtain numerical values for the degree of specificity. Now, what does the causal parity debate look like, from such a quantitative angle? The point of causal parity would be to emphasize that different factors exhibit the same causal property, and put less weight on the degree to which this is so. They would hold that specificity cannot elevate a factor to a special status, or single it out, either because differences in specificity are negligible (which is an empirical matter and depends on the case) or because differences in degree would not in general ground the sort of ontological claim that non-parity is supposed to be. Opponents of the causal parity thesis, on the other hand, would put the emphasis elsewhere. They would stress how various causal factors exhibit a property to varying degrees despite it being the same. Consequently, they would hold that specificity does single out certain factors, that differential degree is what matters, and that this differential degree allows for exactly the sort of ontological differences that the stance requires. The recent discussion between Weber (2017a, b) and Griffiths et al. (2015) shows exactly this dialectic. The latter propose and apply the measure hoping to show that alternative splicing factors can be nearly as specific as DNA with respect to RNA synthesis. In turn, Weber (2017b) uses the same measure to show exactly the opposite: that even if other, non-genetic factors bear a specific relation to some effect, it will never exceed the degree of specificity characteristic of DNA. The discrepancies here stem from various decisions concerning the application of the measure to a concrete, particular case. Notably, it depends on the range of variation that should be considered, and on how the DNA causal variable is construed in the comparisons. As much as these decisions are relevant, they should not give the impression that this is the extent of the problem. On the contrary, problems arise even under full agreement as concerns such decisions. Indeed, I will explain that the specific/nonspecific distinction made quantitative, while it rings more plausible than the categorical alternative, is nevertheless itself puzzling (that is, not as a consequence of particular applications to concrete cases). We can now turn to the twofold nature of causal specificity. Two Components of Causal Specificity As things stand now, there is some tacit consensus that we have a good grasp of the specific/nonspecific distinction, at least at the level of intuitions. There is also consensus that this notion captures an interesting and distinct causal feature, notably but not exclusively in biology. In fact, recent disagreements featured in the literature stem more from diverging assessments of concrete cases, and/or different ways to implement the concept/measure in those cases, than from ambiguities or issues with the notion itself. I will now articulate some issues of the latter kind, centered around the specific/nonspecific distinction. The quantitative shift in the specificity debate, as reviewed above, favors the picture of a specificity gradient, ranging from minimum to maximum specificity. This idea seems prima facie unproblematic, as quantitative properties are ubiquitous. It might even seem more plausible than a qualitative/categorical view of specificity.
So, the literature suggests a picture of a specificity gradient on which switches are to be placed at the minimum extreme. However, on closer inspection, it is far from obvious that the switch-like cases (recall: 2 values for each variable, plus a bijective function relating them) should be placed there, at the minimum extreme of the specificity gradient, rather than at the opposite one. The reason why this happens is the very nature of specificity, as we will now see. I submit that specificity, in every account, is a dual causal notion, that is, one about two components, which I will call repertoire and connectedness. Assuming the relevant relations are causal, and expressed here in as terminologically neutral a way as possible, these components impose the conditions that: REPERTOIRE, many possibilities exist on each side (cause; effect), and CONNECTEDNESS, the possibilities on both sides are connected in a particular, relevant way. The formulations here are intentionally vague ("many", "in a particular or relevant way") for the sake of terminological neutrality, but we might as well turn to the Woodwardian terminology to express these ideas. REPERTOIRE refers to the number of different values that an effect variable and a cause variable can take. Note that CONNECTEDNESS is not simply about imposing a connection simpliciter (this would be trivial, as we are already dealing with causal relations). And it is also not about particular activities, actions, or the "special causal concepts" in an Anscombian sense (e.g. 'scratch', 'push', 'burn' (Anscombe 1971)). Rather, CONNECTEDNESS is about a further condition on the causal connection, irrespective of the type of action. In Woodwardian terms, it can be expressed as a condition on the function connecting values of the cause and effect variables (in particular, imposing that the function be bijective, or similar). We can bring in the radio example for a simple illustration. Suppose the relevant effect (what I wish to have fine-grained control over) is the music for my car ride. (We must assume, for the sake of the example, that the following is a good causal description of the situation, e.g., that the level or grain of description is accepted as appropriate). Three scenarios can be compared. In a first scenario, my tuning dial can take up about 20 positions and I have about 20 music options to choose from within my reach. This is a typical situation where each position in the dial corresponds to one radio station, so we have here a one-to-one mapping between values of the cause (dial positions) and values of the effect (music options). Now compare that situation to one where my dial has only four possible positions, and I am driving around a small village where I can only pick up four stations broadcasting music. The two scenarios show the same mapping of cause and effect; the difference here lies in the REPERTOIRE of both causal relata. Consider now a third situation, where we have again a 20-position dial. Suppose that five of these allow me to tune five different radio stations, transmitting various different kinds of music, but there is one program, "The best of Genesis", that is (for some reason) broadcast on 12 different stations. A country music program is, similarly, broadcast on three. There will be 12 different positions of the dial that would lead me to listen to exactly the same Genesis songs, three positions that will give me the same country music songs, and five remaining positions with which I can tune five different programs.
In both the first and third scenarios, we have 20 possibilities for the dial. The difference between these two, then, corresponds to a difference in CONNECTEDNESS. To be fair, there has been some recent recognition that causal relations might differ, in terms of their degree of specificity, in respects involving what I call REPERTOIRE and CONNECTEDNESS (see Stegmann 2014 and Weber 2017a). On the one hand, Griffiths et al. (2015) state that "Fine-grained influence requires both that the repertoire of effects is large and that the state of the cause contains a great deal of information about the state of the effect." (p. 550). However, all that matters in this proposal is the entropy of the effect (not that of the cause). My own understanding of repertoire is not that the condition applies only to the effect but to both cause and effect. Thus, the distinction is not fully acknowledged in their work. (More on this in Sect. 6, where I discuss an issue with SPEC). On the other hand, Bourrat (2019) makes claims about two "dimensions" of specificity, but in a different sense. What he has in mind is Woodward's distinction, in footnote 5, between one-cause-one-effect specificity and fine-grained influence. Because the information-theoretic measure captures only the latter, he argues, an additional measure is needed that would capture the former. My claim is different, as I am not concerned with one-cause-one-effect specificity. I contend that it is within the idea of fine-grained specificity (and slight variations thereof) that we can find significant ambiguity. However, he also claims that causal specificity amounts to both measures. This can be read in two very different ways. If it means that both INF and one-cause-one-effect are equally legitimate causal concepts, to which he also subscribes explicitly, I see no problems (although I do find it misleading to speak of "amounting to"). If, on the other hand, he means that there really is a single notion of causal specificity that is really a combination of two ideas, then having separate measures for each would become a problem unless there is also a way of articulating the two (especially on the conceptual level). For this reason, I am inclined to the former interpretation. However, two things are missing in previous work on specificity. First, the distinction has not been directly addressed and, as a consequence, the question has not been posed whether such duality is essential to the notion, or simply a drawback of particular definitions that happen to be inadequate, but which could be amended. Secondly, previous recognition that specific relations might differ in two respects has not been followed by reflections on the problems this leads to. However evident the dual nature of specificity is (if evident at all), it has not been identified as problematic for the coherence or, at least, the operationalization of our causal property. Indeed, an important fact concerning these two components has been overlooked, namely that they are quite independent. Yet it is not difficult to observe that they are not mutually entailed:
• REPERTOIRE does not entail CONNECTEDNESS: the existence of many possibilities does not necessitate a particular type of connection (e.g. a bijective function),
• CONNECTEDNESS does not entail REPERTOIRE: the existence of a particular type of connection does not necessitate any particular number of possibilities.
The only link between the two components is that this is how, as a matter of fact, we seem to reason about causal specificity and that, to this extent, this is constitutive of the idea of causal specificity. It seems that by focusing on only one component at a time, causal specificity would remain undefined. Having identified the two (independent) components of specificity, the point I will make next is that this dual nature of causal specificity threatens the coherent attribution of specificity and raises issues that affect the purposes for which it is often invoked. Attributing Causal Specificity As we saw in Sect. 4, we can conceive of specificity arguments where specificity is either a categorical property or one that comes in degrees (either measurable or comparative). Spelling out the specific/nonspecific distinction as categorical would prove extremely difficult, and perhaps unnecessary, as we have clearer formulations of specificity that render it quantitative. Here, I will show that attributing causal specificity, and comparing degrees of specificity from a quantitative angle, is hampered by the independence of the two components of specificity (Fig. 1). We can start by taking a closer look at the switch-like cases. Case A in Fig. 2 below corresponds to what is called a switch: two values for the causal variable (up; down), two values for the effect variable (off; on), and a bijective mapping from cause to effect. Because the two components of specificity are not mutually entailed, depending on which component is given more weight, intuitions regarding the specificity we should ascribe to switch-like cases will vary (Fig. 3). Indeed, switch-like cases can be considered either minimally specific, because there are very few possibilities, or maximally specific, because the function is bijective. This seems paradoxical, and perhaps even points to causal specificity being ill-defined. If a switch is an extreme case of a dial (rather than something categorically distinct), in which sense is a switch an extreme case? Arguably, these cases rank low in REPERTOIRE and high in CONNECTEDNESS. In the specificity debate, however, A-cases are introduced as contrasting with highly specific causal relations, that is, introduced as the opposite of a dial. But this must be acknowledged to work under the assumption that what matters more, if not exclusively (for the conclusion), is that it scores low in REPERTOIRE. And the difficulty arises if we look at other types of situations, too. Consider case B: what should we make of it? We need to secure a place for A along a gradient if we are to maintain that, for example, case B is more specific than A (assuming this is an intuitive reaction). Thus, the problem is not simply that there may not be room for a sharp, categorical difference between switches and dials. If this were the extent of the issue, we could perhaps find reasons to settle for differences in degree. But my diagnosis is, the way I see it, more severe: even spelling out the difference in degree becomes, conceptually speaking, a more elusive task than prima facie thought, as we now have degrees of two independent components of specificity which can pull in different directions. At this point, one might think that a measure of causal specificity would solve (or dissolve) these worries. As a measure of causal specificity, SPEC is the most unmistakable construal of specificity as a quantitative property.
But more importantly, it is expected to serve as a simple device for establishing the exact degree of specificity of any given case, the (allegedly) least specific ones included. In fact, the measure can be applied to switch-like cases just the same. Far from any categorical difference between switches and dials, the measure yields a specificity value (a mutual information value, but where this is to be interpreted causally as specificity) in a common currency of bits, regardless of whether the cases under scrutiny were initially conceptualized as specific, nonspecific, more or less specific, dial-like, or switch-like. Yet, that the quantitative approach of SPEC solves (or dissolves) the issues proves to be a problematic claim, and the reason has to do, again, with the twofold nature of specificity. In fact, if causal specificity is measured, as proposed by Griffiths and colleagues, by comparing the entropy of the probability distribution of the values of the effect variable before an intervention on the cause, and the entropy of the probability distribution of the effect variable having performed the intervention on the cause (that is, H(E) − H(E|Ĉ)), then, as shown by Griffiths et al. (2015) themselves, the following two cases C and D become indistinguishable (and both also indistinguishable from A): they all yield a mutual information value of 1 bit, just like the switch case A. In case C we have an initial entropy of the effect of 1 bit, and because knowing the value of the cause leaves nil remaining uncertainty, the mutual information between Ĉ and E amounts to 1 bit. In the second case, D, the initial entropy of the effect is 2 bits (more values) and our uncertainty after knowing the value of the cause is not nil (either one of two values, thus 1 bit), so the mutual information value is exactly the same (1 bit). Yet, intuitively, we may want to say that they are different in important aspects. The number of possible values for the causal variable is greater in C than in A and D, while the number of possible values for the effect variable is greater in D. In addition, in A, any change in the cause leads to a change in the effect, but not every change in the cause leads to a change in the effect in C or D (e.g., the change from C = c1 to C = c2). In any case, the issue with the SPEC measure might be deeper than merely failing to account for the difference among presumably different cases. The quantity that would make the difference visible is the entropy of the source (Griffiths et al. 2015). However, this quantity plays no role in determining the specificity value. Recall the quantities relevant to SPEC from Sect. 3: the entropy of the probability distribution of the effect values, H(E); the conditional entropy of E having set the value of the cause by intervention, H(E|Ĉ); and the mutual information between cause and effect having intervened on the cause, which obtains as H(E) − H(E|Ĉ). The entropy of the source plays no role because our C here is always an intervened C, that is, set to some value c_i regardless of the possible repertoire we could have for the variable. All we need to measure specificity is the entropy of the probability distribution of the values of the effect variable before an intervention on the cause, H(E), and the entropy of the probability distribution of the effect variable having performed the intervention on the cause, H(E|Ĉ).
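The indistinguishability just described is easy to verify directly. In the sketch below, the concrete structures used for cases C and D are one possible reading of the verbal description given above (the original figure is not reproduced), and interventions are spread uniformly over the cause values.

```python
import numpy as np

def mi_bits(p_c, p_e_given_c):
    # I(C-hat; E) = H(E) - H(E|C-hat), in bits
    p_c, p_ec = np.asarray(p_c, float), np.asarray(p_e_given_c, float)
    h = lambda p: -sum(x * np.log2(x) for x in p if x > 0)
    return h(p_c @ p_ec) - sum(pc * h(row) for pc, row in zip(p_c, p_ec))

A = mi_bits([0.5, 0.5], [[1, 0], [0, 1]])                        # 2 cause values -> 2 effect values, bijective
C = mi_bits([0.25] * 4, [[1, 0], [1, 0], [0, 1], [0, 1]])        # 4 cause values collapse onto 2 effect values
D = mi_bits([0.5, 0.5], [[0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5]])    # 2 cause values, 4 effect values, 1 bit left over
print(A, C, D)                                                   # 1.0 1.0 1.0: indistinguishable by the measure
```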
Consequently, only Ĉ (but not C) is relevant to the SPEC measure. But the full range of possible values for either variable should matter, according to previous notions of specificity. Hence, SPEC misrepresents REPERTOIRE, in that the full range of possible values for C is at the end of the day irrelevant and all that matters is Ĉ. For similar reasons, it also misrepresents CONNECTEDNESS, in that it only focuses on how one particular value of C maps onto E value(s). This of course suggests that SPEC is not exactly a measure of causal specificity in any of the formulations in Sect. 3, but perhaps something different. It should be noted that others have discussed how to best implement the measure for particular comparisons (e.g. DNA and alternative splicing factors, Weber 2017b), but the measure itself has not been impugned or questioned as genuinely representing the philosophical idea of causal specificity. If my analysis is correct and the notion of specificity is indeed about CONNECTEDNESS and REPERTOIRE, then I believe that SPEC is not an adequate approach, as it distorts both components. In any case, assessing the adequacy of a measure for a given property requires, at the very least, that we have a good grasp of the property to begin with, and this is what I am most concerned with. A reviewer suggested considering whether SPEC's emphasis on the probability distribution suggests a third component of specificity. This is an interesting suggestion, but I have some doubts about this way of looking at the role of probabilities here. I believe that while probabilities are relevant in general, they are not best viewed as a further component of specificity. I take it that the reviewer refers not to the conditional probability of obtaining an effect given a cause, but to individual probabilities of the values of the cause/effect variables. For instance, the probability of me setting the tuning dial on position p_i, the probability of me setting the dial on position p_j, and so on. Now, I think that the characteristics of the causal structure as a whole are independent of individual probabilities in the sense above, at least if we take available definitions of specificity. The way to think about the different states here should be rather counterfactual: if I were to set the dial on position p_i, then radio station r_i would obtain. For characterizing the causal structure, it does not matter how unlikely it is for me to do that. It seems to me that the probability of occurrence of p_i, which, suppose, is very low, would not by itself affect an ascription of specificity, to the extent that one concedes the truth of the conditional 'if I were to set the dial on position p_i, then I would tune radio station r_i'. In my view, individual probabilities are irrelevant to analyzing the causal specificity of a given causal structure, because specificity is not about any particular state but about the overall structure. However, this is not to deny that probabilities are relevant to the SPEC measure. Indeed, they are absolutely relevant to SPEC, insofar as they are essential to information theory and SPEC uses such formalism. But I believe that we should not draw the wrong conclusions from this fact, for the simple reason that the measure might be failing to capture the notion of specificity (as I have suggested). Thus, I believe that the key, distinct components of specificity are the two I have put forward.
Having identified and explained these issues with the very notion of causal specificity, we must take stock of the particular implications for causal parity. I discuss this next. Implications for the Causal Parity Dispute in Biology Over the last few sections, I argued, contrary to what is assumed, that a quantitative view of specificity (either measurable or comparative) is not as intelligible and cannot be applied as systematically as it would be desirable for certain contexts, the causal parity quarrel in philosophy of biology being a salient case in point. I attributed this problem to the twofold nature of causal specificity, as REPERTOIRE and CONNECTEDNESS are independent conditions. Such independence makes it particularly problematic to deal with both components simultaneously and coherently. What does this mean for specificity-based arguments and the CP thesis? I see two possible reactions. One is to think that causal specificity is not a single property but a mix of two distinct ones that are, in fact, not even predicated of the same items. In this case, REPERTOIRE is a property of cause/effect variables, one about the range of variation that is possible for them. CONNECTEDNESS becomes a property of the mappings between causes and effects, corresponding to different types of functions (and perhaps non-functions as well). Each property by itself is perfectly coherent, so splitting causal specificity into two distinct (genuine) properties provides a way around our issues. But splitting causal specificity into two independent properties means that we cannot compare causal relations from a single point of view. Parity and nonparity necessarily become relative to either independent property, something like "parity-REP" and "parity-CON". This seems acceptable, but it does complicate things. What should we conclude from cases where we have two competing causal factors, each ranking higher than the other in one of these two properties? In addition, it could be argued that neither REPERTOIRE nor CONNECTEDNESS alone captures the intuition behind the comparison of the roles of DNA and the polymerase in protein synthesis. It seems that systematic application comes at the cost of giving up the original intuition. Alternatively, we may want to retain the original intuition that certain causes, like DNA, are peculiar in that many different changes in the DNA lead to many changes in the effect, that is, some combination of both REPERTOIRE and CONNECTEDNESS. It seems that we do capture some interesting feature of genetic causation by such remarks, and there is no prima facie reason to disregard it. But, in turn, we may need to give up any pretension of systematic application, that is, that a concept of causal specificity (or a measure thereof) will lead to a systematic procedure whereby any two causal relations can be clearly compared and ranked in terms of their specificity if we wish to simultaneously take both components into account. Let us consider the implications of the latter case: can we compare molecular causes from the point of view of specificity? This will sometimes be possible, for instance, if we compare two cases that differ along one component only but score the same along the other, but we will not always have clear, compelling intuitions regarding particular cases whenever this is not the case. Some comparisons might be more difficult than others, and will probably be influenced by the wider context and goals of the comparison.
More important is another set of questions: Can specificity ground substantial philosophical claims about the ways such causes contribute to an effect? Can specificity settle the parity quarrel in one way or another? In which sense is it helpful to characterize DNA as a dial-like cause, and the polymerase as a switch-like one? In general, is it helpful to characterize biological causes as either dials or switches? I take it to be an inevitable consequence of this analysis that we should rethink the meaning and scope of specificity-based philosophical claims. Showing that DNA or RNA molecules are more specific kinds of factors than polymerases or alternative splicing agents (hence supporting a claim of nonparity) is not as straightforward as it seems, and this is not (simply) due to biological facts. This need not mean that characterizing biological causes (or otherwise) as specific, nonspecific, or relatively specific is not helpful: causal specificity still picks out some feature of certain types of causal relations; it just may not be the best argument in support or rejection of causal parity. What other causal concepts would do a better job in this respect is the topic of another paper. Conclusions This paper undertook a deeper and more encompassing exploration of causal specificity and the extent to which this notion can meet the goals that many have set for it. I first introduced the causal parity debate in philosophy of biology as the context of most discussions and elaborations on causal specificity, sketching the basic logic of specificity-based arguments against CP. Then, I reviewed several conceptualizations of specificity available in the literature. After raising the question as to what sort of property (categorical, quantitative) specificity is supposed to be, I laid out the rationale for a quantitative point of view. I argued, contrary to what seems to be assumed, that a quantitative view of specificity (either measurable or simply comparative) is not as intelligible and cannot be applied as systematically as it would be desirable. I attributed this problem to the twofold nature of causal specificity, as this property is about two logically independent conditions, REPERTOIRE and CONNECTEDNESS, which are not easily dealt with in tandem. The view in this paper is that, as long as such duality within the notion of causal specificity is not properly acknowledged, we cannot expect to arrive at anything but mixed and conflated claims that will not be particularly profitable. I also contended that these are not minor issues, as they inevitably affect the very purposes for which specificity is invoked: in general, to compare/distinguish amongst causal factors; in particular, to settle the causal parity dispute in philosophy of biology.
11,573.8
2021-07-12T00:00:00.000
[ "Philosophy", "Biology" ]
Synthesis of ultrathin metal oxide and hydroxide nanosheets using formamide in water at room temperature † The rational preparation of ultrathin two-dimensional metal oxide and hydroxide nanosheets is the first and crucial step towards their utilization in both fundamental research and practical applications. From the perspective of production cost, the further development of such materials remains a challenging task. Therefore, it is highly desirable to synthesize these materials via a simple and general strategy at room temperature. Besides, little attention has been paid to investigating their growth processes, leading to ambiguous formation mechanisms and a lack of guiding principles for designing the targeted ultrathin 2D metal oxides and hydroxides. Here, 6 different ultrathin (<5 nm) 2D metal oxides and hydroxides have been successfully synthesized via a simple precipitation route in formamide aqueous solution at room temperature. Detailed investigations demonstrate that the formation of the ultrathin morphology relies on the inhibition of z-direction growth by the -NH2 groups of the formamide molecules. These findings broaden the fundamental understanding of 2D material formation mechanisms and inspire interest in extending this strategy to further systems. Our study opens a new avenue for an easy, general and room-temperature synthesis of ultrathin 2D metal oxides and hydroxides. Introduction Ultrathin two-dimensional (2D) nanomaterials, such as metal oxides, 1-4 metal hydroxides, 5-7 metal chalcogenides, 8,9 metal carbides 10,11 and black phosphorus, 12,13 have been and continue to be of great interest to researchers since the experimental discovery of graphene. 14,15 In particular, attractive advantages of ultrathin 2D metal oxides and hydroxides set them apart from other materials owing to their wide structural and chemical variety, broad applications, potential for low-cost mass production and shape-preserving transformation into other materials. Because the rational preparation of ultrathin 2D metal oxides and hydroxides represents the first and crucial step towards their utilization in both fundamental research and practical applications, considerable research efforts have been made in the past decade to develop a facile and scalable synthesis strategy. Although exfoliation of bulk layered materials into single- or few-layer nanosheets is regarded as one of the most promising synthesis strategies, such a top-down strategy inevitably comes with several inherent drawbacks, such as relatively low yields, time-consuming steps, difficulty in scaling up and complicated multi-step operations. The recent trend to directly synthesize ultrathin 2D metal oxides and hydroxides via a bottom-up approach, especially via a wet-chemical bottom-up route, is attracting rising interest, 16,17 as a result of the advantages of this route associated with its time-saving, scalable, efficient and easy synthesis process. Accordingly, various methods belonging to this route, including pulsed-laser ablation, 18 one-step direct synthesis in the presence of special additives (sodium dodecyl sulfate, hydrophobic ionic liquid, formamide, etc.), [19][20][21] hydrothermal synthesis using a high concentration of H2O2 as the solvent, 22 a microwave-assisted liquid-phase method, 23 green synthesis of 2D nanomaterials at room temperature using liquid metal reaction media, 24,25 electrochemical deposition on certain substrates, 26 etc., have been explored. 
Note that one-step direct synthesis methods in the presence of special additives stand out among wet-chemical bottom-up methods because they avoid special facilities/reactants and the energy consumption associated with elevated reaction temperatures during the synthesis process. This favors the low-cost and large-scale synthesis of ultrathin 2D metal oxides and hydroxides. Among the different additives, formamide is the most commonly used one. 17,20,27 Unfortunately, current synthesis strategies involving formamide vary from one system to another, and preparing various target materials with one and the same facile strategy at room temperature still remains challenging, 20 significantly impeding the general fabrication and further development of the relevant materials. Besides, progress in the preparation and application of the corresponding ultrathin nanomaterials has not been paralleled by a concomitant understanding of their formation mechanism. More specifically, although previous studies proposed that the successful formation of ultrathin nanosheets results from the unusually high dielectric constant of formamide and interactions between the carbonyl groups of formamide and the hydroxyl layers on the hydroxide sheet surfaces, 20 the exact function of formamide in forming the ultrathin structure has not been fully demonstrated, which is detrimental to the rational design and preparation of the desired target material. Summing up the above analysis, both fabricating ultrathin 2D metal oxides and hydroxides in the presence of formamide via an improved synthesis strategy and investigating the formation mechanism of such materials are worthwhile. Herein, we describe a simple and general wet-chemical precipitation strategy, which involves the reaction between metal ions and OH− produced by the protonation of NH3 in water, to produce hydroxide nanomaterials in formamide aqueous solution at room temperature. Different types of ultrathin 2D metal oxide and hydroxide nanosheets, namely Co3O4/Co(OH)2, γ-Fe2O3, Mn3O4, ZnO, Cu(OH)2·H2O and Mg(OH)2/Mg2(OH)3Cl, with a thickness of less than 5 nm, can be successfully synthesized under the same conditions except for changing the original metal salt. Note that obtaining a metal oxide phase instead of a metal hydroxide phase in several of these systems is ascribed to the crystal phase change from hydroxide to oxide, which is caused by the topological transformation during the reaction process. Detailed investigations were carried out to study the formation mechanism of these ultrathin nanomaterials using Fourier transform infrared spectroscopy (FTIR) and nuclear magnetic resonance (NMR) spectroscopy. The obtained results reveal that, in addition to the unusually high dielectric constant of formamide and the interactions between the carbonyl groups of formamide and the hydroxyl layers, the -NH2 groups of formamide also play a key role in decreasing the thickness of the nanosheets. To the best of our knowledge, this is the first work clarifying the effect of -NH2 groups on inhibiting the growth of nanosheets along the z-direction, providing important insights to guide the synthesis of the desired ultrathin 2D metal oxides and hydroxides. 
Since the growth mechanism depends mainly on the layered structure character of the target materials rather than on the individual character of the metal elements, we believe that this strategy can be extended to a vast number of other systems, offering new opportunities for an easy, general, low-cost and room-temperature synthesis of ultrathin 2D metal oxide and hydroxide nanosheets. Materials Ammonium hydroxide solution (28-30 wt% solution of NH3 in water) was obtained from Acros Organics (Geel, Belgium). FeCl2·4H2O was bought from Alfa Aesar. MgCl2·6H2O and Cu(CH3COO)2·H2O were obtained from Fluka. Other reagents were purchased from Sigma-Aldrich. All chemicals were of analytical grade and were used as received without any further purification. Water used for the preparation of aqueous solutions was purified using a Millipore-Q water purification system. Synthesis of nanosheets A typical procedure is described as follows: a solution containing the precursor (X mM) was prepared by dissolving the metal salts in a mixed solution of water (Y mL) and solvent B (Z mL). After ultrasonic dissolution, the vial was sealed with Parafilm and three pinholes were drilled to allow for gas diffusion. Subsequently, the vial was put into a sealed blue-cap bottle (100 mL) with 2 mL of concentrated ammonium hydroxide solution. After diffusion for 12 hours at room temperature, the precipitate found at the bottom of the vial was washed twice by centrifugation (9000 rpm, 5 min) with Milli-Q water and dried at room temperature. More details about the synthesis protocol are provided in the ESI. † Characterization Transmission electron microscopy (TEM) images and selected area electron diffraction (SAED) patterns were acquired on a Zeiss Libra 120 microscope operating at 120 kV. The TEM samples were prepared from ethanol dispersions or aqueous solutions on conventional TEM grids (carbon-coated copper grids, 400 mesh, supplied by Quantifoil GmbH). X-ray powder diffraction (XRD) data were acquired on a Bruker D8 ADVANCE (DAVINCI design) diffractometer with Cu Kα radiation. Scanning electron microscopy (SEM) was conducted using a Zeiss CrossBeam 1540XB reaching a resolution of up to 1.1 nm at 20 kV, with an acceleration voltage adjustable from 0.1 to 30 kV; it was equipped with an SE2 and an InLens detector. Attenuated total reflection infrared spectroscopy (ATR-FTIR) was conducted on a Vertex 80 V spectrometer (Bruker Optics). The thickness of the nanomaterials was characterized with a JPK NanoWizard atomic force microscope (AFM) in intermittent contact mode using a silicon tip. X-ray photoelectron spectra (XPS) were obtained on a Thermo ESCALAB 250XI with Mg Kα radiation (hν = 1253.6 eV) as the excitation source. High-resolution TEM (HRTEM) images were obtained on a JEOL-2010 microscope operated at 200 kV. The titration setup is a commercial, computer-controlled titration system manufactured by Metrohm; the titration software is Tiamo 2.2. Solution-phase 13C NMR spectra were recorded using a Bruker Avance III 400 operating at 400 MHz. Sample preparation procedures for the measurement of solution-phase 13C NMR spectra can be found in the ESI. † The 3D crystal structures shown below were extracted from the ICSD database using Materials Studio 5.5 (Accelrys) and displayed after adjusting the number of displayed units. 
Results and discussion As illustrated in Scheme 1, the synthesis strategy mainly involves the diffusion of ammonia vapor (NH3) originating from ammonium hydroxide into the formamide aqueous solution containing the metal salts, triggering a reaction between NH3 molecules and the water molecules of the reaction solution to generate OH−, namely NH3 + H2O ⇌ NH4+ + OH−, which increases the pH value of the solution. The increased pH enables the hydrolysis of metal ions to initiate the precipitation of metal hydroxides. In this strategy, formamide molecules play a key role in influencing the thickness of the final hydroxide product. Note that by simply changing the type of the metal salt precursor, six different ultrathin 2D nanosheets could be successfully synthesized using the same protocol. The crystallinity of these nanosheet products was studied by powder XRD. As displayed in Fig. 1, the patterns are consistent with the six products named above; obtaining an oxide phase instead of a hydroxide phase for several systems, namely γ-Fe2O3, Mn3O4 and ZnO, is ascribed to the crystal phase change from hydroxide to oxide caused by the topological transformation during the reaction process, which has been well documented by previous studies. [28][29][30][31] XPS was employed to evaluate the surface composition and electronic states of these metal oxide and hydroxide nanosheets. The survey spectra provided in Fig. S1 † indicate the existence of O and the corresponding metal elements for these six ultrathin 2D nanosheets, which agrees well with the composition of the aforementioned metal oxide and hydroxide nanosheets. The high-resolution XPS spectra of these metal elements are shown in Fig. 2 to analyze their electronic states. As presented in Fig. 2, the observed binding energies of the metal elements agree with previously reported values, [34][35][36][37][38][39] which is in line with the aforesaid XRD results. The morphology of these six 2D materials was investigated by SEM and TEM. As shown in Fig. 3A-F, all of them show 2D nanosheet morphologies, but with different sizes. To be more specific, the Mn3O4, ZnO, and Mg(OH)2/Mg2(OH)3Cl nanosheets possess lateral sizes of several μm, while the Co3O4/Co(OH)2, γ-Fe2O3 and Cu(OH)2·H2O nanosheets are several hundred nanometers in lateral size. The planar 2D morphology is further verified by the TEM results (Fig. 3G-L). In the TEM images, the high transparency and the flexible appearance represent indirect proof of the ultrathin thickness of these nanosheets. To determine the exact thickness, AFM was utilized, and the results indicate that most of the 2D metal oxide and hydroxide nanosheets have a thickness of 3 ± 2 nm (Fig. S2 †), manifesting the formation of the ultrathin structure. Further characterization was performed using SAED and HRTEM to evaluate the crystallinity of the 2D nanosheets. In the SAED patterns, the well-ordered bright spots of the various nanosheets (Fig. 3M-R) reveal their high-quality single-crystalline nature, with the exception of the typical ring pattern of Cu(OH)2·H2O. The HRTEM lattice fringes could be assigned to the exposed planes of the respective materials, e.g., the (100) plane of Mg(OH)2, which further confirmed the high crystallinity and the exposed faces of these nanosheets. To establish guiding synthesis principles for other ultrathin nanosheets, it is necessary to fully understand the factors that determine the formation of the six ultrathin nanosheets obtained here. Reference experiments were conducted using the same synthesis protocol but without the addition of formamide to verify the influence of formamide on the formation of ultrathin nanosheets. The SEM images provided in Fig. 
S3 † show the 2D morphology of the obtained reference products, suggesting a preferential growth habit along one layer during the growth process, which can be explained by the preferential in-plane growth of materials with a layered structure under suitable experimental conditions. 23 It is known that these hydroxides, such as Co(OH)2, crystallize in layered structures (Fig. 4A), and thus the preferential growth along one layer could be easily realized. Note that, compared to these reference products, the nanomaterials synthesized in formamide aqueous solution show thinner structures, implying that formamide molecules have a positive effect on decreasing the thickness of the nanosheets. To understand the driving force by which formamide inhibits the growth of nanosheets along the z-direction, taking the Mg(OH)2/Mg2(OH)3Cl system as an example, additional reference experiments were performed using the same protocol but with formamide replaced by other organic solvents or additives possessing various functional groups, structures, dielectric constants, etc. Fig. S4 † presents the molecular formulas of the organic solvents or additives used and the morphology of the corresponding final products as observed by SEM. It can be seen that nanomaterials synthesized in the reaction solutions containing dimethylformamide or ethanol tend to grow thicker, suggesting that dimethylformamide or ethanol might have no influence on inhibiting z-direction growth. In contrast, ultrathin nanosheets could be obtained when formamide was replaced with one of the following organic solvents or additives: ethanolamine, ethylenediamine, glycine and hydrazine. Note that ethanolamine has a low dielectric constant (Table S1 †) and carbonyl groups are absent in hydrazine, ethanolamine and ethylenediamine, which calls into question whether the formation of ultrathin structures is solely related to the unusually high dielectric constant of the organic solvents and to interactions between the carbonyl groups of the organic solvents and the hydroxyl layers on the hydroxide sheet surfaces. 20 More importantly, based on the fact that ultrathin nanosheets could form when hydrazine (NH2-NH2) was used as the organic solvent (Fig. S4A †), it is reasonable to infer that, in addition to the aforesaid two factors, the -NH2 groups also play a key role in the inhibition of z-direction growth. FTIR was utilized to verify the interaction between the -NH2 groups and the formed hydroxide by monitoring the signal changes of the functional groups during the reaction process. Note that we monitored the reaction process upon one-off addition of 2 mL of concentrated NH3·H2O into the reaction solution, because it is difficult to monitor the NH3-diffusion reaction process in situ. As shown in Fig. S5, † the one-off addition of 2 mL of concentrated NH3·H2O into the reaction solution leads to the formation of nanosheets with a morphology similar to that obtained by the NH3-diffusion method, indicating that the former is a suitable alternative for investigating the interaction between the formed ultrathin hydroxide nanosheets and the -NH2 groups. As shown in Fig. 4B, the intensity of the peak located at 3702 cm−1, which can be ascribed to the O-H stretching mode of free Mg-OH groups, increases gradually during the first 2500 seconds, indicating the continuous formation and growth of the Mg(OH)2/Mg2(OH)3Cl nanomaterials during this period. 
Meanwhile, during the first 2500 seconds, the feature at 1625 cm−1 (Fig. 4C) corresponding to the -NH2 scissoring vibration shows a shape change and a red-shift to 1603 cm−1, while there is no obvious signal change for the C-N stretch, C-H bend and C=O stretch, suggesting a possible interaction between the -NH2 groups and the hydroxyl layers of the hydroxide. To gain further insight into the role of -NH2 groups at the molecular level, 13C NMR spectroscopy was applied to investigate six different samples. As shown in Fig. 4D, the spectra of pure water after introducing NH3 and of the aqueous solution containing MgCl2 after introducing NH3 show no obvious peak belonging to the resonance of carbon atoms, confirming the absence of carbon. In the spectra of the pure formamide aqueous solution, the formamide aqueous solution containing MgCl2, the formamide aqueous solution after introducing NH3 without MgCl2 and the formamide aqueous solution containing MgCl2 after introducing NH3, an obvious peak at 168.1 ppm can be seen, which is ascribed to the resonance of the carbon atoms of formamide. Note that one additional peak located at 172 ppm (grey arrow) is detected only in the spectrum of the formamide aqueous solution containing MgCl2 after introducing NH3. The emergence of this new peak indicates a change in the atomic environment of some carbon atoms, which is probably caused by the interaction between the -NH2 groups connected to those carbon atoms and the hydroxyl layers of the hydroxide. The new peak is absent in the spectrum of the formamide aqueous solution after introducing NH3 without MgCl2, suggesting an unchanged atomic environment of the carbon atoms in such a reaction system, which supports our idea that the -NH2 groups connected to the carbon atoms interact with the hydroxyl layers of the hydroxide rather than with the free hydroxide ions in aqueous solution. Based on the aforesaid analysis, although formamide may serve multiple functions in inhibiting the growth of nanosheets along the z-direction, we infer that, in addition to the unusually high dielectric constant of formamide and the interactions between the carbonyl groups of formamide and the hydroxyl layers, the -NH2 groups of formamide play a key role during this process. These findings advance our understanding of the formation mechanism of ultrathin nanomaterials obtained from the bottom-up fabrication route using formamide as the layer growth inhibitor, and such a perspective could guide the selection of suitable layer growth inhibitors for the future preparation of ultrathin 2D nanomaterials. Conclusions In summary, this study has proposed a general and facile strategy to prepare ultrathin 2D metal oxide and hydroxide nanosheets at room temperature using a wet-chemical precipitation method, which has been successfully utilized to synthesize Co3O4/Co(OH)2, γ-Fe2O3, Mn3O4, ZnO, Cu(OH)2·H2O and Mg(OH)2/Mg2(OH)3Cl nanosheets. The -NH2 groups of the formamide molecules play a key role in further decreasing the thickness of the nanosheets by inhibiting their growth along the z-direction, which is supported by the results of FTIR and NMR spectroscopy. These findings not only broaden the fundamental understanding of the formation mechanism but also provide a guiding principle for selecting a suitable layer growth inhibitor. 
Since the growth mechanism is mainly dependent on the layered structure character, this approach should be versatile and transferable to more materials with a similar structure, opening up new opportunities for an easy, general and low-cost synthesis of ultrathin 2D metal oxide and hydroxide nanosheets. Conflicts of interest The authors declare no competing financial interest.
4,572.4
2021-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Unsupervised Binary Code Translation with Application to Code Similarity Detection and Vulnerability Discovery Binary code analysis has immense importance in the research domain of software security. Today, software is very often compiled for various Instruction Set Architectures (ISAs). As a result, cross-architecture binary code analysis has become an emerging problem. Recently, deep learning-based binary analysis has shown promising success. It is widely known that training a deep learning model requires a massive amount of data. However, for some low-resource ISAs, an adequate amount of data is hard to find, preventing deep learning from being widely adopted for binary analysis. To overcome the data scarcity problem and facilitate cross-architecture binary code analysis, we propose to apply the ideas and techniques of Neural Machine Translation (NMT) to binary code analysis. Our insight is that a binary, after disassembly, is represented in some assembly language. Given a binary in a low-resource ISA, we translate it to a binary in a high-resource ISA (e.g., x86). Then we can use a model that has been trained on the high-resource ISA to test the translated binary. We have implemented the model, called UNSUPERBINTRANS, and conducted experiments to evaluate its performance. Specifically, we conducted two downstream tasks, code similarity detection and vulnerability discovery, and achieved high accuracies in both. Introduction In software security research, binary code analysis plays a significant role. It enables various tasks, such as vulnerability discovery, code plagiarism detection, and security auditing of software, without access to the source code. Today, software is very often compiled for various Instruction Set Architectures (ISAs). For example, IoT vendors often use the same code base to compile firmware for different devices that operate on varying ISAs (e.g., x86 and ARM), which causes a single vulnerability at the source-code level to spread across binaries of diverse devices. As a result, cross-architecture binary code analysis has become an emerging problem (Pewny et al., 2015; Eschweiler et al., 2016; Feng et al., 2016; Xu et al., 2017; Zuo et al., 2018). Analysis of binaries across ISAs, however, is non-trivial: such binaries differ greatly in instruction sets; calling conventions; general- and special-purpose CPU register usages; and memory addressing modes. Recently, we have witnessed a surge in research efforts that leverage deep learning to tackle various binary code analysis tasks, including code clone detection (Luo et al., 2017a; Zuo et al., 2018), malware classification (Raff et al., 2018; Cakir and Dogdu, 2018), vulnerability discovery (Lin et al., 2019; Wu et al., 2017), and function prototype inference (Chua et al., 2017). Deep learning has demonstrated its strengths in code analysis and shown noticeably better performance than traditional program analysis-based methods (Song et al., 2013; BAP, 2011; Luo et al., 2014; Wang et al., 2009b) in terms of both accuracy and scalability. 
It is widely known that training a deep learning model usually requires a massive amount of data. As a result, most deep learning-based binary code analysis methods have been dedicated to a high-resource ISA, such as x86, where large-scale labeled datasets exist for training models. But for many other ISAs (e.g., ARM, PowerPC), there are few or even no labeled datasets, resulting in negligible focus on these low-resource ISAs. Moreover, it is labor-intensive and time-consuming to collect data samples and manually label them to build datasets for such low-resource ISAs. In this work, we aim to overcome the challenge of data scarcity in low-resource ISAs and facilitate cross-architecture binary code analysis. Our Insight. Given a pair of binaries in different ISAs, they, after being disassembled, are represented as two sequences of instructions in two assembly languages. In light of this insight, we propose to learn from Neural Machine Translation (NMT), an area that focuses on translating texts from one human language into another using machine/deep learning techniques (Sutskever et al., 2014). There are two types of NMT systems. One is supervised and requires a large parallel corpus, which poses a major practical problem. The other is unsupervised, which removes the need for parallel data. As unsupervised NMT has the advantage of requiring no cross-lingual signals, we adopt unsupervised NMT for binary code analysis. Our Approach. Drawing inspiration from the unsupervised neural machine translation model Undreamt (Artetxe et al., 2018b), we design an unsupervised binary code translation model called UNSUPERBINTRANS. UNSUPERBINTRANS is trained in a completely unsupervised manner, relying solely on mono-architecture corpora. Once trained, UNSUPERBINTRANS can translate binaries from a low-resource ISA to a high-resource ISA (e.g., x86). Consequently, when dealing with a binary in a low-resource ISA, translating it into a high-resource ISA enables us to use a model trained on the high-resource ISA to test the translated binary code. UNSUPERBINTRANS can be applied to a variety of applications, such as vulnerability discovery and code similarity detection. We have implemented UNSUPERBINTRANS 1 and evaluated its performance. Specifically, we conduct two critical binary analysis tasks, code similarity detection and vulnerability discovery. In both tasks, we achieve high accuracies across different ISAs. For example, a model trained on x86 can successfully detect all vulnerable functions in ARM (even though the model has not been exposed to any vulnerable functions in ARM during its training), indicating the exceptional translation capabilities of UNSUPERBINTRANS. Below we summarize our contributions: • We propose a novel unsupervised method for translating binary code from one ISA to another, such that the translated binary shares similar semantics with the original code. • We have implemented the model UNSUPERBINTRANS and conducted evaluations on two important binary analysis tasks. The results show that UNSUPERBINTRANS can successfully capture code semantics and effectively translate binaries across ISAs. 
Background A control flow graph (CFG) is the graphical representation of control flow or computation during the execution of programs or applications. A CFG is a directed graph in which each node represents a basic block and each edge represents the flow of control between blocks. A basic block is a sequence of consecutive statements in which the flow of control enters at the beginning and leaves at the end without halt or branching except at the end. As shown in Figure 1(a), given a piece of source code, its corresponding CFG is shown in Figure 1(b), where each node is a basic block. Similarly, we can generate the corresponding CFG for a piece of binary code; we use source code here as an example for simplicity. Motivation and Model Design This section first discusses the motivation behind our proposal for unsupervised binary code translation, and then presents the model design. Motivation Consider the vulnerability discovery task, a long-standing problem of binary code analysis, as an example: binaries in different ISAs are analyzed in detail to find vulnerabilities. Nowadays, applying machine/deep learning to binary analysis has drawn great attention due to its exceptional performance. However, it usually requires a large amount of data for training. Unfortunately, it is often difficult to collect such a large amount of training data, especially for low-resource ISAs. Some ISAs are commonly used (e.g., x86), and it is easy to collect large training datasets for them. It would be a great advantage if the availability of such large training data for high-resource ISAs could facilitate the automated analysis of binaries in low-resource ISAs, where sufficient training data is not available. For example, suppose there is a large training dataset for the ISA X, but we need to analyze a binary b in the ISA Y, where the available training data is insufficient. As a result, it is difficult to train a model for analyzing b in Y. To resolve the problem, our idea is to translate b from the ISA Y to X, resulting in a translated binary b' in X. Then, we can leverage the large training dataset of X to train a model, which can be used to perform prediction on the translated version b' in X. Learning Cross-Architecture Instruction Embeddings A binary, after being disassembled, is represented as a sequence of instructions. An instruction includes an opcode (specifying the operation to be performed) and zero or more operands (specifying registers, memory locations, or literal data). For example, mov eax, ebx is an instruction where mov is an opcode and both eax and ebx are operands. 2 In NMT, words are usually converted into word embeddings to facilitate further processing. Since we regard instructions as words, we similarly represent instructions as instruction embeddings. Challenge. In NLP, if a trained word embedding model is used to convert a word that has never appeared during training, the word is called an out-of-vocabulary (OOV) word and the embedding generation for such words will fail. This is a well-known problem in NLP, and it is exacerbated significantly in our case, as constants, address offsets, and labels are frequently used in instructions. How to deal with the OOV problem is a challenge. 
Solution: Preprocessing Instructions. To resolve the OOV problem, we propose to preprocess the instructions using the following rules: (1) Constants up to 4 digits are preserved because they usually contain useful semantic information; constants with more than 4 digits are replaced with <CONST>. (3) Other symbols are replaced with <TAG>. Take the code snippets below as an example, where the left one shows the original assembly code and the right one is the preprocessed code. We adopt fastText (Bojanowski et al., 2017) to learn mono-architecture instruction embeddings (MAIE) for an ISA. The result is an embedding matrix X ∈ R^(V×d), where V is the number of unique instructions in the vocabulary and d is the dimensionality of the instruction embeddings. The i-th row of X is the embedding of the i-th instruction in the vocabulary. Given two ISAs, where one is a high-resource ISA and the other a low-resource ISA, we follow the aforementioned steps to learn MAIE for each ISA. After that, we map the MAIE of the low-resource ISA into the vector space of the high-resource ISA by adopting the unsupervised learning technique vecmap (Artetxe et al., 2018a). Note that other unsupervised methods for generating cross-lingual word embeddings (Conneau et al., 2017; Ruder et al., 2019) also work. Translating Basic Blocks Across ISAs Drawing inspiration from the unsupervised neural machine translation model Undreamt (Artetxe et al., 2018b), we design an unsupervised binary code translation model, named UNSUPERBINTRANS, to translate a piece of binary code (i.e., a basic block) from one ISA to another. The model architecture of UNSUPERBINTRANS is shown in Figure 2. First, CAIE are generated for two ISAs, where one is a high-resource ISA (x86) and the other a low-resource ISA (see Section 3.2). Then, we train UNSUPERBINTRANS (the grey dashed box; the detailed process is discussed below). The training only uses mono-architecture datasets and the CAIE generated from them. The mono-architecture dataset for each ISA contains a large number of basic blocks from that ISA. Finally, the trained model can translate a basic block from the low-resource ISA to x86, and the translated basic block shares similar semantics with the original one. Through the translation of binaries from a low-resource ISA to x86, we can use a model trained on x86 to analyze the translated code. This capability allows UNSUPERBINTRANS to address the data scarcity issue in low-resource ISAs. UNSUPERBINTRANS can be applied to various binary analysis tasks, such as code similarity detection and vulnerability discovery. 
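To make the preprocessing rules above concrete, the following minimal Python sketch illustrates how such normalization might look. It is an illustration under stated assumptions, not the paper's implementation: the register set is a small toy set, the helper name preprocess_instruction is hypothetical, and only rules (1) and (3) are reproduced because rule (2) is elided in the source text.

import re

# Toy register set; a real pass would enumerate the full x86/ARM register files.
REGISTERS = {"eax", "ebx", "ecx", "edx", "esi", "edi", "esp", "ebp",
             "r0", "r1", "r2", "r3", "r4", "sp", "lr", "pc"}

def preprocess_instruction(instr):
    """Normalize one assembly instruction to curb the OOV problem."""
    opcode, _, operand_str = instr.strip().partition(" ")
    out = [opcode]
    for tok in (t.strip() for t in operand_str.split(",") if t.strip()):
        if tok.lower() in REGISTERS:
            out.append(tok)                        # registers are kept as-is
        elif re.fullmatch(r"-?(0x)?[0-9a-fA-F]+", tok):
            digits = tok.lstrip("-").replace("0x", "", 1)
            # Rule (1): constants up to 4 digits are preserved.
            out.append(tok if len(digits) <= 4 else "<CONST>")
        else:
            out.append("<TAG>")                    # Rule (3): other symbols
    return " ".join(out)

# Example: preprocess_instruction("mov eax, 0x8049f10") -> "mov eax <CONST>"

The preprocessed blocks, one instruction per token, could then be fed to an embedding learner such as fastText (for instance via the gensim FastText class) to obtain the MAIE matrix described above; the paper does not state which fastText front end it uses.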
As UNSUPERBINTRANS is designed based on Undreamt, below we briefly discuss how Undreamt works; please refer to (Artetxe et al., 2018b) for more details. Undreamt alleviates a major limitation of NMT, i.e., the requirement of very large parallel text corpora. Undreamt achieves very good translation results due to three main properties of its architecture: a dual structure, a shared encoder, and fixed embeddings in the encoder. First, the dual structure enables bidirectional translation capabilities to be achieved through training. Second, the shared encoder produces a language-independent representation of the input text corpus; the corresponding decoder for a particular language can then transform that representation into the required language. Finally, with the help of fixed embeddings in the encoder, Undreamt can easily learn how to compose word-level representations of a language to build representations for bigger phrases. These embeddings are pre-trained and kept fixed throughout the training, which makes the training procedure simpler. The training procedure consists of two main underlying components: denoising and on-the-fly backtranslation. Although the dual property of Undreamt helps it gain preliminary translation knowledge, such a procedure can eventually degenerate into a simple copying task in which the trained model merely makes word-by-word replacements between the source and target languages. Therefore, to improve the translation capability of the trained model, Undreamt injects noise into the input data by adding some random swaps of contiguous words. The model is then asked to recover the original input text by denoising the corrupted data. This process of adding and removing noise helps the model learn the actual relationships between words in the sentences of the given languages. We implement UNSUPERBINTRANS upon Undreamt. During the testing phase, when provided with a basic block from one ISA, we first represent it as a sequence of CAIE, which is fed to UNSUPERBINTRANS. UNSUPERBINTRANS then translates it to a basic block in another ISA, which shares similar semantics with the source basic block. We next disassemble the collected binaries using IDA Pro and extract basic blocks. Each instruction in the basic blocks is then preprocessed. Finally, we build four mono-architecture datasets for x86 and another four for ARM. Each mono-architecture dataset contains a set of preprocessed basic blocks corresponding to one ISA and one optimization level. Table 1 shows the statistics of our datasets, including the number of functions, the number of unique instructions (i.e., vocabulary size), and the total number of instructions for each optimization level in terms of the two ISAs, x86 and ARM. Training UNSUPERBINTRANS. Given two datasets corresponding to two different ISAs and the same optimization level, we first use fastText to generate MAIE for each ISA, which are then mapped to a shared space to generate CAIE by vecmap. After that, we train UNSUPERBINTRANS using the two datasets and the CAIE to translate basic blocks from one ISA to another. Note that UNSUPERBINTRANS is based on unsupervised learning; we do not need to determine which basic block in one ISA is similar to a basic block in another ISA. 
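To make the denoising step above concrete, here is a minimal sketch of the kind of noise injection Undreamt-style training applies: random swaps of contiguous tokens, after which the model is trained to reconstruct the original sequence. The function name and the swap probability p_swap are assumptions for illustration, not values taken from the paper.

import random

def add_swap_noise(tokens, p_swap=0.1, rng=random):
    """Corrupt a token sequence by randomly swapping adjacent tokens."""
    noisy = list(tokens)
    i = 0
    while i < len(noisy) - 1:
        if rng.random() < p_swap:
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
            i += 2   # step past the swapped pair so a token moves at most once
        else:
            i += 1
    return noisy

# During training, the encoder-decoder is asked to map
# add_swap_noise(block) back to the original block.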
Quantitative and Qualitative Analysis We first calculate the BLEU score to measure the quality of our translation. A BLEU score is a number between zero and one. If the score is close to zero, the translation quality is poor; on the other hand, the translation quality is good when the score is close to one. For each ARM function, we first find its similar x86 function: following the dataset building method in InnerEye (Zuo et al., 2018), we consider two functions similar if they are compiled from the same piece of source code, and dissimilar if their source code is rather different. As a result, for each ARM function, denoted as S, upon identifying its similar x86 function R, we include this similar pair in our dataset. For each ARM function in the dataset, we use UNSUPERBINTRANS to translate each of its basic blocks into x86, resulting in a translated function in x86, denoted as T. After that, we compute the BLEU score between T and R. Finally, we compute the average BLEU score over all functions in the dataset. We obtain BLEU scores of 0.76, 0.77, 0.77, and 0.76 for the four optimization levels, respectively, when using fastText for instruction embedding generation. Moreover, when we use word2vec, we obtain BLEU scores of 0.77, 0.77, 0.77, and 0.76 for the four optimization levels, respectively. Thus, our model achieves good translation quality. Table 2 shows three randomly selected examples. Due to space limits, we use basic blocks as examples. In each example, (1) most instructions in the translated x86 block match the reference x86 block, and (2) a small number of operands are not predicted correctly but are reasonable. For example, in the first example, the fifth instruction in the reference x86 block is TEST AL, AL, while the predicted instruction in the translated x86 block is TEST EAX, EAX. Opcodes, which determine the operation to be performed, capture more semantics than operands. Moreover, the random allocation of registers across different optimization levels diminishes their significance. We can thus conclude that UNSUPERBINTRANS consistently attains exceptionally high-quality translations. This results in superior performance in various downstream tasks, as detailed in the following sections. Downstream Applications We consider two downstream tasks as follows. Function Similarity Comparison. Given two binary functions, f1 and f2, in two different ISAs, X and Y, our goal is to measure their similarity. If they are similar, they may be copied from each other (i.e., stem from the same piece of source code). To achieve this, we translate each basic block of f1 in the ISA X to a basic block in the ISA Y, resulting in a function f1' in Y. After that, we have two binary functions, f1' and f2, within the same ISA Y. For each function, we generate its function embedding, which is computed as the weighted sum of the CAIE of all instructions in the function. The weight of each CAIE is calculated as the term frequency (TF). We then compute the cosine similarity between the two function embeddings to measure their similarity. Vulnerability Discovery. We consider three vulnerabilities: CVE-2014-0160 (the Heartbleed bug) (Banks, 2015), CVE-2014-3508, and CVE-2015-1791. We follow a similar approach as above to compute the function embeddings for the vulnerable functions and benign functions. We use a Support Vector Machine (SVM), a widely used machine learning model, to detect vulnerabilities. 
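The TF-weighted composition just described is simple enough to sketch directly. The following Python fragment is an illustrative reconstruction, assuming caie maps each preprocessed instruction string to a d-dimensional NumPy vector; the function names are hypothetical, and the handling of instructions absent from the vocabulary is an assumption.

import numpy as np
from collections import Counter

def function_embedding(instructions, caie):
    """TF-weighted sum of cross-architecture instruction embeddings (CAIE)."""
    counts = Counter(instructions)
    total = sum(counts.values())
    dim = len(next(iter(caie.values())))
    emb = np.zeros(dim)
    for instr, n in counts.items():
        if instr in caie:                       # OOV instructions are skipped
            emb += (n / total) * caie[instr]    # term frequency as the weight
    return emb

def cosine_similarity(a, b):
    return float(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

Two functions would then be compared as cosine_similarity(function_embedding(f1_translated, caie), function_embedding(f2, caie)), with the ARM-side function translated to x86 first.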
Since there is usually only one vulnerable function (positive sample) for training the SVM, the dataset becomes extremely imbalanced. To handle the imbalance issue, Random Oversampling (ROS) (Moreo et al., 2016) and the Synthetic Minority Over-sampling Technique (SMOTE) (Chawla et al., 2002) are adopted. For each vulnerability case, we first train an SVM model using the vulnerable and benign functions in x86. We then use the trained model to detect the vulnerability in ARM functions. During detection, we first translate each basic block in the ARM function to a basic block in x86 and then use the SVM model (that has been trained on x86) to test the translated function. Function Similarity Comparison For each optimization level, we first randomly select 7,418 x86 functions. For each selected x86 function, we search for its similar and dissimilar function in ARM. We consider two functions similar if they are compiled from the same piece of source code, and dissimilar if their source code is rather different. We assign the label "1" to each similar function pair, and the label "0" to each dissimilar function pair. We next generate function embeddings for each function. In NLP, many methods exist for composing word embeddings into sentence/document embeddings. We choose the following approach. We calculate the term frequency (TF) for each instruction in a function and multiply it by the corresponding CAIE of that instruction. We then add all the weighted CAIE and use the sum as the function embedding. Note that for an ARM function, we first translate it to an x86 function using UNSUPERBINTRANS, and then compute its function embedding using the method discussed above. To measure function similarity, we calculate the cosine similarity between the function embeddings and then calculate the accuracy. We perform the experiment for all the optimization levels. Table 3 shows the results. As UNSUPERBINTRANS needs to first generate MAIE (see Section 3.2) and then perform the translation, we also explore different tools for generating MAIE and assess which one yields better performance. Specifically, we use fastText and word2vec. The results demonstrate that we achieve reasonable accuracies for each optimization level. Moreover, fastText leads to better performance. Note that in this task, we use a weighted sum of instruction embeddings to generate function embeddings. As there are many advanced methods for composing word embeddings, including those using machine/deep learning models (Lin et al., 2017; Kenter et al., 2016; Xing et al., 2018), we expect that adopting these advanced methods would yield even higher accuracies. Vulnerability Discovery We consider three case studies as follows. Case Study I. CVE-2014-0160 is the Heartbleed vulnerability in OpenSSL (Banks, 2015). It allows remote attackers to obtain sensitive information from process stack memory via manually crafted data packets that trigger a buffer over-read. We obtained the vulnerable binary function in x86 and ARM from OpenSSL v1.0.1d. Case Study II. For CVE-2014-3508, context-dependent attackers are able to obtain sensitive information from process stack memory via reading the output of some specific functions. The vulnerable function is collected from OpenSSL v1.0.1d. Case Study III. For CVE-2015-1791, a race condition occurs when the vulnerable function is used by a multi-threaded client. A remote attacker can create a denial-of-service attack (double free and application crash) when attempting to reuse a ticket that had been obtained earlier. The vulnerable function is also collected from OpenSSL v1.0.1d. Our Results. For each case study, we select 9,999 benign functions. However, we only have one vulnerable sample. To address this imbalance issue, we use ROS (Moreo et al., 2016) and SMOTE (Chawla et al., 2002). We set the parameter k_neighbors to 2 and sampling_strategy to 0.002. We tried different values of sampling_strategy, including 0.5, 0.25, and 0.02, and found that with the value of 0.002 we obtain enough oversampled data and good results. As we set k_neighbors to 2, we duplicate the vulnerable function three times, so that the initial minority sample size is larger than the value of k_neighbors. We then compute the function embedding for each function. We use the function embeddings of the x86 vulnerable and benign functions to train an SVM model. Then, the trained SVM model is transferred to test the translated code of each ARM function, without any modification. In this experiment, we use fastText to generate MAIE as it gives better results than word2vec. 
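A sketch of how the oversampling settings quoted above could be wired together with scikit-learn and imbalanced-learn is shown below. It encodes the stated choices (k_neighbors=2, sampling_strategy=0.002, duplicating the single vulnerable embedding three times), while the kernel choice and the function name are assumptions, as the paper does not specify them.

import numpy as np
from imblearn.over_sampling import SMOTE   # assumes imbalanced-learn is installed
from sklearn.svm import SVC

def train_vulnerability_detector(benign_embs, vuln_emb):
    """Train the x86-side SVM used in the case studies."""
    # Duplicate the lone vulnerable sample three times so the minority
    # class size exceeds k_neighbors=2, as described in the text.
    X = np.vstack([benign_embs, np.tile(vuln_emb, (3, 1))])
    y = np.array([0] * len(benign_embs) + [1] * 3)
    X_res, y_res = SMOTE(k_neighbors=2,
                         sampling_strategy=0.002).fit_resample(X, y)
    clf = SVC()   # kernel and regularization settings are assumptions
    return clf.fit(X_res, y_res)

# The trained classifier is then applied, unchanged, to embeddings of
# ARM functions after translating them to x86.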
Baseline Method. Our work aims to address the data scarcity issue in low-resource ISAs. To achieve this, we introduce UNSUPERBINTRANS, which can translate binaries from a low-resource ISA to a high-resource ISA. By applying UNSUPERBINTRANS, we can use the SVM model trained on x86 to test a binary in ARM by translating the ARM binary to x86. To showcase the effectiveness of the translation, we perform a baseline comparison. The baseline model we consider is an SVM model trained and tested on ARM, without employing any translation. As one would expect, the model trained and tested on ARM is likely to outperform a model trained on x86 and tested on the translated code (from ARM to x86). That is, the baseline model represents the best-case scenario. Moreover, if the performance difference between the baseline and our model is small, it signifies effective translation. Table 4 shows the results. We can see that our model consistently comes close to the baseline in most cases. The performance of UNSUPERBINTRANS across all metrics is impressive. For example, UNSUPERBINTRANS has a 100% true positive rate, indicating that it identifies all the vulnerable ARM functions accurately. Therefore, we can conclude that UNSUPERBINTRANS has exceptional translation capabilities and can be applied to the vulnerability discovery task with high reliability. Efficiency We evaluate the training time of UNSUPERBINTRANS, which needs to first generate MAIE using fastText (Part I), then learn CAIE (Part II), and finally train UNSUPERBINTRANS using the CAIE and the two mono-architecture datasets (Part III). The total training time is the sum of the three parts. Part I takes approximately 20 minutes for the MAIE generation of ARM and x86. Part II takes around 15 minutes for CAIE generation. Part III takes approximately 16 hours for the training of each optimization level. Thus, the total training time is around 16.5 hours. Related Work The related work can be divided into traditional and machine/deep learning-based approaches. Each can be further divided into mono-architecture and cross-architecture approaches. Machine/Deep Learning-based Code Similarity Comparison Mono-Architecture Approaches. Recent research has demonstrated the applicability of machine/deep learning techniques to code analysis (Han et al., 2017; Ding et al., 2019; Van Nguyen et al., 2017; Phan et al., 2017; Yan et al., 2019; Massarelli et al., 2019). Lee et al. 
propose Instruction2vec for converting assembly instructions to vector representations (Lee et al., 2017). PalmTree (Li et al., 2021) generates instruction embeddings by adopting the BERT model (Devlin et al., 2018). However, all these approaches work on a single ISA. Cross-Architecture Approaches. A set of approaches target cross-architecture binary analysis (Feng et al., 2016; Xu et al., 2017; Chandramohan et al., 2016; Zuo et al., 2018; Redmond et al., 2019). Some exploit code statistics. For example, Gemini (Xu et al., 2017) uses manually selected features (e.g., the number of constants and function calls) to represent basic blocks, but ignores the meaning of instructions and the dependencies between them, resulting in significant information loss. InnerEye (Zuo et al., 2018) uses an LSTM to encode each basic block into an embedding, but needs to train a separate model for each ISA. For these approaches, in order to train their models, cross-architecture signals (i.e., labeled similar and dissimilar pairs of code samples across ISAs) are needed, which require substantial engineering effort. For example, InnerEye (Zuo et al., 2018) modifies the backends of various ISAs in the LLVM compiler (LLVM) to generate similar and dissimilar basic block pairs in different ISAs; the dataset collection is identified as one of the challenges of that work. We instead build an unsupervised binary code translation model that does not require large parallel corpora. More importantly, our work can resolve the data scarcity problem in low-resource ISAs. The concurrent work UniMap (Wang et al., 2023) also aims to address the data scarcity issue in low-resource ISAs. We propose entirely different approaches. UniMap learns cross-architecture instruction embeddings (CAIE) to achieve the goal. In contrast, our work takes a more progressive stride by translating code from low-resource ISAs to a high-resource ISA. As shown in Figure 2, UniMap stops at the CAIE learning stage, while our work moves beyond this point, concentrating on binary code translation. As we translate binary code to a high-resource ISA, our approach has several advantages. One advantage is that we can generate function embeddings by directly summing the CAIE of the instructions within a function and use the function embeddings to measure similarity (as illustrated in the function similarity comparison task). This eliminates the need for downstream model training, which is required by UniMap. Another advantage is that we can directly use existing downstream models that have already been trained on the high-resource ISA to test the translated code (as illustrated in the vulnerability discovery task). UniMap, in contrast, needs to retrain the existing models using the learned CAIE. Discussion Intermediate representation (IR) can be used to represent code of different ISAs (angr Documentation). Given two pieces of binaries from different ISAs that have been compiled from the same piece of source code, even if they are converted into a common IR, the resulting IR code still looks quite different (see Figures 1 and 3 of (Pewny et al., 2015) and the discussion in (Wang et al., 2023)). As a result, existing works that leverage IR for analyzing binaries across ISAs have to perform further advanced analysis on the IR code (Pewny et al., 2015; David et al., 2017; Luo et al., 2019b, 2023). 
Conclusion Deep learning has been widely adopted for binary code analysis. However, the limited availability of data for low-resource ISAs hinders automated analysis. In this work, we proposed UNSUPERBINTRANS, which leverages the easy availability of datasets for x86 to address the data scarcity problem in low-resource ISAs. We conducted experiments to evaluate its performance. For vulnerability discovery, UNSUPERBINTRANS is able to detect all vulnerable functions across ISAs. Thus, UNSUPERBINTRANS offers a viable solution for addressing the data scarcity issue in low-resource ISAs. NLP-inspired binary code analysis is a promising research direction, but not all NLP techniques are applicable to binary code analysis. Therefore, works like ours that identify and study effective NLP techniques for binary code analysis are valuable in advancing exploration along this direction. This work demonstrates how NLP techniques can be leveraged in binary code analysis and underscores the practical utility of such approaches. Limitations In the evaluation, we consider two ISAs, x86 and ARM. There are many other ISAs, such as MIPS and PowerPC. We leave the evaluation of whether our approach works for other ISAs as future work. In this work, we consider instructions as words, which is a natural choice and works well according to our evaluation. However, there exist other choices. For instance, we may also consider raw bytes (Liu et al., 2018; Guo et al., 2019; Raff et al., 2018) or opcodes and operands (Ding et al., 2019; Li et al., 2021) as words. An intriguing research direction is to study which choice works best and whether it depends on ISAs and downstream tasks. Given the large variety of binary analysis tasks and their complexity, we do not claim that our approach can be applied to all tasks. The fact that it works well for two important tasks (vulnerability discovery and code similarity detection) is encouraging; demonstrating our approach on other applications would be of great use. Much research can be done to explore and expand the boundaries of the approach. Figure 1: Control flow graph and basic block. Table 1: Statistics of datasets. Table 2: Three examples of assembly code translations. Table 3: Accuracy of the function similarity task. Table 4: Performance of the vulnerability discovery task (%).
6,608
2024-04-29T00:00:00.000
[ "Computer Science", "Engineering" ]
The pre-existing population of 5S rRNA effects p53 stabilization during ribosome biogenesis inhibition The pre-ribosomal complex RPL5/RPL11/5S rRNA (5S RNP) is considered the central MDM2-inhibitory complex that controls p53 stabilization during ribosome biogenesis inhibition. Although its role is well defined, the dynamics of 5S RNP assembly still require further characterization. In the present work, we report that MDM2 inhibition is dependent on a pre-existing population of 5S rRNA. INTRODUCTION Proliferating cells are characterized by an elevated production of the cellular components needed for the survival of newborn cells. Protein synthesis is necessary not only to ensure cellular functions but also to achieve the proper outcome of cell division. Therefore, regulation of ribosome biogenesis and the cell cycle must be tuned to guarantee the survival of doubling cells [1]. Different studies have described how positive stimuli like nutrients, growth factors, cytokines and mitogens increase both ribosome biogenesis and protein synthesis to allow appropriate cell growth and proliferation. In contrast, under stress situations like starvation, hypoxia or DNA damage, proliferating cells negatively modulate ribosome production to reduce protein synthesis and block cell cycle progression [2]. The main strategy used by cycling cells to coordinate cell proliferation and ribosome biogenesis is to share regulatory elements [3]. In eukaryotes, ribosome biogenesis requires the activity of all the RNA polymerases. In particular, RNA polymerase I (POLI) drives the transcription of a precursor rRNA (45S), which, after multi-step processing, gives rise to the three mature rRNA species 28S, 5.8S and 18S. The transcription of ribosomal protein mRNAs depends on RNA polymerase II (POLII), while RNA polymerase III (POLIII) is responsible for the transcription of 5S rRNA, tRNAs and some small nuclear RNAs [4,5]. While the 28S, 5.8S and 5S rRNAs and the large ribosomal proteins (LRPs) constitute the 60S subunit, the 40S subunit is composed of the 18S rRNA and small ribosomal proteins (SRPs). Several oncogenes, like c-Myc, are able to enhance the production of ribosomal components by stimulating all three RNA polymerases [6][7][8][9], while tumor suppressors like p14 ARF or p53 are able to bind and inhibit the activity of several transcription factors, like SL1 and UBF, needed for polymerase I transcription, leading to ribosome biogenesis inhibition [10,11]. On the other hand, several alterations in ribosome biogenesis, such as rRNA transcription inhibition or rRNA processing impairment, lead to a coordinated induction of p53 activity. In fact, different ribosomal proteins such as RPL11, RPL4, RPL5, RPL23, RPL37, RPS15, RPS20, RPS27a, RPS27, RPS27l and RPS25 exert extraribosomal functions and are able to bind and inhibit the E3 ubiquitin ligase Mouse Double Minute 2 (MDM2), leading to the stabilization of the tumor suppressor p53 [12][13][14][15][16][17][18][19][20][21]. Recent studies have better characterized the ribosomal proteins/MDM2/p53 axis and identified 5S rRNA as a mandatory factor for p53 stabilization in response to ribosome biogenesis inhibition. In particular, it has been demonstrated that 5S rRNA is present in a 60S pre-ribosomal complex with RPL5 and RPL11, defined as the 5S RNP, which is presently considered the key regulator of MDM2 inhibition [22,23,24]. 
There is evidence that 5S rRNA and RPL5 are very abundant in the nuclear fraction with respect to RPL11, which accumulates in the nucleoplasm only when ribosome biogenesis is blocked [22]. This may suggest that MDM2 inhibition could depend on 5S RNP assembly during ribosome biogenesis inhibition. In the present work we found that inhibition of 5S rRNA neosynthesis does not efficiently affect p53 stabilization after inhibition of rRNA transcription. Thus, in order to verify whether MDM2 inhibition depends on a pre-existing rather than a newly synthesized 5S rRNA fraction, we measured both populations bound to MDM2 during rRNA transcription inhibition. We found that pre-existing 5S rRNA represents the major fraction bound to MDM2 under ribosomal stress, thus indicating its importance in p53 stabilization. Moreover, our results suggested that RPL5 and RPL11 are both necessary for 5S rRNA accumulation in cells. Indeed, the recruitment of the pre-existing population of 5S rRNA allows the efficient inhibition of the increasing MDM2 molecules during ribosome biogenesis inhibition. TFIIIA depletion leads to 5S rRNA neosynthesis inhibition Previous works have characterized the involvement of the 5S RNP in p53 stabilization after ribosome biogenesis inhibition. In particular, it has been demonstrated that RPL11, RPL5 and 5S rRNA act as a ternary complex in a mutually dependent manner [23]. On the other hand, 5S rRNA and RPL5 are very abundant in the non-ribosomal fraction and distributed between the nucleolus, nucleoplasm and cytoplasm, while RPL11 is principally present in the nucleolus and cytoplasm, without any accumulation in the non-ribosomal fraction. Moreover, after ACTD treatment, the level of non-ribosomal RPL11 considerably increases in the nucleoplasm, in contrast to 5S rRNA and RPL5 [22]. To better clarify the dynamics of 5S RNP assembly, in the present work we investigated the recruitment of the neo-synthesized and pre-existing 5S rRNA populations to MDM2 binding during ribosome biogenesis inhibition. In a preliminary set of experiments, we set up the conditions to down-regulate the expression of 5S rRNA, according to previous studies [22,23]. In order to inhibit the transcription of 5S rRNA by RNA polymerase III (POLIII), we employed a siRNA against TFIIIA, the transcription factor responsible for the specific recruitment of POLIII to the 5S rRNA promoter, as described previously. Seventy-two hours after the beginning of the transfection procedure, we measured the level of TFIIIA expression in human colon carcinoma-derived cells (HCT116) by RT-qPCR, obtaining an efficient inhibition of about 90% with respect to control cells (Figure 1A) transfected with a non-silencing siRNA (NS). After TFIIIA interference, the amount of newly synthesized 5S rRNA was efficiently reduced, as shown by the analysis of captured 5-Ethynyl Uridine (5-EU)-labeled 5S rRNA via a click chemistry-based approach (Nascent RNA Capture Kit, Thermo Scientific). 5S rRNA neosynthesis inhibition was also confirmed by autoradiographic analysis (Supplementary Figure S1A). In contrast, the total 5S rRNA amount was not significantly affected by TFIIIA interference (Figure 1C), despite the reduction in 60S production observed by polysome profile analysis (Figure 1D) and the failure of 28S processing shown by autoradiographic analysis of rRNA precursors (Supplementary Figure S1B), thus suggesting that a non-ribosomal 5S rRNA fraction is still present in cells after TFIIIA inhibition. 
5S rRNA neosynthesis inhibition does not efficiently abrogate p53 stabilization after inhibition of ribosome biogenesis In order to verify how 5S rRNA neosynthesis inhibition affects p53 stabilization, we treated TFIIIA-silenced HCT116 cells with Actinomycin D (ACTD, 8 nM) for 4 and 8 hours. Western blot analysis indicated that p53 levels were in general much lower in TFIIIA-silenced samples in comparison with control cells. However, upon treating cells with ACTD, the amount of p53 progressively increased, despite the inhibition of 5S rRNA transcription (Figure 2A). In contrast, this was not the case in cells in which the expression of RPL11 or RPL5 was reduced by RNA interference. These data were confirmed by using a different set of siRNAs against TFIIIA and RPL11 mRNAs (Supplementary Figure S3). Moreover, a smaller difference between TFIIIA-depleted and RPL11- or RPL5-depleted samples was observed in terms of p53 stabilization after the selective inhibition of POLI transcription by treatment with the non-intercalating agent CX-5461 (Supplementary Figure S4). These data suggest that the suppression of 5S rRNA synthesis does not abrogate the p53 response to inhibition of ribosome biogenesis, in contrast to RPL11 and RPL5 reduction. After ribosome biogenesis inhibition, the assembly of the 5S RNP complex occurs even under a reduction of 5S rRNA neosynthesis The data reported above indicated that, during ACTD treatment, p53 stabilization occurred also in the absence of 5S rRNA neosynthesis. In order to ascertain whether in this case the 5S rRNA no longer constitutes the 5S RNP complex, we performed TFIIIA RNA interference in HCT116 cells and evaluated the amount of 5S rRNA co-immunoprecipitated with MDM2 at 4 and 8 hours of ACTD treatment. The data obtained showed that, in control cells, ACTD treatment led to a progressive, time-dependent increase of co-immunoprecipitated 5S rRNA and MDM2 (Figure 3A, Figure 3B). Interestingly, also in TFIIIA-silenced cells we found a similar correlation, thus indicating that at 8 hours of ACTD treatment the 5S rRNA bound to MDM2 was not the neo-synthesized fraction. Moreover, RPL11 was co-immunoprecipitated with MDM2 and 5S rRNA despite TFIIIA silencing, thus suggesting that the formation of the typical 5S RNP complex occurred (Figure 3A). In contrast, after reducing RPL11 expression by RNA interference, we observed a clear reduction of co-immunoprecipitated 5S rRNA and MDM2 after ACTD treatment for both 4 and 8 hours, indicating that the newly synthesized RPL11 fraction is involved in p53 stabilization (Figure 3B). Taken together, these data indicated that the reduction of 5S rRNA neosynthesis does not efficiently hinder 5S RNP binding to MDM2 after ribosome biogenesis inhibition. Pre-existing 5S rRNA is recruited for MDM2 binding during ribosome biogenesis inhibition The data reported above suggest that a non-ribosomal fraction of 5S rRNA, stored in the cell, takes part in the assembly of the 5S RNP complex during a prolonged inhibition of rRNA transcription. To verify this hypothesis, we measured changes in the different populations of 5S rRNA bound to MDM2 under ribosome biogenesis inhibition. In particular, we labeled HCT116 cells with 5-ethynyl uridine (5-EU), at a dose which did not stabilize p53 (Supplementary Figure S5), for 8 hours in the presence or absence of ACTD treatment.
As represented in Figure 4A, we extracted RNA from MDM2-immunoprecipitated samples (total fraction), then we captured the labeled fraction (neo-synthesized fraction) via a click chemistry approach (Nascent RNA Capture Kit®, Life Technologies, Eugene, OR, USA) and collected the unlabeled fraction from the supernatant of the capturing phase (pre-existing fraction). Following this approach we observed that, after ACTD treatment, the increase in the total amount of MDM2-immunoprecipitated 5S rRNA is mainly dependent on the pre-existing population of 5S rRNA rather than on the neo-synthesized captured fraction, even under 5S rRNA neosynthesis inhibition (Figure 4B and 4C). These results demonstrate that under ribosome biogenesis inhibition, in addition to neo-synthesized 5S rRNA, a pre-existing fraction of 5S rRNA is recruited into the 5S RNP, ensuring the efficient inhibition of all the increasing MDM2 molecules. DISCUSSION Here we reported that the inhibition of 5S rRNA synthesis did not abrogate p53 accumulation due to ActD treatment, in contrast to RPL5 or RPL11 neosynthesis inhibition. Therefore, these data might suggest that the stabilization of p53 after inhibition of ribosome biogenesis can occur without the contribution of 5S rRNA, being exclusively due to RPL11 and RPL5 binding to MDM2. This appears to be in contrast with the observation that, under a prolonged down-regulation of rRNA synthesis by POLR1A RNA interference, p53 stabilization was induced as a consequence of MDM2 inhibition by the 5S RNP ternary complex, which contains RPL11, RPL5 and 5S rRNA [23]. In order to clarify this apparent discrepancy, we measured the total amount of 5S rRNA bound to MDM2 during a progressive hindering of ribosome biogenesis by ActD treatment. We found that, despite the inhibition of 5S rRNA synthesis, a progressive increase of 5S rRNA co-immunoprecipitated with MDM2 occurred. These results confirmed that during ribosome biogenesis inhibition the 5S rRNA is necessary for the formation of the 5S RNP, thus suggesting that a pre-existing population of 5S rRNA is involved in 5S RNP assembly. To verify this point, we measured the amount of total, neo-synthesized and pre-existing 5S rRNA from MDM2-immunoprecipitated samples after ActD treatment. The data obtained showed that, under inhibition of ribosome biogenesis, the observed increase in total 5S rRNA co-immunoprecipitated with MDM2 is mainly dependent on the pre-existing population, even after TFIIIA interference. Since RPL5 and RPL11 silencing led to a strong inhibition of p53 induction after ACTD treatment, it was very likely that the newly synthesized RPL5 and RPL11 are responsible for stored 5S rRNA recruitment and accumulation in cells. Figure 4. (A) Experimental design relative to the collection of 5S rRNA fractions from MDM2-immunoprecipitated samples. We labeled HCT116 cells with 5-EU and we performed MDM2 immunoprecipitation on the whole protein extract. The RNA extracted from immunoprecipitated samples was in turn subjected to the capture of the labeled fraction (Click-iT Nascent RNA Capture Kit, Life Technologies, Eugene, OR, USA) with the collection of the unlabeled fraction, or was used directly as the total RNA fraction. RNA fractions were reverse-transcribed and qPCR was performed in order to measure the different amounts of 5S rRNA (see also Materials and Methods). (B) Western blot analysis of MDM2 immunoprecipitation.
Whole protein extract was collected from HCT116 cells treated with ACTD (8 nM) for 8 hours, 72 hours after the siNS or siTFIIIA transfection procedure. Representative western blots show the level of MDM2 both in the INPUT and in the MDM2- and IgG-immunoprecipitated samples (IP). (C) Fold changes of the neo-synthesized and pre-existing 5S rRNA fractions relative to the variations of total 5S rRNA bound to MDM2 after TFIIIA interference and ACTD treatment. HCT116 cells were labeled for 8 hours, as indicated above in the experimental design. Whole protein extract was immunoprecipitated with an antibody against MDM2 and RNA was extracted. The labeled and unlabeled RNA fractions were isolated via a click chemistry approach and the neo-synthesized, pre-existing and total 5S rRNA amounts were evaluated by RT-qPCR (see also Experimental Procedures). Graph bars represent mean ± SEM of three independent experiments. On the other hand, the data indicated that RPL5 and RPL11 exert different dynamics relative to 5S RNP assembly. In fact, it has been shown that RPL11 is localized principally in the nucleolus and cytoplasm and significantly increases in the nucleoplasm only after ACTD treatment, in contrast to RPL5 and 5S rRNA, which accumulate between the nucleolus, nucleoplasm and cytoplasm. Moreover, RPL5 is required for 5S rRNA maturation [22] and accumulates with 5S rRNA in a non-ribosomal pre-5S RNP complex [24,25], while the accumulation of RPL11 in the nucleus has been related to PICT1 activity [26,27], which was identified as a mediator of 5S RNP incorporation into the nascent 60S subunit [25]. In order to better characterize the role of RPL11 and RPL5 in 5S RNP assembly during a progressive inhibition of ribosome biogenesis, we compared the amounts of RPL11 and RPL5 in cytoplasmic and nuclear extracts from HCT116 cells after ACTD treatment. Our data demonstrated that the amount of RPL11 in the nucleus was increased after 8 hours of ACTD administration. In contrast, we found that the amount of RPL5 in the nuclear extract remained constant during ACTD treatment (Supplementary Figure S6). While these data are in contrast to previous studies [28], they reflect other experimental observations [29,21]. The observed stability of RPL5 distribution in the nuclear fraction could reflect its role in the premature binding of 5S rRNA and in the accumulation of the pre-5S RNP complex. Since the inhibition of RPL5 expression abrogated the progressive stabilization of p53 under ribosome biogenesis inhibition, it is very likely that newly synthesized RPL5 continuously binds 5S rRNA, thus allowing a progressive storage of a pre-5S RNP complex. However, we found that 5S rRNA stability is dependent on both RPL5 and RPL11 synthesis, since after their inhibition by RNAi the amount of neo-synthesized 5S rRNA is drastically reduced (Supplementary Figure S7). Thus, the absence of RPL11 or RPL5 synthesis would abrogate the assembly of the pre-5S RNP complex, possibly due to their mutual protection [28]. This may well explain why the inhibition of RPL5 or RPL11 expression, but not that of 5S rRNA, hinders p53 stabilization even after long-term rRNA synthesis inhibition. The observation that RPL11 progressively accumulates in the nucleus during the inhibition of rRNA transcription suggests that, after ribosome biogenesis blockage, the availability of newly synthesized RPL11 could control the formation of the 5S RNP complex by recruiting the stored pre-5S RNP complex (for a schematic representation see Figure 5).
Our results shed new light on the dynamics of 5S RNP complex assembly under inhibition of rRNA transcription. MATERIALS AND METHODS Cell lines, culture conditions and siRNA transfection Human cancer-derived cell lines were cultured in monolayer at 37°C in a humidified atmosphere containing 5% CO2. HCT116 and A549 cells were cultured in Dulbecco's modified Eagle's medium (Sigma Aldrich, Saint Louis, MO, USA), while MCF7 cells were cultured in RPMI medium (Sigma Aldrich, Saint Louis, MO, USA), supplemented with 10% fetal bovine serum (FBS, Sigma Aldrich, Saint Louis, MO, USA). Actinomycin D (Sigma Aldrich, Saint Louis, MO, USA) was used at a final concentration of 8 nM for the indicated times. siRNA transfection was performed with Lipofectamine RNAiMAX and OPTI-MEM medium (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's protocol. Each siRNA was transfected at a concentration of 50 nM, and cells were always treated for the indicated times 72 h after the beginning of the transfection procedure. Two sets of siRNAs were used. The sequences of the first set, targeting TFIIIA, RPL11 and RPL5 mRNAs, are listed in the supplemental Materials and Methods section, while a non-silencing siRNA (Qiagen, Hilden, Germany) was used as transfection control. For the second set, the non-silencing siRNA and the sequences targeting TFIIIA or RPL11 were purchased from Life Technologies (Eugene, OR, USA). RNA extraction, reverse transcription and real-time qPCR RNA extraction from whole cell lysates or from immunoprecipitated samples was performed using TRI Reagent (Ambion, Austin, TX, USA) according to the manufacturer's protocol. Reverse transcription was performed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA) following the manufacturer's protocol. The relative amounts of TFIIIA mRNA, RPL11 mRNA, RPL5 mRNA and 5S rRNA were evaluated by the SYBR Green method (Applied Biosystems, Foster City, CA, USA) using the primer sequences listed in the supplemental Materials and Methods section, while β-glucuronidase was used as standard control and quantified with a TaqMan gene expression assay (Applied Biosystems, Foster City, CA, USA). Real-time qPCR was performed on an ABI Prism 7000 Sequence Detection System (Applied Biosystems, Foster City, CA, USA) with the 2^−ΔΔCT method (see the short numerical sketch below). The mean ΔCT value of the control sample was used in each experiment to calculate the ΔΔCT values of the sample replicates. Statistical analysis of the data obtained was performed using the paired t-test on three experimental replicates, considering data significant at a p-value < 0.05. Autoradiographic analysis of 3H-uridine-labeled rRNA For 18S and 28S rRNA processing analysis or for the evaluation of neo-synthesized 5S rRNA and tRNAs, pulse-chase labeling was performed. In brief, 72 hours after transfection with siRNA against TFIIIA or with NS siRNA, HCT116 cells were incubated in medium containing 2.5 μCi/mL of 3H-uridine for 1 hour (indicated as pulse). Cells were then changed to medium containing non-radioactive uridine at a concentration of 1 mM and harvested after 4 h (here described as chase). Total RNA was isolated with TRI Reagent (Ambion, Austin, TX, USA); the extracted RNA was quantified by spectrophotometric analysis and the radioactivity incorporation was evaluated by β-counter analysis.
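The relative quantification above reduces to simple arithmetic on CT values. The following is a minimal sketch of the 2^−ΔΔCT calculation under the setup described (a reference gene such as β-glucuronidase, and the mean ΔCT of the non-silencing control as calibrator); the CT numbers and function names are illustrative placeholders, not values or code from this study.

```python
import numpy as np

def ddct_fold_change(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-ddCT method.

    ct_target / ct_reference: replicate CT values of the gene of interest and of
    the reference gene (e.g., beta-glucuronidase) in the treated/silenced sample.
    ct_target_ctrl / ct_reference_ctrl: replicate CT values in the control sample
    (non-silencing siRNA), whose mean dCT is used as calibrator.
    """
    ct_target = np.asarray(ct_target, dtype=float)
    ct_reference = np.asarray(ct_reference, dtype=float)

    # dCT of each replicate: target normalized to the reference gene
    dct_sample = ct_target - ct_reference
    # calibrator: mean dCT of the control sample
    dct_control = np.mean(np.asarray(ct_target_ctrl, dtype=float)
                          - np.asarray(ct_reference_ctrl, dtype=float))
    # ddCT and fold change for each replicate
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Illustrative CT values (three replicates each); not taken from the paper.
fold = ddct_fold_change(ct_target=[26.1, 26.3, 26.0],
                        ct_reference=[21.0, 21.1, 20.9],
                        ct_target_ctrl=[23.0, 23.2, 22.9],
                        ct_reference_ctrl=[21.0, 21.2, 20.8])
print(fold.mean(), fold.std(ddof=1))  # mean and SD over replicates
```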
For 18S and 28S rRNA processing analysis, we collected samples both after 1 h of pulse labeling and after a chase time of 4 h, and equal amounts of 3H-uridine-labeled RNAs were resolved in formamide loading dye (25% formamide; 3.3 mM EDTA, pH 8; 85 µg/mL bromophenol blue/xylene cyanol) on a 1% agarose-formaldehyde gel (in 1X MOPS/6% formaldehyde running buffer, with EtBr at 0.05 mg/mL). For small RNA analysis, samples were collected after a chase time of 4 h and electrophoresed in formamide loading dye on a 10% polyacrylamide (19:1)/TBE/7 M urea gel in 0.5X TBE running buffer at a constant 200 V for 1 h. At the end of the run, the gel was stained with EtBr at 0.5 mg/mL in 0.5X TBE for 10 min at room temperature while shaking and destained for 10 min in 0.5X TBE at room temperature while shaking. For both RNA electrophoreses, the corresponding images were collected on a UV transilluminator. Labeled RNA was then transferred onto a Hybond N+ membrane (GE Healthcare, Little Chalfont, Buckinghamshire, UK) by semi-dry transfer in 0.5X TBE for small RNAs or by reverse capillary transfer in 20X SSC for 18S and 28S evaluation. RNAs were UV-crosslinked to the membrane with a Stratalinker® UV crosslinker, and the membranes were treated with EN3HANCE™ spray surface autoradiography enhancer (Perkin Elmer, Waltham, MA, USA). Finally, for the autoradiographic detection of labeled RNAs, the membranes were exposed for 6 days at −80°C on Amersham ECL Hyperfilms (GE Healthcare, Little Chalfont, Buckinghamshire, UK). Whole-cell protein extraction, nuclear/cytoplasmic fractionation and western blot analysis Whole-cell protein extraction was performed in lysis buffer (0.1 M KH2PO4, pH 7.5, 1% NP-40, supplemented with Complete protease inhibitor cocktail (Roche Diagnostics, Basel, Switzerland) and 0.1 mM β-glycerophosphate) for 20 min, and lysates were cleared by centrifugation at 14,000 RCF for 20 min. Nuclear/cytoplasmic fractionation was performed by lysing cell pellets in hypotonic buffer (10 mM HEPES, pH 7.9, 1.5 mM MgCl2, 10 mM KCl) supplemented with protease inhibitor cocktail (Roche Diagnostics, Basel, Switzerland) and centrifuging to separate the cytoplasmic fraction from the nuclear pellet. Nuclei were then lysed. Co-immunoprecipitation assay The immunoprecipitation assay was performed by cell lysis on ice in immunoprecipitation buffer (25 mM Tris-HCl, pH 7.5, 150 mM KCl, 5 mM MgCl2, 1 mM EGTA, 10% glycerol, 0.8% Igepal/NP-40, 0.4 U/µL RNaseOUT and protease inhibitor cocktail from Roche Diagnostics, Basel, Switzerland). The lysates were cleared by centrifugation and quantified by Bradford protein assay (Bio-Rad Laboratories, Hempstead, UK). Equivalent amounts (200 µg) of protein were incubated overnight at 4°C with rotation in immunoprecipitation buffer with 3 µg of anti-MDM2 (H-221; Santa Cruz Biotechnology, Santa Cruz, CA, USA). Protein A/G-coated agarose beads (Santa Cruz Biotechnology, Santa Cruz, CA, USA) were then added to the extracts and mixed by rotation for an additional 3 hours at 4°C. The beads were washed four times with immunoprecipitation buffer. At the end of the final wash, beads were resuspended in Laemmli loading buffer for western blot analysis or in TRI Reagent (Ambion, Austin, TX, USA) for RNA extraction. The extracted RNA was divided into two fractions and used either directly for total 5S rRNA analysis or to retrieve the 5-EU-labeled and unlabeled RNA fractions by using the Nascent RNA Capture Kit® (Life Technologies, Eugene, OR, USA), as described below.
Neo-synthesized and pre-existing 5S rRNA evaluation Neo-synthesized and pre-existing RNA amounts were evaluated with the Nascent RNA Capture Kit® (Life Technologies, Eugene, OR, USA) following the manufacturer's instructions. In particular, HCT116 cells were seeded in 100 mm culture dishes and labeled with 5-ethynyl uridine (5-EU) at 50 µM for the indicated times, 72 hours after the beginning of RNA interference. Cells were then collected after a pulse labeling of 1 hour and a chase time of 2 hours with 1 mM uridine, or after continuous 5-EU labeling (8 hours) in the presence or absence of 8 nM ACTD. At the end of the treatments, RNA was extracted with TRI Reagent (Ambion, Austin, TX, USA) from whole cells (for global 5S rRNA neosynthesis evaluation) or from MDM2-immunoprecipitated samples (for the measurement of neo-synthesized and pre-existing 5S rRNA bound to MDM2). The same amount of RNA (200 ng) was used directly for total fraction evaluation or was modified via click chemistry. Click chemistry-based biotinylation was carried out in Click-iT reaction buffer (25 μL Click-iT EU buffer, 4 μL of 25 mM CuSO4 and 1.25 μL of 10 mM biotin azide; reaction buffers 1, 2 and 3 were added only after 3 minutes) for 30 min at room temperature. Biotinylated RNA was precipitated with ammonium acetate for 10 minutes. After 10 minutes of centrifugation at 14,000 RCF, the pellet was washed twice with 75% ethanol. All the retrieved RNA was then
5,440.4
2016-12-09T00:00:00.000
[ "Biology" ]
G-Design of Complete Multipartite Graph Where G Is Five Points-Six Edges In this paper, we construct G-designs of complete multipartite graphs, where G is the graph with five points and six edges. Introduction Let K_v be a complete graph with v vertices, and let G be a simple graph with no isolated vertex. A G-design (or G-decomposition) is a pair (X, B), where X is the vertex set of K_v and B is a collection of subgraphs of K_v, called blocks, such that each block is isomorphic to G and any edge of K_v occurs in exactly a blocks of B. For simplicity, such a G-design is denoted by G-GD(v). Obviously, the necessary conditions for the existence of a G-GD(v) are a·v(v − 1) ≡ 0 (mod 2|E(G)|) and a·(v − 1) ≡ 0 (mod d), (1) where d is the greatest common divisor of the degrees of the vertices in V(G). Let K_{n_1,n_2,...,n_r} be a complete multipartite graph with vertex set X = X_1 ∪ X_2 ∪ ... ∪ X_r, where the X_i are disjoint parts of sizes n_1, ..., n_r, and let B be a collection of subgraphs of K_{n_1,...,n_r}, called blocks, such that each block is isomorphic to G and any edge of K_{n_1,...,n_r} occurs in exactly a blocks of B; such a pair is called a holey G-design. When the multipartite graph has k_i parts of size n_i, 1 ≤ i ≤ r, the holey G-design is denoted by G-HD(n_1^{k_1} n_2^{k_2} ... n_r^{k_r}). When all n parts have the same size t, it is denoted by G-HD(t^n) (also known as a G-decomposition of the complete multipartite graph K_n(t)). The existence of G-designs has been studied extensively. Let k be the number of vertices of G. When k ≤ 4, J. C. Bermond proved that condition (1) is also sufficient in [1]; when k = 5, J. C. Bermond gave a complete solution in [2]. When G = S_k, P_k and C_k, K. Ushio investigated the existence of G-designs of complete multipartite graphs in [3]. In this paper, G-designs of complete multipartite graphs, where G is the graph with five points and six edges, are studied. Necessary and sufficient conditions are given for the G-designs of the complete multipartite graph K_n(t). For graph-theoretical terms, see [4]. Fundamental Theorem and Some Direct Constructions Let G be a simple graph with five points and six edges (see Graph 1). G is denoted by (a, b, c)-(c, d, e). The lexicographic product G_1 · G_2 of the graphs G_1 and G_2 is the graph with vertex set V(G_1) × V(G_2) and an edge joining (u_1, u_2) to (v_1, v_2) if and only if either u_1 is adjacent to v_1 in G_1, or u_1 = v_1 and u_2 and v_2 are adjacent in G_2. We are only concerned with a particular kind of lexicographic product. Take any l × l latin square and consider each element in the form (α, β, γ), where α denotes the row, β the column and γ the entry, with 1 ≤ α, β, γ ≤ l. We can construct l^2 graphs G. Let K be a subset of positive integers. A pairwise balanced design PBD(v, K) of order v with block sizes from K is a pair (Y, B), where Y is a finite set (the point set) of cardinality v and B is a family of subsets (blocks) of Y which satisfy the properties: 1) if B ∈ B, then |B| ∈ K; 2) every pair of distinct elements of Y occurs in exactly a blocks of B. Lemma 2.5. If there exists a G-HD(t^k) for every k ∈ {3, 4, 5, 6, 8}, then there exists a G-HD(t^n) for every n ≥ 3. Proof. Let X be an n-element set (n ≥ 3) and let Z_t be the additive group of residues modulo t. For K = {3, 4, 5, 6, 8}, take Y = X × Z_t and, by applying Lemma 2.2, let (X, A) be a PBD(n, K). For each block A ∈ A with |A| = k ∈ K, since a G-HD(t^k) exists, take A × Z_t as its vertex set and let B_A be its block set. Let B be the union of the B_A over all blocks A ∈ A; then (Y, B) is a G-HD(t^n). Similar to the proof of Lemma 2.5, we have the following conclusions. Graph 1. The simple graph G with five points and six edges.
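The lexicographic product defined above can be built mechanically from its definition. The sketch below is a small illustrative construction (not part of the paper): it forms G_1 · G_2 as an edge set and applies it to the five-point, six-edge graph, reading the notation (a, b, c)-(c, d, e) as two triangles sharing the vertex c, with the empty graph on t = 2 vertices as the second factor.

```python
from itertools import product

def lexicographic_product(v1, e1, v2, e2):
    """Lexicographic product of G1 = (v1, e1) and G2 = (v2, e2).

    Vertices are pairs (u1, u2); (u1, u2) is adjacent to (w1, w2) iff u1 is
    adjacent to w1 in G1, or u1 == w1 and u2 is adjacent to w2 in G2
    (the definition given in the text).
    """
    e1 = {frozenset(e) for e in e1}
    e2 = {frozenset(e) for e in e2}
    vertices = list(product(v1, v2))
    edges = set()
    for (u1, u2), (w1, w2) in product(vertices, vertices):
        if (u1, u2) == (w1, w2):
            continue
        if frozenset((u1, w1)) in e1 or (u1 == w1 and frozenset((u2, w2)) in e2):
            edges.add(frozenset(((u1, u2), (w1, w2))))
    return vertices, edges

# G: two triangles (a,b,c) and (c,d,e) sharing vertex c (5 points, 6 edges).
G_vertices = ["a", "b", "c", "d", "e"]
G_edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e"), ("c", "e")]
# Second factor: empty graph on 2 vertices, so every vertex of G is "blown up"
# into 2 vertices and every edge of G becomes a complete bipartite K_{2,2}.
V, E = lexicographic_product(G_vertices, G_edges, [0, 1], [])
print(len(V), len(E))  # 10 vertices, 24 edges
```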
988.4
2012-07-30T00:00:00.000
[ "Mathematics" ]
Satellite Data Potentialities in Solid Waste Landfill Monitoring: Review and Case Studies Remote sensing can represent an important instrument for monitoring landfills and their evolution over time. In general, remote sensing can offer a global and rapid view of the Earth’s surface. Thanks to a wide variety of heterogeneous sensors, it can provide high-level information, making it a useful technology for many applications. The main purpose of this paper is to provide a review of relevant methods based on remote sensing for landfill identification and monitoring. The methods found in the literature make use of measurements acquired from both multi-spectral and radar sensors and exploit vegetation indexes, land surface temperature, and backscatter information, either separately or in combination. Moreover, additional information can be provided by atmospheric sounders able to detect gas emissions (e.g., methane) and hyperspectral sensors. In order to provide a comprehensive overview of the full potential of Earth observation data for landfill monitoring, this article also provides applications of the main procedures presented to selected test sites. These applications highlight the potentialities of satellite-borne sensors for improving the detection and delimitation of landfills and enhancing the evaluation of waste disposal effects on environmental health. The results revealed that a single-sensor-based analysis can provide significant information on the landfill evolution. However, a data fusion approach that incorporates data acquired from heterogeneous sensors, including visible/near infrared, thermal infrared, and synthetic aperture radar (SAR), can result in a more effective instrument to fully support the monitoring of landfills and their effect on the surrounding area. In particular, the results show that a synergistic use of multispectral indexes, land surface temperature, and the backscatter coefficient retrieved from SAR sensors can improve the sensitivity to changes in the spatial geometry of the considered site. Introduction and Literature Review Over the last decades, the ever-growing world population and industrial production, human activities, and urbanization have led to an increase in waste production [1]. Improper waste management has a negative impact on human health and the environment. In particular, poor waste management practices can lead to serious contamination of the soil, water, and atmosphere [2,3]. Primarily due to economic advantages, landfills and/or open dump sites are still a common method for the disposal of municipal solid waste (MSW). In certain countries, such as those in the European Union, specific targets exist that aim to decrease the volume of waste sent to landfills in favor of more sustainable disposal methods [4]. However, in many developing countries, waste management is often a low-priority issue, and improper and unsustainable disposal practices are used. As a result, landfills are often not engineered, and the waste is indiscriminately disposed of in an unsanitary manner [5]. The potential impacts of landfills on the environment comprise soil, water, and air pollution, which in turn affect human health and agriculture. The aim of this article is to review and collect the most significant works on the identification and monitoring of waste dumping sites using satellite RS technologies. Section 1 introduces the review and presents its outcomes, which are classified according to the measurement typology.
In particular, the research works are categorized according to the classification reported in Figure 1. This includes two main classes: the single-parameter-based analysis and the multiple-parameter-based analysis. The former, described in Section 1.1, includes studies that distinctly used information from a specific portion of the electromagnetic spectrum, while the latter includes studies that combined such information. These are described in Section 1.2. The single-parameter-based analysis includes the following sub-classes: • Vegetation-indexes-based analysis (presented in Section 1.1.1): works that mainly use vegetation indexes for the identification and monitoring of landfill sites are discussed in this section. In more detail, the considered indexes include the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), and other specific indexes that can be obtained through multispectral sensors. • Land-surface-temperature-based analysis (presented in Section 1.1.2): this section reviews the papers that use land surface temperature (LST) for analyses related to a landfill site. LST is derived from the thermal infrared channel of the sensors on board the Landsat satellites. • SAR-based analysis (presented in Section 1.1.3): this section includes the papers that use synthetic aperture radar (SAR) data. The multiple-parameter-based analysis, instead, includes the following sub-classes: • Multiple-optical-parameters-based analysis (presented in Section 1.2.1): works that use the LST in conjunction with vegetation indexes are included in this section. • SAR- and multiple-optical-parameters-based analysis (presented in Section 1.2.2): this section includes the papers that use SAR data combined with multispectral optical sensors and vegetation indexes or LST measurements. Thereafter, Section 2 presents the application case studies.
We applied the most common state-of-the-art techniques to known landfill sites in order to test the capabilities of the specific procedures and to provide a comprehensive overview of attainable results from sensor-based satellite measurements. This section includes a description of the sensor-based indexes used for the analysis and the obtained results. Finally, Section 3 provides the conclusion. It must be noted that in this review, we focused on the use of satellite RS data that represent the most widely applied category of data due to their wide distribution, availability, global coverage, and ease of access. Nevertheless, a few methods based on aerial platforms also exist [15], and monitoring by means of unmanned aerial vehicles (UAVs) is increasingly being adopted [16,17]. It is also worth mentioning that RS of the atmosphere can also represent a valid instrument for landfill studies by exploiting the gas emissions from dumping sites. Thus far, methane (CH 4 ) emissions were used by [18] to locate landfills using data acquired by an aircraft. Single-Parameter-Based Analysis This section describes the works that separately exploit satellite-sensor-derived information acquired at the visible-near infrared-short wave infrared (VIS-NIR-SWIR), thermal infrared (TIR), and microwave parts of the electromagnetic spectrum, respectively. It consists of the following subsections: the vegetation-indexes-based analysis, land-surfacetemperature-based analysis, and SAR-based analysis. Vegetation-Indexes-Based Analysis A method was proposed by [19] for landfill monitoring by exploiting both satellite and ground-based data. Multispectral satellite data acquired by means of very high resolutions (between approximately 2 and 0.5 m), Pleiades and WorldView2 sensors, and aerial orthophotos were used to identify points of interest for monitoring the area. Two spectral indexes were analyzed: the NDVI and the global environmental monitoring index (GEMI), which equally depends on reflectance in the visible-near infrared range but is less affected by atmospheric effects and the type of sensor used, although it is more sensitive to bare soil variation than the NDVI. These indexes were used for a multi-temporal change detection analysis of the area. The obtained information was applied to enhance in situ geomatic surveys. Results obtained for a study site in Calabria, Italy, demonstrated that satellite data can effectively support land surveys, detecting critical points and highlighting most important changes. Additionally, Ref. [20] utilized the relationship between dump-induced contamination and vegetation stress to identify uncontrolled landfills and validated their technique on the watershed area of the Venice lagoon, Italy. They integrated GIS and RS information acquired by the multispectral, pan-sharpened IKONOS sensor with a spatial resolution of 1 m. RS imagery was used to identify stressed vegetated areas associated with the dump. Specifically, information available from local authorities was used to identify known illegal landfills and as training and calibration information for a machine learning algorithm for the classification of stressed vegetated areas. The automatic classification method was integrated with a visual interpretation of aerial photographs and GIS information, such as road networks and population density. As a result, vegetation stress proved to be a good indicator for illegal landfill detection, and RS proved to be a successful technique for landfill identification. 
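As a concrete illustration of the index-based screening used in the studies above, the following sketch computes the NDVI from red and near-infrared reflectance for two dates and flags pixels whose value dropped markedly, in the spirit of the change-detection and vegetation-stress analyses just described. The arrays, thresholds and random values are placeholders, not data or parameters from the cited works.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / (nir + red + eps)

# Placeholder reflectance arrays standing in for two co-registered scenes
# (e.g., Sentinel-2 B8/B4 or an orthorectified VHR acquisition).
rng = np.random.default_rng(0)
red_t0, nir_t0 = rng.uniform(0.02, 0.2, (200, 200)), rng.uniform(0.2, 0.6, (200, 200))
red_t1, nir_t1 = rng.uniform(0.02, 0.2, (200, 200)), rng.uniform(0.1, 0.6, (200, 200))

ndvi_t0 = ndvi(nir_t0, red_t0)
ndvi_t1 = ndvi(nir_t1, red_t1)

# Change-detection style screening: pixels whose NDVI dropped markedly between
# the two dates are flagged as candidate stressed/disturbed vegetation.
delta = ndvi_t1 - ndvi_t0
stressed = (ndvi_t0 > 0.3) & (delta < -0.2)   # illustrative thresholds
print("flagged pixels:", int(stressed.sum()))
```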
Vegetation stress induced by the presence of dump sites is also an element of study in [21], in which several multispectral vegetation indexes were compared to identify the most suitable index for assessing environmental risks caused by MSW open dumps. The authors analyzed NDVI, SAVI, and MSAVI (modified soil-adjusted vegetation index, a modification of SAVI) obtained from Landsat 8 acquisitions over dump areas in Pakistan and compared them with three different criteria. The indexes' variation was studied in comparison to the distance from the source of contamination, finding that before the start of the dumping, no variation in vegetation health was observed in the dumps' surroundings. On the contrary, after MSW dumping, it was found that vegetation stress increased as the distance from the landfill decreased. They found that the MSAVI is the most suitable index for this purpose. The combination of GIS and RS is also presented in [22] and for risk assessment. The former developed an algorithmic criterion to compare the hazardous effects induced by different MSWs and utilized GIS to organize input data and very high-resolution data from QuickBird to obtain land cover maps of around the study area. The latter generated a risk assessment map of illegal waste-dumping areas by combining GIS and RS information. They used multispectral images acquired by the FORMOSAT-2 satellite to analyze the spectral characteristics of dumping areas, known from field surveys and EPA (Environmental Protection Administration) archives. They integrated this information in a risk assessment model composed of other geospatial factors related to waste dumping. The integration of spectral features greatly improves the risk assessment mapping results, although higher spectral resolution imagery may be useful in avoiding the misclassification of waste materials with bare soil. Additionally, Ref. [14] applied spectral features to investigate the possibility of identifying landfill areas using Landsat 5 satellite images, digital orthophotos, and a Corine 1990 land cover map. They analyzed the spectral signature in all seven bands of the thematic mapper sensor for eight classes of interest (including the "dump" class) individuated using the ISODATA (iterative self-organizing data analysis) automatic pre-classification technique. A multitemporal comparison of the images allowed the authors to discriminate the dump spectral signature from the others, including those related to classes with similar behavior, such as urban area class. The digital orthophotos and land cover map were used as an aid for the elaboration, validation, and interpretation of the Landsat 5 data. A change detection procedure was developed by [23] that aimed to distinguish waste burial sites. Two acquisitions from Landsat 7 and Landsat 8 were processed by means of the GRASS (geographic resources analysis support system) GIS, according to two methodologies. Similar to [14], the first procedure made use of an unsupervised classification method to group pixels with a similar spectral signature and then classified them with a maximum likelihood classifier. The second method was based on the tasseled cap transformation and discriminated the most changed areas by means of a "Greenness empirically normalization index" obtained from the greenness vegetation index. To validate the method, ground truth maps were realized through a comparison and visual analysis of available archive images with a high spatial resolution (from 1 m to 0.18 m). 
This second method led to significant and applicable results. A method to detect micro-dumps and greenhouse gases through the use of satellite RS data was developed by [24]. In particular, the suggested procedure was applied to Pleiades multispectral data with a spatial resolution of 2 m in the RGB and near infrared bands. The authors exploited both the spatial and spectral features extracted from the images through the scattering transform algorithm. The spatial characterization was achieved by extracting textural features from the images, while the spectral features were based on the brightness information. The obtained results showed good performance in greenhouse gas detection; on the contrary, the results of the micro-dump recognition, characterized by a small dimension and spectral and spatial inhomogeneity, reported a high false alarm rate. Finally, Ref. [25] proposed a method for identifying and ranking landfill sites using the city of Faisalabad as a case study. Indeed, the city is one of the largest industrial centers in Pakistan, and solid waste derived from its huge production is often dumped randomly near populated areas. RS data and GIS were used to rank the areas for developing or expanding an existing landfill site from the most suitable sites to the worst sites. As ranking criteria, three indexes derived from Landsat 8 acquisitions were used in addition to physical factors, including water bodies, roads, and the population. The indexes were the NDVI, NDWI, and the normalized difference built-up index (NDBI). The latter was used to assess the urban built-up size, the geographical distribution of the study area, and a comprehensive picture of the urban land cover. Additionally, Ref. [26] utilized the NDVI and land use (LU)/land cover (LC) maps for the identification and ranking of possible dumping sites in Saudi Arabia, using Sentinel-2 data. The obtained maps and NDVI were integrated with other geophysical and spatial information to obtain a set of information for selecting the most appropriate site for waste disposal. The effectiveness of a multitemporal analysis of the LST, obtained by Landsat satellites monitoring landfill sites, was proven by [27]. In this work, the study area, data, and methodology proposed were used with the aim of determining the waste-dumping regions in a landfill using Landsat Thematic Mapper and Enhanced Thematic Mapper Plus images. Images that covered a temporal range of 10 years were collected during the summer and winter seasons. The results demonstrated the effectiveness of the LST for determining waste location, especially considering the summer acquisition when the heat flux is more pronounced. Landsat data were also used in [28]: they successfully developed a methodology for identifying landfill fires by correlating them with landfill temperature anomalies. From Landsat 5 and Landsat 8 images, they obtained LST maps over a 17-year period. Also, Ref. [29] estimated LST using Landsat data acquired from 2000 to 2016 to demonstrate the correlation between this parameter and the amount of solid waste buried in the landfill. Subsequently, they used landfill-site-specific information in addition to other meteorological parameters to estimate the LST with a neural network algorithm. In [30], the authors assessed the LST distribution for two large landfill sites in Vietnam using Landsat 8 satellite images. They derived the LST from two images, acquired in 2019 and 2020, that showed the two different test sites. 
They proved that the landfill surface temperature is often higher than 40 • C and is much higher than the surrounding area, even compared to built-up areas. Additionally, the results show that the difference between the highest LST (at the landfill) and the lowest LST (at the area covered by vegetation) in both test areas was about 20 degrees. SAR-Based Analysis Ali et al. [31] used a single Radarsat-1 C-band SAR acquisition to monitor multiple MSW sites in the city of Riyadh, Saudi Arabia. In particular, they processed one Radarsat-1 backscattering image (in F2 mode), and a digital elevation model (DEM) was obtained using stereo-SAR techniques to orthorectify the image. Eventually, image segmentation was performed to map waste disposals by applying a supervised classification algorithm in PCI software version 10, using ground-truth samples as input. The results confirmed that satellite SAR imagery and digital image processing techniques could be valuable for MSW site monitoring. This study used the SAR backscatter to monitor waste disposal areas; the employment of SAR interferometry for the same purpose is also found in literature. Indeed, since several factors related to the observed surface affect the SAR backscatter signal, the combination of multiple images can provide more robust information on the characteristics of a landfill. For example, Ref. [32] monitored the change in waste disposal volume in a landfill located in Athens by using two pairs of ENVISAT ASAR images and applied the SAR interferometry technique to generate two DEMs. The two DEMs, referring to the first and (assumed as reference product) second dates, respectively, were subtracted to derive variations of the elevations. In particular, changes in the waste volume were examined using elevation profiles obtained with the two DEMs. The interferometric SAR (InSAR) technique was also applied in [33] in combination with unmanned aerial vehicle (UAV) photogrammetry and ground measurement in order to obtain information on landfill deformation and evaluate the risks related to the waste disposal site. The authors considered one of the largest engineered sanitary landfills in China, the Tianziling landfill, as a study case. In particular, Sentinel-1 data were used to assess deformation during the monitoring period, supported and integrated with ground-based and UAV data. The multi-source information was then used to identify the potential subsidence area of the landfill for the risk assessment analysis. The results showed that the approach permitted the detection of deformation in the range of millimeters to meters, allowing for the identification of unstable areas for risk estimation. In [34], the authors used a differential InSAR technique for monitoring landfill settlements in the Montegrosso-Pallareta landfill in Italy. They used multi-temporal COSMO-SkyMed interferometric data to mon-itor a low-magnitude landfill settlement with millimetric accuracy. In [35], the authors used the multitemporal InSAR technique for the time-series monitoring of deformation at the Xingfeng landfill in China. Multitemporal InSAR allows for the elimination of the influences of temporal and spatial decorrelation and atmospheric noise. The authors used images from Sentinel-1A, Radarsat-2, ALOS-2, and TerraSAR-X/TanDEM-X to measure the settlement and thickness of the landfill. 
The results showed that the coherence of the landfill during the initial stage of closure is primarily affected by temporal decorrelation, caused by considerable gradient deformation rather than by spatial decorrelation. Moreover, the results revealed the correlation between the time of settlement and thickness, with the settlement of younger areas usually found to be greater than that of older areas. Multiple-Parameter-Based Analysis This section describes works that combine satellite-sensor-derived information acquired at the VIS-NIR-SWIR, TIR, and microwave parts of the electromagnetic spectrum, respectively. It consists of the following subsections: multiple-optical-parameters based analysis and the combination of SAR and multiple-optical-parameters based analysis. Multiple-Optical-Parameters-Based Analysis Several authors proposed an approach that makes use of both temperature information and multispectral indexes. The methodology proposed in [36] follows the method already examined in [27] except for the use of different study sites and the introduction of an NDVI visual analysis. From Landsat Thematic Mapper data, they derived LST and NDVI images for a total of 81 acquisitions in different seasons throughout the year. The results confirmed that the LST of the landfill is higher than the surrounding vegetation and air temperature, and that this difference is more pronounced during the spring and summer seasons. In particular, they found that the LST is generally higher for an active landfill compared to a closed one, and it is higher for open landfill stages than capped landfill stages. Nevertheless, the LST was found to be affected by moisture caused by rain or snow. In fact, precipitation provides a suitable environment for bacteria to proliferate, inducing a high rate of gas production and heat generation. Finally, the NDVI was used to analyze vegetation conditions in and around the landfill. Additionally, Ref. [21], following their previous work [21], integrated the LST into the analysis of vegetation indexes with the aim of evaluating the bio-thermal influence of two dumping sites in Pakistan on the surrounding areas. The results previously obtained for the vegetation indexes were confirmed; moreover, they used Landsat 8 data to demonstrate that the LST decreases moving away from the sites. In 2008, Yang et al. [37] combined RS data in a GIS environment to analyze the leachate and gas emissions from landfills in a metropolitan area of Jiangsu Province, China. Particularly, the LST, the NDVI, and the CMI (clay minerals index) were considered in the study. According to the authors, these choices were motivated by the fact that differences in the LST between actual landfill sites and the surrounding areas are correlated with the speed and significance of the biodegradation processes occurring in the landfill and the release of the related gases. The NDVI, due to its correlation with the water content in the soil, acts as an indicator of the presence of leachates; the CMI allows for an assessment of the clay concentration in the surface since this material is commonly used to prevent the release of contaminated water and other liquids by confining landfills. Concerning the employment of RS data, a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was retrieved and processed to compute the LST and the multispectral indexes. 
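The LST values used in these studies are typically obtained from the Landsat thermal band with a single-channel workflow: digital numbers are converted to top-of-atmosphere radiance, then to brightness temperature, and finally corrected for surface emissivity. The sketch below follows this generic recipe for Landsat 8 TIRS band 10; the calibration coefficients, effective wavelength and emissivity are illustrative values of the kind read from the scene metadata, and the cited papers may use different variants and constants.

```python
import numpy as np

def landsat8_lst(dn_band10, ml=3.342e-4, al=0.1,
                 k1=774.8853, k2=1321.0789,
                 emissivity=0.95, wavelength_um=10.895):
    """Single-channel land surface temperature estimate (degrees Celsius).

    dn_band10: Landsat 8 TIRS band-10 digital numbers.
    ml, al, k1, k2: radiometric rescaling and thermal constants; in practice
    they are read from the scene MTL metadata (typical band-10 values shown).
    """
    dn = dn_band10.astype("float64")
    radiance = ml * dn + al                              # TOA spectral radiance
    bt = k2 / np.log(k1 / radiance + 1.0)                # brightness temperature (K)
    rho = 1.438e-2                                       # h*c/k_B, in m*K
    lam = wavelength_um * 1e-6                           # band-10 wavelength, m
    lst_k = bt / (1.0 + (lam * bt / rho) * np.log(emissivity))  # emissivity correction
    return lst_k - 273.15

# Placeholder DN array standing in for a band-10 subset over a landfill site.
dn = np.random.default_rng(1).integers(20000, 30000, size=(100, 100))
lst = landsat8_lst(dn)
print(float(lst.min()), float(lst.max()))
```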
Eventually, the obtained results were combined with available digital maps and regional data to perform a spatial analysis of the landfill areas by examining their environmental conditions and distance from water bodies and infrastructures. In 2018, Sarp et al. [38] analyzed the impact of an abandoned aggregate quarry used for uncontrolled waste disposal on groundwater, the LST, land surface moisture, and vegetation cover using GIS and RS technologies. In particular, for each of these domains, an index derived from the Landsat 8 imagery was selected for the years 2014 and 2017. In detail, the Landsat 8 sensor TIRS was employed to compute the LST and assess the changes that occurred during the specified time interval as an indicator of the influence of solid waste deposition on thermal absorption and emissions. Moreover, the multispectral indexes NDMI and SAVI were used to assess the impact of solid waste disposal on surface moisture and surrounding vegetation, respectively. Furthermore, Pearson's correlation coefficient was utilized to evaluate the degree of correlation between the variables mentioned above, i.e., LST, NDMI and SAVI, for both the time periods: 2014 and 2017. Eventually, groundwater contamination was assessed in the study area, with samples taken from four wells close to the site. According to the obtained results, solid waste disposal caused groundwater pollution, increased surface temperature, and reduced soil moisture. In 2021, Ganci et al. [39] proposed the employment of multispectral satellite images to characterize the state of activity of landfills since, according to the authors, when active, waste disposal sites show differences in surface conditions compared to neighboring areas. Indeed, regarding the usage of RS data, Landsat (4,5,7,8), EOS-Aster, Sentinel-2, and Doves-Planetscope acquisitions were used to derive the LST, NDVI and NDWI as indicators of the surface temperature, state of vegetation, and soil moisture, respectively. In particular, the employment of the NDWI allowed for an analysis of the soil moisture to identify the leachate contamination of the aquifers. Preliminary results, obtained by combining the indexes computed for the site and the surrounding areas, show how this approach can identify known the areas and levels of activity of known landfill sites. The LST was also used in combination with other parameters in [40]. In this work, the authors presented a characterization of the largest uncontrolled dump site in Egypt. They used RS data and other geological and geophysical data. They used Landsat 5 and Landsat 8 images to derive the LST, soil moisture maps, and LU/LCmaps. The LU/LC maps were used to derive the population settlement increment and to correlate the LST and soil moisture maps with the propagation of the municipal waste dump sites. The soil moisture map was used to characterize the leachate flow, while the LST was used to manifest the dump site surface temperature and to correlate the temperature to the quantity of waste. In the region of Saskatoon, Canada, Ref. [41] used satellite imagery and vector data to produce a probability map of illegal dump sites. The authors used the night-time light data from the visible infrared imaging radiometer suite (VIIRS) sensor, which is on board the joint NASA/NOAA Suomi National Polar orbiting Partnership (Suomi NPP) satellite, and the data obtained from the Landsat 8 satellite. 
The former were used to identify low-light regions with lower anthropogenic activities to identify suitable sites for landfill expansion, and the latter were used to derive the LST and the MSAVI index. Satellite data were combined with vector data that included highways, railways, and the locations of regular disposal sites to produce a probability map by means of the simple additive weighting (SAW) method. Related to the Landsat 8 satellite data, they found that the MSAVI is a significant factor for the detection of illegal dump sites, while the LST is less effective for the identification of smaller-scale landfills. Similarly, in [42], the authors considered the same study area to detect illegal dump sites using vector parameters and satellite images. The vector parameters were active, regular landfills and highways. Additionally, they used the LST, the enhanced vegetation index (EVI), derived from MODIS data, and the formaldehyde total column (HCHO), derived from the Aura ozone monitoring instrument (OMI). The EVI is an alternative to the NDVI that minimizes errors due to atmospheric conditions and canopy background noise. The results showed that the LST and highways are important parameters for the identification of illegal dump sites; the latter are important because road network intensity and accessibility appear to be important for their occurrence. The EVI is mined by the predominating uniform land cover of grassland and shrubs. Instead, the HCHO is instead less spatially sensitive, limiting its potential as an indicator. SAR-and Multi-Optical-Parameters-Based Analysis Ottavianelli et al. [43] introduced a visual assessment of hyperspectral and SAR RS for solid waste landfill management in the UK. In detail, the ERS and Envisat SAR images were processed to investigate the backscatter and interferometric coherence sensitivity to structural changes due to a landfilling operation. Moreover, a hyperspectral scene acquired by the PROBA-1 main instrument, the Compact High-Resolution Imaging Spectrometer (CHRIS), was employed (as a red-green-blue composite) to monitor the activities carried out in a landfill. According to the authors, this proved to be more adequate than multispectral data. Yonezawa [44] proposed the joint use of multispectral and SAR data by employing ALOS PRISM, AVNIR-2, and PALSAR acquisitions for monitoring waste disposal sites. In particular, through a visual inspection of pan-sharpened PRISM and AVNIR-2 images, junkyards were identified in the target area. Moreover, PALSAR backscatter images were also examined in HH polarization for both flight directions (ascending and descending) to identify surface changes at waste disposal sites by visual interpretation. Eventually, panchromatic and multispectral QuickBird data allowed for the identification of a wastetire disposal, leveraging its difference in the spectral signature with the surrounding land cover types. Cadau et al. [45] proposed an integrated system for landfill detection and monitoring using EO data, particularly by involving multispectral and SAR data. While optical data are used mainly for 2D monitoring purposes, SAR information is useful for detecting both 2D and 3D landfill variations. For this purpose, concerning optical data (RAPIDEYE and SPOT-5), the DDI index was introduced and tested, combining optical-derived vegetation indexes with image textural features. Furthermore, regarding SAR data, COSMO-Skymed interferometric pairs were processed. 
In particular, the HH and VV interferometric pairs (ascending and descending paths) were analyzed to extract the 3D surface elevation model. The authors stated that this procedure could be applied to structures characterized by a regular geometry (with high coherence values) such that the signal phase information allows for the detection of height variations. Additionally, Cadau et al. proved that, in general, the decrease in the coherence between different SAR image pairs is correlated to a change in the landfill geometry's scattering mechanisms. Moreover, night-time ASTER images, particularly the TIR bands, were processed to retrieve the LST trends and detect potential thermal anomalies. Furthermore, the study analyzed the presence of new potential landfills by analyzing the DDI map and the ASTER LST map. Moreover, Agapiou et al. [46] investigated the sensitivity of multispectral, hyperspectral, and SAR data for olive mill waste disposal monitoring. Concerning multispectral acquisitions, very high resolution (VHR) data were employed, including Pleiades (0.50 m), SPOT 6 (1.5 m), QuickBird (0.60 m), WorldView-2 (0.40 m), and GeoEye 1 (0.40 m) images. Particularly, the VHR images were processed by computing two indexes, combining the bands in the visible and VNIR part of the spectrum: the NDVI and the OOMW (olive oil mill waste index). Moreover, a PCA (principal component analysis) was tested to identify the optimum linear combinations of the original bands. Then, an IHS (intensity-hue-saturation) transformation was applied to perform the resulting image fusion. Eventually, the ISODATA algorithm and the LSU technique were used to discriminate the olive mill waste disposal by performing an unsupervised classification. A COSMO-SkyMed image (2.5 m) was processed to identify the disposal areas, which were distinguished as black targets due to the low backscattering of the signal (a significant presence of water). Moreover, a hyperspectral EO-ALI image (30 m) was employed through fusion techniques with the SAR image. However, the olive mill waste disposal areas were not successfully distinguished despite the improved spatial resolution. Summary and Discussion of the Literature Review In the first section, we reviewed the role of satellite RS in landfill monitoring. The analyzed studies reveal that satellite data offer promising results and represent a relevant instrument for the aforementioned task. Satellite acquisitions deliver a global view of the area of interest at a low cost and within a short time frame, and the different kinds of sensors on board provide advanced information in various portions of the electromagnetic spectrum. The reviewed works were classified according to the information retrieved from the satellite acquisitions, and four main categories of analysis were identified: vegetation-indexes-based analysis, land-surface-temperature-based analysis, multiple-optical-parameters-based analysis, and SAR- and multiple-optical-parameters-based analysis. The review revealed that satellite data are mainly used with the purpose of identifying and monitoring landfill sites, both regular and illegal ones, and also to estimate the hazardous effects of dumping sites on their surrounding areas and to assess their environmental impact. Only a few studies used satellite data for more specific analyses, such as monitoring landfill emissions, i.e., leachates and gases, estimating the amount of buried waste, identifying landfill fires, and monitoring waste from olive mills.
This suggests that satellite data are mainly applied for the investigation of large-scale events instead of more refined analyses. In fact, the sensors used in the analyzed literature are mostly high-spatial-resolution sensors, e.g., those on board the Landsat missions. Fewer studies made use of the very-high-resolution instruments carried by the Pléiades, WorldView-2, SPOT 6, QuickBird, GeoEye 1, and COSMO-SkyMed satellites. This lower number of research studies may be due to the limited spectral resolution and data distribution policy. Most of the analyzed works exploited the information obtained from optical instruments for their specific purposes, while a smaller part used SAR acquisitions. Related to the former, the measurements and indexes calculated were almost always combined in order to take advantage of the heterogeneity of the information. The most common parameters used for the analysis were the NDVI, NDWI, and LST, which have proven to be effective in the identification and characterization of landfill areas. In relation to the NDVI and NDWI, the works also demonstrated the ability of the indexes to assess the negative effects of landfills on the vegetation of surrounding areas. Furthermore, the LST was also shown to be able to identify a dump site, and the correlation between an increase in this value and the presence of a landfill was demonstrated. A lower number of articles used indexes such as the GEMI, SAVI, MSAVI, NDBI, or CMI, which are very specific indexes mainly used to obtain secondary information for particular tasks. For example, the NDBI was used to discriminate buildings to define an urban area, and CMI was used to detect clay on top of a landfill. The DDI receives a separate mention, as it is not so common but was specifically developed for landfill identification. Multispectral images were also often used to locate landfills by deriving their spectral signature. However, this was found to be very similar to the case of built-up areas, and additional analyses were required for a correct discrimination, e.g., a multitemporal analysis. Concerning SAR acquisitions, they were often used in combination with optical images. SAR was used mainly for landfill monitoring purposes, exploiting the radar backscatter to observe variations in the surface scattering mechanism. Moreover, the interferometry technique was used to assess changes in the volume of the site and to derive DEMs of the area of interest. The optical images were used to facilitate the visual inspection and to obtain DDI or LST measures. In general, the analyzed works provided successful results, especially those that used the LST as an indicator and SAR-based methods, alone or in combination with other measurements. Spectral information is, on the contrary, more susceptible to errors as waste disposal sites are often misclassified as urban areas or bare soil. Furthermore, several studies have shown that the dimensions of landfill sites are decisive for the analysis. As expected, small landfill areas are more difficult to identify, and this characteristic makes the detection of newborn waste disposal sites difficult. We can therefore assume that, generally speaking, RS-based methods are effective and efficient for the identification and monitoring of large and known waste disposal sites. On the contrary, the use of this technology is more challenging in the case of smaller sites. It follows that RS techniques perform less in tasks such as the early warning or detection of illegal dumps. 
Certainly, the use of very-high-resolution images can overcome these limitations, but these are often not as widely and freely distributed as Landsat or Sentinel acquisitions. Case Studies Starting from the review on the application of EO data for landfill monitoring, some relevant techniques were selected and tested on predetermined test sites to demonstrate some application cases. The aim is to illustrate the behavior of the satellite-based measurements in relation to changes in the sites. In particular, the NDVI, NDWI, DDI, and LST were selected for the multispectral data analysis, while for the SAR data, the backscatter coefficient was considered. The analyses considered the above-mentioned indexes independently and in combination with each other. As an approach, a multitemporal analysis was performed. Firstly, very-high-resolution (VHR) historical images from Google Earth were visually analyzed to define the sites' geometries. Indeed, polygons were manually delineated through a visual analysis of these images. Moreover, the same images were inspected to observe possible changes in the sites over the past five years. This method allows for the comparison of images representing the same scene at different moments in time. Consequently, when changes affecting a site (e.g., land cover modification events and spatial variations within the site's perimeter) were visible, satellite data were downloaded and processed to observe the possibly correlated variations of the multispectral indexes, the LST, and the SAR backscatter. As a result, to provide a complete overview of such variations, in this section it is possible to find panels representing the above-mentioned indexes as images centered on the study sites. In addition to this visual representation, to quantify what is observable from the images, numerical estimates based on the aforementioned indexes are provided. Particularly, the tables contain the mean values of the indexes computed over the areas affected by change in the sites. These areas were delineated through a visual interpretation of the Google Earth VHR historical images. Concerning multispectral data, Sentinel-2 acquisitions were involved. Notably, a single image per year was considered for the comparison, so that all the acquisitions refer to the same period of the year, accounting for seasonal effects on the surface reflectance. Regarding the LST, Landsat 8 images close to the Sentinel-2 acquisition dates assumed for the other optical indexes' computations were considered and processed according to what was mentioned in Section 2.1.4. In addition, concerning the SAR data, Sentinel-1 images were considered to obtain the backscatter coefficient by downloading and processing the relevant products. Regarding the choice of the study sites, known dumping sites located in the regions of Pakistan and Mongolia were utilized as test sites. Indeed, as stated in the Introduction, MSW management represents a priority, especially in low- and middle-income countries, where the volume of waste generated is increasing rapidly and engineered waste disposal systems are often not present. The regions of interest are reported in Figure 2, which shows the countries of Pakistan and Mongolia and the dumping site located in Pakistan, near the city of Faisalabad, and the three sites located in Mongolia, in the city of Ulaanbaatar. Table 1 specifies the products utilized for the analysis and the related satellite acquisition platform. The area of acquisition is also reported.
NDVI The NDVI is the best-known and most widely used vegetation index for quantifying green vegetation, measuring plant health based on how a plant reflects light (usually sunlight) at specific frequencies. The NDVI calculation normalizes green leaf scattering in the near-infrared (NIR) wavelength and chlorophyll absorption in the red wavelength (RED), according to the following expression:

NDVI = (NIR − RED)/(NIR + RED) (1)

where NIR and RED are the reflectances provided by the Sentinel-2 Level L2A band 8 and band 4 products, respectively [47]. The equation above always results in an NDVI value between −1 and +1. A number between −1 and 0 suggests an inanimate object, such as roads, buildings, water bodies, or dead plants. Values close to zero generally correspond to barren areas of rock, sand, or soil. Low, positive values represent unhealthy plants and stressed leaves (with scrubland or grassland as the type of vegetation). In contrast, high values indicate healthy plants and leaves (with dense evergreen forest as the type of vegetation). NDWI The NDWI can also be considered an independent vegetation index complementary to the NDVI. The NDWI is sensitive to changes in the water content of vegetation canopies. The following equation (Equation (2)) describes how the NDWI is calculated:

NDWI = (GREEN − NIR)/(GREEN + NIR) (2)

where GREEN and NIR are the reflectances provided by the Sentinel-2 Level L2A band 3 and band 8 products, respectively [48]. The equation above results in an NDWI value between −1 and +1. A number between −1 and 0 suggests an inanimate object, such as roads, buildings, water bodies, or dead plants. Moreover, the higher the NDWI value between 0 and 1, the greater the water content of the vegetation canopies. The NDWI is also relatively insensitive to atmospheric conditions. DDI and SAVI The SAVI (soil-adjusted vegetation index) is a modification of the NDVI that corrects for the influence of soil brightness in areas with sparse vegetation [49], and it is defined as (Equation (3)):

SAVI = [(NIR − RED)/(NIR + RED + L)] × (1 + L) (3)

where NIR and RED are as defined above, and L is a canopy background adjustment factor, assumed equal to 0.5. The SAVI is used to calculate the DDI. The latter is not a widely used index for RS applications since, in contrast to the indexes mentioned above, it is a specific indicator for dump detection and monitoring. Indeed, it extends the NDVI idea by using a soil-adjusted vegetation index and combining it with the local texture of the image at the waste site. The index is based on the formula reported in Equation (4), where Entropy is a statistical measure of randomness that can be used to characterize the texture of an input image around a target pixel [45]. In this context, it is applied to the vegetation index images through the gray-level co-occurrence matrix (GLCM) [50].
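As a practical illustration, the following minimal NumPy sketch computes Equations (1)-(3), assuming the Sentinel-2 L2A bands have already been read and co-registered as reflectance arrays (the names b3, b4, and b8 are placeholders). The local-entropy helper is only a simple stand-in for the GLCM-based texture term entering the DDI; the exact DDI combination of Equation (4) is defined in ref. [45] and is not reproduced here.

```python
import numpy as np

def ndvi(nir, red):
    # Equation (1): reflectance from Sentinel-2 L2A band 8 (NIR) and band 4 (RED)
    return (nir - red) / (nir + red + 1e-10)

def ndwi(green, nir):
    # Equation (2): reflectance from Sentinel-2 L2A band 3 (GREEN) and band 8 (NIR)
    return (green - nir) / (green + nir + 1e-10)

def savi(nir, red, L=0.5):
    # Equation (3): soil-adjusted vegetation index with background factor L = 0.5
    return (nir - red) * (1.0 + L) / (nir + red + L + 1e-10)

def local_entropy(img, win=7, bins=32):
    # Shannon entropy of the grey-level histogram in a sliding window, a simple
    # stand-in for the GLCM-based texture measure used by the DDI in ref. [45].
    levels = np.digitize(img, np.linspace(np.nanmin(img), np.nanmax(img), bins))
    pad = win // 2
    padded = np.pad(levels, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + win, j:j + win].ravel()
            p = np.bincount(window, minlength=bins + 2).astype(float)
            p = p[p > 0] / p.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out

# Usage on small synthetic reflectance arrays (real data would come from the
# resampled Sentinel-2 band rasters):
b3, b4, b8 = (np.random.rand(64, 64) for _ in range(3))
veg, water = ndvi(b8, b4), ndwi(b3, b8)
texture = local_entropy(savi(b8, b4))
```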
Land Surface Temperature The LST is a key parameter that can be retrieved from TIR remotely sensed data. Many algorithms are available for LST estimation from satellite data. A method for surface temperature retrieval from Landsat data using a single-band (Band 10) algorithm, based on the radiative transfer equation, has been implemented, e.g., in [51] for the case of lakes, but it can be generalized to land surfaces by adopting appropriate values of the surface emissivity. To this aim, the emissivity can be assumed on the basis of an empirical relation (Equation (5)) suggested in a previous work [52]. The proposed algorithm for the land surface temperature estimation consists of deriving the atmospherically corrected spectral radiance for a single Landsat thermal band, exploiting a NASA web tool which provides the three parameters requested in the radiative-transfer-based Planck equation. The first step in this workflow is the radiometric calibration, which obtains the spectral radiances from the digital number values measured by the sensors on board the satellites, as shown in Equation (6):

L_λ = [(L_max,λ − L_min,λ)/(DN_max − DN_min)] × (DN − DN_min) + L_min,λ (6)

where L_λ is the spectral radiance, DN is the digital number, and L_min,λ and L_max,λ are the spectral radiances scaled on the basis of DN_min (the minimum pixel value before radiometric calibration) and DN_max (the maximum pixel value before radiometric calibration) reported in the metadata of the Landsat products. Then, the atmospherically corrected, surface-leaving spectral radiance L_T can be obtained from the following Equation (7):

L_T = (L_λ − L↑)/(τ ε) − [(1 − ε)/ε] L↓ (7)

where the atmospheric upwelling (L↑) and downwelling (L↓) radiances and the total atmospheric transmission (τ) are derived from the NASA-developed web tool for correcting thermal bands in Landsat data, which can be accessed at the following link: http://atmcorr.gsfc.nasa.gov (last accessed on 24 February 2023). The land surface temperature is finally obtained (Equation (8)) by inverting Planck's function:

LST = K_2 / ln(K_1/L_T + 1) (8)

where K_1 and K_2 are constants reported in the metadata of each Landsat satellite product (e.g., K_1 = 774.885 and K_2 = 1321.079 for Landsat-8/TIRS). In order to mitigate possible year-dependent climatological effects, each LST map reported in the results was obtained by averaging the LST values retrieved through the aforementioned procedure applied to two Landsat products collected in a period consistent with the acquisition days of the images used for computing the other indexes considered for the analysis.
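The LST workflow of Equations (6)-(8) can be sketched as follows; the rescaling factors, atmospheric parameters, and digital numbers used in the example are illustrative assumptions, whereas K_1 and K_2 are the Landsat-8/TIRS constants quoted above.

```python
import numpy as np

# Constants for Landsat-8/TIRS Band 10, reported in the product metadata (Equation (8))
K1, K2 = 774.885, 1321.079

def dn_to_radiance(dn, l_min, l_max, dn_min=1, dn_max=65535):
    # Equation (6): linear radiometric calibration of the digital numbers
    return (l_max - l_min) / (dn_max - dn_min) * (dn - dn_min) + l_min

def atmospheric_correction(l_sensor, tau, l_up, l_down, emissivity):
    # Equation (7): surface-leaving radiance using the transmissivity and the
    # up/downwelling radiances from the NASA web tool (atmcorr.gsfc.nasa.gov)
    return (l_sensor - l_up) / (tau * emissivity) - (1.0 - emissivity) / emissivity * l_down

def radiance_to_lst(l_surface):
    # Equation (8): inversion of Planck's function, result in degrees Celsius
    return K2 / np.log(K1 / l_surface + 1.0) - 273.15

# Example with assumed (illustrative) parameter values
dn = np.array([[21500.0, 22300.0], [21900.0, 22100.0]])   # Band 10 digital numbers
rad = dn_to_radiance(dn, l_min=0.1, l_max=22.0)            # metadata rescaling values
surf = atmospheric_correction(rad, tau=0.85, l_up=1.2, l_down=2.0, emissivity=0.97)
print(radiance_to_lst(surf))
```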
Backscatter Coefficient In the literature, the employment of SAR data has revealed significant results, mainly by providing information about the structural changes occurring in landfills or, generally speaking, waste disposal sites. Indeed, while multispectral data provide details concerning the radiometric characterization of the observed surface, SAR acts as a complementary technology thanks to its geometry-based observables. Moreover, with SAR data it is also possible to measure a variation occurring on the landfill surface when the optical signal is saturated or not exploitable because of weather conditions [45]. From an operational perspective, the radar signal is transmitted by the satellite down to the Earth's surface and scattered back towards the satellite's antenna by the Earth's surface itself. Concerning this mechanism, the SAR backscatter consists of the part of that signal recorded by the onboard sensor. Many factors related to the surface structure influence the SAR backscatter, including the surface's roughness and its electrical conductivity (or dielectric constant). Rough surfaces backscatter more radar energy towards the satellite antenna, while smooth surfaces return much less or even no energy. A second influence on the radar backscatter is the surface's electrical conductivity (dielectric constant), which is directly related to the moisture content of the soil: a high moisture content means a high backscatter, and a low moisture content means a low backscatter. Another factor influencing the SAR backscatter is the local incidence angle, defined by the incident radar beam and the normal to the intercepting surface, since the reflectivity from distributed scatterers decreases with increasing local incidence angles. Concerning the backscatter analysis, a couple of SAR acquisitions related to the two years considered were processed to obtain the backscatter coefficient in both polarizations (VH and VV). Regarding the processing, the two images were ingested in the SNAP software, version 8, to progressively apply all the operators needed to obtain the backscatter coefficient. Particularly, the following sequence of processing functions was applied: i. Apply Orbit File; ii. The above-mentioned functions represent the typical operators needed to process Sentinel-1 SLC data as described, e.g., in [53,54].
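Once the backscatter coefficient has been derived (e.g., with the SNAP operators mentioned above), a change analysis over a site essentially reduces to averaging the calibrated values inside the site polygon and comparing dates. The sketch below uses synthetic values and an assumed rectangular mask purely for illustration; real inputs would be the terrain-corrected sigma-nought rasters.

```python
import numpy as np

def sigma0_to_db(sigma0_linear):
    # Convert the calibrated backscatter coefficient from linear power to decibels
    return 10.0 * np.log10(np.maximum(sigma0_linear, 1e-10))

def mean_backscatter_db(sigma0_linear, aoi_mask):
    # Average over the area of interest in linear units first, then convert to dB,
    # to avoid biasing the mean by the logarithm
    return 10.0 * np.log10(sigma0_linear[aoi_mask].mean())

# Illustrative comparison of two dates over the same (assumed) landfill mask
sigma_2020 = np.random.gamma(shape=2.0, scale=0.02, size=(100, 100))  # synthetic
sigma_2021 = np.random.gamma(shape=2.0, scale=0.05, size=(100, 100))  # synthetic
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
delta_db = mean_backscatter_db(sigma_2021, mask) - mean_backscatter_db(sigma_2020, mask)
print(f"backscatter change over the site: {delta_db:+.1f} dB")
```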
Vegetation-Indexes-Based Analysis The Tsagaan Davaa landfill is located in Mongolia. A land cover change in the southernmost part of the landfill can be observed between 2020 and 2021, as shown on the left in Figure 3. Despite the cloud cover, the change was still visible. As can be found in the literature, the NDVI is a well-established index that is also frequently used in landfill detection because it can correlate vegetation stress to potential waste-induced contamination. For this purpose, the NDVI was applied to this site, and it is shown on the right side of the image before and after the event. The index clearly highlights the area of interest in both 2020 and 2021, reporting a lower value within the landfill than in the surrounding areas. The event is also clearly represented by the NDVI, which rose in 2021 when the change occurred. To quantify the increase in the NDVI values, the mean values over the area where the land cover change occurred (black dashed polygons in Figure 3) were computed before and after the above-mentioned event, and the values are reported in Table 2. The results show an increase in the NDVI values, which was expected due to the removal of waste material and the resulting land cover change suggested by the visual interpretation of the Google Earth imagery. The NDWI was analyzed for the Morangiin Davaa landfill in Mongolia, shown in Figure 4 on the left. The NDWI was used since the soil water content can be related to the leachate produced by the waste. In this site, between 2018 and 2019, an expansion of the waste disposal site occurred in the southern part. An increase in the NDWI values is noted for the site's southern part, according to expectations. Moreover, the above-mentioned NDWI increase can be observed not only in the illustration (Figure 4) but also by computing the mean values over the event-related area of the landfill (Table 3), suggesting the potential presence of leachates in the site, particularly in the area affected by the new waste disposal (in the southern area). (In the figures, the red line marks the landfill perimeter and the black dashed line the area affected by the land cover change.) In the Narangiin Enger landfill, Mongolia, an event occurred between 2020 and 2021: a land cover change in the southern area of the site, shown on the left in Figure 5. The DDI was developed specifically for landfill detection, and an application example is shown for this site. The land cover change is distinctly detected, since a decrease in the DDI values in that specific area of the site was observed. Also in this case, the mean values over the southern area of the site (black dashed polygon in Figure 5) were computed before and after the land cover change (Table 4). A decrease from −1.12 to −5.65 occurred in the area of interest, confirming what was expected.

Table 4. DDI mean values computed over the southern area of the Narangiin Enger landfill site. A decrease between 2020 and 2021 occurred due to the land cover change.
Index   2020     2021
DDI     −1.121   −5.645

Land-Surface-Temperature-Based Analysis The LST was computed for the Main Faisalabad landfill in Pakistan (see Figure 6). It appeared that there were no evident changes in the surface temperature distribution between the images representing the two years considered, nor, overall, in the shape of the landfill; the LST was able to effectively delineate the perimeter of the landfill from the surrounding area. It can be noted that the LST values in 2021 were higher than those registered in the same period of 2020 for the same scene, but what is important to underline is that the spatial distribution of the LST values in the images was proportionally conserved. As previously mentioned, since there were no evident changes between 2020 and 2021, the mean values were computed over the entire site area to investigate the possibility of delineating the landfill perimeter using the temperature information.
In this case, the comparison of the LST between the two years could be affected by differing annual climatology. To consider this effect, an overall temperature shift was computed between the two images by considering only the area outside the landfill, obtaining a shift value of 5.9 °C. As mentioned, it is worth focusing on the difference between the site's temperature and the surrounding area's temperature. For this purpose, the mean LST was computed inside (LST IN) and outside (LST OUT) the site to quantify the difference in both of the examined years. The results, shown in Table 5, suggest a temperature gap of about 5 °C that was conserved over time.
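The climatology correction described above can be summarized in a few lines. The arrays and the site mask below are synthetic stand-ins for the two LST maps, constructed so that the site anomaly is conserved between the years, as observed for the Main Faisalabad landfill.

```python
import numpy as np

def climate_corrected_change(lst_y1, lst_y2, site_mask):
    # Shift between the two acquisitions estimated from pixels outside the landfill,
    # then removed from the inside-site difference (as done for the 5.9 degC shift above)
    outside = ~site_mask
    shift = lst_y2[outside].mean() - lst_y1[outside].mean()
    inside_change = lst_y2[site_mask].mean() - lst_y1[site_mask].mean()
    return shift, inside_change - shift

# Illustrative arrays standing in for the two LST maps (degrees Celsius)
lst_2020 = np.full((50, 50), 30.0)
lst_2021 = np.full((50, 50), 35.9)
site = np.zeros((50, 50), dtype=bool)
site[20:35, 20:35] = True
lst_2020[site] += 5.0   # assumed persistent ~5 degC site anomaly
lst_2021[site] += 5.0
shift, corrected = climate_corrected_change(lst_2020, lst_2021, site)
print(f"climatology shift: {shift:.1f} degC, corrected site change: {corrected:.1f} degC")
```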
SAR-Based Analysis Alongside the indexes derived from optical data, SAR images were processed to obtain the backscatter coefficient in VH and VV polarizations. In this regard, the site of New Faisalabad, located in Pakistan, was selected as a reference site for the analysis. The site is reported on the left side of Figure 7, as shown in Google Earth. In these images, a significant expansion in the southern area of the site can be observed between 2020 and 2021. As shown in Figure 7, the backscatter highlighted the perimeter of the landfill, since high values of the backscatter coefficient occurred inside the site boundaries, while the surrounding area was characterized by lower values. The expansion of the site can be assumed, since the backscatter coefficient increased for both polarizations in the southern area of the site. The increasing backscatter coefficient can be attributed to a change in the scattering mechanism, confirming the expected surface variation suggested by the visual interpretation of the Google Earth images. Indeed, the surface of the area subject to the expansion changed from bare soil to a waste disposal that, given its heterogeneous composition and geometry, brought an increase in the signal intensity measured by the radar sensor. From the comparison represented in Table 6, it is possible to notice an increase of about 4 dB from 2020 to 2021 in both polarizations. SAR- and Multiple-Optical-Parameters-Based Analysis Further to the above-illustrated analysis of the single selected parameters, i.e., the multispectral indexes, the LST, and the SAR-derived backscatter, their joint application is worth showing. In particular, for the sake of simplicity, the three vegetation indexes and the LST were computed on the site adopted for the SAR-based analysis, i.e., the New Faisalabad landfill. As represented in Figure 8, the three indexes, i.e., NDVI, NDWI, and DDI, revealed the dump area relative to its surroundings. However, the land cover pattern affected the accuracy of the results. For example, the large, bare soil surface outside the dump, in the bottom left corner of the images, reported values for the indexes comparable to those inside the dump. Despite this, the southern part of the area of interest showed a slight decrease in the NDVI and an increase in the NDWI and DDI in 2021, according to the expectations regarding the change under analysis. In addition, the pixel distribution of the three indexes appeared more homogeneous in the 2021 images, causing the dump boundaries to be more defined in the southern part due to the waste disposal that occurred. As part of the analysis, the LST was computed from images acquired in the two considered years (see Figure 8).
Despite the slightly different temperature ranges observed in the two years, it can be noted that the southern part of the landfill (which is the part under investigation) was affected by an increase in the LST values when compared to the surrounding areas. This outcome is aligned with the results obtained using the other indexes considered. As in the previous analyses, the mean values were computed over the area affected by the change (the southern area of the site, highlighted with a black dashed line in Figure 8) for 2020 and 2021. The computed values show the expected trend for all the indexes (Table 7). Particularly, the NDVI decreased while the NDWI increased, by about 0.04 and 0.03, respectively. In addition, the DDI increased, showing a higher difference from 2020 to 2021 when compared to the other multispectral indexes (3.78). As previously mentioned, the comparison between the two LST images acquired in different years is affected by a climatology-dependent shift. As in the previous case study, the shift was computed between the two images by considering only the area outside the landfill, obtaining a value of 6.2 °C. Moreover, to evaluate the temperature change due to the waste disposal, the LST was computed by averaging the images over the specific area, i.e., the southern part of the site. As a result, a difference of 8.5 °C was obtained, so that if the climatology-dependent temperature shift is subtracted, an increase of 2.3 °C can be associated with the waste disposal that occurred. To provide an overview of the variations related to all the parameters, the SAR backscatter in both polarizations is also listed in Table 7. Discussion Due to the rapid urbanization process and the even more rapid increase in the volume of waste generated, efficient waste management is essential in today's society. Globally, and especially in low- and middle-income countries, most of the waste produced is dumped or discarded in various types of landfills, which are not always equipped with appropriate systems to collect leachates and gases. These are often open dumps or uncontrolled landfills that pose a real threat to environmental protection and public health. The identification and monitoring of these types of waste disposal systems are required in order to improve MSW management. RS represents a valid method for landfill monitoring since it can overcome the difficulties caused by on-site surveys, which are often time- and resource-consuming. Presently, RS platforms can carry a wide variety of heterogeneous sensors on board with different spatial and spectral resolutions, which are capable of remotely monitoring the Earth's surface and extracting valuable information for this specific purpose. This work reviewed the methods existing in the literature for the detection, mapping, and monitoring of landfill sites, both authorized and illegal, through satellite RS technologies. In fact, the large amount of available and often open-access data acquired by different satellite sensors has allowed for the development of these types of applications, making this topic a subject of increasing interest to the scientific community. An analysis of the literature showed that satellite RS data are sometimes used in combination with other methodologies, such as aerial photography, GIS support, and field surveying. Among the available satellite sensors, both passive, multispectral sensors and SAR sensors are considered.
In the first case, a large number of the analyzed works made use of freely distributed Landsat data, but very-high-resolution images, such as those acquired by the Pleiades and WorldView satellites, were also considered. Multispectral products are utilized to estimate well-known multispectral indexes, such as the NDVI and NDWI, or to derive specific indexes for dump detection, such as the DDI. Moreover, a large number of studies make use of Landsat sensors to derive the LST. On the other hand, SAR sensors, including the instruments carried by Sentinel-1, Radarsat, Envisat, and COSMO-SkyMed, are used to study variations occurring in the landfill sites. SAR images are mainly applied to derive information about water content and volume changes through SAR interferometry, or to monitor deformation by analyzing time-series acquisitions. The acquisitions from optical and SAR sensors are often combined to exploit their complementary information or for visual inspection purposes. In addition, it must also be pointed out that other RS platforms, such as aircraft and, lately, UAVs in particular, also exist and are used for similar purposes and with similar equipment. The results presented in the analyzed works demonstrate the validity of satellite RS-based methodologies. For example, thermal anomalies derived from the LST can aid in deriving landfill locations or waste disposals inside the landfills, and also in correlating the heat flux emitted from the waste to the quantity of buried waste. The main drawback is that the LST can be affected by local temperature anomalies caused by precipitation or urban heat islands. Additionally, the vegetation health status obtained from the NDVI represents an aid in locating dumping areas and assessing their impact on the environment. The landfill spectral signatures can also be exploited for the generation of land cover maps, although they have been shown to be prone to misclassification with urban areas. Regarding SAR data, interferometry is used to generate elevation models to obtain the height of the waste volume. Furthermore, the backscatter coherence is studied to correlate this parameter to changes in waste volume, and the backscatter signal to obtain information about moisture content. These complementary parameters, derived from multispectral and SAR sensors, are often combined for a comprehensive analysis. These state-of-the-art techniques were applied to selected landfill sites in order to provide an overall analysis. Hence, the NDVI, NDWI, and DDI were derived from Sentinel-2 images, the LST was obtained through Landsat sensors, and the backscatter coefficient was taken from Sentinel-1 products. For the Tsagaan Davaa site, the NDVI proved to be sensitive to the land cover change, since the values significantly increased over the relevant area. The NDWI analyzed for Morangiin Davaa also increased, as expected, in correspondence with the site's spatial variation (a new waste disposal). It is worth noting that this index is strictly related to the presence of water or liquid material. This means that its sensitivity to landfill changes, as for the other considered indexes, can be influenced by the waste type and composition. Concerning the DDI, the analysis conducted for the site of Narangiin Enger showed a sensitivity of the index to land cover changes. Particularly, the DDI values drastically decreased when waste materials were no longer visible in the VHR Google Earth image over the analyzed area.
The LST, as demonstrated by the analysis of the Main Faisalabad landfill, also proved to be an effective measure for site identification. As mentioned in the related paragraph, the comparison of the LST values computed from images acquired in two different years was affected by a climatology-dependent shift, which must be considered. Taking into account this shift, using the LST images allowed for the separation of the landfill site from the surrounding area, proving that the LST can be useful for site identification and monitoring. Concerning the SAR-based analysis, which was conducted for the site of New Faisalabad, the backscatter coefficient was shown to be sensitive to the landfill structure since the delineation of the site seemed to have been properly distinguished from the surrounding area. Moreover, the occurrence of a new waste disposal appears to have been correlated to the increase in the backscatter (in the two polarizations, VH and VV) from both a visual and numerical perspective. Eventually, on the same site, a multiple-parameters-based analysis was performed. Particularly, the NDVI, NDWI, and DDI succeeded in highlighting the landfill area and the changes that occurred. Despite this, it is possible to note that nearby areas with particular land cover showed index values comparable to those inside the landfill. On the other hand, the LST and backscatter seemed to properly highlight the perimeter of the site and the variation occurring after the new waste disposal. In general, the measures obtained from the multispectral and SAR images can contribute to the identification of the landfill site and its changes. In particular, when compared to the optical indexes and LST products, SAR backscatter seems to be more sensitive to the geometry of the landfill sites since the radar signal strictly depends on possible changes of the scattering mechanisms occurring on the surface. Moreover, it is worth mentioning that the combined use of all the selected parameters can improve the quality and robustness of the analysis with respect to the landfill site evolution. Indeed, temporal comparisons obtained using only one of the parameters can be affected by some influencing factors. Particularly, as previously mentioned, the vegetation indexes may show similarities between the spectral response inside the landfill and some areas on the surrounding surface. On the other hand, the LST and SAR-derived backscatter can be affected by climatology-dependent temperature shifts and different soil conditions (e.g., soil moisture content), respectively. Given all these considerations, the multi-parameter approach may represent an enhanced monitoring strategy for landfill evolution. Specifically, if we consider the results obtained for the case study, a decrease in the NDVI values and an increase in all the other parameters could be associated with the occurrence of a new waste disposal. Conclusions Landfilling remains a common method used worldwide for waste disposal, despite causing several negative impacts to the environment. The leachate produced by the waste seeps into the ground surface, contaminating the water and soil, and the gases produced by the waste piles affect the air, impacting local settlements and the whole natural environment. It follows that the identification and monitoring of these sites is essential to ensure a high quality of life and to preserve the health of humans and all other species and their natural habitats. 
This review focused on the use of sensors onboard satellite platforms for the remote identification and monitoring of dumping sites. The number of relevant studies in the state of the art demonstrates the growing interest of the scientific community in this important topic for environmental safety. The use of satellite sensors opens up numerous possibilities, allowing for the exploitation of spectral-based indexes and measurements that can be retrieved from different kinds of multi- and hyperspectral sensors and from SAR sensors. We hope that our work will help researchers attain a comprehensive overview of the potential of satellite data for landfill identification, which is also outlined by means of a short practical analysis. Moreover, significant future developments in monitoring landfills with remote sensors can be provided by the growing use of new platforms, i.e., UAVs, by the information retrievable from recent satellite atmospheric missions, such as GHGSat and Sentinel-5P, and by hyperspectral data, such as those provided by the PRISMA and EnMAP missions, which will lead to fruitful advanced monitoring procedures in this field.
15,888.6
2023-04-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Unique Thermal Properties of Clothing Materials Abstract Cloth wearing seems so natural that everyone is self-deemed knowledgeable and has some expert opinions about it. However, to clearly explain the physics involved, and hence to make predictions for clothing design or selection, turns out to be quite challenging even for experts. Cloth is a multiphased, porous, and anisotropic material system, usually in multilayers. The human body acts as an internal heat source in a clothing situation, thus forming a temperature gradient between body and ambient. But unlike ordinary engineering heat transfer problems, the sign of this gradient often changes as the ambient temperature varies. The human body also perspires and the sweat evaporates, an effective body cooling process via phase change. To bring all the variables into the analysis quickly escalates into a formidable task. This work attempts to unravel the problem from a physics perspective, focusing on a few rarely noticed yet critically important mechanisms involved, so as to offer a clearer and more accurate depiction of the principles in clothing thermal comfort. Introduction Cloth can be defined as a designed 3D textile structure made of 2D fabrics that are networks of yarns formed in turn by spinning fibers. Cloth wearing is a behavior unique to human beings of virtually all societies, and it seems so natural that everyone is self-deemed knowledgeable and has some expert opinions about it. In fact, cloth has become such an indispensable part of the human body that it is often referred to as our "second skin." [1,2] (Note: In our discussions herein, the terms cloth and apparel are used synonymously, referring to the 3D structure formed by 2D fabrics, whereas textile as a more general term denotes the materials made of fibers (natural or manmade). The term "clothing", on the other hand, is used to represent the act/status of wearing cloth.) In order to serve the human body with multifaceted functions, clothes as materials are hierarchical, porous, and flexible, with friction-induced system integrity, so as to possess at least the following attributes: [3,4]
• Flexible and pliable to conform to the human body, yet with desirable durability and resistance to various physical impacts for body protection.
• Soft and smooth to the touch, but with considerable resistance to abrasion and puncture.
• Permeable for body perspiration and skin breathing, while providing a certain degree of proofing against external liquid penetration.
• Biocompatible with the human body, yet chemically active for dyeing and printing.
• Light in weight to reduce the wearing burden, but rigid enough for body shaping and protection.
One of the major functions of clothing is body protection against various hazards, such as environmental, chemical, physical, and hygienic perils, and thermal protection is one of the essentials. Cloth has been with us for thousands of years, and we all wear clothes incessantly and keep such intimacy throughout our entire lives. Even so, cloth has seldom been taken as a materials system for thorough scientific research, as stated by Pan in ref. [3]: "It is, therefore, surprisingly puzzling that the materials are so critically indispensable to us and, yet, have been so taken for granted that many rarely pause to think about textiles from a materials science point of view. Textiles, in fact, remain to be poorly understood in terms of materials sciences rigor." There have been some books with a systematic treatment of cloth thermal comfort. [2,5-7] However, when coming to current research activities in this area, what one finds is the utter rarity of publications on the topic: typing the key phrase "cloth thermal comfort" into the Web of Science generated a mere 23 articles, very few of which deal with the fundamental principles of clothing, and none of which appears sufficiently comprehensive. Of the 23 papers, the one with the highest citations [8] is an attempt to mimic the wool fiber structure in a manmade fiber by assembling multiscale fibrils into a tree-like channel net, thus providing an efficient heat transfer property. Another one [9] is a theoretical analysis of water vapor diffusion through a porous, semipermeable barrier textile material, to evaluate its applicability for sport applications. One of the major causes of this situation may be attributed to the scientific intricacy of the topic.
However incredible it may sound to many, including scientists, investigation of the physics pertaining to clothing is rather complex: it involves several subsystems, including the human body, the surrounding environment, and the multilayered cloth, jointly forming a triad system with such influencing parameters as temperature, humidity, air flow, phase change dynamics, and random causes, on top of the aforementioned complex material behaviors of cloth. This at least partly explains why a comprehensive analysis of the clothing process dealing with those complications has rarely been done by researchers. [1,10] A further alarming sign comes from a high-profile paper [11] on clothing thermal comfort published in a prime journal, accompanied by an invited review article [12] and even an animated video clip. The paper itself dealt with the topic in an overly simplified frame, and the analysis conducted is conceptually flawed, as disclosed in an e-letter to the editor [13] and detailed here in the next section. This exposes an astonishing ignorance about clothing physics, even among some veteran physicists, and thus prompted the initiation of this article. So this work attempted to uncover several critical yet so far largely ignored issues, including the changing sign of the temperature gradient between body and ambience, the behavior nonaffinity between cloth and its components, cloth as a porous composite, etc.
Still, it is not intended to be a comprehensive review of the topics, as there are a few books [2,5,6] and published articles [1,14] for that. In the discussions below, to maintain focus on the targeted issues, even though there may be various improved forms of the original theories, we only use the one sophisticated enough, not necessarily the most updated/accurate, to facilitate our analysis. A Dubious New Cloth To reveal the hidden issues in clothing thermal comfort, we will first examine and discuss the idea and related flaws in the work by Hsu et al. [11] on facilitating body heat radiative dissipation using a polyethylene (nanoPE) fabric. For simplicity, they ignored factors such as the various heat conduction and convection contributions, which will be discussed in a later section here. The nanoporous nanoPE used is a nonwoven piece with specified nanosized holes that, according to the authors, [11,12,15] allow the infrared radiation from the human body to dissipate through the new fabric easily, but shield the external visible light, thus eliminating the associated incoming heat while making the body visibly opaque to human eyes. The test results indicated that the nanoPE fabric swatch is "2.0 °C lower than a cotton sample." [11] Most recently, the same group extended their idea of IR-transparent nanoPE further and developed a dual-mode fabric with an asymmetrical (dual) thermal emitter embedded in the asymmetrical thickness of nanoPE, so that flipping the fabric will switch between the radiative functions of cooling and heating. [16] This new extension, however, inherits the same blunders from the first paper. [11] Accepting their assumed validity of existing heat transfer theories in this case, for a piece of fabric swatch in contact with the body skin in a steady state, and ignoring the convection as in ref. [11], the specific heat flux (energy per time per area) P transferred through the swatch is caused by both radiation (P_1) and conduction (P_2), as

P = P_1 + P_2 (1)

In the steady state, the temperature of the inside cloth surface equals the skin temperature T_s, yet the temperature T of the outside cloth surface can be any point between T_a ≤ T ≤ T_s, depending on the value of the thermal Biot number of the swatch, [17] where T_a is the ambient temperature and T_a < T_s. In clothing cases in this temperature range, the radiation part is much more dominant, [1,10] i.e., P_1 >> P_2, so we can limit our discussion to thermal radiation as was done in ref. [11], adopting the same symbols and formulation originally in ref. [11]:

P_1 = σ ε A (T_s^4 − T_a^4) (2)

where σ is the Stefan-Boltzmann constant, A is the radiating surface area, and ε is the emissivity factor. T_s is again the skin temperature at 33.5 °C (isothermal) and T_a is the ambient temperature. A simple plot based on Equation (2) is shown in Figure 1, where for easy comparison the vertical axis represents the relative heat dissipation, i.e., the heat dissipation P at any given ambient temperature T_a normalized by the maximum value achieved when the ambient temperature is at the minimum, say T_a = 20.0 °C. It is clear from the result that, at the practically fixed skin temperature T_s, a smaller T_a (< T_s) will generate a larger P value, i.e., a more effective radiative heat release in a cooler environment. This will be more apparent if the thermal conduction P_2 is included, as it is also a decreasing function of the ambient temperature T_a.
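A minimal numerical check of the trend plotted in Figure 1, based on Equation (2) with the emissivity taken as 1 for simplicity, reads:

```python
import numpy as np

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_SKIN = 33.5 + 273.15    # fixed skin temperature, K

def radiative_flux(t_ambient_c, emissivity=1.0):
    # Equation (2): net radiative heat dissipation from skin to ambient (per unit area)
    t_a = t_ambient_c + 273.15
    return SIGMA * emissivity * (T_SKIN**4 - t_a**4)

# Relative dissipation as in Figure 1, normalized to the value at 20.0 degC
p_ref = radiative_flux(20.0)
for t_a in (20.0, 23.5, 24.4, 33.5, 36.0):
    print(f"T_a = {t_a:4.1f} degC -> P/P(20 degC) = {radiative_flux(t_a) / p_ref:+.2f}")
```

The printed ratios follow the behavior discussed next: close to the 0.76 and 0.70 values quoted below for 23.5 °C and the AC set point, zero at the skin temperature, and negative beyond it.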
In the study, [11] the authors took all the measurements at the ambient temperature T_a = 23.5 °C (74.3 °F). However, at this level, body heat would unlikely be a concern for many people; after all, it is still not warm enough for air conditioning (AC), whose set point is nearly 2 °F higher at T_a = 76.0 °F, according to ANSI/ASHRAE standards. [18] Once the ambient temperature T_a is beyond 76.0 °F (24.4 °C), the AC is on and the relative heat dissipation in Figure 1 is reduced to close to 0.70 of that when T_a = 20.0 °C, versus 0.76 when T_a = 23.5 °C. So the nanoPE will no longer be 2.0 °C cooler than the cotton at this higher T_a. If the ambient temperature T_a further increases to around the skin temperature T_s = 33.5 °C, this nanoPE will completely stop functioning! In other words, once the weather is really hot, exceeding the body temperature, this nanoPE will lose all its cooling power! That is, this nanoPE fabric indeed promotes body chilling, but only on cool days when T_a < T_s, i.e., it resolves a nonexistent problem. The Culprits in the Problem Upon examining the problems to further expose the blunders in this IR-transparent nanoPE work, several causes below are found responsible. Changing Sign of the Temperature Gradient The problem above shows how little we really understand the physics in clothing and, more importantly, it divulges a rarely noticed yet very critical fact. Note again in Figure 1 that, once we extend the ambient temperature T_a along the x axis further beyond the skin temperature T_s = 33.5 °C (92.3 °F) into a hot day, the relative heat dissipation will change its sign and turn negative, meaning that once the ambient temperature exceeds the skin temperature, instead of dissipating body heat to the ambient, this nanoPE approach will induce heat from the ambience to the body. This new clothing would actually exacerbate the scorching of the human body on hot days! Thermal transfers via conduction, convection, and radiation are all temperature gradient dependent. In this case, the temperature gradient is

ΔT = T_s − T_a (3)

As the skin temperature T_s = 33.5 °C is practically fixed, and the ambient temperature T_a may be lower than (cooler), equal to, or greater than (hotter) T_s, the sign of the temperature gradient, and hence the direction of heat transfer, will be reversed accordingly as the weather gets hot. That is, the entire system will experience a reversal in heat transfer direction around T_a = T_s. According to the heat energy conservation equation, a complete form of Equation (2) for a clothed body can, for a steady state and within certain limits, be expressed as [19]

P = M − W − E − [Q] (4)

where P is the total heat flux; M is the metabolic rate energy generated by the body, always positive; W is the mechanical work done by the body to the ambience, always entering as −W; E is the rate of total latent heat loss due to respiration and sweat evaporation, always entering as −E; and [Q] is the total rate of sensible heat loss from the skin (dry heat exchange), which can be positive or negative as discussed below. When the total heat flux P = 0, the heat received and dissipated by the body cancel out, so the body is in a steady heat balance; otherwise, when P > 0 the body is receiving more heat, and when P < 0 the body is losing more heat. Of the parameters on the RHS of Equation (4), the metabolic rate M and the work performed by the body W remain virtually constant in this study, so the body thermal comfort depends only on the remaining two factors: the evaporation of the latent sweat E, whose contribution is always negative, i.e., sweating always promotes body cooling, and the sensible heat exchange [Q].
The sign of [Q] follows the sign of the temperature gradient: a positive [Q] corresponds to sensible heat released from the body to the ambient, while a negative [Q] corresponds to sensible heat added to the body from the ambient, i.e.,

[Q] = +Q if ΔT > 0 (a cool day), and [Q] = −Q if ΔT < 0 (a hot day).

In practice, improving body cooling would be necessary only on hot days, when the ambient temperature exceeds the skin temperature, i.e., T_a ≥ T_s or ΔT < 0, so that [Q] = −Q, fetching additional heat to the body. Equation (4) then becomes

P = M − W − E + Q

so that P increases. Actions must be taken (e.g., the body sweating more to increase the evaporative loss E, or taking off cloth to reduce +Q) to return the body to the original thermal equilibrium. When the weather is too hot, T_s << T_a, we have to resort to other external means like a fan or an air conditioner. Whereas on cold days, Equation (4) becomes

P = M − W − E − Q

and our body has to lose more body heat via Q. It is worth stressing the subtlety of [Q]: on a hot day ΔT < 0, so that [Q] = −Q and the body is gaining more heat because of [Q]; while on a cold day ΔT > 0, so that [Q] = +Q and the body is losing more heat due to [Q]. In other words, regardless of whether it is winter or summer, except at ambient temperatures close to the skin temperature, [Q] is always working against body comfort, as illustrated in Figure 2. The implication of this reversing nature of [Q] can be profound: any scheme attempting to alter cloth thermal comfort by controlling the sensible heat loss [Q], i.e., by changing the thermal conductivity, convectivity, and radiativity, can unexpectedly lead to the opposite result once the sign of the temperature gradient reverses. This is exactly the problem with the nanoPE methodology in ref. [11]. Therefore, when assessing the efficacy of fabric thermal properties, the sign of the temperature gradient has to be consistent with the actual cloth wearing situation. By the way, among the three sensible heat loss mechanisms, heat radiation is usually by far the most effective. [1,10] In severe cold weather, e.g., in the arctic, where T_s > T_a, as our body heat is still escaping via the direct heat loss −Q, a shield layer made of materials with high infrared reflectivity, such as metals, is often adopted, preferably as a liner inside the cloth, to retain the body heat. Whereas in a severely hot case, T_a > T_s, e.g., when viewing a live volcano on site, the temperature gradient dictates a direct heat dumping +Q from the ambient to the body, and it is then desirable to put the metallic sheet outside the cloth to fend off the external heat. It is then apparent that the evaporative heat loss E, being independent of the sign of the temperature gradient, is the only mechanism that provides an indispensable and effective route for body cooling.
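To make the sign bookkeeping explicit, a small sketch of the heat balance as reconstructed in Equation (4) is given below; the numerical values of M, W, E, and Q are arbitrary illustrative assumptions, not measured data.

```python
def total_heat_flux(M, W, E, Q, ambient_warmer_than_skin):
    # Heat balance as reconstructed in Equation (4): P = M - W - E - [Q],
    # with [Q] = -Q on a hot day (Ta > Ts) and [Q] = +Q on a cool day (Ta < Ts).
    # All quantities are in W m^-2 and Q is the magnitude of the sensible exchange.
    Q_signed = -Q if ambient_warmer_than_skin else +Q
    return M - W - E - Q_signed

# Illustrative (assumed) values for metabolic rate, external work, evaporative loss, |Q|
M, W, E, Q = 120.0, 20.0, 60.0, 50.0
print("hot day  (Ta > Ts): P =", total_heat_flux(M, W, E, Q, True))    # +90: body gains heat
print("cool day (Ta < Ts): P =", total_heat_flux(M, W, E, Q, False))   # -10: body loses heat
```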
The Nonaffinity Phenomenon

Another intriguing thermal phenomenon in clothing, very puzzling for some alert consumers, is the seeming disconnect between the thermal properties of a cloth and those of its constituent fibers. The same (e.g., cotton) fibers can be made into a T-shirt to keep the body cool in summer, or into a thick coat to keep the body warm in winter; i.e., the thermal properties of the constituent fiber seem to have no connection with the thermal properties of the cloth system. We call this disconnection a nonaffinity [3] between the behaviors of the system and its constituents. This problem of the relationship between microconstituents and macrosystem is of course typical in statistical physics. The secret here lies in the fact that a cloth is a porous composite, and the fibers in the cloth are not the only, nor even the most essential, element in determining the system's thermal performance. Given the excellent thermal insulation capacity of the air in the pores, the fibers mainly serve as the medium that builds those pores, where the air stays still and thus dominates the cloth's thermal properties. In other words, when it comes to cold weather, it is the air in the cloth that keeps our body warm.

A more rigorous analysis is given in ref. [20], concluding that our tactilely sensed warmth is actually related to a compound parameter termed the thermal effusivity ε, defined as

ε = (κ ρ c_p)^{1/2}

where κ is the thermal conductivity, ρ is the density, and c_p is the specific heat capacity of the material. A material with a higher effusivity ε draws heat away from the skin faster and hence renders a cooler touch feeling, and vice versa. For cloths made of the same fibers, the cloth density ρ can vary by several orders of magnitude, [21,22] while the κ and c_p values change only mildly. Therefore, we adjust the thermal behavior of a cloth by controlling its density: for the same fiber type, a high-density fabric, as in a T-shirt, can keep us cool in summer, whereas in winter we wear low-density fluffy fabric, as in a coat, to maintain body warmth. That is, when it comes to the thermal behavior of the cloth, the air in the pores, far from being an incidental ingredient, plays the dominant role. Furthermore, this nonaffinity in clothing thermal comfort suggests that the primary factor impacting cloth thermal comfort is the structure (or the porosity) of the cloth, while the fiber type plays a negligible role.

Such nonaffinity exists only in porous structures, not in a continuum. For instance, pulling a fiber out of a cloth yields a single fiber that has lost all the cloth's characteristics, whereas a gold particle taken from a nugget is still gold. Further, this nonaffinity exists between any two links along the hierarchy of a cloth system; therefore the properties of a single fabric swatch tested ex situ may not be taken directly as the properties of a cloth on a human body, as was done in ref. [11]. Nonetheless, fiber types play important roles in other respects. For example, fiber wicking/wetting properties are type dependent: cloths made of natural fibers are decidedly more comfortable. Fiber type will also affect the fabric surface, which traps a boundary air layer whose amount depends on the surface roughness. In addition, it alters the contact feeling and the surface thermal properties. Furthermore, some of the water from condensed perspiration will reside in the fabric, changing the fabric's thermal conductivity and leading to effects such as sorption heat [23] and the "heat pipe" effect, [24] depending on fiber type. Some of these are discussed below.
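Returning to the thermal effusivity defined above, here is a minimal Python sketch comparing ε for a few materials; the property values are rough, generic figures assumed for illustration only, not data from ref. [20].

```python
import math


def effusivity(k: float, rho: float, cp: float) -> float:
    """Thermal effusivity eps = sqrt(k * rho * c_p), in W s^0.5 m^-2 K^-1."""
    return math.sqrt(k * rho * cp)


# Rough, generic property values: (k [W/m/K], rho [kg/m^3], c_p [J/kg/K]).
materials = {
    "still air": (0.026, 1.2, 1005.0),
    "fluffy low-density fabric": (0.040, 50.0, 1300.0),
    "dense cotton weave": (0.080, 500.0, 1300.0),
}
for name, (k, rho, cp) in materials.items():
    print(f"{name:26s} eps = {effusivity(k, rho, cp):7.1f}")
```

The dense weave comes out with an effusivity roughly an order of magnitude above the fluffy fabric, matching the everyday observation that a tight T-shirt feels cool to the touch while a fluffy coat feels warm.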
Fiber Sorption Heat

Another often ignored yet important mechanism in cloth thermal comfort has to do with the fiber sorption heat. Except in absolutely dry or rainy conditions, a cloth is a composite formed by fiber, air, and moisture. Consequently, any property of the cloth is a combined result of the contributions from all three constituents. There are two potential sources of moisture: the ambient relative humidity and body sweating. [25] When polymeric fibers absorb water, the interactions between polymer chains and water molecules are largely exothermic, so that sorption heat is generated, exerting a significant thermal effect on the cloth, as reported by Morton and Hearle: [23] "For example, on going from an atmosphere of 18 °C, 45% r.h., indoors to one of 5 °C, 95% r.h., outdoors, the regain of wool would change from 10% to 27%. A man's suit, weighing 1.5 kg, would give out 6000 kJ owing to this change, that is, as much heat as the body metabolism produces in 12 h."

Cloth Fitness, Air Gap Between Cloth Layers and Other Important Factors

Clothing is physiologically necessary only in cold weather, when T_s > T_a; wearing clothes in summer is largely a societal etiquette. Usually we also wear undergarments, thus introducing air gaps between cloth layers even when the cloth is a perfect fit and heat leakage at the edges is negligible. Increased air in the interlayer gap promotes thermal insulation, [10] while the gap, together with body movement, also incites convection. These competing factors can influence the system's thermal balance drastically. Unfortunately, owing to the complex nature of the interlayer air gaps and of the varied body contours, the gap scale, distribution, physical state, etc., incorporating this effect into theoretical analysis remains a pending challenge. [26]

Contact Comfort versus Steady Heat Insulation

As mentioned before, different fiber types influence the fabric surface and therefore cause different thermal sensations. This is due mainly to a transient effect, not to the sensible heat loss Q. In any thermal transfer process, particularly in porous materials like textiles, there are essentially two distinct stages, the transient and the steady stage, and the system state, including the temperature distribution, differs greatly between the two. [27] The study in ref. [11], however, failed to differentiate them; e.g., no specification of the stage was given for the data measured. Two different cloths may show the same thermal performance at steady state, yet they can produce distinctly different transient thermal behaviors and hence different senses of warmth toward the cloth.

How to do it Right

The supreme question remains how to appropriately evaluate the efficacy of a new material in improving clothing thermal comfort. As in dealing with any phenomenon, there are different types of method.

Experimental Approach

The first logical experimental approach to assess the efficacy of a new cloth is to actually wear it. [28,29] A wear trial by human subjects is the most direct. However, as in many human sensory processes, the inherent bias in human preference often renders the method ineffective. [30] Moreover, human subjects cannot be employed when the temperature is at extreme levels. Full-scale manikins have therefore been developed for evaluating cloth performance, including thermal behaviors. [25,29,31] Yet how to scale to human size and how to closely imitate human metabolic and sweating functions remain challenging issues.

Theoretical Path

Unfortunately, the energy conservation relation in Equation (4) is insufficient as a tool for design optimization, because the relationships between the ambient temperature T_a and the other thermodynamic variables in the equation have not been established. For instance, at a given metabolic rate M, performed work W, and even a given rate of sensible heat loss [Q], it is still not clear exactly how the rate of total latent heat loss E changes as a function of the ambient T_a and the relative humidity. However, we may use known physical laws to tackle easier or more local questions. For instance, consider the sensible-heat-only case, where all the physics involved is known.
We can choose the thickness t of the cloth so as to minimize the body heat loss on a cold day at a given temperature gradient, T_s > T_a. We assume the body-cloth system to be represented by a cylinder of inner (i.e., body) radius r_0 and outer radius r_1 = r_0 + t, as in Figure 3. Then, from ref. [17], for a steady state and an isotropic system, the total sensible heat loss Q per unit length through the cloth of total thickness t, with conduction through the cloth acting in series with the surface heat exchange, is

Q = 2π (T_s − T_a) / [ ln(r_1/r_0)/k + 1/(h_v r_1) ]

where k is the thermal conductivity of the cloth and h_v is the cloth surface convectivity at wind speed V_a. The only free variable in the equation is the cloth thickness t, and its critical value t_c can be derived as in thermal engineering, [17] i.e., when t = t_c,

dQ/dt = 0, giving r_0 + t_c = k/h_v.
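A quick numerical check of the cylinder model is easy to write down. The following Python sketch evaluates Q(t) and the critical thickness t_c = k/h_v − r_0; all parameter values are illustrative guesses (a body-sized radius, a fluffy-cloth conductivity, a modest wind), not values taken from ref. [17].

```python
import math


def heat_loss_per_length(t, r0=0.15, k=0.04, hv=10.0, Ts=33.5, Ta=0.0):
    """Sensible heat loss per unit length (W/m) through a cylindrical cloth
    layer of thickness t [m]: conduction through the cloth in series with a
    combined surface coefficient hv [W/m^2/K] at wind speed V_a."""
    r1 = r0 + t
    resistance = math.log(r1 / r0) / (2 * math.pi * k) + 1 / (2 * math.pi * hv * r1)
    return (Ts - Ta) / resistance


k, hv, r0 = 0.04, 10.0, 0.15
t_c = k / hv - r0  # critical thickness from dQ/dt = 0
print(f"t_c = {t_c*100:.1f} cm")  # negative here: r0 already exceeds k/hv
for t in (0.002, 0.005, 0.01, 0.02, 0.05):
    print(f"t = {t*1000:4.0f} mm -> Q = {heat_loss_per_length(t):6.1f} W/m")
```

With these numbers t_c comes out negative, since the body radius already exceeds k/h_v; Q(t) then decreases monotonically, i.e., every added millimeter of cloth insulates. The critical-thickness subtlety, where adding material initially increases the loss, matters only for radii smaller than k/h_v, as for thin wires.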
Furthermore, there are additional considerations related to cloth thermal comfort, such as the cloth's mechanical behaviors, fabric finishes, washing, etc. Another concern is the interconnection among the thermal properties of a cloth: changing one of them may impact all the others. For more information, there is a recent review article on the complex and restrictive requirements for cloth comfort. [4]

The fundamental objective of research on the triad system of body-cloth-ambience, as stated by Parsons, [19] is to establish: "Thermal models integrate the principles of heat transfer, heat balance, thermal physiology and thermoregulation along with anthropometry and anatomy into a mathematical representation of the human body and its thermoregulatory systems. When presented on a computer, these 'complete' models can simulate the thermal responses of a person and provide a prediction of the dynamic response of the body to any environment."

So far we have accumulated some knowledge about human thermoregulation; [19,6,32] for instance, the sweating rate of the human body can reach as high as 2 kg/h for a short period and 0.9 kg/h over several hours. It can also easily be calculated that a human body of normal size has roughly 2 m² of total surface area and a normal heat production capacity of 90 W/m², yet only 0.25 kg/h of sweat is needed to dissipate all the body heat generated. [19,6,32] Therefore, the evaporation of sweat has the potential to dissipate all the heat released from the human body and maintain the heat balance, [32] at normal ambient conditions of course. On the other hand, what we can present so far are almost all terminal or extreme values. To describe in detail the dynamics and stochastics of this complex triad system is still far beyond our reach, even though we have identified all the effective parameters in the heat exchange process and have at our disposal all the general physical and mathematical laws and principles. Still, to name just a few gaps, we know very little about how the sweat rate E changes as a function of the ambient temperature T_a and the sensible heat loss rate Q, in both its temporal and spatial distributions; very little about how changes in the sweat rate E will in turn alter the properties of the cloth (moisture condensation and permeation); let alone the interperson and intraperson body variations in thermal physiology and thermoregulation, along with anthropometry and anatomy. The challenges are daunting, but there are reasons to be optimistic, chiefly that, owing to the rising interest in wearable technology, unprecedented efforts from industry and academia, especially from nontextile fields, are starting to focus on clothing science.

Needless to say, as in dealing with any complex problem, simplification is a necessary initial step, but it has to be done correctly. To start, a sufficiently clear understanding of the problem has to be achieved so as to decide what should be simplified and how.

Conclusions

In order to serve the human body with multifaceted functions, clothes are hierarchical, porous, and flexible materials whose system integrity is friction-induced. To understand clothing thermal comfort, one has to tackle the complex interactions between the human body, the cloth, and the surrounding ambience. A clear distinction and connection between the microconstituents and the macrosystem has to be established. There is a nonaffinity between the behaviors of the system and its constituents, and the properties of a single fabric swatch tested ex situ may not be taken as the corresponding properties of a cloth. This nonaffinity in clothing thermal comfort suggests that the primary factor impacting cloth thermal comfort is the structure (or the porosity) of the cloth, rather than the fiber type.

Attention has to be paid to the changing sign of the temperature gradient ΔT between the body and the ambience: on a hot day ΔT < 0 and the body is gaining more heat, while on a cold day ΔT > 0 and the body is losing more heat. In other words, regardless of the weather, sensible heat transfer is always working against body comfort. Therefore, when assessing the efficacy of fabric thermal properties, the sign of the applied temperature gradient has to be consistent with the cloth-wearing situation. Fiber types are important for cloth wicking/wetting properties, the fabric surfaces, the transient contact feeling of the cloth, and fiber-sorption-related phenomena. There are other important factors as well, including natural and forced convection due to the air gaps between cloth layers and to body movement. Finally, when handling a complex problem like this, certain simplifications are necessary; nonetheless, an appropriate picture of the entire system physics has to be developed first so that such simplification can be done correctly.
Limiting the Heavy-quark and Gluon-philic Real Dark Matter

We investigate the phenomenological viability of real spin half, zero and one dark matter candidates, which interact predominantly with third generation heavy quarks and gluons via the twenty-eight gauge invariant higher dimensional effective operators. The corresponding Wilson coefficients are constrained one at a time from the relic density $\Omega^{\rm DM} h^2 \approx 0.1198$. Their contributions to the thermal averaged annihilation cross-sections are shown to be consistent with the FermiLAT and H.E.S.S. experiments' projected upper bound on the annihilation cross-section in the $b\,\bar b$ mode. The tree-level gluon-philic and one-loop induced heavy-quark-philic DM-nucleon direct-detection cross-sections are analysed. The non-observation of any excess over the expected background in recoiled Xe-nucleus events for spin-independent DM-nucleus scattering in XENON-1T sets upper limits on the eighteen Wilson coefficients. Our analysis validates the real DM candidates over the large range of the accessible mass spectrum below 2 TeV for all but one interaction induced by the said operators.

I. INTRODUCTION

Despite the unequivocal astrophysical evidence from rotation velocity curves [1] via the mass-to-luminosity ratio, the Bullet Cluster [2], and precision measurements of the Cosmic Microwave Background (CMB) from the WMAP [3] and PLANCK [4] satellites, etc., we have no clue about the fundamental nature of dark matter (DM), which forms the dominant matter component of the Universe. Dedicated experiments for the direct detection of DM such as LUX [5], XENON-1T [6], DarkSide50 [7], PandaX-4T [8] and CRESST-III [9], PICO-60 [10], PICASSO [11] are designed to measure the momentum of the recoiled atom and/or nucleus due to the scattering of DM particles off the sub-atomic constituents of the detector material [12][13][14]. No significant signal excess over the expected background has been seen yet. Several experiments, past and ongoing, have constrained a plethora of viable UV-complete dark matter models formulated by writing renormalizable Lagrangians with heavy non-SM mediators (spin $0^\pm$, 1/2, 1, 2) facilitating interactions between DM and SM particles [21][22][23][24][25][26][27][28][29][30][31]. The models with Higgs bosons as mediators have been excluded by the recent collider experiments [32]. Analogous analyses have also been performed in the domain of effective field theory (EFT) for the EW-boson-philic DM operators [33][34][35]. Recently, the GAMBIT collaboration performed a global analysis of the signatures of SM gauge-singlet Dirac fermion DM at the LHC in an EFT set-up, with simultaneous activation of fourteen effective operators constructed with light quarks, gluons and photons (up to mass dimension seven), where the authors disfavored an exclusive DM contribution for masses $\leq$ 100 GeV [36]. The interactions of heavy-quark-philic DM are also constrained by the ongoing collider experiments. The CMS [59,60] and ATLAS [61] collaborations investigated the dominant scalar-mediated production of t-quark-philic DM particles in association with a single top quark or a $t\bar t$ pair. The CMS collaboration recently conducted an exclusive search analysis for b-philic DM in reference [62]. Many authors in the literature have explored the mono-jet $+ \slashed{E}_T$ mode at the LHC for viable signatures of the scalar-current-induced heavy-quark-philic DM interactions [51,63-65].
A comprehensive search analysis of heavy-scalar-mediated top-philic DM models in the mono-jet $+ \slashed{E}_T$, mono-Z, mono-h and $t\bar t$ pair production channels at the LHC can be found in [66,67]. The phenomenology of spin 1/2, 0 and 1 real DM deserves a special mention in the context of their contributions to the DM-nucleon scattering events in the direct-detection experiments. In the absence of the contributions from the vanishing vector operators for Majorana and real vector DM, the spin-independent DM-nucleon scattering cross-sections are found to be dominated by the respective scalar and dimension-8 twist operators [68][69][70][71][72][73][74][75]. The pseudo-scalar current contribution to the DM-nucleon scattering cross-section is found to be spin-dependent and velocity suppressed. Similar studies have also been undertaken for lepto-philic real DM candidates [39,40]. In this context, it is worthwhile to investigate the WIMP DM phenomenology induced by the heavy-quark-philic and gluon-philic scalar, pseudo-scalar, axial-vector, and twist-2 real-DM operators and to explore whether they satisfy the relic density criteria and other experimental constraints for a DM mass between 10 GeV and 2 TeV.

We introduce the effective Lagrangian for real spin 1/2, 0 and 1 DM particles in section II. In section III, we discuss the phenomenology of the dark matter. We investigate the cosmological constraints on the DM in sub-section III A. In sub-section III B, we predict and analyse the thermally averaged DM pair annihilation cross-section. Sub-section III C details the computation of DM-nucleon scattering cross-sections and the expected recoil-nucleus event(s) for the XENON-1T [6] set-up. Section IV summarises our study and observations.

II. HEAVY-QUARK AND GLUON-PHILIC DM EFFECTIVE OPERATORS

In a beyond-standard-model (BSM) renormalizable gauge theory, as long as the energy range of the DM-SM interaction is much below the mediator mass, the study of DM interactions can be restricted to the DM and SM degrees of freedom and their symmetries. The higher dimensional effective operators are obtained from the BSM Lagrangian by writing the operator product expansion (OPE) of currents in the limit $p^2/m^2_{\rm Med.} \ll 1$, where $p^\mu$ represents the four-momentum of the virtual mediator of mass $m_{\rm Med.}$. This facilitates the effective contact interaction between DM and any SM third generation heavy quark/gluon, assuming that the mediator mass scale $m_{\rm Med.}$ is of the order of the effective theory cut-off ($\sim \Lambda_{\rm eff}$), which in general is much heavier than the masses of the SM and DM fields. For example, in renormalizable models, the interaction between a Majorana DM $\chi$ and a third generation quark $\psi$ can be written schematically as

$\mathcal{L}_{\rm int} \supset \bar\chi\,(\lambda_s + \lambda_p \gamma_5)\,\psi\,\eta + \bar\chi\,\gamma_\mu (g_v + g_a \gamma_5)\,\psi\,\zeta^\mu + {\rm h.c.}$ (1)

where $\eta$ and $\zeta_\mu$ are electrically charged scalar and vector fields respectively (the couplings shown are generic). This interaction Lagrangian facilitates the Majorana DM-quark interaction via t-channel exchange of the heavy $\eta$ and/or $\zeta_\mu$. Thus, expanding the propagator in powers of $p^2/m^2_{\rm Med.}$ yields the higher-dimensional effective four-fermion interactions for Majorana DM and heavy quarks. The scalar, pseudo-scalar, axial-vector, and twist-2 operators are all induced by this expansion [68]. When compared to a spin-2 graviton-mediated model, the twist operator corresponds to the contribution from the traceless part of the energy-momentum tensor $T^{\mu\nu}$ [76].
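For the reader's convenience, the propagator expansion invoked above can be written out explicitly; this is the standard geometric-series expansion underlying any such EFT matching, not a result specific to this paper:

\[
\frac{1}{p^{2}-m_{\rm Med.}^{2}}
= -\frac{1}{m_{\rm Med.}^{2}}\left(1+\frac{p^{2}}{m_{\rm Med.}^{2}}
+\mathcal{O}\!\left(\frac{p^{4}}{m_{\rm Med.}^{4}}\right)\right),
\qquad p^{2}\ll m_{\rm Med.}^{2}.
\]

The leading term reproduces the dimension-6 contact operators with Wilson coefficients scaling as $1/m^2_{\rm Med.} \sim 1/\Lambda^2_{\rm eff}$, while the momentum-dependent terms generate the higher-dimensional structures such as the twist-2 operators, as noted above.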
In various WIMP-inspired renormalizable electroweak models, such effective interactions between the SM and Majorana DM particles are also realised by s-channel processes, where the interactions are mediated by non-SM heavy scalar/pseudo-scalar/axial-vector or spin-2 tensor particles [77,78]. Since the DM-gluon interactions can be naturally realised via one-loop interactions of the DM particles either with SM heavy quarks or with BSM non-singlet coloured spin 1/2, 0 and 1 exotics at the next order in the strong coupling constant, it becomes all the more necessary to include the study of effective operators constructed independently from a Majorana DM bilinear and a pair of gluons at the leading order $\sim \alpha_s/\pi$.

The phenomenological effective Lagrangian for the heavy-quark-philic and gluon-philic Majorana DM $\chi$ collects the third generation quark-philic scalar, pseudo-scalar, axial-vector and twist-2 type-1 and type-2 operators, along with the gluon-philic scalar, pseudo-scalar and twist-2 type-1 and type-2 operators $O^g$. The second-rank twist tensor currents $O^q_{\mu\nu}$ and $O^g_{\mu\nu}$ for the heavy quarks and gluons, respectively, are built from the covariant derivatives $D^\mu_L$ and $D^\mu_R$ of the left- and right-handed quarks in the SM. The contribution from the vector operator vanishes for real particles. We exclude the contribution of the dimension-9 twist-2 type-2 gluon operators from our analysis because we are only interested in effective operators up to mass dimension 8.

We extend the domain of our analysis to include the effective contact interactions of real scalar $\phi^0$ and vector $V^0_\mu$ DM candidates with SM third generation heavy quarks and gluons. The effective scalar DM Lagrangian contains the scalar and second-rank twist-2 type-2 operators; there are no contributions from pseudo-scalar and axial-vector currents for the scalar DM operators. The effective vector DM Lagrangian contains the heavy-quark-philic scalar, pseudo-scalar, axial-vector and twist-2 type-2 operators, together with the gluon-philic operators corresponding to the scalar, pseudo-scalar and second-rank twist-2 type-2 interactions, where $\epsilon_{\mu\nu\rho\sigma}$ is the totally antisymmetric tensor with $\epsilon_{0123} = +1$. In this study, each operator's phenomenology is investigated independently, assuming that that single interaction alone accounts for the total relic density of DM.

III. PHENOMENOLOGICAL CONSTRAINTS ON REAL DM

A. Contribution of DM to the relic density

The present-day relic abundance of the DM species $n_{\rm DM}(t)$ can be calculated by solving the Boltzmann equation

$\dfrac{dn_{\rm DM}}{dt} + 3 H\, n_{\rm DM} = -\langle \sigma_{\rm ann} |v_{\rm DM}|\rangle \left( n_{\rm DM}^2 - (n^{\rm eq}_{\rm DM})^2 \right)$

where $n^{\rm eq}_{\rm DM}$ is the DM number density at thermal equilibrium, $H$ is the Hubble rate, $|v_{\rm DM}|$ is the relative velocity of the DM pair and $\langle \sigma_{\rm ann} |v_{\rm DM}|\rangle$ is the thermal average of the annihilation cross-section [79]. It is customary to parametrise $\rho_{\rm DM} \equiv \Omega_{\rm DM} h^2 \rho_c$, where $\rho_c \equiv 1.05373 \times 10^{-5}\, h^2\ {\rm GeV}\, c^{-2}\, {\rm cm}^{-3}$ is the critical density of the universe and the dimensionless $h$ is the current Hubble constant in units of 100 km/s/Mpc.
Solving the Boltzmann equation [80] we then get

$\Omega_{\rm DM} h^2 \simeq \dfrac{1.07 \times 10^9\ {\rm GeV}^{-1}\ x_F}{\sqrt{g_{\rm eff}}\ M_{\rm Pl}\ \langle\sigma_{\rm ann}|v_{\rm DM}|\rangle}$

where the parameter $x_F \equiv m_{\rm DM}/T_F$ at the freeze-out temperature $T_F$ is a function of the DM degrees of freedom $g$ and of the effective massless degrees of freedom $g_{\rm eff}$ ($\sim$ 106.75 and 86.75 above the $m_t$ and $m_b$ mass thresholds respectively):

$x_F = \ln\!\left[ 0.038\ \dfrac{g}{\sqrt{g_{\rm eff}\, x_F}}\ M_{\rm Pl}\ m_{\rm DM}\ \langle\sigma_{\rm ann}|v_{\rm DM}|\rangle \right]$

The predicted DM relic densities $\Omega_{\rm DM} h^2$ of 0.1138 $\pm$ 0.0045 and 0.1198 $\pm$ 0.0012 from the WMAP [3] and Planck [4] collaborations respectively restrict the thermally averaged DM annihilation cross-section to $\langle\sigma |v_{\rm DM}|\rangle \gtrsim 2.2 \times 10^{-26}\ {\rm cm}^3\,{\rm s}^{-1}$, since a smaller thermally averaged cross-section would render a DM abundance large enough to over-close the universe. We have analytically calculated the DM pair annihilation cross-sections into a pair of third generation heavy quarks and gluons in Appendix A. We investigate the DM relic density contributions propelled by the thermally averaged annihilation cross-sections given in Appendix B corresponding to the scalar, pseudo-scalar, axial-vector and twist-2 currents of the heavy quarks for a given Majorana/scalar/vector DM. The annihilation channels induced by the scalar, pseudo-scalar and twist-2 currents of gluons also contribute to the relic abundance of Majorana, scalar and vector DM.

Throughout our investigation, we assume that the operators are actuated one at a time. As a result, the constraints on all twenty-eight Wilson coefficients $C^{q,g}_{{\rm DM}_i O_j}/\Lambda^n$ in TeV$^{-n}$ (${\rm DM}_i \equiv \chi,\ \phi^0,\ V^0$ and $O_j \equiv$ S/PS/AV/T$_1$/T$_2$) are derived by requiring the lone contribution from the associated operator to fulfill the relic density $\Omega_{\rm DM} h^2 \approx 0.1198 \pm 0.0012$ [4]. Since each operator acts alone, these constraints can be regarded as cosmologically acceptable lower limits on the corresponding Wilson coefficients, which translate into upper bounds on the respective cut-offs $\big(C^{q,g}_{{\rm DM}_i O_j}\big)^{-1/n}\,\Lambda$ in units of TeV for a given DM mass. When a Wilson coefficient is greater than its lower limit for a given DM mass, the associated operator accounts for only part of the relic density. It is important to note that, because we can raise the annihilation cross-section by turning on more Wilson coefficients, we can easily imagine a scenario in which they all reside below their individual lower limits without over-closing the universe with DM. To put it another way, if we set two distinct Wilson coefficients to their lower limits for a given DM mass, we plainly underproduce DM because the overall annihilation cross-section is larger. Therefore, in order to be consistent with a multiple-species DM model satisfying the relic density constraint, the lower bound on a specific Wilson coefficient will reduce further when more than one coupling is triggered simultaneously.

FIG. 2: Relic density contours satisfying $\Omega_{\phi^0} h^2 = 0.1198$ [4] and shaded cosmologically allowed regions in the plane defined by the scalar DM mass $m_{\phi^0}$ and the corresponding Wilson coefficient. The b-quark-philic, t-quark-philic and gluon-philic scalar DM contours in figures 2a, 2b and 2c are drawn for the scalar and twist-2 type-2 operators respectively.

However, the rest of our analysis is focused on the scenario where the sole operator contributes to the relic density.
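For orientation only, here is a minimal Python sketch of the standard analytic freeze-out estimate quoted above, iterating $x_F$ and evaluating $\Omega h^2$; the constants follow the usual textbook treatment and the numbers are illustrative, not a substitute for the exact MadDM computation used below.

```python
import math

M_PL = 1.22e19  # Planck mass [GeV]


def freeze_out_xf(m_dm, sigma_v, g=2.0, g_eff=86.75):
    """Iterate x_F = ln[0.038 g M_Pl m_DM <sigma v> / sqrt(g_eff x_F)]."""
    x = 20.0  # typical WIMP starting guess
    for _ in range(50):
        x = math.log(0.038 * g * M_PL * m_dm * sigma_v / math.sqrt(g_eff * x))
    return x


def omega_h2(m_dm, sigma_v, g_eff=86.75):
    """Relic estimate Omega h^2 = 1.07e9 GeV^-1 x_F / (sqrt(g_eff) M_Pl <sv>)."""
    xf = freeze_out_xf(m_dm, sigma_v, g_eff=g_eff)
    return 1.07e9 * xf / (math.sqrt(g_eff) * M_PL * sigma_v)


# <sigma v> = 2.2e-26 cm^3/s converted to natural units
# (1 GeV^-2 corresponds to about 1.17e-17 cm^3/s).
sv = 2.2e-26 / 1.17e-17
print(f"Omega h^2 ~ {omega_h2(100.0, sv):.3f}")  # ~0.1 for a 100 GeV WIMP
```

For a 100 GeV candidate this yields $x_F \approx 22$ and $\Omega h^2 \approx 0.11$, reproducing the well-known order of magnitude behind the constraint quoted above.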
For numerical computation of the DM relic density, we have used MadDM [81,82], which implements the exact and closed expression for the thermally averaged annihilation cross-section as discussed by the authors of reference [79]. The input model file required by MadDM is generated using FeynRules [83,84], which calculates all the required couplings and Feynman rules from the full Lagrangians given in equations (3), (6) and (8) corresponding to the Majorana, scalar and vector DM particles respectively. We have further verified and validated our numerical results against MicrOMEGAs [85]. The constant relic density contours are depicted in figures 1, 2 and 3, corresponding to the Majorana, real scalar and real vector DM candidates respectively, in the plane defined by the DM mass and the Wilson coefficient. The thermally averaged DM pair annihilation cross-section is found to be $\sim 2 \times 10^{-26}\ {\rm cm}^3\,{\rm s}^{-1}$ for a relic density of 0.1198, which essentially fixes the functional dependence and shape profile of the Wilson coefficients with varying DM mass for each four-point effective operator. In order to understand the shape profile of the relic density contours, $\langle\sigma v\rangle$ may be expanded analytically in a power series in $|v_{\rm DM}|^2$ and the contributions of the leading terms investigated. The velocity of the DM at freeze-out is taken to be 0.3 c. We observe that:

• due to the constrained thermally averaged cross-section, the cut-off $\Lambda$ for the Majorana DM scalar and pseudo-scalar operators in figures 1a and 1b varies as $(m_f m_\chi)^{1/3}$ and therefore shows a steeper rise for top-philic DM in comparison to the bottom-philic case. However, the pseudo-scalar contribution dominates over its scalar counterpart due to its leading velocity-independent term in equation (B1b). The cut-offs for the leading contributions from the quark-philic twist operator and the velocity-suppressed gluon-philic scalar, pseudo-scalar, and twist operators are found to increase monotonically with increasing DM mass as $m_\chi^{3/4}$.

• the s-wave contribution to the thermally averaged annihilation cross-section $\langle\sigma_{\rm AV} |v_\chi|\rangle$ given in equation (B1c) is proportional to $m_f^2$, which follows from the chirality-conserving property of the axial-vector operator [87]. The variation of $\langle\sigma_{\rm AV} |v_\chi|\rangle$ with increasing DM mass is found to be largely determined by the terms proportional to $m_\chi^2$ in the converging power-series expansion, as explicitly shown in equation (B1c). As a consequence, to satisfy the relic density constraint the cut-off varies with DM mass as $(m_\chi |\vec v_\chi|)^{1/2}$, i.e., faster than for the scalar and pseudo-scalar operators but slower than for the twist operator, as shown in figures 1a and 1b.

• the cut-offs for the scalar and twist operators induced by the heavy-quark-philic scalar and vector DM candidates are depicted in figures 2a, 2b and 3a, 3b respectively. For scalar interactions, the cut-offs are found to be independent of the DM masses, whereas the cut-off for the heavy-quark-philic vector DM pseudo-scalar operator increases monotonically as $(m_{V^0})^{1/2}$. For the twist operators, the cut-off varies as $(m_{\phi^0})^{1/2}$ for scalar DM, while in the vector DM scenario it is d-wave suppressed.

• the gluon-philic scalar, pseudo-scalar, and twist interactions corresponding to Majorana DM are p-wave suppressed, and the cut-off dependences are depicted in figure 1c. The cut-offs corresponding to the gluon-philic scalar, pseudo-scalar, and twist operators for the vector DM candidates are constrained to vary with the respective powers of the DM mass shown in figure 3c. The observed relative suppression of the constrained cut-off for the twist operator is due to its velocity dependence.
The same pattern follows for the scalar DM candidates corresponding to the gluon-philic scalar and twist operators in figure 2c.

B. Indirect detection of DM pairs

Today, the WIMP dark matter in the universe is expected to be trapped in large gravitational potential wells, which enhances the number density of DM in such regions and results in frequent collisions among the DM particles. This facilitates DM-DM pair annihilation into pairs of SM particles (photons, leptons, hadronic jets, etc.) at the Galactic centre, in dwarf spheroidal galaxies (dSphs), in galaxy clusters, and in Galactic halos. The dwarf spheroidal satellite galaxies of the Milky Way are especially promising targets for DM indirect detection due to their large dark matter content, the low diffuse Galactic γ-ray foreground accumulated over the galactic distance, and the lack of conventional astrophysical γ-ray production mechanisms. Their γ-ray flux is observed by satellite-based γ-ray observatories. The strongly interacting particles produced by DM pair annihilation hadronize, partially into neutral pions, which then decay to photon pairs ($\pi^0 \to \gamma\gamma$) with a $\approx$ 99% branching fraction, yielding a broad spectrum of photons. In order to compare with the photon spectra observed in the FermiLAT [15] and H.E.S.S. [17] experiments, the photon flux resulting from the DM pair annihilations is obtained by interfacing the MadDM algorithm [82] with the showering and hadronization simulated using the PYTHIA 8.0 code [96]. In this subsection, we compare the thermally averaged DM annihilation cross-sections corresponding to the $b\bar b$, $t\bar t$ and $gg$ channels in the dSphs environment with the upper bounds calculated from the respective recast experimental limits on the observed photon spectra.

The analytic expressions for the thermally averaged DM pair annihilation cross-sections agree with what is known for other fermions in the literature [40,46,97,98]. They are derived from the cross-sections in Appendix A with the centre-of-mass energy squared set to $s \simeq 4\, m^2_{\rm DM}$, appropriate for the small DM velocities involved: the DM velocity $|v_{\rm DM}|$ is roughly of order $\sim 10^{-3}\, c$ at the centre of the Galaxy and $10^{-5}\, c$ at dSphs, in contrast to $10^{-1}\, c$ at freeze-out. In comparison with the chirality-blind s-wave-dominated processes, the leading contributions from the p-wave and d-wave channels are therefore strongly velocity suppressed in these environments. We observe that the shape profile of the axial-vector-induced contribution to the thermally averaged DM pair annihilation cross-section with varying DM mass is governed by the same leading terms identified above.

C. Direct detection of DM

The Majorana, real scalar, and real vector DM-nucleon scattering cross-sections driven by the heavy-quark scalar, axial-vector, and twist-2 currents are given in equations (12), (13) and (14), in which the reduced mass of the respective DM-nucleon system appears, and the scale $\mu$ is taken to be the $Z^0$-boson mass. For the computation of the scale-dependent $\alpha_s(\mu)$, $\alpha_s(\Lambda)$ and $g(x; \Lambda)$, we access the CTEQ6l1 [99] PDF data set from the LHAPDF6 [100] library. The scale dependence of the Wilson coefficients, however, is found to be smaller than that of $\alpha_s$, as noted by the authors of reference [101]. It is to be noted that the logarithmically enhanced one-loop-induced scattering cross-sections in equations (12c), (13b) and (14c) result from the explicit momentum dependence of the twist-2 operators in the interaction Lagrangians given in equations (3), (6) and (8). The analytical expressions for the gluon-philic DM-nucleon scattering cross-sections are in agreement with those given in reference [74].
We display the spin-independent Majorana, real scalar and real vector DM-nucleon scattering cross-sections together with the upper limits extracted by interpolating the event-rate data from the experimental references [102] and [103] for $m_{\rm DM}$ below 1 TeV.

Nuclear Recoil Spectrum

For a fixed target-detector exposure T and target nucleus mass $m_{\rm nuc.}$, the differential nuclear recoil event rate with respect to the nuclear recoil energy, $dR_{\rm nuc.}/dE_r$, corresponding to the DM-nucleon scattering cross-section $\sigma_N$ and DM velocity $|\vec v|$, takes the standard form

$\dfrac{dR_{\rm nuc.}}{dE_r} \propto \dfrac{\rho_{\rm DM}\ \sigma_N\ A^2}{2\ m_{\rm DM}\ \mu_N^2}\ F^2(|\vec q|\, r_n) \displaystyle\int_{|\vec v| > v_{\rm min}} \dfrac{f(\vec v)}{|\vec v|}\ d^3 v$

where $F(|\vec q|\, r_n)$ is the Helm form factor taking account of the non-vanishing finite size of the nucleus [104]. Here $r_n \equiv 1.2\, A^{1/3}$ fm is the effective radius of the nucleus with atomic mass number A, and $|\vec q|$ is the momentum transfer corresponding to the recoil energy $E_r$. Following reference [105], the velocity integral for the normalized Maxwellian DM velocity distribution is solved in closed form. For the numerical computation we have taken the Earth's velocity relative to the galactic frame to be $|\vec v_E| = 232$ km/s, $|\vec v_0| = 220$ km/s and the escape velocity $|\vec v_{\rm esc}| = 544$ km/s. Further, in order to incorporate detector-based effects, the nuclear recoil event rate $dR_{\rm nuc.}/dE_r$ is convoluted with the detector efficiency [6]. Integrating over the recoil energy from $E_{\rm th} \sim 4.9$ keV to the maximum $E_{\rm max}$ for a fixed duration and size of the detector, we can estimate the expected number of recoil-nucleus events. As an illustration, we predict and plot the probable number of Xe nuclear recoil events.
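To illustrate the recoil-spectrum ingredients above, here is a minimal Python sketch of the Helm form factor and the kinematic threshold speed for a Xe target; it follows the standard Helm parametrization with $r_n = 1.2\,A^{1/3}$ fm and an assumed skin thickness s = 0.9 fm (a common choice), and all numbers are illustrative.

```python
import math

HBARC = 197.327  # MeV fm


def helm_form_factor(E_r_keV: float, A: int = 131) -> float:
    """Helm form factor F(q r_n) for a nucleus of mass number A,
    with r_n = 1.2 A^(1/3) fm and skin thickness s = 0.9 fm (assumed)."""
    m_nuc = 0.9315 * A                          # nuclear mass [GeV]
    q = math.sqrt(2 * m_nuc * E_r_keV * 1e-6)   # momentum transfer [GeV]
    q_fm = q * 1000.0 / HBARC                   # convert to fm^-1
    rn, s = 1.2 * A ** (1 / 3), 0.9
    x = q_fm * rn
    if x < 1e-6:
        return 1.0
    j1 = (math.sin(x) - x * math.cos(x)) / x**2  # spherical Bessel j1
    return 3 * j1 / x * math.exp(-((q_fm * s) ** 2) / 2)


def v_min_km_s(E_r_keV: float, m_dm_GeV: float, A: int = 131) -> float:
    """Minimum DM speed able to produce recoil energy E_r (elastic scattering)."""
    m_nuc = 0.9315 * A
    mu = m_dm_GeV * m_nuc / (m_dm_GeV + m_nuc)   # reduced mass [GeV]
    E_r = E_r_keV * 1e-6                          # [GeV]
    return math.sqrt(m_nuc * E_r / (2 * mu**2)) * 2.998e5  # [km/s]


for Er in (4.9, 20.0, 50.0):
    print(f"E_r = {Er:4.1f} keV: F^2 = {helm_form_factor(Er)**2:.3f}, "
          f"v_min(100 GeV) = {v_min_km_s(Er, 100.0):.0f} km/s")
```

The form-factor suppression growing with $E_r$, together with $v_{\rm min}$ approaching the escape velocity, is what shapes the falling recoil spectrum described above.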
IV. SUMMARY AND CONCLUSIONS

In this article we assess the viability of b-quark-philic, t-quark-philic, and gluon-philic self-conjugate spin 1/2, 0 and 1 DM candidates in the EFT approach. We have formulated the generalised effective interaction Lagrangians induced by the scalar, pseudo-scalar, axial-vector, and twist-2 operators for the real particles in section II. For a given DM mass, the relic abundance of Majorana, scalar, and vector DM is computed in section III A using the thermally averaged cross-sections of appendix B. Figures 1, 2 and 3 show the twenty-eight interaction strengths in the form of Wilson coefficients $C^{q,g}_{{\rm DM}_i O_j}/\Lambda^n$ in TeV$^{-n}$ (eleven for Majorana DM, six for scalar DM and eleven for vector DM) satisfying the relic density constraint $\Omega_{\rm DM} h^2 \approx 0.1198$ [4]. Using the constrained Wilson coefficients, we study the thermally averaged DM pair annihilation cross-sections for the indirect detection of varying DM masses (0.01-2 TeV) in figures 4, 5 and 6. The contributions driven by the lower limits of the effective couplings to the annihilation cross-sections in the $b\bar b$, $t\bar t$ and $gg$ channels are found to be consistent when compared with the upper limits on the $b\bar b$ annihilation cross-section obtained from the FermiLAT [15] and H.E.S.S. [17] indirect experiments. The scattering of the incident heavy-quark-philic DM particle off the static nucleon is induced at one loop.

[Figure caption: Constraints on the Wilson coefficients corresponding to the scalar and twist-2 type-2 operators from the relic density constraint [4] and the direct-detection limits from the XENON-1T experiment [6], shown in red, blue and green for the b-quark-, t-quark- and gluon-philic scalar DM respectively.]

As mentioned in the text, we include the study of gluon-philic Majorana DM interactions induced by the scalar, pseudo-scalar and twist-2 type-1 operators.

Appendix

The heavy-quark-philic Majorana DM pair annihilation cross-sections into third generation quarks are given in equations (A1a)-(A1d), and those of the Majorana DM pair into a pair of gluons in equations (A2a)-(A2c). The annihilation cross-sections of a pair of real spin-0 DM $\phi^0$ into a pair of third generation heavy quarks, induced by the scalar and twist-2 type-2 operators, are given in equations (A3a) and (A3b); the cross-sections for the annihilation of a pair of scalar DM into a pair of gluons, induced by the scalar and twist-2 type-2 operators, in equations (A4a) and (A4b). Similarly, the production of a pair of third generation heavy quarks from the annihilation of a pair of real vector DM, induced by the scalar, pseudo-scalar, axial-vector, and twist-2 type-2 operators, is given in equations (A5a)-(A5d), and the annihilation cross-sections of a pair of vector DM induced by the scalar, pseudo-scalar and twist-2 type-2 gluon currents in equations (A6a)-(A6c).

The thermal averages of the heavy-quark-philic Majorana DM pair annihilation cross-sections of equations (A1a) and (A1b), corresponding to the scalar and pseudo-scalar operators respectively, are listed first. The axial-vector contribution is analytically expanded in terms of two independently converging series in $|v_\chi|^2$, and finally the contribution from the twist-2 type-1 operator induced by the heavy-quark-philic Majorana DM interaction is given. Using the annihilation cross-sections in (A2a), (A2b) and (A2c), the thermally averaged annihilation cross-sections for the gluon-philic Majorana DM follow. The thermal averages of the heavy-quark-philic scalar DM pair annihilation cross-sections displayed in equations (A3a) and (A3b) and of the gluon-philic scalar DM pair annihilation cross-sections displayed in equations (A4a) and (A4b) are given next. The thermal averages of the vector DM pair annihilation cross-sections given in equations (A5a)-(A5d), corresponding to the scalar, pseudo-scalar, axial-vector and twist-2 type-2 operators respectively, and of the vector DM pair annihilation into gluon channels displayed in equations (A6a)-(A6c), corresponding to the scalar, pseudo-scalar and twist-2 currents respectively, complete the list.

The dimensionless one-loop integrals $I^{gg}_S$, $I^{gg}_{AV}$ and $I^{gg}_T$ are defined with the square of the four-momentum transfer $q^2 \approx 2\, m_N E_r$, where $E_r \lesssim 100$ keV is the recoil energy of the nucleon, and $m_Q$ and $m_N$ are the relevant heavy quark mass ($m_b$ or $m_t$) running in the loop and the nucleon mass respectively. We observe that the one-loop effective contact interactions generated from the quark scalar, axial-vector and twist currents contain the scalar, pseudo-scalar and twist gluon operators respectively. We perform the non-relativistic reduction of these operators for zero-momentum partonic gluons and evaluate the gluonic operators between nucleon states. The zero-momentum gluon contribution to the hadronic matrix element, $f^N_{TG} \approx 0.923$, is extracted in terms of the light-quark matrix elements [106][107][108]. According to [108] and [109], the pseudo-scalar gluon operator between nucleon states is likewise evaluated in terms of the light-quark matrix elements, while the contribution from heavy quarks is found to be vanishingly small [110]. It is important to observe that a quark axial-vector current is related to the gluonic pseudo-scalar current by PCAC [109]. The zero-momentum nucleonic matrix element for the gluon twist-2 operators is defined in terms of the second moment $g(2; \mu_R)$ of the gluon distribution.
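As a concrete illustration of this last quantity, the second moment of the gluon distribution can be evaluated numerically. This minimal sketch assumes the LHAPDF6 Python bindings with the cteq6l1 grid installed (plus SciPy for the quadrature); it is an illustration, not code from this paper.

```python
import lhapdf                      # LHAPDF6 Python bindings
from scipy.integrate import quad

pdf = lhapdf.mkPDF("cteq6l1", 0)   # same PDF set as used in the text
Q = 91.1876                        # renormalization scale: the Z mass [GeV]

# Second moment of the gluon distribution: g(2; mu_R) = int_0^1 dx x g(x; mu_R).
# lhapdf's xfxQ returns x*f(x), so the integrand is xfxQ itself (PID 21 = gluon).
val, err = quad(lambda x: pdf.xfxQ(21, x, Q), 1e-6, 1.0)
print(f"g(2; {Q:.1f} GeV) ~ {val:.3f}  (quadrature error ~ {err:.1e})")
```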
Enhancing Water Resistance in Foam Cement through MTES-Based Aerogel Impregnation

The propensity of foamed concrete to absorb water results in a consequential degradation of its performance attributes. Addressing this issue, the integration of aerogels presents a viable solution; however, their direct incorporation has been observed to compromise mechanical properties, attributable to the effects of the interface transition zone. This study explores the incorporation of MTES-based aerogels into foamed cement via an impregnation technique, examining variations in water-cement ratios. A comprehensive analysis was conducted, evaluating the influences of MTES-based aerogels on the thermal conductivity, compressive strength, density, chemical composition, and microstructure of the resultant composites across different water-cement ratios. Our findings elucidate that an increment in the water-cement ratio engenders a gradual regularization of the pore structure in foamed concrete, culminating in augmented porosity and diminished density. Notably, aerogel-enhanced foamed concrete (AEFC) exhibited a significant reduction in water absorption, quantified at 86% lower than its conventional foamed concrete (FC) counterpart. Furthermore, the softening coefficient of AEFC was observed to surpass 0.75, with peak values reaching approximately 0.9. These results substantiate that the impregnation of MTES-based aerogels into cementitious materials not only circumvents the decline in strength but also bolsters their hydrophobicity and water resistance, indirectly enhancing the serviceability and longevity of foamed concrete. In light of these findings, the impregnation method manifests promising potential for broadening the applications of aerogels in cement-based materials.

Introduction

Globally, the increasing energy demand has resulted in an energy crisis, the depletion of finite energy resources, and a myriad of environmental issues, such as ozone depletion, global warming, and climate change [1]. The demand for energy in buildings has surged in recent years, driven by population growth and higher living standards. Consequently, enhancing the energy efficiency of buildings can substantially decrease dependence on fossil fuels and play a pivotal role in mitigating overall greenhouse gas emissions [2]. At the current consumption rate, it is anticipated that non-renewable energy sources, such as oil and natural gas, will be exhausted in the first half of the 21st century. Research indicates that building energy consumption constitutes a significant portion, ranging from 30% to 50%, of the overall energy consumption in national economies [3,4]. Therefore, conserving energy in buildings represents a crucial measure to mitigate the energy crisis and promote the sustainable development of human society [5][6][7]. In the pursuit of energy conservation, incorporating thermal insulation materials into building envelopes offers a substantial opportunity. Compared to other renewable energy sources like solar and wind energy, building energy conservation delivers more tangible economic benefits. As a result, considerable attention has been directed towards developing thermal insulation materials as a strategy to achieve efficient energy savings and zero emissions in buildings.
Research in this field has primarily focused on enhancing the thermal insulation properties of building envelopes to improve their overall energy efficiency [8,9]. The use of aerogel in building insulation is gaining popularity due to its low thermal conductivity and excellent insulation performance. M. Koebel et al. [10] suggest that silica aerogels (SAs) hold great promise for applications in the insulation market, but more research is still needed to improve aerogel insulation materials. Research has shown that a mere 20 mm of aerogel can reduce heat loss through building exterior walls by up to 90% [11,12]. Yan et al. conducted studies on the use of silica aerogel to improve the performance of rock wool and glass wool and on the preparation of silica aerogel and expanded-perlite composite insulation for buildings [13,14]. To further enhance the thermal insulation properties of concrete, S. Fickler incorporated silica aerogel particles into a high-strength cement matrix, resulting in high-performance aerogel concrete with a compressive strength ranging from 3.0 MPa to 23.6 MPa and a thermal conductivity ranging from 0.16 W/m/K to 0.37 W/m/K [15]. Similarly, Serina Ng added aerogel to ultra-high-performance concrete to create an aerogel-containing mortar, which exhibited a compressive strength of 20 MPa and a thermal conductivity of 0.55 W/m/K at a 50% aerogel volume content, and a thermal conductivity of 0.31 W/m/K at an 80% aerogel volume content. In comparison to ordinary mortar, a thinner layer of aerogel insulation mortar was required to achieve the same insulation effect [16]. Scholars have also investigated the influence of various factors on the thermal conductivity of aerogel mortar, and a comparative analysis suggested that aerogel-mortar thermal insulation systems were more effective in reducing building energy consumption than vitrified-microsphere-mortar systems.

Silica aerogel (SA) is known for its hydrophobicity, which poses challenges when attempting to mix it with water-based cementitious materials. The formation of an interfacial transition zone between the aerogel and the water-based cementitious materials has a negative impact on the workability of the composite slurry [17]. To address this issue, many researchers have focused on eliminating the interface transition zone to improve interface bonding. For example, Cui et al. [18] found that modifying hydrophobic SiO2 aerogel with KH550 and epoxy resin AB glue can effectively eliminate the interface bonding zone, leading to significant improvements in the compressive strength of the composite material. However, it is important to note that composites prepared using this method have poor thermal insulation performance, and the approach runs counter to the hydrophobic nature of the aerogel material [19].

Based on the above research, it can be inferred that the first problem to be solved is the compositing of the aerogel with the cementitious matrix. The purpose of this study is to highlight the thermal insulation properties, mechanical properties, and water resistance of composites obtained by incorporating a methyltriethoxysilane (MTES)-based aerogel into foamed concrete using the impregnation method. The results obtained will contribute to the application of aerogel materials in the building sector.
Microstructure

Figure 1 illustrates a clear trend wherein the pore size of the foamed concrete increases as the water-cement ratio (w/c) ascends from 0.36 to 0.56. Notably, at w/c ratios below 0.41, the FC exhibits a disorganized pore structure. However, as the w/c ratio exceeds 0.41, the pores progressively adopt a more circular shape, accompanied by a marginal increase in size. This observed phenomenon can be ascribed to the excess free water derived from the 3% hydrogen peroxide solution employed during the FC preparation, whose decomposition

2 H2O2 → 2 H2O + O2↑

supplies the foaming gas during this process.

The findings depicted in Figure 1 thus reveal that an increment in the water-cement ratio corresponds to a gradual evolution of the pore morphology of the foamed concrete, transitioning from irregular to circular configurations; this morphological transformation becomes pronounced at a w/c ratio of 0.41. The viscosity of the cement slurry emerges as a critical determinant in shaping the pore structure, with high-viscosity slurries conducive to irregular pore formation and low-viscosity slurries favoring circular pores, as supported by the literature [20,21]. Moreover, the role of hydrogen peroxide in modulating pore size is noteworthy: an escalation in the w/c ratio marginally elevates the concentration of hydrogen peroxide, catalyzing the emergence of larger pores. Additionally, the incorporation of MTES-based aerogels into the FC matrix induces a pore-filling effect, exerting a substantial impact on the physicochemical attributes of the resultant aerogel-embedded foamed concrete.

Figure 2 presents scanning electron microscopy (SEM) images elucidating the pore structures of FC, AEFC, and the MTES-based aerogels within AEFC. Complementarily, Tables 1 and 2 furnish energy dispersive spectroscopy (EDS) elemental surface analysis results corresponding to Figures 2a and 2b, respectively. A notable distinction is observed in the textural comparison of FC and AEFC; the surface of AEFC depicted in Figure 2b is smoother relative to the rougher texture of FC in Figure 2a, a phenomenon attributable to the infill provided by MTES-based aerogels. Furthermore, Figure 2c distinctly showcases the typical coral-like silica framework structure of the MTES-based aerogel embedded within the pores of the AEFC depicted in Figure 2b [22]. Table 3 shows that the Ca element carries the largest weight fraction in the cement, and a comparative analysis of the EDS elemental surface data from Tables 1 and 2 reveals a stark contrast in elemental composition. The data in Table 1 (corresponding to Figure 2a) indicate a predominant presence of calcium, reflective of the calcium salts integral to the cement matrix of FC. Conversely, Table 2 (pertaining to Figure 2b) exhibits a preponderance of silicon, predominantly ascribed to the MTES-based aerogels within AEFC. The contribution of hydrated calcium silicate to the silicon content within the cement matrix of AEFC is discerned to be relatively minor [23].
Chemical Composition and Hydrophobic Mechanism

The synthetic procedure employed in this study involves hydrolysis and condensation reactions, as delineated in Figure 3 [24]. Notably, the Si-CH3 groups intrinsic to the MTES molecule and its hydrolysis byproducts remain unaltered throughout the hydrolysis process. These persistent Si-CH3 groups are ultimately integrated into the silica framework, where they significantly influence the surface chemistry of the MTES-based aerogels [25].

Figure 4a showcases the Fourier-transform infrared (FTIR) spectroscopy analysis of the MTES-based aerogel and FC. In the spectrum of the MTES-based aerogel, Si-O-Si asymmetric stretching vibration bands manifest in the range 1010-1090 cm−1 and at 465 cm−1 [23]. Peaks at 782 cm−1 and 1272 cm−1 signify stretching and symmetric deformation vibrations of the Si-C bonds [22,26]. Broad absorption at 3436 cm−1 and a peak around 1647 cm−1 correspond to -OH group vibrations [27], while peaks at 2902 cm−1 and 2964 cm−1 are indicative of C-H bond asymmetric and symmetric stretching vibrations [28]. These peaks highlight the Si-CH3 groups, which impart hydrophobicity to the aerogel.

In the FC spectrum, the peak at 3461 cm−1 is attributed to hydroxyl stretching vibrations of Ca(OH)2 [29], and the peak at 1646 cm−1 to the deformation vibration of free water. Carbonate peaks at 1430 cm−1 and 876 cm−1 arise from the reaction of CO2 and Ca(OH)2 forming CaCO3 [30]. The sulfate group's antisymmetric stretching vibration is noted at 1120 cm−1 [31,32], and the asymmetric tensile vibration peaks of Si-O at 1000 cm−1 and 521 cm−1 suggest the presence of calcium silicate hydrate [32].

The AEFC spectrum features all peaks present in both the MTES aerogel and FC, without any additional peaks, suggesting that no further chemical reactions occur between the MTES solution and FC during the impregnation process. This indicates that the MTES-based aerogel merely adheres to the surface of the FC substrate.

Figure 4b illustrates the hydrophobicity comparison between FC and AEFC by showcasing the behavior of 100 µL water droplets on their surfaces (fill light was applied during the photo shoot). The hydrophilic nature of FC is evident as the water droplet spreads across its surface. In contrast, AEFC exhibits markedly better hydrophobicity, with the water droplet maintaining a stable, spherical shape atop its surface (contact angle (CA) = 117°). This enhanced hydrophobicity can be attributed to the Si-CH3 groups present on the MTES-based aerogel, as elucidated in Figure 3a,b. These groups confer on the aerogel its inherent hydrophobic properties [22]. Post impregnation, the FC surface becomes coated with the MTES-based aerogel, effectively creating a hydrophobic layer. Consequently, AEFC exhibits a degree of hydrophobicity that is nearly on par with that of the MTES-based aerogel itself.
Water Resistance

The durability of materials in aqueous or humid environments is crucial, necessitating the evaluation of post-immersion properties such as water absorption and softening coefficient [33][34][35]. Figure 5 delineates the influence of the water-cement ratio on the water absorption characteristics of FC and AEFC. Notably, the water absorption of both FC and AEFC escalates with the increment in the water-cement ratio until a plateau is reached beyond a ratio of 0.41. This phenomenon correlates with the morphological evolution of the pore structure from irregular to more uniformly circular pores at elevated water-cement ratios. However, a further increment in the water-cement ratio to 0.56 results in a substantial surge in the water absorption of FC, concomitant with a decline in material density and strength and an augmentation in porosity. Figure 5a substantiates that FC attains water absorption saturation after an hour of immersion. Conversely, AEFC shows significantly lower water absorption compared to FC, owing to the hydrophobic properties imparted by the MTES-based aerogel. Beyond a water-cement ratio of 0.41, AEFC's water absorption stabilizes around 75%, marking an 86% reduction relative to FC. Additionally, AEFC exhibits a more protracted water absorption trajectory in comparison to FC.

The softening coefficient is a pivotal metric for materials deployed in aqueous or humid conditions, serving as an indicator of their structural integrity post immersion. Standards dictate that materials subjected to prolonged damp conditions should exhibit a softening coefficient exceeding 0.85, while those in mildly humid environments should maintain a coefficient no less than 0.75 [33]. Figure 6 illustrates the variation in softening coefficients for FC and AEFC across different
water-cement ratios, along with the correlation between water absorption and softening coefficient. As depicted in Figure 6a, there is an evident decrement in the softening coefficients for both FC and AEFC concomitant with increasing water-cement ratios. Notably, FC exhibits a pronounced reduction in its softening coefficient upon reaching a water-cement ratio of 0.56. Conversely, AEFC consistently maintains higher softening coefficients compared to FC, with values surpassing the 0.75 threshold, reinforcing its suitability for use in moist environments. In response to the higher softening coefficient of FC at a w/c ratio of 0.46 compared to that at a w/c ratio of 0.41 observed in Figure 6a, we suggest this could be due to microstructural variances and experimental variability affecting mechanical properties. Further investigation is needed to fully understand these mechanisms.

Figure 6b reveals a linear relationship between the softening coefficient and water absorption (y = 0.96913 − 0.00234x; R2 = 0.95481), underscoring water absorption as a significant determinant of the softening coefficient. This linear correlation suggests that enhancing material hydrophobicity could be a strategic avenue to bolster the softening coefficient, thereby augmenting the durability of foamed concrete in wet conditions.
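As a hedged illustration of how a linear relation of this form can be obtained, the short sketch below fits a least-squares line and computes R2 with NumPy; the data points are hypothetical placeholders, not the measurements behind Figure 6b.

```python
import numpy as np

# Hypothetical (water absorption %, softening coefficient) pairs for illustration only
water_absorption = np.array([60.0, 75.0, 90.0, 120.0, 160.0])
softening_coeff = np.array([0.83, 0.79, 0.76, 0.69, 0.59])

# Least-squares linear fit: softening coefficient = intercept + slope * absorption
slope, intercept = np.polyfit(water_absorption, softening_coeff, 1)

# Coefficient of determination for the fitted line
pred = intercept + slope * water_absorption
r2 = 1.0 - np.sum((softening_coeff - pred) ** 2) / np.sum((softening_coeff - softening_coeff.mean()) ** 2)
print(f"y = {intercept:.5f} + ({slope:.5f})x, R^2 = {r2:.5f}")
```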
Compressive Strength

Figure 7a depicts the inverse relationship between water-cement ratio and density for both FC and AEFC before and after immersion treatment, in accordance with established microstructural dynamics [36,37]. Notably, at a water-cement ratio of 0.41, FC transitions to a regular circular pore structure. Post immersion, AEFC retains the same trend as FC but exhibits a higher density across corresponding water-cement ratios. This increased density in AEFC can be attributed to the incorporation of the MTES-based aerogel, which slightly augments the mass without altering the volume. As per Equations (8) and (9), porosity and density are inversely proportional when the matrix density is constant, leading to a slight reduction in porosity for AEFC compared to FC, as illustrated in Figure 7a.

Figure 7b reflects the congruent trends in compressive strength for AEFC and FC in relation to their densities. A gradual decline in compressive strength from 0.83 MPa to 0.28 MPa is observed with increasing water-cement ratio, followed by a stabilization in the 0.23-0.28 MPa range [38][39][40]. However, due to the experimental conditions, only a single measurement was made for each sample. The incorporation of the MTES-based aerogel in AEFC does not markedly influence its compressive strength. The decrease in strength is largely due to heightened porosity stemming from the shift in pore structure towards regular circular forms and the consequent enlargement of pores. It is well established that the strength of porous materials is inversely related to their porosity. However, AEFC benefits from a more uniform and orderly microstructure, which aids in the effective dispersion of stress, thereby stabilizing compressive strength within the specified range despite the increase in porosity.
Thermal Insulation Property

Figure 8a illustrates the thermal conductivity of FC and AEFC in relation to the water-cement ratio. Based on measurements of the effective thermal conductivity of the insulation material [41], for FC, the thermal conductivity diminishes significantly from 0.095 W/m/K to 0.038 W/m/K as the water-cement ratio increases from 0.36 to 0.56. This reduction is attributed to the sharp decrease in density and the corresponding increase in porosity from approximately 74% to 84-88%. In this porous structure, static air, with its low thermal conductivity of 0.026 W/m/K, predominates, thereby significantly lowering the effective thermal conductivity of the material. Similar to FC, AEFC exhibits a corresponding decrease in thermal conductivity with increasing water-cement ratio, stabilizing at around 0.04 W/m/K [42][43][44]. Despite the incorporation of the MTES-based aerogel, the thermal properties of AEFC are closely aligned with those of FC, mainly because the aerogel is only physically combined with FC, making their physical property differences negligible for the purposes of this analysis.

AEFC is considered a ternary material in which the cementitious component is the only continuous phase and the MTES-based aerogel and gas are the dispersed phases. For the gas/MTES-based aerogel mixed phase, the gas can be considered the continuous phase and the MTES-based aerogel the dispersed phase, while the effective thermal conductivity of the gas/MTES-based aerogel mixed phase can be calculated by the Maxwell-Eucken-1 model [45,52].
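As a point of reference, a minimal sketch of the Maxwell-Eucken relation in its standard textbook form is given below; the exact expressions and correction parameters used in this work, including the Levy parameter G, follow [52], so this should be read as an illustrative standard form rather than the authors' exact equations. For a continuous phase of conductivity k1 and a dispersed phase of conductivity k2 at volume fraction v2,

$$k_{\mathrm{eff}} = k_1\,\frac{2k_1 + k_2 - 2\,(k_1 - k_2)\,v_2}{\,2k_1 + k_2 + (k_1 - k_2)\,v_2\,}.$$

Applied to the gas/MTES-based aerogel mixed phase, k1 = Kg, k2 = Ka, and v2 = va; at the composite level, the cement base acts as the continuous phase in the M1 configuration, while the gas/aerogel mixture does so in the M2 configuration.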
Kg and Ka represent the thermal conductivities of gas and the MTES-based aerogel, respectively. The thermal conductivity of stationary air is typically 26 mW/m/K. The volume fraction of SA in the gas/SA mixed phase is represented by va. When the cement base is in the continuous phase, the overall thermal conductivity kc,a,g is represented by the M1 model; when the gas/MTES-based aerogel mixture is in the continuous phase, it is expressed by the M2 model. However, the limitation of the M1 and M2 models is that they are determined only based on the difference between the dispersed phase and the continuous phase and lack relevant correction parameters. A relatively accurate representation is provided by the Levy model [52], in which K, Kc, and Ka,g are the effective thermal conductivity of the composite material, the thermal conductivity of the cement, and the thermal conductivity of the gas/SA mixed phase, respectively, va,g is the pore volume fraction (namely the porosity), and G is a correction parameter.

The comparison of the effective thermal conductivity values calculated using models M1 and M2 and the Levy model with the experimental data for foamed concrete and aerogel-embedded foamed concrete is depicted in Figure 8b. At lower porosities, the experimental thermal conductivities for FC and AEFC lie between the values predicted by M1 and the Levy model. For higher porosities, the experimental thermal conductivities of FC and AEFC are in close agreement with those predicted by the Levy model, as indicated by the blue ellipse in Figure 8b. As shown in Figure 1, the sample exhibits an irregular pore structure and low porosity at a water-cement ratio of 0.36. When this ratio exceeds 0.41, the porosity of the sample surges rapidly to over 84%, and the pores in FC begin transitioning into circular forms. This transition is a critical factor influencing the alignment of experimental thermal conductivity values with those predicted by the Levy model. Consequently, it can be inferred that the circular pore structure in FC, as illustrated in Figure 1, correlates more closely with the Levy model's calculations. However, when
the pore structure alters, further modifications are required.

Conclusions

In this work, the feasibility of fabricating aerogel-enhanced foamed concrete using varying water-to-cement ratios through the impregnation technique was explored. The results elucidate that the water-to-cement ratio is pivotal in dictating the microstructural attributes of the foamed concrete, specifically its pore configuration and porosity. With a fixed foam stabilizer content, reduced water-to-cement ratios were associated with a more chaotic pore structure due to the resultant increase in slurry viscosity. In contrast, a ratio of 0.41 yielded moderate viscosity, which allowed the foam stabilizer to sustain the bubble structure more effectively, resulting in a more organized pore framework. Furthermore, a significant dependency of the physical characteristics of AEFC on the water-to-cement ratio was observed. An increase in this ratio led to an escalation in porosity and a concomitant reduction in density to as low as 0.2 g/cm3, while maintaining a direct and positive correlation with compressive strength. Importantly, our study demonstrated the material's enhanced hydrophobic properties, with a remarkable 86% decrease in water absorption at a water-to-cement ratio of 0.56 compared to regular foamed concrete, surpassing the minimum standard for the softening coefficient by registering an increase from 0.62 to 0.78. These findings highlight the potential of water-to-cement ratio manipulation as a strategic tool to enhance the waterproofing characteristics of aerogel-enhanced foamed concrete, indicating its suitability for applications requiring specific structural and waterproofing criteria.

Preparing Foamed Cement

Initially, calcium stearate and sulfur aluminate cement were dry-mixed in a mass ratio of 1:293. Subsequently, 3 wt.% hydrogen peroxide solution was added to the powder and heated to 40 ± 2 °C. The water-cement ratio was controlled by adjusting the amount of hydrogen peroxide solution in the range of 0.36-0.56. The heated mixture was then stirred rapidly and poured into a mold, followed by placement in a 60 °C water bath to accelerate the decomposition of hydrogen peroxide and achieve the initial setting. The resulting foamed concrete was cured in an environment with 95% RH and a temperature of 20 ± 2 °C. After curing, the foamed concrete was dried at 60 °C for 1 day and then further dried to a constant weight at 80 °C. This method was utilized to investigate the preparation of foamed concrete with controlled water-cement ratios, aiming to establish a new method for developing high-performance cement-based composites.

Preparing MTES-Based Aerogel

Initially, a mixture of 5 mL MTES, 0.01 g CTAB, 25 mL DI·H2O, and 300 µL 0.1 M HCl was stirred for 3 min in a 100 mL beaker. The resulting hybrid solution was kept in a 45 °C water bath and stirred for approximately 2 h to undergo hydrolysis. After hydrolysis, the hybrid solution was cooled to below 10 °C to regulate the condensation rate, based on the Arrhenius equation. Subsequently, 500 µL of 1 M NH3·H2O was added to the hybrid solution while stirring for 10 s. The resulting gel was placed in an incubator below 10 °C; gelation typically occurs within 1.5 h [22].
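For context on this cooling step, the temperature dependence it exploits is the standard Arrhenius relation (stated here for reference, not as an equation reproduced from the original work),

$$k = A\,e^{-E_a/(RT)},$$

where k is the rate constant of the condensation reaction, A the pre-exponential factor, Ea the activation energy, R the gas constant, and T the absolute temperature; lowering T from 45 °C to below 10 °C therefore slows the condensation and delays gelation.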
Embedding of MTES-Based Aerogel

During the preparation of the MTES-based aerogel, the above-mentioned foamed concrete was completely immersed in the MTES hydrolyzed solution after adding the 1 M NH3·H2O. Subsequently, the formed sol with foamed cement was placed in an incubator below 10 °C; gelation usually occurs within 1.5 h. (It is important to note that although FC may absorb a small amount of moisture, this moisture, as well as moisture in the gel, is removed during the drying process.) Finally, the composites were dried under ambient pressure at 120 °C for 3 h to obtain the MTES-based aerogel/foamed concrete composite (aerogel-embedded foamed concrete, AEFC). A schematic diagram of the experimental flow is shown in Figure 9.

Methods of Characterization

The internal microstructure of the material was examined using a ZEISS Sigma 300 field emission scanning electron microscope (SEM) (Oberkochen, Germany) and its accompanying equipment, including energy-dispersive spectroscopy (EDS). The bulk density (ρb) was approximated through the tap density, which was measured using a tap density meter (ZS-202, Liaoning Instrument Research Institute Co., Ltd., Liaoning, China) at 300 r/min of continuous vibration for 10 min. The corresponding porosity (the gas volume fraction, vg) was calculated from the formula given in [53]. In this study, cement and SA are considered together as the solid-phase matrix of AFC, and the matrix density (ρm) is calculated as ρm = 1/(wc/ρc + wa/ρa), where ρc, ρa and wc, wa correspond to the density and mass fraction of cement and the MTES-based aerogel silicon framework, respectively; the densities are 1.803 g/cm3 and 2.25 g/cm3, respectively [54].
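As a worked illustration of these density relations, the sketch below computes the matrix density from the stated mass-fraction mixing rule and then a porosity estimate under the common assumption vg = 1 − ρb/ρm; the porosity expression and the numerical inputs are illustrative assumptions, not the exact formula cited from [53] or measured values.

```python
# Minimal sketch: matrix density from the harmonic mixing rule stated above, and
# porosity from the commonly used relation v_g = 1 - rho_b / rho_m (an assumed
# standard form; the exact formula is the one cited from [53]).

def matrix_density(w_c, w_a, rho_c=1.803, rho_a=2.25):
    """Solid-phase matrix density (g/cm^3): rho_m = 1 / (w_c/rho_c + w_a/rho_a)."""
    return 1.0 / (w_c / rho_c + w_a / rho_a)

def porosity(rho_b, rho_m):
    """Gas volume fraction, assuming v_g = 1 - rho_b / rho_m."""
    return 1.0 - rho_b / rho_m

# Hypothetical example: bulk density 0.2 g/cm^3, 95 wt.% cement / 5 wt.% aerogel framework
rho_m = matrix_density(w_c=0.95, w_a=0.05)
print(f"matrix density ~ {rho_m:.3f} g/cm^3, porosity ~ {porosity(0.2, rho_m):.1%}")
```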
The chemical bonds and chemical groups of pure SA, FC, and AFC were determined by Fourier-transform infrared spectroscopy (FTIR, Nicolet 8700, Nicolet, Madison, WI, USA) using KBr pellets.

The hydrophobicity of the samples was measured using the ASR-705S hydrophobic angle tester from Guangdong Iris Instrument Technology Co., Ltd. (Dongguan, China), with contact angles recorded and obtained via the image processing program ImageJ (version 1.54f) [55].

To measure water absorption, we first conditioned standard and aerogel-enhanced foamed concrete samples to a stable weight, then weighed them dry. After full immersion in distilled water for different periods of time, we removed surface moisture and reweighed the samples. The water absorption rate was calculated as (Wet Weight − Dry Weight)/Dry Weight × 100%, providing insights into changes in water absorption due to aerogel incorporation.

The compressive strength testing was conducted using a column sensor produced by Ningbo Saishi Measurement and Control Technology Co., Ltd. (Ningbo, China) under a loading rate of 0.5 mm/min; a single compressive test was conducted for each sample.

Figure 1. Pore structure changes before and after immersion of FC and AEFC samples with different water-cement ratios.
Figure 3. The synthetic procedure employed in this study involves hydrolysis (a) and condensation reactions (b).
Figure 5. (a) Water absorption of FC at different times; (b) water absorption of AEFC at different times.
Figure 6. (a) Softening coefficient of FC and AEFC under different water-cement ratios; (b) relationship between water absorption and softening coefficient.
Figure 7. (a) Density and porosity of FC and AEFC; (b) compressive strength of FC and AEFC.
Figure 8. (a) Variation trend of thermal conductivity of FC and AEFC with water-cement ratios; (b) comparison between experimental data and effective thermal conductivity calculated by different heat transfer models.
Figure 9. Schematic diagram of the experimental flow.
Table 1. EDS surface analysis of element-specific gravity of FC.
Table 2. EDS surface analysis of element-specific gravity of AEFC.
Table 3. Chemical composition of cement.
PATH PLANNING FOR INDOOR CONTACT INSPECTION TASKS WITH UAVS

UAV technology has become a useful tool for the inspection of infrastructures. Structural Health Monitoring methods are already employing these vehicles to obtain information about the condition of the structure. Several systems based on close-range remote sensing and contact sensors have been developed. In both cases, in order to perform autonomous missions in hardly accessible areas or areas with obstacles, a path planning algorithm that calculates the trajectory to be followed by the UAV to navigate these areas is mandatory. This work presents a UAV path planning algorithm developed to navigate indoors and outdoors. The algorithm calculates not only the waypoints of the path but also the orientation of the vehicle at each location. It will support a specific UAV-based contact inspection of vertical structures. The required input data consist of a point cloud of the environment, the initial position of the UAV, and the target point of the structure where the contact inspection will be performed.

INTRODUCTION

UAV systems have become a useful tool for SHM (Structural Health Monitoring) tasks, since they can easily navigate in complex and hardly accessible areas. This property makes the technology well suited for inspection tasks on large infrastructures. Several UAV systems for infrastructure inspection have been developed, using close-range remote sensing payloads, such as cameras or LiDAR (Light Detection and Ranging) sensors (Teng et al., 2017), or contact inspection sensors, such as ultrasonic sensors (Zhang et al., 2018). In both cases, a path planning algorithm is needed to perform these inspection tasks autonomously. For outdoor missions, the path planning is usually based on a GNSS waypoint algorithm implemented in open-source software (Ardupilot, n.d.; Dronecode, n.d.). In these missions, the system is less dependent on the environment where the mission is to be performed. For more complex environments, where the system is expected to fly in reduced or confined spaces, a more sophisticated path-planning method must be used. Several indoor navigation algorithms have also been developed, in which obstacle detection is fundamental, based on 3D point clouds of the environment acquired with LiDAR sensors (Díaz-Vilariño et al., 2016). In the indoor case, it is fundamental to find the doors in the room, which are the components that connect different rooms (Liu and Zlatanova, 2011).

RELATED WORK

There are many two-dimensional (2D) path planning methods (Liu and Zlatanova, 2013), where the data used for the path planning are the existing building information. These methods can be classified into two large groups: surface-based and volume-based models (Fernández-Prieto et al., 2019; Li et al., 2018; Tomic et al., 2012). One of the main restrictions of the currently developed 3D path-finding methods for UAVs is that they focus only on obtaining path positions. The vehicle orientation (roll, pitch, and yaw angles) along such a path is also a key aspect of performing autonomous missions. Volume-based algorithms are based on the discretization of the volumetric information, frequently by means of a voxelization: a discretization method based on dividing the volume into smaller cubes of a fixed size, called voxels. Each voxel is classified depending on the area that it represents (González-de Santos et al., 2018).
Also, a more efficient octree structure can be applied to the volume discretization, obtaining an equivalent resolution with less data to be processed (González-de Santos et al., 2018). The aim of this work is to develop a path planning algorithm for UAVs combining different previously developed methods and adding orientation planning for the calculated path. The developed methodology is applied to contact inspection tasks indoors and outdoors. The authors have already developed a payload for UAVs to perform semi-autonomous contact inspection tasks on vertical structures (González-de Santos et al., 2020), which controls the approach and the stabilized contact of the UAV with the structure to be inspected. The main objective of the developed algorithm is to obtain and plan the path to navigate from the initial point, where the UAV is landed, to an oriented point in front of the structure where the payload control is capable of managing the approaching movements to perform the contact inspection.

METHODOLOGY

The data used by the developed path planning algorithm are a 3D point cloud obtained by TLS (Terrestrial Laser Scanner). Figure 1 presents the workflow of the developed methodology. The first step of the method consists of the discretization of the point cloud of the environment into a voxel-based structure, where each voxel is classified depending on the properties of the space it represents. In this step, the voxel size has to be defined. Choosing a correct voxel size is important, because it implies a trade-off between the loss of detail in the point cloud and the computational cost of the classification.

A pre-processing of the point cloud was performed before the data discretization. In this process, the point cloud is rotated in order to make the floor plane completely horizontal. To calculate the floor plane, a RANSAC (Random Sample Consensus) algorithm is used (Fischler and Bolles, 1981). Some laser scanners (LS) use a barometer to collect absolute elevations to reference the Z coordinate of each point. A rigid body transformation is applied in order to reference the height of each point to the floor, making its plane coincident with the XY plane. Then, the point cloud is segmented into 3 parts, separating the floor and the ceiling from the rest of the room (Figure 2). In most buildings, walls are perpendicular and parallel to each other. According to this, the point cloud is rotated around the Z axis in order to align the walls with the voxel faces, since these are aligned with the XYZ coordinates. In this way, the voxelization can be optimized. To do this, a small horizontal section of the upper part of the room is segmented, which is usually the least occluded part in most non-industrial buildings. In this section, the plane of each wall is fitted, locating the longest wall of the room. The normal vector of this wall is used to rotate the entire point cloud of the room to align it with the X axis. When the point cloud is rotated, the voxelization structure can be created, taking into account the room limits in each axis and the voxel size, and calculating the voxel matrix dimensions. Each voxel of the structure is classified depending on its occupancy as one of the following four types of voxels:
• Empty: voxels that do not contain points.
• Occupied: voxels that contain points.
• Security offset: empty voxels that are near one or more occupied voxels. They are defined by a security distance.
• Exterior: empty voxels that are out of the study room.
Initially, all the voxels are defined as empty voxels. The following sections explain how these voxels are reclassified.

Occupation classification

In this step, each voxel is studied individually, classifying it depending on its inner point density. If a voxel has a point density higher than a predefined threshold, the voxel is classified as occupied. The value of this density threshold is of great importance, because it helps to discriminate noisy voxels or reflections in the point cloud that could otherwise affect the classification.

Figure 3. Horizontal plane of voxels. Dark blue: occupied voxels; grey: security offset voxels; yellow: exterior voxels; light blue: empty voxels or navigable area.

Navigability study

After the first classification of the occupied/empty voxels, the point cloud is evaluated to localize and delimit the room, classifying the outer voxels as exterior voxels. In order to carry out this classification, the point cloud of the ceiling is used, applying in this case a 2D discretization based on a uniform grid. The cell size of the grid is equal to the voxel size and, accordingly, the cells of the grid are coincident with the voxel top faces. An occupation classification, similar to the one followed in the previous section, is derived, but in this case the squares with a point density lower than the threshold are classified as exterior, and this classification is applied to all empty voxels that are below the square. Also, a security distance depending on the UAV size is defined. With this distance, a security offset is generated around the occupied voxels in an operation similar to a morphological dilation, whose structuring element (s = N×N×N) is a 3D matrix. The matrix dimension (N) is calculated using the selected security distance and the voxel size (Equation 1). Only empty voxels can change their classification during this process; occupied and exterior voxels do not change. In this way, the navigable area, which is defined as the area where the UAV could be positioned, is calculated (Figure 3); a sketch of this dilation step is given below.
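A minimal sketch of the security-offset step, assuming a structuring-element size of N = 2·ceil(security distance / voxel size) + 1; this choice of N is an illustrative assumption rather than the exact Equation 1.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def security_offset(occupied, exterior, security_distance, voxel_size):
    """Grow a security offset around occupied voxels by 3D morphological dilation.
    occupied, exterior: boolean voxel grids; returns a boolean mask of the voxels
    to be reclassified as 'security offset' (only empty voxels can change class)."""
    # Assumed structuring-element size; Equation 1 of the paper defines the exact N.
    n = 2 * int(np.ceil(security_distance / voxel_size)) + 1
    structure = np.ones((n, n, n), dtype=bool)
    dilated = binary_dilation(occupied, structure=structure)
    return dilated & ~occupied & ~exterior
```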
Navigation map generation

Once the navigable area has been defined, the navigation map can be generated, using just the target position that the UAV has to reach, but not the initial position. Basically, this navigation map (Map) is a 3D matrix with the same dimensions as the voxel discretization, where each voxel is given a value according to the following rules:
• Non-navigable voxels, or navigable voxels not directly connected with the target position (meaning that there is no possible path between them and the target position), are given a value of -1.
• Navigable voxels directly connected to the target position are given a value depending on the voxels that surround them and the distance.
• The target voxel, that is, the voxel that contains the target position, is given a value of 1.

Figure 4. Navigation map algorithm workflow.

The workflow of the map generation is described in Figure 4. As can be seen, the generation of the map starts at the target position, which has value 1. Then, the navigation map is expanded by analysing the voxels that surround the study voxel, giving them a value depending on their position. Depending on the relation between the indices of the voxels in the volume, three classification groups can be distinguished (Figure 5):
• Common face: voxels surrounding the study voxel that share a face with it. This means that two of the index coordinates of these voxels and the study voxel are coincident.
• Common edge: voxels surrounding the study voxel that share an edge with it. This means that one of the index coordinates of these voxels and the study voxel is coincident.
• Common vertex: voxels surrounding the study voxel that share a vertex with it. This means that none of the index coordinates are coincident.

For each of the 26 voxels that surround the study voxel, a new weight value is given. Each relationship type has a standard weight. In this study, a value of 1 is assigned to common face voxels, common edge voxels obtain a weight value of 2, and a weight value of 3 is assigned to common vertex voxels. The weight value has to be consistent with the stop criterion CycleLimit. This weight is added to the value of the voxel under study and is assigned to empty voxels only if they have not previously been assigned a lower value. In this way, each voxel of the map has the lowest possible value. As can be seen in Figure 6, the navigation map generated is similar to a gradient that decreases towards the target position, which has the lowest value of the map.

Figure 5. Types of position relationship between voxels (red: study voxel; grey: voxels surrounding the study voxel). a) Common face. b) Common edge. c) Common vertex.

Figure 6. Horizontal plane of voxels showing the navigation map generated (red voxels) around the objective position (black voxel).

Path planning

This section describes how the path is calculated using the generated map. A voxel has to fulfil only one requirement to be assigned as the initial position, which is having a value different from -1 in the navigation map. As a result, an initial voxel fulfilling this requirement is also an empty voxel directly connected to the objective position. In order to calculate the path, a process similar to the neighbouring-voxel analysis explained before is carried out. In this case, we start with the initial position and analyse the values of the surrounding voxels in the navigation map, looking for the lowest-value voxel, which is selected as the next position in the path and becomes the new study voxel. This process is repeated until the target voxel is reached. A sketch of the map generation and path extraction is given below.

As has been said before, this path planning algorithm was developed specifically for UAVs to navigate and perform a completely autonomous contact inspection task with the system developed in previous works (González-de Santos et al., 2019; González-de Santos et al., 2020) (Figure 8). The general operation starts with the UAV landed on the floor; it has to take off at the beginning of the mission. Taking into consideration the requirements of the contact payload, the system has to navigate to a location and orientation one meter in front of the target point. With this information, the path planning software must calculate the path to be followed by the UAV to perform the contact inspection from an initial position (Figure 9).
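A minimal sketch of the two steps just described (weighted 26-neighbour expansion from the target and greedy descent from the initial voxel) is given below, assuming a boolean navigable-voxel mask as input; function names and the handling of the CycleLimit stop criterion are illustrative choices, not the original MATLAB implementation.

```python
import heapq
from itertools import product
import numpy as np

FACE, EDGE, VERTEX = 1, 2, 3  # weights used for the 26 neighbours

def build_navigation_map(navigable, target):
    """Expand a cost map from the target voxel over the navigable volume.
    navigable: 3D boolean array; target: (i, j, k) index of the target voxel.
    Returns a 3D array with -1 for non-navigable or unreachable voxels."""
    nav_map = np.full(navigable.shape, -1.0)
    nav_map[target] = 1.0
    heap = [(1.0, target)]
    while heap:
        value, (i, j, k) = heapq.heappop(heap)
        if value > nav_map[i, j, k]:
            continue  # stale heap entry
        for di, dj, dk in product((-1, 0, 1), repeat=3):
            if di == dj == dk == 0:
                continue
            n = (i + di, j + dj, k + dk)
            if not all(0 <= n[a] < navigable.shape[a] for a in range(3)):
                continue
            if not navigable[n]:
                continue
            weight = (FACE, EDGE, VERTEX)[abs(di) + abs(dj) + abs(dk) - 1]
            cand = value + weight
            if nav_map[n] < 0 or cand < nav_map[n]:
                nav_map[n] = cand  # keep the lowest possible value
                heapq.heappush(heap, (cand, n))
    return nav_map

def extract_path(nav_map, start):
    """Greedy descent: from the start voxel, repeatedly move to the lowest-valued
    neighbour until the target voxel (value 1) is reached."""
    if nav_map[start] < 0:
        raise ValueError("start voxel is not connected to the target")
    path, current = [start], start
    while nav_map[current] > 1.0:
        i, j, k = current
        best = None
        for di, dj, dk in product((-1, 0, 1), repeat=3):
            n = (i + di, j + dj, k + dk)
            if n == current or not all(0 <= n[a] < nav_map.shape[a] for a in range(3)):
                continue
            if nav_map[n] >= 0 and (best is None or nav_map[n] < nav_map[best]):
                best = n
        current = best
        path.append(current)
    return path
```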
When the path has been calculated, the orientation, consisting of three angles (roll, pitch, and yaw), has to be obtained for each position in the path. In this work, only the yaw angle is calculated and fixed, because roll and pitch are used for the UAV navigation and are thus left to be calculated by the flight controller. The yaw angle calculation supports the UAV rotation needed to reach the target orientation, perpendicular to the wall where the contact will be performed. In this work, three methods have been considered for the yaw calculation:
• Initial/final rotation: the UAV runs the full rotation in the initial (or final) position, maintaining the orientation in the waypoints of the path.
• Progressive rotation: the UAV rotates progressively in yaw between the initial orientation and the target orientation.
• Tangent to the path: the changes in UAV orientation are proportional to the azimuth changes in the path, i.e., the orientation of the path projection onto the XY plane.

The user has to choose between these options. For this work, the progressive rotation has been selected (Figure 7), since it is the one that generates the smoothest path.

Figure 9. Path calculation. a) Initial position (red voxel) and objective position (black voxel). b) Voxels selected by the path planning algorithm (yellow: take off; red: path; green: contact approaching movements). c) Points of the calculated path.

RESULTS AND DISCUSSION

The path planning algorithm developed in this study has been tested in two study cases. The first one was carried out in a laboratory of the Mining and Energy School at the University of Vigo, whose size is approximately 10 m x 6 m. In this first test, a Faro Focus 3D x330 LiDAR scanner was used to obtain the point cloud, which contains 0.66 million points. The results of this test were shown in the Methodology section. The second study case was carried out in the hall of the same school; the point cloud was obtained using a prototype of a backpack-based mobile system (Filgueira et al., 2016). This point cloud is available in the ISPRS Benchmark on Indoor Modelling (Khoshelham et al., 2017). The size of this second room is approximately 14 m x 26 m. The point cloud used contains 5.5 million points. In both cases, a voxel size of 0.2 m has been selected, which is enough to capture critical obstacles like chairs and other furniture. All the algorithms for the path planning processing were implemented in MATLAB 2018b, using the Point Cloud Library. The PC used for this study has an Intel Core i7 and 16 GB of RAM. Figure 10 shows the results obtained in the second study case, where a longer path has been calculated. Figure 11 shows the navigation map generated for this room. As can be seen in the figures, this second point cloud is noisy, but the software is able to process these data to calculate the path.

Figure 10. Path calculation. a) Initial position (red) and objective position (black voxel). b) Path calculated (yellow: take off; red: path; green: contact approaching movements).

Figure 11. Navigation map generated (red voxels) around the objective position (black voxel).

CONCLUSIONS

This paper presents a path planning algorithm developed for the navigation of a UAV to carry out contact inspection tasks completely autonomously.
The algorithm has been tested in two study cases, with one point cloud obtained with a TLS and another obtained with a backpack-based mobile scanner, which contains more noise. In both cases, good results have been obtained. This work completes previous works in which a payload for UAVs was developed to perform contact inspection tasks. As future work, the developed algorithm will be optimized so that it can be applied to real study cases. As part of this future work, software communicating the results of the path planning to the flight controller will be developed.
Drone-Based Remote Sensing for Research on Wind Erosion in Drylands: Possible Applications : With rapid innovations in drone, camera, and 3D photogrammetry, drone-based remote sensing can accurately and efficiently provide ultra-high resolution imagery and digital surface model (DSM) at a landscape scale. Several studies have been conducted using drone-based remote sensing to quantitatively assess the impacts of wind erosion on the vegetation communities and landforms in drylands. In this study, first, five difficulties in conducting wind erosion research through data collection from fieldwork are summarized: insufficient samples, spatial displacement with auxiliary datasets, missing volumetric information, a unidirectional view, and spatially inexplicit input. Then, five possible applications—to provide a reliable and valid sample set, to mitigate the spatial offset, to monitor soil elevation change, to evaluate the directional property of land cover, and to make spatially explicit input for ecological models—of drone-based remote sensing products are suggested. To sum up, drone-based remote sensing has become a useful method to research wind erosion in drylands, and can solve the issues caused by using data collected from fieldwork. For wind erosion research in drylands, we suggest that a drone-based remote sensing product should be used as a complement to field measurements. Introduction Wind erosion is a land surface process that occurs in drylands [1]. It impacts the landform and vegetation community, which leads to degradation of farmland, rangeland, and natural habitats [2]. It also brings dust particles into the atmosphere, biosphere, and hydrosphere, which directly and indirectly influences the climate variation on a global scale [3][4][5]. Wind erosion is a complex process that involves interactions between the soil, vegetation, land surface, and atmosphere [2]. Field observation is the traditional way to provide raw data for research on wind erosion. However, the data collected from fieldwork have several limitations such as insufficient samples [6], spatial displacement with auxiliary datasets [7], missing volumetric information [8], a unidirectional view [9], and spatially inexplicit input [10], because of time and budget restrictions. Drone-based remote sensing has developed rapidly due to technological innovations involving drones, cameras, and 3D photogrammetry [11]. The products from drone-based remote sensing can provide effective information at a relatively large area. Moreover, drone-based remote sensing is a low-cost and high-efficiency way to monitor and detect the impact of wind erosion on landforms and vegetation communities. Recently, dronebased remote sensing was employed to conduct experiments on wind erosion in drylands. For example, Cunliffe et al. [8] developed a physical model to estimate the biomass of a single vegetation type by using the volumetric information retrieved from a drone-based structure from motion (SfM) model. Gillan et al. [12] used the digital terrain model (DTM) retrieved from a drone-based digital surface model (DSM) to detect the landscape change by wind erosion and water erosion in an omnidirectional view. Those studies proved that drone-based remote sensing products can provide valuable supplemental information. 
In this paper, first, the difficulties of conducting wind erosion research by using data collected from fieldwork are summarized: insufficient samples, spatial displacement with auxiliary datasets, missing volumetric information, unidirectional view, and spatially inexplicit input. Then, five possible applications of drone-based remote sensing to overcome those difficulties are described. Last, the potential of drone-based remote sensing as a supplement to fieldwork is discussed.

Insufficient Samples

Field observation provides samples to validate the results from remote sensing products and ecological models in the research of wind erosion, but the number of field samples is limited because field observation is time-consuming and expensive [13]. A small sample size, which is a common problem for field-data-informed studies, has large variability [14]. For instance, a sample size of 1000 draws from a normal distribution with a mean of 10 and a standard deviation of 1.0 allows for the estimation of the mean within ±0.62% (that is, the 95% confidence interval is [9.938, 10.062]), whereas it only allows estimation of the standard deviation within ±4.2% (that is, the 95% confidence interval is [0.958, 1.046]). In this scenario, nearly 50,000 samples are required to estimate the standard deviation to within ±0.62%. Although 1000 is not a small sample size for field measurements, it still yields a relatively large variation in the standard deviation. The studies of Karl et al. [15], Duniway et al. [16], and Zhang and Okin [17] have proved that the wind erosion-related indicators from field measurements and from drone-based remote sensing estimated by using the transect line method are highly correlated. Those studies have augmented a small number of samples made in the field to a large number of samples made on high-quality imagery. However, the samples in the previous studies (not only from the fieldwork but also from drone-based remote sensing) are spatially autocorrelated because of their even pattern. The goal of this experiment is to design a statistical method to make the large sample set have less variation and little spatial autocorrelation.
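The sampling-variability figures quoted above can be checked with a short Monte Carlo experiment; the snippet below is an illustrative verification, not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, trials = 1000, 10.0, 1.0, 5000

# Repeatedly draw n samples and record the sample mean and standard deviation
means = np.empty(trials)
stds = np.empty(trials)
for t in range(trials):
    x = rng.normal(mu, sigma, n)
    means[t] = x.mean()
    stds[t] = x.std(ddof=1)

print("95% interval of the sample mean:", np.percentile(means, [2.5, 97.5]))
print("95% interval of the sample std :", np.percentile(stds, [2.5, 97.5]))
# Expected output is roughly [9.94, 10.06] for the mean and [0.956, 1.044] for the
# standard deviation, i.e. about ±0.6% versus ±4% relative precision, in line with
# the figures cited in the text.
```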
Spatial Displacement with Auxiliary Datasets

Accurately and efficiently predicting wind erosion-related indicators is vital for land management in drylands [18]. With the advancement in computing power, several studies [19][20][21] have utilized machine learning algorithms to assimilate field observations with other data sources, such as satellite-based remote sensing products, to estimate land surface conditions where field samples cannot reach. In those studies, a machine learning-based algorithm built a regression relationship between pixel values of remote sensing products and the value of the field sample site at the same geolocation. The field sample site, shown as a point, is usually measured by a method that covers a small area; the value of the field sample site represents the mean in that area. It will be very rare for a field sample site to fall squarely at the center of a certain pixel of remote sensing imagery. In most cases, the sample sites will have a sub-pixel-level displacement from the center of a certain pixel, which will cause the field sample site to cover other neighboring pixels. Therefore, there would be a mismatch between the pixel values of remotely sensed products and the sample value that it represents as measured at the supposedly same geolocation, which is called spatial displacement [22].

Hogland and Affleck [7] discovered that this off-center and displacement phenomenon can cause attenuation bias in the estimation, especially for coarse spatial resolution data. Several studies rescaled the pixel size (upscaled the pixel size to make a coarser pixel), which "put" the field sample site in the center of the coarser pixel to avoid this attenuation [23,24]. However, in most studies, this attenuation bias was ignored. Drone-based remote sensing imagery records the land surface conditions at a certain time with an ultra-high resolution [8]. To minimize or even remove the error caused by the off-center or displacement phenomenon, the field sample site can be remeasured at the center of a certain pixel by using the same method as in the field. For example, the Assessment, Inventory, and Monitoring (AIM) sample, which was designed by the Bureau of Land Management (BLM) [25], was frequently used with Moderate-Resolution Imaging Spectroradiometer (MODIS) remote sensing products to predict the indicators on a large scale [20,21]. If the AIM sample site is not at the center of the MODIS pixel (Figure 1), the displacement could be from 0 to 353.5 m. Traditionally, the displacement was ignored because the field measurement at a certain time is not repeatable or reproducible. However, if the measurement is conducted on the drone-based remote sensing imagery, it can be repeatably made at the center of the pixel as the spatially corrected AIM sample site. The goal of this experiment is to test whether the measurement of relocated sample sites (the spatially corrected AIM sample sites) from drone imagery has less attenuation bias than the original one.
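As a quick check of the displacement range quoted above: for a square pixel of side length L, the farthest a point inside the pixel can lie from the pixel center is half of the diagonal, $\tfrac{\sqrt{2}}{2}L$. Assuming the 500 m MODIS pixel implied by the products cited, this gives $\tfrac{\sqrt{2}}{2}\times 500\,\mathrm{m}\approx 353.5\,\mathrm{m}$, matching the maximum displacement stated in the text.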
Missing Volumetric Information

Soil elevation is a key factor to represent the land surface conditions, especially in areas with wind erosion [26]. Unlike measuring dust flux, soil elevation reflects the spatial distribution of sediment erosion and deposition [12]. Several field techniques have been created to observe and monitor sediment movements, such as sediment traps [27], erosion pins [28], and erosion bridges [29]. Field observation is not only labor intensive and expensive but also highly dependent on the distribution and number of samples. It also misses the volumetric information of some important landforms in wind erosion research, such as dunes. One of the drone-based remote sensing products is the dense point cloud, which is similar to the point cloud of LiDAR. The dense point cloud contains not only spectral information on land cover but also volumetric information of the land surface. Therefore, a dense point cloud can generate a 3D classification map having one or more land cover classes. The point cloud has been widely used to map landslide displacement [30] and sediment connectivity [31], to model vegetation height [6], and to estimate the biomass of vegetation [8]. The goal of this experiment is to construct a complete bare ground DTM, which can be used to analyze the soil elevation after erosion, from the point cloud.
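A minimal sketch of one simple way to derive a bare-ground surface from a dense point cloud is shown below: gridding the points in XY and keeping the lowest elevation per cell. This is a crude proxy used here only for illustration; operational workflows typically rely on dedicated ground-filtering algorithms.

```python
import numpy as np

def bare_ground_dtm(points, cell_size):
    """Grid an (N, 3) point cloud (x, y, z) in XY and keep the lowest z per cell,
    as a crude bare-ground digital terrain model (DTM). Cells with no points
    remain NaN."""
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell_size).astype(int)
    shape = idx.max(axis=0) + 1
    dtm = np.full(shape, np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(dtm[i, j]) or z < dtm[i, j]:
            dtm[i, j] = z
    return dtm

# Differencing two such DTMs acquired at different dates gives a first estimate of
# soil elevation change (erosion or deposition) per cell.
```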
The connectivity of bare soil and vegetation in all directions can evaluate the directional influence of wind erosion and determine the direction with the most severe wind erosion [9]. The goal of this experiment is to compute connectivity from drone-based remote sensing products and to evaluate the directional property of land covers under wind erosion.

Spatially Inexplicit Input
Ecological models with spatially explicit input have become popular with advancements in computing power [10,35,36]. In research on wind erosion, the major advantage of an ecological model with spatially explicit input is that it provides details of land surface changes, biological processes, and the interaction between biotic and abiotic variables at a fine scale [37]. With spatially explicit input, the model can incorporate complex mathematical and physical equations in the calculation, which are ignored, or treated as a statistical process, in models with spatially implicit input [10]. Traditionally, to mimic spatially explicit input, interpolations and/or simulations based on selective samples with spatially explicit behavior were used as the input [38], but none of them are true spatially explicit input, which affects the performance of the model and the quality of its results. Drone-based remote sensing can provide the potential input for such ecological models. For example, the WEMO model is a physical model that simulates the impact of aeolian transport on a shrub-grass mixed vegetation community [39]. This model uses the distribution of scaled height, which is the ratio of bare soil gap size to plant height, as the input describing the land surface conditions. In the past, an exponential function was used to interpolate the scaled height derived from field data to construct this input distribution. However, the distribution of scaled height can be retrieved directly from the drone-based remote sensing product as the input to the WEMO model. The goal of this experiment is to use spatially explicit input from drone-based remote sensing products to run the WEMO model and to compare the results to the field observations.

Study Area
The study area is the Nutrient Effects of Aeolian Transport (NEAT) site at the Jornada Experimental Range (JER), New Mexico (32°56′ N, 106°75′ W). The NEAT site was established in 2004 to monitor and detect the impacts of wind erosion on the landscape and vegetation community. There are three blocks in the NEAT site, and each block has five treatments. All blocks and treatments are aligned parallel to the prevailing wind, which is from the southwest. Each block is 2.5 ha and each treatment is 0.25 ha. There is a 25-m buffer zone between treatments to minimize the interference (wind-blown soil nutrients) between them. The treatments are divided into 25 × 50 m upwind areas and 25 × 50 m downwind areas. In the upwind areas, grasses were manually removed from 100% to 0% (from Treatment 4 to Treatment Control) of their original cover in 2004 and have been maintained at similar levels since then. The details of the NEAT site can be found in Zhang [6]. In this paper, Block 3 was used as the study area. The visualized pattern of Block 3 and its treatments can be found in Figure 2.
Data Preparation
The image was obtained by a DJI Phantom 4 equipped with its own 4K camera (Dà-Jiāng Innovations Science and Technology Co., Shenzhen, China) in the summer of 2017. DroneDeploy V2.6 (DroneDeploy, San Francisco, CA, USA) was used for the initial setup (e.g., flight height and speed) and to control the DJI Phantom 4 drone during the flight. The drone was flown twice for a block in different orientations (i.e., north-south and east-west) following a "lawnmower" moving pattern to acquire structural information from different aspects. The sidelap of the DJI 4K imagery was set to 70%, which is sufficient to build SfM models of vegetation and landscape. The details of the drone and camera are in Table 1. The images were processed using commercial SfM software (PhotoScan V1.2.5, Agisoft LLC, St. Petersburg, Russia). There are four general steps of SfM image processing: (1) photo alignment and sparse point cloud building, (2) ground control point (GCP) identification and sparse point cloud optimization, (3) dense point cloud building, and (4) construction of the orthomosaic and DSM. The details of processing the imagery can be found in Zhang [6]. The average point density of the dense cloud was 1220 points/m² for the DJI 4K camera. The DJI 4K imagery resulted in 7 mm and 2.8 cm resolution for the orthomosaics and DSMs, respectively. To assess the spatial accuracy of the orthomosaics and DSMs, the spatial root mean square error (RMSE) and spatial mean error (ME) were calculated in the sparse cloud (Table 2). The average RMSE and ME represent the error between the surveyed coordinates of the GCPs in the field and their coordinates in the sparse cloud model across Block 3. They represent the accuracy of the orthomosaic and the DSM, which were derived from the sparse cloud model.

Table 2. Evaluation of the accuracies of SfM models based on the DJI 4K camera. X represents longitude, Y represents latitude, and Z represents altitude. Spatial root mean square error (RMSE) and spatial mean error (ME) are between the surveyed coordinates of the ground control points (GCPs) in the field and their coordinates in the sparse cloud model.

After the orthomosaic and DSM were made, the classification map was produced by an unsupervised classification method, K-means.
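A rough Python counterpart of this classification step could look as follows (the study's own K-means code, described in the next paragraph, was written in IDL 8.5; the band array, image shape, and scikit-learn settings here are illustrative assumptions):

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical orthomosaic bands flattened to (n_pixels, n_bands).
rng = np.random.default_rng(1)
pixels = rng.random((10_000, 3))  # placeholder RGB reflectance values

# Four classes (grass, shrub, bare soil, other) and a 20-iteration cap,
# mirroring the settings reported for the IDL implementation.
km = KMeans(n_clusters=4, max_iter=20, n_init=10, random_state=0)
labels = km.fit_predict(pixels)       # one class label per pixel
class_map = labels.reshape(100, 100)  # back to the image grid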
The K-means algorithm [40] was based on Tou and Gonzalez [41], and the code was written in IDL 8.5. The number of classes was four (grass, shrub, bare soil, and other), the maximum number of iterations was 20, and the minimum change threshold was 0.1. The vegetation layer was produced by using the bare soil mosaic from the classification map to mask the DSM [42]; it is a DSM that contains only the height information of the canopy. This process was done in ENVI 5.3 (L3Harris Technologies, Melbourne, FL, USA).

Provide a Reliable and Valid Sample Set
To simulate the impact of sample size on the quality of sampling, 1000 transect lines, running along the main axis (the prevailing wind direction) of Treatment 3 of Block 3, were evenly distributed on the classification map and vegetation layer. There is only a 5-cm gap between transect lines, and the length of every transect line is 100 m. On each transect, three important wind erosion-related indicators (vegetation cover, plant height, and bare soil gap size) were measured. Vegetation cover was measured using the line-point intercept (LPI) method on the classification map [13], which recorded the land cover type (i.e., soil, shrub, or grass) at 1-m intervals on the transect line; the vegetation cover is therefore the number of shrub and grass records divided by 100. Bare soil gap size, measured on the classification map, is the length of a gap with bare soil between two plants on the transect line, i.e., the number of contiguous bare soil pixels on the transect line times the pixel size. Plant height, measured on the vegetation layer, is the difference between the largest and smallest pixel values on the intersection of the transect line and each canopy cluster. For each transect line, the average bare soil gap size and plant height were calculated. The assumption that a large sample set yields a better estimate than a small sample set was tested: six transect lines at equal distances (7.5 m) from each other were measured, and the statistics (e.g., mean, standard deviation, confidence interval of the mean, and confidence interval of the standard deviation) of the six transect lines and the 1000 transect lines were compared. Although 1000 evenly distributed samples already increase the sample size, those samples, like the six evenly distributed samples, are highly spatially autocorrelated. Therefore, a method to increase the sample size by using drone-based imagery was designed: six transect lines were randomly bootstrapped from the 1000 transect lines, and 1000 draws were made. The mean of the six transect lines for each indicator represents the value of one draw. Using this bootstrapping method, which draws a subset of samples randomly from a set of samples with replacement, the number of samples in the sample pool is about 1.37 × 10^15, which supports the basic assumption of sampling that the data pool has a nearly infinite sample size. For this experiment, a tool in the Spatial Analyst extension of ArcGIS Desktop 10.8 (ESRI, Redlands, CA, USA) was used to extract the pixel values of the classification map and vegetation layer on the transect lines. Python 3.6 with the Geospatial Data Abstraction Library (GDAL) 3.0.1 package [43] was used to implement the calculations for bare soil gap size, plant height, and vegetation cover; generate the random selection of transect lines; and compute the statistics of the 1000 random draws for each variable.
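The resampling scheme just described can be sketched in a few lines of Python (placeholder indicator values stand in for the measured transects):

import math
import numpy as np

rng = np.random.default_rng(2)

# Placeholder per-transect means for one indicator (e.g., bare soil gap size, cm)
# standing in for the values measured on the 1000 evenly distributed transects.
transect_values = rng.normal(loc=3300.0, scale=800.0, size=1000)

# 1000 draws of six transect lines sampled with replacement;
# the mean of each draw's six transects represents the value of that draw.
draws = rng.choice(transect_values, size=(1000, 6), replace=True).mean(axis=1)
print(draws.mean(), draws.std(ddof=1))

# The quoted pool size of ~1.37e15 matches the number of distinct
# six-transect subsets of 1000 lines, C(1000, 6):
print(1000 * 999 * 998 * 997 * 996 * 995 // math.factorial(6))  # 1368173298991500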
Mitigating the Spatial Offset
To simulate the displacement of a field sample from the pixel center, a MODIS image tile (MCD43A4.A2017204.h09v05.006.2017213033355) that contains both Block 3 of the NEAT site and one AIM sample site (#13497) was used for this experiment. The AIM sample can be downloaded from the Landscape Approach Data Portal, https://landscape.blm.gov/geoportal. The drone-based imagery of Block 3 captured the center of this MODIS pixel. The AIM sample site (#13497) is at the upper left corner of one MODIS pixel. The field measurement of the AIM sample site was conducted at the same time as the drone-based remote sensing work in the summer of 2017. For the AIM sample, the bare soil value was used as the bare soil cover and the total foliar value was used as the vegetation cover. To measure the vegetation cover and bare soil cover at the center of the MODIS pixel, the configuration of the AIM sample site, which is widely used in land management, was mimicked on the classification map. Three 50-m transect lines were created in the directions of 0°, 120°, and 240° from the central location of the MODIS pixel. Then, vegetation cover and bare soil cover were measured using the LPI method [13], which recorded the land cover type (i.e., soil, shrub, or grass) at 1-m intervals on the transect line; the vegetation cover or bare soil cover of one transect line is therefore the number of shrub and grass records or bare soil records, respectively, divided by 50. Last, the mean vegetation cover and bare soil cover of the three transect lines were calculated. Python 3.6 with the Geospatial Data Abstraction Library (GDAL) 3.0.1 package [43] was used to perform the calculations of vegetation and soil cover. To assess the abundance of vegetation and soil in one MODIS pixel, a spectral mixture analysis (SMA) algorithm was used to extract the percentages of green vegetation, non-photosynthetic vegetation, and soil at the sub-pixel level. The abundances of green vegetation and non-photosynthetic vegetation were combined into the vegetation cover, and the abundance of soil represents the bare soil cover. The spectral library used in this experiment was derived from a combination of field measurements in Savanna [44]. The code of the SMA was based on Campbell and Wynne [45] and was written in IDL 8.5.

Soil Elevation
The point cloud is a direct product of drone-based remote sensing. The point cloud contains not only the spectral information of land cover but also the volumetric information of the land surface; therefore, a classification method based on the slope of the initial terrain model was used to distinguish on-ground objects (i.e., vegetation) [46,47]. The classification method has three parameters: max angle, max distance, and cell size. Cell size determines the largest grid of on-ground objects, such as vegetation, that does not contain any ground point in the scene. Max angle determines the maximum slope of the ground in the scene. Max distance determines the maximum variation in Z of the ground inside the cell size in the scene. The cell size was set to 0.4 m to filter the litter, the max angle to 3°, and the max distance to 0.1 m. Then, a DSM with two classes (bare ground and vegetation) was created with a 2.8-cm spatial resolution. The bare ground feature was selected and the vegetation feature was removed. To fill in those holes (vegetation removal), the DSM raster of bare ground was vectorized first (Figure 3).
Then, a fitted interpolation method, the natural neighbor interpolation method [48,49], was used to interpolate the holes in the bare ground feature layer to make a DTM of bare ground (Figure 3). To quantitatively evaluate the surface height, 500 transect lines were made perpendicular to the prevailing wind direction to measure the pixel values of the DTM. The surface height was measured using the LPI method [13], as in Section 3.3.1 (it recorded the pixel value at 0.1-m intervals on the transect line). This experiment was conducted on the downwind areas (which start from the centerline between the upwind and downwind areas) of Treatment 4 and Treatment Control of Block 3. Block 3 has relatively flat terrain, with a slope of 0.5° from northwest to southeast. Because of that, the relative altitude was used to represent the soil elevation in the different treatments. The mean of the relative altitude on the centerline was marked as 0 m for each treatment, and the mean altitude on the centerline was subtracted from the altitude of each pixel to obtain the relative altitude.
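A minimal sketch of this relative-altitude step (the DTM array, its orientation, and the centerline row are hypothetical):

import numpy as np

# Hypothetical bare ground DTM of one treatment's downwind area (altitude in m);
# row 0 is taken as the centerline between the upwind and downwind areas.
rng = np.random.default_rng(3)
dtm = 1320.0 + 0.05 * rng.random((500, 900))

centerline_mean = dtm[0, :].mean()         # mean altitude on the centerline
relative_altitude = dtm - centerline_mean  # relative altitude of every pixel

print(relative_altitude[0, :].mean())      # ~0 m on the centerline by construction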
PhotoScan V1.2.5 was used to process the classification of the point cloud and the creation of the DSM of bare ground. The conversion tool in ArcGIS 10.8 was used to convert the DSM raster of bare soil to a point feature, and the Spatial Analyst extension was used to make an interpolated surface (DTM) of bare ground.

Directional Property of Landscape
Connectivity represents the continuity of patches along one direction at the landscape scale. For a classified raster image, a connectivity C can be defined as

C(h) = E[I_1 · I_2 · ... · I_n],

where h represents the vector with lag distance |h|, n is the number of successive pixels along h in the image, and I_i is a class indicator that equals 1 for pixels that belong to the class of interest and 0 otherwise [9]. For example, if "shrub" is the class of interest, then pixels classified as "shrub" are given a value of 1 and all other pixels a value of 0. In practice, the decrease in connectivity with lag distance approximates an exponential decay function,

C(|h|) = C_0 exp(−|h|/α),

where C_0 denotes the connectivity when |h| equals 0 and α represents the maximum lag distance in all directions, similar to the "range" of traditional variograms. In this experiment, the maximum lag distance (α) of each pixel was calculated over the 360° of direction at a 2° interval [9]. A 20 m × 20 m plot in the upwind areas of Treatment 3 and Treatment 1 was selected to calculate the connectivity of vegetation (the shrub and grass classes) and soil (the bare soil class) on the classification map. The code for connectivity was written in IDL 8.5.

Spatially Explicit Input
WEMO uses a size distribution of erodible gaps between plants, instead of lateral cover, to describe the distribution of shear stress at the surface. Based on the research of Vest et al. [50], Lancaster et al. [51], Li et al. [52], and Mayaud et al. [53], the WEMO model performs better than traditional shear stress partitioning approaches. A major advantage of this model is that the primary input variables can be obtained by standard field methods [13] or orthophotography techniques [17]. In this experiment, two different methods were used to describe the distribution of shear stress as the input of the WEMO model: one uses an exponential function to reconstruct a distribution based on the field measurements, and the other builds a distribution measured directly on the drone-based imagery. In the following equation, the scaled height x represents the bare soil gap size in units of plant height. The horizontal flux Q_{u*x} at a scaled height (x) in the shear stress wake zone is calculated as per Gillette and Chen [54], with a Gillette-style flux of the form

Q_{u*x} = A (ρ/g) (u*_x)^3 [1 − (u*_t)^2/(u*_x)^2]  for u*_x > u*_t (and zero otherwise),

where A is a best-fit constant (0.026), ρ is the density of air (0.00129 g/cm³), g is the acceleration due to gravity (980 cm/s²), and u*_t is the threshold wind shear velocity of the unvegetated soil [52,55], which is 25 cm/s at the experimental sites. P_d is the probability that any point in the reduced shear stress zone is at a certain distance from the nearest upwind plant, in units of plant height. Traditionally, it is estimated as an exponential function [9] with a decay constant that equals the estimated median weighted average gap size divided by the estimated plant height from the field measurement. In this experiment, P_d was also obtained by using the methods in Section 3.3.1 on the drone-based imagery.
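As a numeric illustration of this single-gap flux term, the following sketch assumes the flux form written above (the function name and test shear velocity are illustrative, not from the study):

import numpy as np

A = 0.026          # best-fit constant
RHO = 0.00129      # air density, g/cm^3
G = 980.0          # acceleration due to gravity, cm/s^2
U_STAR_T = 25.0    # threshold shear velocity of unvegetated soil, cm/s

def q_flux(u_star_x):
    """Horizontal flux Q_{u*x} (g/cm/s) for shear velocity u*_x at scaled height x,
    using the flux form assumed above; zero at or below the threshold."""
    u = np.asarray(u_star_x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        q = A * (RHO / G) * u**3 * (1.0 - (U_STAR_T / u) ** 2)
    return np.where(u > U_STAR_T, q, 0.0)

print(q_flux(50.0))  # ~3.2e-3 g/cm/s at u* = 50 cm/s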
The horizontal flux Q_{u*} of the whole shear stress wake zone at a specific wind shear velocity (u*) is the sum of Q_{u*x} over all gap lengths (x), weighted by P_d, such that

Q_{u*} = (1 − C) Σ_x P_d(x) Q_{u*x},

where C is the estimated vegetation cover (obtained by using the methods in Section 3.3.1 on the drone-based imagery). To calculate the total horizontal flux (Q), Q_{u*} is summed over all wind shear velocities (u*), weighted by the probability distribution of wind shear velocity (P_{u*}), such that

Q = Σ_{u*} P_{u*} Q_{u*}.

In this model, P_{u*} is a constant distribution derived from a three-year (2006-2008) record of wind speed measured at 5-min intervals at the headquarters of the JER (Figure 4). For each treatment, 500 evenly distributed transect lines were created to measure plant height, bare soil gap size, and vegetation cover in the upwind area of each treatment by using the same methods as in Section 3.3.1 on the classification map and vegetation layer. The measurements of plant height and bare soil gap size constructed P_d, and the measurement of vegetation cover is C in the previous equations. The code of the WEMO model was written in Python 3.6.

Provide a Reliable and Valid Sample Set
The means of the vegetation cover, bare soil gap size, and plant height based on six evenly distributed transect lines are similar to the means based on 1000 evenly distributed transect lines (Table 3).
However, the standard deviations of vegetation cover, bare soil gap size, and plant height based on six evenly distributed transect lines are larger than the standard deviations based on 1000 evenly distributed transects, which shows that the small sample set has more variation than the large sample set (Table 3 and Figure 5A,C,E). The confidence intervals of the mean and of the standard deviation of vegetation cover, bare soil gap size, and plant height based on six evenly distributed transect lines are at least twice as large as those based on 1000 evenly distributed transects, which shows that the small sample set involves more uncertainty than the large sample set (Table 3). Vegetation cover shows a nearly normal distribution for both the 1000 transect lines and the 1000 draws (Figure 5B,D,F). The means of the 1000 transect lines and the 1000 draws are the same, but the standard deviation of the 1000 draws is smaller than that of the 1000 transect lines. The confidence interval of the mean of the 1000 transect lines is five times larger than with the 1000 draws, and the confidence interval of the standard deviation of the 1000 transect lines is four times larger than with the 1000 draws (Table 3). The average bare soil gap size of the transect lines shows a slightly skewed distribution over the 1000 draws, with a mean of 3293.53 cm and a standard deviation of 798.725 cm (Figure 5B,D,F). The confidence intervals of the mean and standard deviation of the 1000 transect lines are twice as large as with the 1000 draws (Table 3). The average plant height of the transect lines shows a highly skewed distribution over the 1000 draws, with a mean of 18.90 cm and a standard deviation of 7.19 cm (Figure 5B,D,F). The confidence interval of the mean of the 1000 transect lines is four times larger than with the 1000 draws, and the confidence interval of the standard deviation of the 1000 transect lines is twice as large as with the 1000 draws (Table 3). Providing a large sample has the advantage of reducing confidence intervals, particularly the confidence interval of the standard deviation, because the magnitude of the confidence interval of the standard deviation decreases more slowly with the number of degrees of freedom than that of the mean. This means that a larger sample size is required to estimate the confidence interval of the variance (as a proportion of the variance) than is required for the confidence interval of the mean (as a proportion of the mean). Further, the results from bootstrapped resampling indicate that a priori unknown biases potentially exist (as indicated by the skewed distributions in Figure 5, right column) after the removal of spatial autocorrelation.
When using only evenly distributed transects (six or 1000), these land cover-dependent biases were not found. In this case, the drone-based imagery not only provides a large sample set but also gives a chance to resample that large sample set and thus provide a reliable and valid sample set.

Mitigating the Spatial Offset
The actual displacement from the AIM sample site to the center of the MODIS pixel is 323.5 m. The bare soil cover and vegetation cover from the drone-based estimate (remeasured on the drone imagery using the LPI method) are 79.5% and 20.5%, respectively (Table 4), which differ less from the SMA result of the MODIS pixel than the field measurements do (71.9% for bare soil cover and 28.3% for vegetation cover). That is because the field measurement contains information not only on the MODIS pixel that covers the center of the AIM sample site but also on three adjacent MODIS pixels (upper, upper left, and left). These differences should not be ignored and may strongly affect the accuracy of any follow-up analysis and computation. In some studies [20,21], the prediction errors (underpredicted high values and overpredicted low values) came from the spatial offset between the field sample sites and the pixels of the remote sensing products. Although there is an existing method that uses a linear equation to correct the error from the spatial offset [22], it assumes that all samples have the same spatial characteristics; it is therefore not suitable for research on a continental or global scale, since the samples come from different ecoregions and have different spatial characteristics. Moreover, the linear correction method performs well for relatively small pixel sizes, such as those of Landsat or Sentinel, but for relatively large pixel sizes, such as that of MODIS, it did not perform well [7]. In this case, drone-based imagery is a good supplement for the correction of spatial displacement.

Soil Elevation
The bare ground DTM of Treatment 4 has an obvious increasing trend of relative altitude along the prevailing wind direction, but Treatment Control does not show that increasing trend from the centerline between the upwind and downwind areas (Figure 6). In Treatment 4, the upwind area has no grass cover, which caused strong wind erosion (soil loss). The impact of strong wind erosion extended to the front region of the downwind area (0-10 m from the centerline). The wind-blown soil particles deposited on the area about 10 m from the centerline and formed a dune-shaped landscape. Moreover, the deposition of soil particles also brought nutrients that accelerated the growth of the grasses, and the grass community can trap more wind-blown soil, which further accelerated the formation of the dune-shaped landscape. This created the increasing trend of relative altitude along the prevailing wind direction. In Treatment Control, the upwind area has no grass removal and the wind erosion is uniform along the prevailing wind direction; therefore, the relative altitude shows no clear tendency. From the analysis of the 500 transect lines (Figure 7), the range of relative altitude in Treatment Control is from 0.020 to 0.032 m, while the range in Treatment 4 is from 0.021 to 0.152 m, which is much larger.
The curve of Treatment 4 shows that the slope between 12.5 m and 17.5 m is the largest, because that area has the most accumulation of wind-blown soil particles, which became a band structure [34]. The curve of Treatment Control shows only slight variation; there is a trough from 7.5 m to 10 m because the vegetation distribution in that area varies considerably. In this case, the drone-based DTM provides an opportunity to study the landform in a 3D view.

Directional Property of Landscape
In Treatment 3, the average bare soil patches are larger than the average vegetation clusters, and both exhibit marked anisotropy (Figure 8A,C). The direction of the vegetation clusters is aligned with the prevailing wind direction, suggesting a role of the wind in shaping the vegetation clusters. It also means that the vegetation community has more connectivity in the direction of the prevailing wind, while the bare soil patches have more connectivity in the direction perpendicular to the prevailing wind. In Treatment 1, the average bare soil patches are close in size to the average vegetation clusters. The bare soil patches exhibit anisotropy, but the vegetation clusters exhibit isotropy (Figure 8B,D). The direction of the soil patches is perpendicular to the direction of the prevailing wind, with a 10° difference. The decay curves of connectivity of the bare soil patches and vegetation clusters follow an exponential function.
The bare soil patches have more connectivity than the vegetation clusters because bare soil is the dominant land cover in the study area, as can be seen in Figure 2. In Treatment 3, the shape of the bare patches, perhaps due to their recent genesis, is different from that described by McGlynn and Okin [9] for an established shrub dune, where the orientation of well-developed bare soil patches is parallel to the direction of the prevailing wind. In Treatment 1, the result is the same as in Okin et al. [34,56], in which the sizes of bare soil patches and vegetation clusters are close in an established shrub-grass mixed community. The connectivity in this experiment only represents the soil patches and vegetation clusters (shrub and grass). In the future, this method may be employed to detect the differences between grass and shrub, which could reveal the details of the competition between shrub and grass during wind erosion. In this case, drone-based imagery is a good way to conduct multidirectional research.

Figure 8. The directional property [9] for the upwind portions of Treatment 3 (75% grass removal) and Treatment 1 (25% grass removal) (left column). The decay curves of connectivity of interplant gaps and vegetation patches for the upwind portions of Treatment 3 and Treatment 1 (right column). The directional property of Treatment 3 (A,C); the directional property of Treatment 1 (B,D). This figure was revised from Zhang et al. [17].

Spatially Explicit Input
The daily dust fluxes computed using drone-based imagery are in the range of the field experiments performed using a wind tunnel [57]. The dust flux of Treatment 4 (2.8 g/cm/day) is much higher than that of the remaining treatments because the percentage of grass removal in the upwind area of Treatment 4 is higher than in the remaining treatments. The median, first quartile, and third quartile of the dust flux computed using drone-based imagery show a decreasing gradient from Treatment 4 to Treatment Control (Figure 9). However, there is no obvious decreasing gradient of the median, first quartile, and third quartile of the dust flux computed using the field measurements from Treatment 4 to Treatment Control: the median of Treatment 1 is greater than the rest and the median of Treatment 2 is smaller than the rest, which contradicts the observations from the wind tunnel [57]. Moreover, the medians of Treatment 4 and Treatment 3 are much smaller than the wind tunnel observations [57].
From these results, the WEMO model using spatially explicit input shows an obvious decreasing trend from Treatment 4 to Treatment Control. In this case, the spatially explicit input reflects the details of the land surface condition better than the field data. In the future, more models for wind erosion research, such as the Vegetation and Sediment Transport (ViSTA) model [53] and the Ecotone-WEMO model [38], can be tested using spatially explicit input.

Figure 9. Box plot of the daily dust flux (g/cm/day) computed using the WEMO model on the upwind areas of Treatment 4 to Treatment Control (T4-TC, 100% to 0% grass removal). The input of WEMO is from the drone-based estimate (500 transect lines, A) and the field measurement (three transect lines, B). The yellow mark represents the median value of the dust flux.

Conclusions
This paper sheds light on possible applications of drone-based remote sensing in wind erosion research, inspired by the difficulties of field data collection. Through the five experiments, drone-based products (i.e., the dense point cloud, orthomosaic imagery, and DSM) were shown to be useful for wind erosion research in drylands for the following reasons: bootstrap resampling provides a reliable and valid sample set; relocating the field sample site to the pixel center eases the attenuation bias between the field measurement and the remote sensing product; the DTM built from the point cloud shows the pattern of soil distribution and the trend of soil elevation; the land cover under wind erosion has an anisotropic pattern; and the WEMO model produces significant results when using the spatially explicit input. At present, drone-based imagery has been suggested as a means to complete difficult fieldwork in remote areas or during harsh weather conditions. For wind erosion research in drylands, drone-based remote sensing products should be used as a complement to field measurements.
12,500.6
2021-01-15T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Dosimetric evaluation of photons versus protons in postmastectomy planning for ultrahypofractionated breast radiotherapy

Background
Ultrahypofractionation can shorten the irradiation period. This study is the first dosimetric investigation comparing ultrahypofractionation using volumetric arc radiation therapy (VMAT) and intensity-modulated proton radiation therapy (IMPT) techniques in postmastectomy treatment planning.

Materials and methods
Twenty postmastectomy patients (10 left- and 10 right-sided) were replanned with both VMAT and IMPT techniques. There were four scenarios: left chest wall, left chest wall including regional nodes, right chest wall, and right chest wall including regional nodes. The prescribed dose was 26 Gy(RBE) in 5 fractions. For VMAT, a 1-cm bolus was added for 2 of the 5 fractions. For IMPT, robust optimization was performed on the CTV structure with a 3-mm setup uncertainty and a 3.5% range uncertainty. This study aimed to compare the dosimetric parameters of the PTV and of the ipsilateral lung, contralateral lung, heart, skin, esophagus, and thyroid doses.

Results
The PTV-D95 was kept above 24.7 Gy(RBE) in both the VMAT and IMPT plans. The ipsilateral lung mean dose of the IMPT plans was comparable to that of the VMAT plans. In three of the four scenarios, the V5 of the ipsilateral lung in the IMPT plans was lower than in the VMAT plans. The Dmean and V5 of the heart dose were reduced by a factor of 4 in the IMPT plans of the left side. For the right side, the Dmean of the heart was less than 1 Gy(RBE) for IMPT, while VMAT delivered approximately 3 Gy(RBE). The IMPT plans showed a significantly higher skin dose owing to the lack of a skin-sparing effect in the proton beam. The IMPT plans provided lower esophageal and thyroid mean doses.

Conclusion
Despite the higher skin dose with the proton plan, IMPT significantly reduced the dose to adjacent organs at risk, which might translate into a reduction of late toxicities when compared with the photon plan.

Introduction
There are many techniques and schedule schemes of treatment for breast irradiation [1]. Ultrahypofractionation is a recent new schedule that can shorten the irradiation period. This study is the first dosimetric investigation comparing ultrahypofractionation using volumetric arc radiation therapy (VMAT) and intensity-modulated proton radiation therapy (IMPT) techniques in postmastectomy treatment planning. Adjuvant radiotherapy plays a significant role in local or locoregional breast cancer treatment; the omission of breast irradiation significantly increases the recurrence risk [2][3][4][5]. Routine breast irradiation using three-dimensional conformal radiotherapy (3D-CRT) is associated with long-term toxicity in organs at risk (OARs), such as the heart, the lungs, and the contralateral breast, especially when regional node irradiation is required [6]. By using advanced techniques, such as intensity-modulated radiotherapy (IMRT) or volumetric arc radiation therapy (VMAT), a high radiation dose can be delivered to the planning target volume (PTV) with high conformity. Although the high-dose regions of the lungs and heart are spared using these methods [7], a higher volume of the adjacent OARs receives low-dose radiation, which can cause long-term toxicities and secondary cancers [8].
A previous study used deep inspiration breath hold to reduce the radiation dose to the heart for left-sided breast irradiation; however, the low dose to the remaining organs could not be eliminated [9]. With proton beam therapy (PBT), the well-known Bragg peak characteristic could potentially reduce the dose to OARs beyond the target almost completely. It is challenging to evaluate the potential benefits for breast cancer, especially in chest wall treatment, where the target is at a shallow depth. However, theoretically, proton therapy could reduce radiation doses to the lungs and heart by taking advantage of the rapid dose fall-off of the proton energy after the Bragg peak. The conventional schedule for chest wall irradiation is 45-50 Gy in 25 fractions over 5 weeks. In the past decade, several trials investigated hypofractionated chest wall irradiation, defined as 43.5 Gy in 15 fractions over 3 weeks. Hypofractionation was noninferior to the standard scheme and had toxicities comparable to conventional fractionation among postmastectomy patients [10,11]. A new study regimen of 26-27 Gy in 5 fractions over 1 week, known as FAST-Forward, has been introduced for breast irradiation [12]. Since the COVID-19 pandemic, ultrahypofractionated treatment has emerged as an option to shorten radiotherapy courses in the United Kingdom [13]. After the publication of the FAST-Forward study, the results showed that the acute skin reactions observed in FAST-Forward were mild and that the late normal tissue effects were noninferior to the results achieved using 43.5 Gy in 15 fractions [12,14]. Breast irradiation with ultrahypofractionation has been applied in our department to reduce costs, and it may provide psychosocial benefits for patients [15,16]. To our knowledge, there is no consensus on dose-volume constraints for PBT in ultrahypofractionation schemes. This is a dosimetric study comparing VMAT and intensity-modulated proton radiation therapy (IMPT) in postmastectomy cancer patients.

Materials and methods
This study was approved by the Institutional Review Board (IRB: 017/64). Twenty postmastectomy patients (10 left- and 10 right-sided) who received breast irradiation at King Chulalongkorn Memorial Hospital were included in this study. The computed tomography (CT) datasets from their treatment were replanned for ultrahypofractionation with both the VMAT and IMPT techniques. All patients were in the supine position with the head straight and both arms over the head. The patients were immobilized using Vac-Lok (CIVCO Medical Solutions, Iowa, USA) and knee support (CIVCO Radiotherapy, Iowa, USA). The image dataset was acquired by a Siemens SOMATOM Definition AS 64-slice CT simulator (Siemens, Erlangen, Germany) with a 3-mm slice thickness. The scan range, from C2 to L2, included all breast tissue [17], while wires marked the area of the chest wall during simulation. The clinical target volume (CTV), which included the chest wall and regional nodes (i.e., the supraclavicular, axillary (levels I-III), and internal mammary nodes), and the organs at risk were contoured and reviewed by at least 3 radiation oncologists who specialize in breast irradiation. The CTVs of the breast and regional lymph nodes were contoured based on the Radiotherapy Comparative Effectiveness (RADCOMP) atlas [18]. The PTV was a 5-mm expansion from the CTV but retracted 5 mm from the skin surface and 3 mm from the lung/chest wall interface. All organs at risk, including the ipsilateral and contralateral lungs and the heart, thyroid, and esophagus, were contoured.
The skin, defined as a 5-mm layer inward from the body surface, was also created. The wires placed during the simulation were overridden with a CT number of HU = −1000 to approximate the air density. The VMAT and IMPT plans were generated with the Eclipse treatment planning system version 15.6 (Varian Medical Systems, Inc., Palo Alto, CA). Each patient had 2 PTVs: the PTV of the chest wall and the PTV of the chest wall including regional nodes. Therefore, four scenarios were created: left chest wall, left chest wall including regional nodes, right chest wall, and right chest wall including regional nodes. The prescribed dose was an ultrahypofractionated dose of 26 Gy(RBE) in 5 fractions, employing a generic relative biological effectiveness (RBE) value of 1.1 for the proton plans [19]. The VMAT optimization was planned with 6 MV photon beams. A 1-cm bolus was added for 2 of the 5 fractions. The VMAT plans consisted of 4 arcs at gantry angles 240°-50° and 135°-310° for the right and left breasts, respectively. The collimator was rotated 90° for 2 arcs, splitting the jaw to cover the upper part and lower part of the PTV. This was done to increase the dose coverage of the PTV given that the maximum leaf span of the MLCs in Varian linear accelerators is only 16 cm in the X-direction [17]. The IMPT plans were created with two fields: an anteroposterior (AP) field (0°) and an anterior oblique field with a gantry angle ranging from ±30° to 45°, depending on the beam direction, to be as en face as possible to the chest wall. An example of a field arrangement is demonstrated in Fig. 1. A 5-cm range shifter was used so that the beam dosimetrically covered all depths; this is the typical way to treat targets at depths shallower than the minimum range of the proton energy in proton treatment [20]. No bolus was applied to the IMPT plans because the reproducibility of bolus placement is a serious concern in proton treatment [20]. Additional target margins were 0.5 cm for the proximal and distal margins and 1 cm for the lateral margin. Robust optimization was performed on the CTV structure with a 3-mm setup uncertainty and a 3.5% range uncertainty. The plans were normalized so that the PTV received ≥ 95% of the prescription dose (95% of the prescribed dose was 24.7 Gy(RBE)) [6]. Dmean, V5, V10, and V20 were investigated for the heart and for the ipsilateral and contralateral lungs. The thyroid, esophagus, and skin were evaluated with Dmax and Dmean. The Dmax was taken as the dose to 1% of the evaluation volume owing to its greater robustness than a single-point maximum [6]. STATA version 15.1 (StataCorp LLC, Texas, USA) was used to perform the statistical analysis. The paired t-test was used when the data were normally distributed; the Wilcoxon signed-rank test was used to analyze the difference in the dose-volume histograms between the VMAT and IMPT techniques in each scenario when the data did not follow a normal distribution. A P value < 0.05 was considered statistically significant.

Fig. 1 Comparison of color wash isodose distributions for the left chest wall with and without regional lymph nodes for VMAT versus IMPT planning: a VMAT chest wall + regional nodes; b IMPT chest wall + regional nodes; c VMAT chest wall; d IMPT chest wall

Results
The results of the dosimetric comparison of the VMAT versus the IMPT plans are shown in Table 1. In both techniques, the approximate dose for D90 of the PTV volume was higher than 25 Gy(RBE) on both sides of the chest wall. VMAT achieved more dose homogeneity and had a lower maximum dose than IMPT.
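As an aside, with a dose grid sampled on equal-volume voxels, the metrics reported here reduce to simple array statistics; a sketch with synthetic dose values (not the study's data):

import numpy as np

# Synthetic dose samples (Gy(RBE)) over the voxels of one structure.
dose = np.random.default_rng(4).gamma(shape=2.0, scale=2.0, size=100_000)

d_mean = dose.mean()                # Dmean
v5 = 100.0 * (dose >= 5.0).mean()   # V5: % of volume receiving >= 5 Gy(RBE)
d1 = np.percentile(dose, 99.0)      # Dmax evaluated as D1% (hottest 1% of volume)

print(f"Dmean = {d_mean:.2f} Gy(RBE), V5 = {v5:.1f}%, D1% = {d1:.2f} Gy(RBE)")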
Examples of dose distributions comparing IMPT and VMAT on the left and right sides are depicted in Figs. 1 and 2. The ipsilateral lung mean dose of the IMPT plans was comparable to that of the VMAT plans: approximately 6.9-8.3 Gy(RBE) in IMPT and 7.8-8.0 Gy(RBE) in VMAT, which was not significantly different. However, the V10 and V20 of IMPT were higher than those of VMAT. The V5 of the ipsilateral lung in IMPT was lower than that of VMAT in the left chest wall, the right chest wall, and the right chest wall with regional nodes plans. The advantage of IMPT over VMAT was observed in the contralateral lung and heart. A mean contralateral lung dose of less than 1 Gy(RBE) was achieved by IMPT. The Dmean and V5 of the heart in the IMPT plans were approximately one-fourth of those in the VMAT plans. For the right-sided scenarios, the Dmean of the heart was below 1 Gy(RBE) in all IMPT plans, while it was 2.8-3.3 Gy(RBE) in the VMAT plans. IMPT gave a significantly higher skin dose due to the absence of a skin-sparing effect in the proton beam. The left chest wall plans demonstrated a higher esophageal dose than the right chest wall plans. The IMPT plans provided a significantly lower esophageal Dmean than the VMAT plans. The thyroid Dmean in the IMPT plans was less than that in the VMAT plans. A sample dose-volume histogram (DVH) comparison of the left and right chest wall including regional nodes for the VMAT versus the IMPT plans is displayed in Fig. 3. The trends of the DVHs of the OARs were similar on both sides for the chest wall with regional nodes plans. The IMPT plans gave a better low-dose volume of the ipsilateral lung than the VMAT plans. The contralateral lung and heart were spared from the dose bath with the IMPT technique. The mean doses of the OARs were also plotted with 95% CIs and are depicted in Fig. 4.

Discussion
With a low α/β, breast cancer is one of the few malignancies for which a high dose per fraction provides a benefit in killing cancer cells [21]. Hypofractionation is the current standard in adjuvant radiotherapy and has been increasingly used in many centers. A new ultrahypofractionation regimen has recently been introduced and has demonstrated noninferiority in terms of acute skin toxicity and oncological outcomes when compared to standard hypofractionation [14]. Brunt et al. [12] published the results of the FAST-Forward trial and concluded that ultrahypofractionation, 26 Gy in 5 fractions over 1 week, did not show a significant difference in terms of patient-assessed normal tissue effects, clinician-assessed normal tissue effects, photographic changes in breast appearance, or oncological outcomes compared with 40 Gy in 15 fractions. With this short radiotherapy treatment course, this dose-fractionation regimen could be an option during the COVID-19 pandemic. Piras et al. [22] reported a dosimetric study of the FAST-Forward protocol for post-conservative surgery left breast cancer using the VMAT technique compared with the 3D-CRT technique, while our study compared VMAT with IMPT in postmastectomy plans. Piras's results showed an ipsilateral lung V30 of 8.33%, while our study revealed a V20 of approximately 6.4-7.6% in the VMAT plans. Our center aims to introduce ultrahypofractionation using proton beams; thus, a dosimetric comparison of the target volume and OARs with VMAT is necessary.
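For context on this regimen, the standard equivalent dose in 2-Gy fractions, EQD2 = D(d + α/β)/(2 + α/β), relates the 26 Gy-in-5-fraction schedule (d = 5.2 Gy per fraction) to a conventionally fractionated dose; with an illustrative α/β of 3.5 Gy (an assumed value, not one used in this study), EQD2 = 26 × (5.2 + 3.5)/(2 + 3.5) ≈ 41.1 Gy, which is of the same order as the hypofractionated schedules discussed above.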
Although the DVHs of most OARs except the ipsilateral lung are clearly superior in IMPT compared with VMAT in our study, we cannot assert that proton therapy will become the standard treatment for breast cancer in the near future. Further clinical studies are needed to assess whether the dosimetric advantage will translate into a clinical benefit. In addition, cost effectiveness should be considered: the cost of radiotherapy sessions and the cost of treatment for related toxicities must be weighed. However, we believe that in some special cases in which cardiac sparing is needed, proton therapy could be an appropriate alternative treatment [23]. Regarding toxicities, chest wall irradiation carries the risk of radiation pneumonitis, cardiac mortality, and secondary cancers [15,24,25]. Our study evaluated the dosimetric parameters of VMAT and IMPT for ultrahypofractionated postmastectomy irradiation in breast cancer. The PTV coverage was equivalent for the IMPT versus the VMAT plans. These results corresponded well with Ares's study [6], which showed that IMPT reduced the bath of low-dose distribution for OARs. Our study demonstrated that the V5 and Dmean of the ipsilateral lung in IMPT were less than those in VMAT in 3 out of 4 scenarios. In lung cancer patients, large volumes of lung receiving low-dose irradiation (5 Gy or 10 Gy) increase the rate of pneumonitis [26]. The volume of lung tissue receiving radiation could be associated with the risk of late toxicity, including second malignancy, particularly in young women. Our study confirmed that both high-dose and low-dose exposure of normal tissues was less common in the IMPT plans, which corresponds with MacDonald's report [24]. Heart dose is well known to be correlated with cardiac morbidity [27]. Our study supports the conclusion that IMPT can reduce the potential risk to cardiac structures. MacDonald's study [24] reported that proton therapy allows for the treatment of deep-seated lymph nodes, such as the internal mammary lymph nodes (IMNs), with minimal cardiopulmonary doses. Our results also support that the V5, V10, and mean heart dose were much lower in IMPT than in VMAT. With this dosimetric advantage, lower long-term morbidity from heart disease could be expected from proton beam therapy. One concern with IMPT to the chest wall from this study is the increased dose to the skin because of the lack of a skin-sparing effect with protons. However, the total dose of our regimen was only 26 Gy(RBE), and the actual differences in doses between IMPT and VMAT are negligible.

Fig. 2 Comparison of color wash isodose distributions for the right chest wall with and without regional lymph nodes for VMAT versus IMPT planning: a VMAT chest wall + regional nodes; b IMPT chest wall + regional nodes; c VMAT chest wall; d IMPT chest wall

Therefore, acute skin toxicity, which typically relates to the total radiation dose, should not be greatly affected. We predict that this dose level is feasible and will be well tolerated by patients. However, cosmesis is also an important outcome; data collection in clinical studies is in progress and will be reported in the near future. A robust CTV optimization of only 3 mm and 3.5% was considered sufficient and applied in this study because access to breathing motion management in clinical use was a concern. In addition, chest wall volumes are usually superficial, about 3 cm deep or less, and there is very little uncertainty from intrafraction motion for postmastectomy patients [23,26]. Depauw et al.
[28] reported that the patient's chest wall movement along the AP/longitudinal direction was approximately 3 mm when motion management was performed during treatment. In addition, the beam path in proton plans is parallel to the direction of target movement, which results in a minimal change in the position of the target; thus, the overall dosimetric impact is below 1% [6,29]. Even though our study provides insights into proton and photon therapy for ultrahypofractionated postmastectomy irradiation, there are some limitations. First, only ten plans were evaluated for each scenario, and a larger sample would provide more reliable data. Second, this is a dosimetric study, and clinical outcomes of the proton intervention are still needed. Nonetheless, the dose constraints for the PTV and OARs in this study have been applied in our clinical practice, and we plan to report the clinical outcomes in the future. Additionally, this study does not compare the tumor control probability (TCP) and normal tissue complication probability (NTCP) of the two techniques. However, because the dose comparison in OARs was based on an equivalent prescription dose, the physical dose comparison may be considered rational. Clinical outcomes pertinent to the TCP and NTCP will be reported in the future. Conclusion IMPT showed better normal tissue sparing than VMAT while maintaining PTV coverage in postmastectomy irradiation using ultrahypofractionation. Despite the higher skin dose in the IMPT group, the actual difference was negligible. IMPT could significantly reduce the dose to adjacent organs at risk, which could translate into reduced late toxicities compared with those of the photon plan. Further clinical studies are needed to prove the feasibility of this dose-fractionation regimen using proton beam therapy. Our proposed dosing scheme should not strongly affect acute toxicities and would be more convenient to use in COVID-19 pandemic situations.
Impact of using computer-assisted experimentation on learning physical sciences in secondary schools in Morocco : This research is part of the integration of information and communication technologies (ICT) into the Moroccan education system. Our objective is to evaluate the use of computer-assisted experimentation (CAEx) in learning, in order to encourage the authorities of the Ministry of National Education, Vocational Training and Sports to adopt the CAEx program in the teaching and learning of physics so that it becomes compulsory for all high schools. Specifically, we evaluate the impact of using CAEx in the study of free oscillations of an RLC circuit (a linear circuit containing an electrical resistor, an inductor, and a capacitor). A study was conducted with 40 Moroccan students in the second year of the scientific baccalaureate, option life and earth sciences, at the Abdellah Laroui high school in the city of Fez; this work aims to highlight the effect of the use of CAEx on students' learning, using a pre-test/post-test methodology with an experimental group (20 students) and a control group (20 students). The results of both groups were analyzed with the IBM SPSS 21 statistical software; the post-test results show a significant difference between the mean scores of the control and experimental groups. In addition, non-directive and directive interviews were conducted with the students of the experimental group. The grid of open and closed questions was developed in agreement with the physical science teachers of the Abdellah Laroui high school, concerning the use of CAEx in learning and the students' degree of satisfaction with the integration of CAEx in the study of free oscillations in an RLC circuit; these interview data were processed with the Sphinx v5 software. This study showed that CAEx integration had a positive effect on student learning. It can be said that CAEx plays an important role in the grasp and assimilation of scientific concepts, and it represents a solution for making students more attentive and engaged. Computer-aided instruction can also develop a spirit of initiative in the learner. Introduction Morocco's Ministry of National Education has been working to integrate information and communication technologies (ICT) into the education system since the early 1990s, as they provide innovative means not only for the transmission of knowledge but also for the exploration of training strategies that promote skill development (Perreault, 2003). ICT in education is a field of educational technology devoted to research and pedagogical applications that specifically relate to the teaching-learning approaches, processes, and techniques involved in pedagogical actions integrating the use of digital tools (Bangou, 2006). ICT provides an opportunity to rethink and delocalize, in space and time, exchanges between teachers and students, and thus opens new avenues for learning and training activities (Depover et al., 2007). Appropriate technological tools increase the desirability and searchability of information (Hosseini et al., 2021). Furthermore, introducing ICT tools into lecture delivery assists teachers in adopting creative teaching approaches while also increasing learners' curiosity and understanding, resulting in increased learning capacity and personal growth (Al Shuaili et al., 2020). According to Keržič et al.
(2021), the use of ICT for educational purposes is independent of the age of teachers. In the past, many researchers, including Nurliani et al. (2021), noted that ICT is capable of expanding students' knowledge of physics. Advanced ICT instruments such as gamification, simulation, and problem-solving and simulation practices have become vital and effective in assisting lectures (Erdmann & Torres Marín, 2019). The beneficial contributions of ICT integration are numerous, namely accessibility, flexibility, and increased interaction and exchange between the various stakeholders in the field of education (Karsenti, 2003; Nafidi et al., 2015). In addition, ICT can support active learning and foster the development of several cross-cutting skills that make the learner capable of producing, exploiting, processing, organizing, and sharing information (Biaz et al., 2009). ICT plays an important role in reducing the frustration of students with learning disabilities who participate in experiential activities and may even improve their self-efficacy to some extent (Çoban et al., 2022; Ma et al., 2021). Because the integration of technology allows for the modification of human nature and cognition, ICT tools are frequently regarded as artificial organs at the core of human beings, alongside biological organs, allowing humans to continue transforming the world while also transforming themselves (Sánchez-Sordo, 2019). In addition, Jeong and Kim (2022) show that the use of computer-aided mapping tools improves critical thinking skills. Using ICT, learners can be more motivated and achieve better results (Belkebir & Darhmaoui, 2018; Droui et al., 2013; Hu et al., 2021; Ouardaoui et al., 2012; Zakaria et al., 2014), and ICT enables learners to carry out experiments in the case of computer-assisted experimentation (Khazri et al., 2017). The National Charter of Education and Training in Morocco (1999) specifies, in Lever 10, the objective of equal opportunity in access to information and communication, and aims to alleviate the difficulties of teaching and training. Among the most promising means of teaching the experimental sciences is certainly computer-assisted experimentation (CAEx), an important area of ICT use in the experimental sciences: the learner interacts with a real experiment through an interface equipped with sensors and connected to a computer that collects, represents, and analyzes data at different levels (Caron, 2007). CAEx refers to educational applications that use a computer system (computer, interface, sensors, and software) as a robot (Pellerin, 2016). Leonard (1990) noted that CAEx is the most appropriate category for practical work in science education. The equipping of Moroccan high schools with CAEx started in 2009 (Atibi, 2017). The study by El Ouargui et al.
(2019) shows that nearly 60% of schools are equipped with CAEx equipment, but the number and availability of these tools remain insufficient. The student is thus placed in a real laboratory environment that allows them to design, plan, and carry out experiments in physics, electronics, and chemistry. CAEx also makes it possible to control the variables, and the student can visualize the data of the experiment in graphic form and carry out mathematical modeling on this basis (Lauzier, 2004). Thus, a CAEx setup always consists of the same elements: a sensor that measures the change of a physical quantity and generates an analog electrical signal whose value is proportional to the measured parameter; an acquisition interface, including an analog-to-digital converter, to which the sensor signal is applied and which sends the digitized signals to the computer; and suitable software that drives the interface and allows the measurements to be processed, including graphically (Gourja & Tridane, 2015). Physics teaching software can be used as a laboratory tool during computer-assisted experimentation (CAEx) (Zakaria et al., 2014). CAEx also develops scientific and critical thinking skills, enhances student potential, and promotes student autonomy (Riopel & Nonnon, 2005). The objective of our work is to study the effect of integrating CAEx on students' understanding and learning of free oscillations in an RLC circuit (a linear circuit containing an electrical resistor, an inductor, and a capacitor). The value of this study lies in the comparison of the traditional teaching method with CAEx in the acquisition of scientific concepts to improve student performance, and its importance lies in encouraging the Ministry of National Education and Vocational Training to adopt the CAEx program. Information and communication technologies (ICT) have provided innovative tools (Tarman & Dev, 2018). Since the early 1970s, investigators have compared traditional teaching methods with CAEx in the acquisition of scientific concepts, and most of these studies show that CAEx is more effective than traditional teaching methods (Atibi, 2017). In this work, the use of CAEx is motivated by the observation of several learning difficulties during the teaching of the RLC circuit, such as: visualizing the different oscillation regimes; knowing how to connect an oscilloscope to visualize the different voltages; highlighting the influence of R, L, and C on the oscillation phenomenon; and knowing the role of the oscillation maintenance device, which consists of compensating for the energy dissipated by the Joule effect in the RLC circuit. The simulation sketch below illustrates these regimes.
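To make these regimes concrete, the following minimal Python sketch integrates the free-discharge equation of a series RLC circuit, L·q'' + R·q' + q/C = 0 with u_C = q/C. The component values echo the post-test exercise (L = 60 mH, C = 15 µF); the initial voltage U0 and the three resistance values are illustrative assumptions, not values prescribed by the classroom activity.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Free discharge of a charged capacitor through R and L (series RLC):
#   L*q'' + R*q' + q/C = 0, with u_C = q/C.
L, C, U0 = 60e-3, 15e-6, 6.0           # U0 (initial capacitor voltage) is assumed
R_crit = 2 * np.sqrt(L / C)            # critical resistance, about 126 ohms here

def rlc(t, y, R):
    q, i = y                           # capacitor charge and circuit current
    return [i, -(R * i + q / C) / L]

t = np.linspace(0.0, 0.03, 2000)
for R, regime in [(10.0, "pseudo-periodic"), (R_crit, "critical"), (500.0, "aperiodic")]:
    sol = solve_ivp(rlc, (t[0], t[-1]), [C * U0, 0.0], t_eval=t, args=(R,), rtol=1e-8)
    plt.plot(sol.t * 1e3, sol.y[0] / C, label=f"R = {R:.0f} ohm ({regime})")

plt.xlabel("t (ms)"); plt.ylabel("u_C (V)"); plt.legend(); plt.show()
```

Sweeping R through the critical value 2√(L/C) reproduces exactly the pseudo-periodic, critical, and aperiodic behaviors that students reported difficulty visualizing on an oscilloscope.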
On the other hand, those in charge of the education sector in Morocco are aware of the important role that the integration of CAEx can play in teaching and learning; for this reason, they have given important space to the integration of CAEx in the field of education at the different stages of the reform of the education system. Two considerations motivated the choice of electricity as the domain in which to test the performance of CAEx. First, in the physics curricula of the secondary cycle, electricity is the domain whose development is the most complete, and it occupies an important time slot in the teaching of physical sciences in Moroccan secondary schools. Second, learning electricity presents a real challenge for learners throughout their schooling. However, the study results do not allow general conclusions to be drawn, because the total number of participants remains small compared to the total number of learners. Materials and methods To investigate the use of CAEx in the study of free oscillations in an RLC circuit, a mixed methodological approach was adopted, combining quantitative and qualitative data in the same study coherently and harmoniously. The methodology revolves around the comparison of the responses of students in the control and experimental groups on a pre-test/post-test and the processing of the results of an interview survey. After obtaining formal authorization from the school principal, we informed the participants of our research objectives. Instruments Our sample is composed of two groups of Moroccan learners in qualifying secondary school, in the second year of the scientific baccalaureate, option life and earth sciences, at the Abdellah Laroui high school in the city of Fez, with an average age of 17 years, during the school year 2021-2022. Using the school management system "MASSAR", we divided the sample into two homogeneous groups, making sure that certain criteria were similar in order to guarantee a fair comparison of the performance of the two groups: the two groups are approximately at the same cognitive level, their school marks are close, they were taught under the same conditions with the official instructions respected, and both groups had the same prior knowledge of electricity. The control group is composed of 20 students, and the experimental group is also composed of 20 students. First, a pre-test (see Appendix II) was administered to both groups to ensure equivalence and to assess the degree of mastery of the previous year's learning. This test consisted of five true/false multiple-choice questions (MCQs) and one exercise in the form of open-ended questions. The instrument was developed and piloted with 20 students, and its reliability was estimated using Cronbach's alpha internal consistency coefficient; the reliability score was 0.70 (see Table 1), an acceptable coefficient. Both groups were asked to answer the pre-test questions, and the answers were submitted in paper format. We acquired our CAEx expertise through initial training at the education training center and about ten hours of continuous training with the education inspector. To evaluate the added value of CAEx for free oscillations in a series RLC circuit, we developed a post-test (see Appendix III) that was administered to both groups of students after the experimental group had experimented with CAEx for one hour, using a worksheet from the optional program (see Appendix I).
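The reliability of both instruments was estimated with Cronbach's alpha (0.70 for the pre-test, and 0.72 for the post-test reported below). For reference, here is a minimal sketch of that computation, assuming a simple students-by-items score matrix; the synthetic data and the function name are illustrative, and the study itself performed this analysis in IBM SPSS 21.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    using unbiased (ddof=1) variances."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic example: 20 students x 6 scored items sharing a common factor
rng = np.random.default_rng(1)
ability = rng.normal(size=(20, 1))
items = ability + rng.normal(size=(20, 6))
print(round(cronbach_alpha(items), 2))  # values >= .70 are conventionally acceptable
```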
The CAEx system consists of a data acquisition system and a computer analysis system. Nine fundamental elements make up this CAEx setup: the computer, the voltage/current sensor, the GLX interface, the DATA STUDIO software, the adjustable DC voltage generator, the variable resistor, the variable capacitor, the variable-inductance inductor, and the electrical wires (see Fig. 1). The control group received the same part of the course on free oscillations in an RLC circuit under teaching conditions respecting the official instructions and based on classical pedagogical practice. The post-test is composed of three true/false questions and two exercises with open questions, aimed at evaluating and consolidating the basic knowledge that allows the students to continue their learning: discharge of a capacitor into an inductor, influence of damping on the pseudo-period, energy transfer between the capacitor and the inductor, graphical study in the case of low damping (negligible resistance), and maintenance of oscillations. This instrument was piloted with 20 students, and its reliability as measured by Cronbach's alpha was satisfactory (Cronbach's alpha = 0.72) (see Table 2). After completing the proposed learning activity, we invited both groups to answer the post-test questions in a paper-and-pencil format in order to compare their responses. The collected data were then analyzed with IBM SPSS Statistics 21. Student's t-test was used to compare the two independent samples, and an alpha level of .05 was used throughout the analysis of the results. Interview Our interview concerned all the students (N = 20) of the experimental group in qualifying secondary school, in the second year of the scientific baccalaureate, option life and earth sciences, at the Abdellah Laroui high school in the city of Fez. Non-directive and directive interviews were conducted with the students according to the protocol (see Appendix IV). Data processing First, a written transcript of the non-directive interview was made according to a grid keeping the main themes of the interview grid. The answers were classified into precise, mutually exclusive categories, often induced from keywords. In the second step, the directive interview data were processed with the Sphinx v5 software. The different stages of this process are shown in Fig. 2 (descriptive diagram of the experimental approach). The aim is to get an idea of the students' appreciation of and satisfaction with the integration of CAEx in the study of free oscillations in an RLC circuit. Pre-test results The results of the pre-test for both groups are shown in Table 3. Analyzing these results, the pre-test mean of the students in the experimental group is 14.05, while that of the students in the control group is 13.75; the difference is about .3. We used Levene's test of equality of error variances for the two samples, which follow a normal distribution (see Table 4; Levene's p-value is not significant). The null hypothesis is that the difference between the means of the students in the experimental and control groups is not significant. The results of the comparison are presented in Table 5.
According to this table, the p-value of the ANCOVA test is higher than the chosen alpha level: a p-value of .851 does not allow rejection of the null hypothesis. We can thus estimate that there is no significant difference between the tested groups, showing that both groups have the same level of skills. This result was predictable, because both groups had received the same course before the pre-test, and it allows us to validate our experimental model based on a pre-test and a post-test. Post-test results The post-test results for both groups are presented in Table 6 below. The results show that the mean of the students in the experimental group at the post-test is 15.40, while that of the students in the control group is 13.45; the difference is about 1.95. To check whether this difference is significant and to reject the null hypothesis that the tested educational device did not affect the students' results, Student's t-test was used to test the difference between the means of two independent samples with a normal distribution (see Table 7; the Shapiro-Wilk p-value is not significant). The results of the comparison are presented in Table 8. The variances can be considered homogeneous, since the p-value of Levene's test is p = .153, which is not significant. The p-value of the t-test is below the selected alpha level (p < .05); a p-value of .01 allows us to reject the null hypothesis and thus to conclude that the integration of CAEx in the study of free oscillations in an RLC circuit had a positive effect on the students' performance. This shows that the students in the experimental group developed their understanding of the different oscillation regimes and of the influence of the circuit resistance on those regimes. As Table 9 shows, most of the responses of the students in the experimental group are correct for the questions Q3 of the MCQ, Q1 of Exercise 1, Q2-1 of Exercise 1, Q2-3 of Exercise 1, and Q1 of Exercise 2, while the situation is different for the students in the control group. Notable rates of correct answers in the experimental group are: Question Q3 of the MCQ, which deals with the influence of L on the oscillation phenomenon (80.0%); Question Q1 of Exercise 1, which deals with the influence of R on the oscillation regimes (65.5%); Question Q2-1 of Exercise 1, which determines the value of the pseudo-period T of the electrical oscillations (75%); Question Q2-3 of Exercise 1, which concerns the role of the oscillation maintenance device (90%); and Question Q1 of Exercise 2, which concerns the energy diagrams (70%). For these questions, the rates of correct answers of the students in the experimental group were higher than those of the control group. This confirms that the use of CAEx allows for good assimilation and understanding of the course.
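For reference, the statistical workflow just described (normality check, Levene's test for homogeneity of variances, then an independent-samples t-test at alpha = .05) can be sketched in a few lines of Python with SciPy. The score arrays below are synthetic stand-ins whose means merely echo the reported 15.40 and 13.45; the actual student data are not reproduced here, and the study itself used IBM SPSS 21.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic post-test scores out of 20 for two groups of 20 students each
experimental = rng.normal(loc=15.40, scale=2.0, size=20)
control = rng.normal(loc=13.45, scale=2.0, size=20)

# Normality of each sample (Shapiro-Wilk), as in Table 7
for name, group in [("experimental", experimental), ("control", control)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(group).pvalue, 3))

# Homogeneity of variances (Levene), then Student's t-test
print("Levene p =", round(stats.levene(experimental, control).pvalue, 3))
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # reject H0 when p < .05
```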
Interview To get a fuller picture of the reality of this study, which investigates the impact of the use of CAEx on the understanding of free oscillations in an RLC circuit among learners in the second year of the baccalaureate, option life and earth sciences, a mixed methodological approach was chosen, i.e., the combination of a quantitative research method with a qualitative research method based on interviews; this method takes into account the opinion of the student who has learned the course either through classical teaching practice or through CAEx. The interview addressed the following questions:
• Did you encounter any difficulties when learning about free oscillations in an RLC circuit?
• What are the benefits of integrating CAEx into the learning process?
• What is the added value of using CAEx?
Some student responses:
• I have a problem visualizing the different regimes of RLC oscillations.
• I have difficulty highlighting the influence of R, L, and C on the phenomenon of RLC oscillations.
• With CAEx, we can easily modify the set-up to see the effect on the variables.
• CAEx encourages us to carry out experiments by ourselves.
• CAEx facilitates the development of teamwork skills.
• We can start over if we make a mistake.
The objective is to measure the students' satisfaction with the integration of CAEx in the study of free oscillations in an RLC circuit. After processing the data according to the protocol indicated in the methodology section, the students stated that they had many difficulties with the following concepts: visualizing the different oscillation regimes; knowing how to connect an oscilloscope to visualize the different voltages; highlighting the influence of R, L, and C on the oscillation phenomenon; knowing the role of the oscillation maintenance device, which consists of compensating for the energy dissipated by the Joule effect in the RLC circuit; and the transfer of energy between the capacitor and the inductor. The CAEx environment makes students autonomous, active, and able to build their knowledge and work in groups, and CAEx helps the student to graph and visualize the interaction of the variables under study. The students also confirm that, thanks to the integration of CAEx in the classroom sessions, these difficulties were overcome. In the analysis of the directive interview with the Sphinx v5 software (see Fig. 3), 75% of the students declare that the use of CAEx saves time; 90% say that CAEx improves student motivation and helps them better approach scientific concepts; 80% say that CAEx facilitates the development of teamwork and problem-solving skills; 85% state that CAEx helps develop independent learning skills and develops critical thinking by comparing theory with results; and 65% state that CAEx simplifies the concept of measurement uncertainty.
Discussion In this study, it appears that the students in the experimental group who used CAEx were generally successful in setting up procedures that allowed them to answer questions more efficiently. The post-test responses confirm that these students also acquired skills concerning free oscillations of an RLC circuit: they became able to visualize the different oscillation regimes, to recognize the influence of the circuit resistance on the oscillation regimes, to deal with the charge and discharge of a capacitor into an inductor, to understand the influence of damping on the pseudo-period, to study the energy transfer between the capacitor and the coil easily, to study graphically the case of low damping (negligible resistance), and to understand how the maintenance of RLC oscillations is achieved. The students interviewed agree that the facilities are numerous: CAEx allows us to start over if we make a mistake; data collection is easy; the set-up can be modified to see the effect on the variables; we can see in real time what is happening; CAEx facilitates the development of teamwork skills; CAEx helps the student to graph and visualize the interaction of the variables under study; and CAEx encourages us to do experiments on our own. Similarly, previous studies have confirmed that the use of CAEx in the classroom contributes to students' assimilation of concepts (Gourja & Tridane, 2015). Yusuf and Afolabi (2010) reported that the performance of students using CAEx, either individually or cooperatively, was better. It can be inferred that the integration of CAEx had a positive effect on students' learning. For example, the use of CAEx saves time by processing the data and drawing the curves, which facilitates the teacher's work during the session, makes it possible to finish the program in the desired time, and helps correct the students' representations and steer them in the right direction. In addition, CAEx processes the data and instantly transforms the algebraic values into numerical values and the corresponding graphical curves. However, the use of CAEx by teachers in their practical work is still very limited in Morocco, because of the lack of continuous teacher training, class sizes that are too high, and an overloaded curriculum that does not allow experiments to be performed with students (El Bachari et al., 2010). In addition, there are also disadvantages to using CAEx as a tool: the student only manipulates the apparatus and no longer draws the graph himself (Atibi, 2017).
Conclusion The objective of this study was to highlight the effect of the use of CAEx on students' learning. The statistical data collected from Moroccan learners in the second year of the baccalaureate, option life and earth sciences, at the Abdellah Laroui high school in the city of Fez, using a pre-test/post-test methodology with an experimental group and a control group, showed that there was no significant difference between the two groups on the pre-test. On the post-test, after the use of CAEx, there was a statistically significant difference between the experimental group and the control group: the p-value of Student's t-test is lower than the selected alpha level (p < .05), and a p-value of .01 allows us to reject the null hypothesis and admit that the integration of CAEx in the study of free oscillations in an RLC circuit had a positive impact on the students' performance. Indeed, the results obtained show that the successes of the students in the experimental group are higher than those of the control group. There are many advantages to using CAEx: CAEx saves time (75%), improves student motivation and helps students better approach scientific concepts (90%), facilitates the development of teamwork and problem-solving skills (80%), favors the development of independent learning skills and develops critical thinking by confronting theory with results (85%), and simplifies the concept of measurement uncertainty (65%). The quantitative and qualitative analyses of the results confirmed that the pedagogical integration of CAEx had a positive effect on strengthening students' understanding. Contributions Our study may also encourage physical science teachers to use CAEx in their labs, as many of the students interviewed mentioned that the use of CAEx can significantly improve students' skills and certain indicators, such as motivation and interaction between students, compared to the conventional method. The results of this study may become the precursor of larger-scale research aimed at better understanding the opinions of teachers and students on the integration of CAEx into the school curriculum, and may encourage the authorities of the Ministry of National Education, Vocational Training and Sports to adopt the CAEx-based program. Limitations This research has some limitations. Forty Moroccan students in qualifying secondary school, in the second year of the scientific baccalaureate, option life and earth sciences, at the high school in Fez, with an average age of 18 years, participated in this study, which does not allow us to formulate general results owing to the small sample size. It would be advantageous to conduct similar research in a different discipline (life and earth sciences) with a larger sample; moreover, this study concerns only the Moroccan context. Future directions The results of this study lead us to the conclusion that it can be a starting point for future research. In this research, we have emphasized the importance of continuing education for physical science teachers in the integration of CAEx into their professional practice. One line of research could target the training needs of students for optimal use of CAEx in their learning. Another interesting direction would be a similar study with students from different countries and a larger sample size.
□ True □ False
Exercise 1 The inductor of inductance L = 60 mH and the previous ohmic conductor are connected in series with a previously charged capacitor of capacitance C. Curves 1, 2, and 3 show the variations of the voltage uC(t) across the terminals of the capacitor for different values of the resistance of the ohmic conductor.
1. Complete the following table by linking the number of each curve to the value of the corresponding resistance R.
2. Consider curve (1):
o Give the value of the pseudo-period T of the electrical oscillations.
o Assuming that the pseudo-period T is equal to the natural period T0 of the free oscillations of the (LC) oscillator, check that the value of the capacitance is C = 15 µF.
o To maintain these oscillations, a voltage generator such that ug = R0 × i is connected in series with the (R, L, C) circuit. For what value of R0 does it produce undamped oscillations?
Exercise 2 The curves represent the variations with time of the electrical energy Ee stored in the capacitor, the magnetic energy Em stored in the coil, and the total energy ET of the circuit, such that ET = Ee + Em.
Fig. 1. Assembly for free oscillations in an RLC circuit with the CAEx
Questionnaire items (directive interview): the use of CAEx saves time; CAEx improves students' motivation and helps them to approach scientific concepts better; CAEx facilitates the development of teamwork and problem-solving skills; CAEx favors the development of autonomous learning capacities and develops critical thinking through the confrontation of theory with the obtained results; CAEx simplifies the concept of measurement uncertainty.
Table 4 Levene's test of equality of error variances
Table 9 Correct answers for the control and experimental groups
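For reference, the capacitance check in Exercise 1 follows directly from the natural-period formula. Since the curves are not reproduced here, a pseudo-period reading of T ≈ 6 ms from curve (1) is assumed purely for illustration:

$$
T \approx T_0 = 2\pi\sqrt{LC}
\quad\Longrightarrow\quad
C = \frac{T^{2}}{4\pi^{2}L}
\approx \frac{\left(6\times10^{-3}\,\mathrm{s}\right)^{2}}{4\pi^{2}\times 60\times10^{-3}\,\mathrm{H}}
\approx 1.5\times10^{-5}\,\mathrm{F} = 15\,\mu\mathrm{F}.
$$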