Numerical Design of Ultrathin Hydrogenated Amorphous Silicon-Based Solar Cell

Numerical modelling is used to confirm experimental and theoretical work. The aim of this work is to present how to simulate ultrathin hydrogenated amorphous silicon (a-Si:H) based solar cells with an ITO back reflector layer (BRL) in their architecture. The results obtained in this study come from the SCAPS-1D software. In the first step, the J-V characteristics of the simulated and experimental ultrathin a-Si:H-based solar cells are compared and found to be in agreement. Secondly, to explore the impact of certain properties of the solar cell, the investigation focuses on the influence of the intrinsic layer and of the buffer layer/absorber interface on the electrical parameters (J_SC, V_OC, FF, and η). Increasing the intrinsic layer thickness improves performance, while the bulk defect density of the intrinsic layer and the surface defect density of the buffer layer/i-(a-Si:H) interface, in the ranges [10^9 cm^-3, 10^15 cm^-3] and [10^10 cm^-2, 5 × 10^13 cm^-2], respectively, do not affect the performance of the ultrathin a-Si:H-based solar cell. The analysis also shows that, with an intrinsic layer approximately 1 μm thick, the optimum conversion efficiency is 12.71% (J_SC = 18.95 mA·cm^-2, V_OC = 0.973 V, and FF = 68.95%). This work presents a contribution to improving the performance of a-Si-based solar cells.

Introduction
The photovoltaic industry is over 95% focused on the use of silicon as a base material [1]. The cost of synthesizing this material has made it possible to switch from crystalline silicon (c-Si) to amorphous silicon (a-Si). Thin-film silicon, mainly amorphous silicon (a-Si:H) solar cells, has the potential to be less expensive due to low material consumption, lower thermal budget manufacturing steps, and a low temperature coefficient of solar cell efficiency [2].
Moreover, the commercially available stabilized efficiency and reliability of a-Si:H solar cells are greater than those of many of the third-generation solar cells reported so far [3]. As synthesized, amorphous silicon has a high concentration of dangling bonds in its structure. To overcome this failure, it is necessary to incorporate hydrogen [4]. This defect reduction allowed the doping of hydrogenated amorphous silicon with boron and phosphorus [5]. a-Si:H has the advantage of having an adjustable bandgap and a high optical absorption coefficient. This optical bandgap varies between 1.6 eV and 1.8 eV [6]. The efficiency limit for a single-bandgap thin-film-based solar cell predicted by Shockley and Queisser is around 31% [7]. Nowadays, the maximum conversion efficiency of a-Si:H solar cells is 10.2% [8], which is still far from the theoretical value. There are several methods of manufacturing a-Si:H-based solar cells, such as photo-PECVD [9] and sputtering [10]. The most suitable for manufacturing single-junction solar cells is the PECVD process [11]. However, some limits should be placed on the preeminence of a-Si:H-based solar cells, since a degradation effect caused by exposure to light was highlighted by Staebler and Wronski [12]. Indeed, they observed that prolonged exposure of an a-Si:H-based solar cell to light caused a drop in its electrical parameters: this is known as the light-induced degradation (LID) effect. This limitation can be reduced by controlling the thickness of the intrinsic (i) layer in the structure of the solar cell [13,14]. In the a-Si:H-based solar cell, the thickness of the i-layer controls the short-circuit current [3]. To facilitate light absorption in the a-Si:H-based solar cell, some advanced tricks are required to utilize the major portion of the incident light in the active layer of the cell, like light-trapping techniques, in order to enhance the optical path length (OPL) of photons [3]. The optical absorption and carrier collection can be increased by improving the reflection characteristics of a back reflector layer (BRL). There are multiple approaches to suppressing the optical losses in the device; hence, properly designing the device is crucial [3]. This work presents how to use numerical simulation in the case of an a-Si:H-based solar cell with a nonconventional back reflector layer (BRL) in its structure. A nonconventional BRL is defined as a BRL containing semiconductor nanoparticles. This type of BRL improves the reflection characteristics of the back reflector layer. A nonconventional BRL contributes to reducing optical losses through its light-scattering behaviour, improves absorbance, and ameliorates photon management. In this approach, it can be numerically modelled through the rate of reflection at the rear contact and the absorption coefficient of the active layer (i) of the a-Si:H-based solar cell in the SCAPS-1D software. As is well known, the device quality is mainly determined by factors including emitter quality and interface quality [15]. In order to validate our solar cell model, we start from a comparative study of the J-V characteristics obtained from experiment and simulation, and, on the other hand, we study the influence of the various parameters (thickness, bulk defect density, and properties of the buffer layer/absorber interface) on the performance of the solar cell, using the SCAPS-1D software.

Method and Materials

2.1. Method.
Numerical modelling is an approach and an important tool that makes it possible to understand the complexity of solar cells and to support their development; it also helps to understand the phenomena that are at the origin of the limitation of the conversion efficiency of solar cells. SCAPS-1D is a one-dimensional numerical simulation software for solar cells [16]; it was developed at the University of Gent in Belgium and was previously tested on structures of the CuInSe2 and CdTe families [17]. With the evolution of research, its functions have been extended to crystalline (Si, GaAs) and amorphous (a-Si, micromorph Si) structures. Thus, its choice is justified in this work by the fact that it provides simulation results in agreement with experiment [16]. The descriptive equations used by the SCAPS-1D software are the basic semiconductor equations (equations (1)-(3)). They are three coupled, nonlinear differential equations that are solved simultaneously in SCAPS-1D. Equation (1) is the Poisson equation; in its standard one-dimensional form it reads

∂²ψ/∂x² = −(q/ε) [p − n + N_D⁺ − N_A⁻ + ρ_def],    (1)

and it describes the phenomena of an electrostatic nature, where ψ is the electrostatic potential, n and p are the densities of free electrons and holes, respectively, N_D⁺ and N_A⁻ are the concentrations of ionized donors and acceptors, respectively, and ρ_def is the density of deep defect centers. Equations (2) and (3) are the continuity equations of electrons and holes; they govern the condition of dynamic equilibrium in a semiconductor:

∂n/∂t = (1/q) ∂J_n/∂x + G − U_n,    (2)

∂p/∂t = −(1/q) ∂J_p/∂x + G − U_p,    (3)

where G is the generation rate and U_n and U_p are the recombination rates of electrons and holes, respectively. J_n and J_p represent the current densities of electrons and holes, respectively; their expressions are given by the drift-diffusion relations (4) and (5),

J_n = q μ_n n E + q D_n ∂n/∂x,    (4)

J_p = q μ_p p E − q D_p ∂p/∂x,    (5)

where μ_n is the electron mobility, μ_p the hole mobility, E the electric field, and D_n and D_p the diffusion coefficients.

2.2. Materials.
Generally, the performance of solar cells depends on three main factors: material selection, material growth technique, and device architecture [18]. A solar cell consists of a semiconductor material that absorbs light and then generates excess electrons and holes [15]. Figure 1 shows the structure of a-Si:H-based solar cells. Figure 1(a) shows the structure of the solar cell investigated in this work, while Figure 1(b) represents the schematic diagram of the solar cell from the experimental work of Banerjee et al. [3]. In this subsection, we describe the structure of the solar cell to be optimized. This single-junction cell (Figure 1(a)) is based on hydrogenated amorphous silicon and is constructed using the SCAPS-1D software. The improvement of its electrical parameters depends on the different properties and the arrangement of the layers in the structure. The i-(a-Si:H) layer, of intrinsic type, is situated between the p-doped buffer layer and the n-doped n-(a-Si:H) layer; it is the fundamental element of the solar cell, in which the photovoltaic conversion takes place. This layer is called the absorber layer. The light enters the structure through the p-(a-SiO_x) window layer; its bandgap depends on the O/Si ratio, and when this ratio is 34%, it has an intermediate bandgap of 1.95 eV, with a conductivity of 3.3 S/cm [19]. This layer maximizes the absorption of light in the structure. The photons absorbed by the i-layer create the electron-hole pairs. The electric field induced by the n- and p-layers across the i-layer causes the electrons to drift towards the n region and the holes towards the p region.
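This built-in field is obtained, in SCAPS-1D, by solving the Poisson equation (1) on a discretized mesh. As a purely illustrative sketch of this kind of discretization (this is not SCAPS code; the charge profile, mesh and material constants below are invented for the example), a one-dimensional Poisson solve can be written as:

```python
import numpy as np

# Toy 1-D Poisson solver: psi'' = -rho/eps on a uniform mesh with
# psi = 0 at both contacts (arbitrary reference). The charge profile is
# a crude p-i-n caricature with hypothetical values, not a calibrated
# a-Si:H stack.
q = 1.602e-19              # elementary charge (C)
eps = 11.9 * 8.854e-12     # permittivity, silicon-like (F/m)
L = 1e-6                   # device length (m)
N = 200                    # number of mesh intervals
x = np.linspace(0.0, L, N + 1)
h = x[1] - x[0]

rho = np.zeros(N + 1)             # net space charge density (C/m^3)
rho[x < 0.1 * L] = -q * 1e23      # ionized acceptors exposed on the p side
rho[x > 0.9 * L] = +q * 1e23      # ionized donors exposed on the n side

# Finite differences: (psi[i-1] - 2*psi[i] + psi[i+1]) / h^2 = -rho[i]/eps.
A = np.zeros((N - 1, N - 1))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)      # subdiagonal
np.fill_diagonal(A[:, 1:], 1.0)   # superdiagonal
b = -rho[1:-1] / eps * h**2

psi = np.zeros(N + 1)
psi[1:-1] = np.linalg.solve(A, b)
E_field = -np.gradient(psi, x)    # built-in field E = -d(psi)/dx
print(f"potential drop = {psi.max() - psi.min():.3e} V, "
      f"peak |E| = {np.abs(E_field).max():.3e} V/m")
```

A full drift-diffusion solver couples this electrostatic step self-consistently with the continuity equations (2) and (3); the sketch shows only that half of the iteration.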
The p-type-doped buffer layer reduces the height of the Schottky barrier and the recombination at the p-(a-SiO_x)/i-(a-Si:H) interface [20]. The ITO layer is an important layer in this numerical simulation; it can increase the optical absorption and carrier collection by improving the reflection characteristics of the back reflector layer (BRL, the layer between the metal back contact and the bottom n-layer in an a-Si:H solar cell) [3]. The ITO layer also reduces transmission losses at the rear contact (Al metallic contact) and promotes adhesion between the amorphous silicon and the metallic contact [21]. Improving the photovoltaic conversion efficiency requires improving the utilization of the major portion of the solar spectrum in the active layer, in order to enhance the OPL of photons [3]. In this work, we try to understand how the presence of the ITO BRL in the a-Si:H solar cell contributes to enhancing its performance, using the SCAPS-1D software. To model this type of amorphous silicon solar cell structure, taking into account the effect of the ITO BRL containing semiconductor nanoparticles, which ameliorates photon management and increases the generation of electron-hole pairs, the optical absorption submodel applied to the i-layer in the SCAPS-1D environment is based on the square-law model, given by equation (6):

α(hν) = (A + B/hν) √(hν − E_g),    (6)

where E_g represents the bandgap and A (cm^-1·eV^-1/2) and B (cm^-1·eV^+1/2) are the parameters of the model in SCAPS-1D. The value of the absorption coefficient represents how efficiently the photon energy will be harvested by the materials [7]. The BRL plays a crucial role in light reflection by backscattering to the active layer of the cell; thus, the reflection rate at the rear contact is taken at more than 93% in this numerical simulation, as suggested by the experimental work of Banerjee et al. [3]. The electrical input parameters of all materials used in this numerical simulation, for the resolution of the previous equations, are given in Table 1; these data are taken from the literature [3,19,22,23]. In this simulation study, the metal contacts (front and back contacts) are assumed to have flat bands.

Results and Discussion

3.1. Comparison between Simulation and Experiment.
First, we validate our solar cell structure model (Figure 1(a)) by comparing the experimental results of the current-voltage characteristic from the work of Banerjee et al. [3] to those of the numerical simulation, as shown in Figure 2. The modelling of the a-Si:H-based solar cell (Figure 1(a)) is made under the AM1.5 solar spectrum, a light power of 1000 W/m^2, and at 300 K, using the parameters of Table 1. In this numerical simulation model, the effect of the silver sulfide nanomirrors in the ITO BRL is accounted for through the rear-contact reflection rate, and the extracted electrical parameters are compared in Table 2. In view of the data from Table 2, we can conclude that there is good agreement between the experimental and simulation results; this confirms the validity of the a-Si:H-based solar cell model (Figure 1(a)). Figure 3 represents the band diagram of this simulated cell at thermal equilibrium.
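The electrical parameters compared in Table 2 (J_SC, V_OC, FF, and η) are all extracted from the J-V characteristic. As a minimal, generic sketch of that extraction (the single-diode curve below is a hypothetical stand-in with invented parameters, not the SCAPS output or the experimental data), one may write:

```python
import numpy as np

# Stand-in illuminated J-V curve (single-diode form, invented parameters):
# J(V) = J_ph - J_0 * (exp(V / (n * Vt)) - 1), in mA/cm^2.
J_ph, J_0, n, Vt = 16.5, 1e-9, 1.5, 0.02585   # hypothetical values, T = 300 K
V = np.linspace(0.0, 1.0, 2001)
J = J_ph - J_0 * (np.exp(V / (n * Vt)) - 1.0)

J_sc = J[0]                                   # short-circuit current density (V = 0)
V_oc = np.interp(0.0, -J, V)                  # voltage where J crosses zero
P = V * J                                     # power density (mW/cm^2)
P_max = P[J > 0].max()
FF = P_max / (J_sc * V_oc)                    # fill factor
eta = P_max / 100.0                           # efficiency under 100 mW/cm^2 (AM1.5)

print(f"J_SC = {J_sc:.2f} mA/cm^2, V_OC = {V_oc:.3f} V, "
      f"FF = {100 * FF:.2f} %, eta = {100 * eta:.2f} %")
```

The same procedure applies to any simulated or measured J-V curve sampled on a voltage grid.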
3.2. Influence of the Intrinsic Layer Thickness Variation.
The main drawback of a-Si:H solar cells is light-induced degradation (LID), which can be minimized by controlling the thickness of the intrinsic (i) layer [3]. It is useful to appreciate the importance of studying the variation of the intrinsic layer thickness on the performance of the a-Si:H-based solar cell, because it is in this layer that the photovoltaic conversion takes place, resulting in the production of photovoltaic energy. Thus, this subsection investigates the effect of the variation of the intrinsic layer thickness on the electrical parameters of the solar cell, as shown in Figure 4. For this study, the thickness of the intrinsic layer varies from 0.1 μm to 1 μm, the case of solar cells with an ultrathin absorber layer, keeping the other parameters of the layers of the solar cell structure (Figure 1(a)) constant and neglecting the properties of the interfaces between the different layers. Within this variation range of the intrinsic layer thickness, we observe an increase in the short-circuit current density from 13.28 mA·cm^-2 to 18.75 mA·cm^-2 (Figure 4(a)). Therefore, the short-circuit current density increases with the thickness. This result is explained by the considerable absorption of incident photons in the intrinsic layer, which increases the number of photogenerated carriers [6], through the generation rate, as shown in equation (7):

J_SC ≈ qG (L_n + W + L_p),    (7)

where L_n and L_p are the diffusion lengths of electrons and holes, respectively, and W is the width of the space charge region. Similarly, the work of Chelvanathan et al. [24] has shown that the current density also increases with thickness in the case of the CIGS-based solar cell. The open-circuit voltage V_OC (Figure 4(b)) increases as the thickness of the intrinsic layer increases in the range [0.2 μm, 1 μm]; this can be attributed to a decrease in bulk recombination and in recombination at the back contact, and to better passivation of the i-(a-Si:H)/n-(a-Si:H) interface, as suggested by Lachaume [25] and De Wolf et al. [26]. In addition, the fill factor FF (Figure 4(c)) shows two trends: a first in the range [0.1 μm, 0.2 μm] and a second in the range [0.2 μm, 1 μm]. The increase in FF in the interval [0.1 μm, 0.2 μm] may be due to the reduction in the double diode effect observed in the i-(a-Si:H) layer [27]. However, the decrease in FF in the interval [0.2 μm, 1 μm] is due, first, to the presence of coordination defects in the absorber layer and, secondly, to the fact that the resistance of the intrinsic layer increases with its thickness, which increases the series resistance of the solar cell. In view of the above, the combined action of these three parameters (J_SC, V_OC, and FF) contributes to an increase of the conversion efficiency (equation (8), η = J_SC·V_OC·FF/P_in, with P_in the incident light power) of the a-Si:H-based solar cell (Figure 4(d)); we observe a variation of the conversion efficiency from 8.26% to 11.63%.

3.3. Influence of the Defect Density in the Intrinsic Layer.
Figure 5 shows the effect of varying the deep bulk defect density (N_t), acceptor type, of the intrinsic layer on the performance parameters of the a-Si:H-based solar cell, using the data from Table 1 and for N_t ranging from 10^9 cm^-3 to 10^19 cm^-3. These defects form following the breakdown of the chemical equilibrium of the weak Si-Si bonds during the deposition of amorphous silicon [28]. Their energy distribution in the bandgap varies depending on the proportion of hydrogen incorporated in the intrinsic layer; by convention, these defects are located in the upper part of the bandgap. The results obtained show that, for N_t taken in the range [10^9 cm^-3, 10^15 cm^-3], the electrical parameters (J_SC, V_OC, FF, and η) are constant (Figure 5). In this region, the donor defect density (10^6 cm^-3) is the same as that of the acceptors.
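These defects act in the simulation as Shockley-Read-Hall (SRH) recombination centers, whose rate grows with N_t through the capture lifetimes. A toy evaluation of the standard SRH rate (all carrier densities, cross-sections and the trap level below are invented for illustration; they are not the Table 1 values) shows this scaling directly:

```python
import numpy as np

# Toy Shockley-Read-Hall (SRH) recombination rate for a mid-gap defect:
# U = (n*p - ni^2) / (tau_p*(n + n1) + tau_n*(p + p1)),
# with tau_n = tau_p = 1 / (sigma * v_th * N_t).
# All numbers below are invented for illustration, not fitted a-Si:H values.
ni = 1e8            # intrinsic carrier density (cm^-3), hypothetical
n, p = 1e14, 1e12   # carrier densities under illumination (cm^-3), hypothetical
n1 = p1 = ni        # mid-gap trap level assumed
sigma = 1e-15       # capture cross-section (cm^2)
v_th = 1e7          # thermal velocity (cm/s)

for N_t in [1e9, 1e12, 1e15, 1e17, 1e19]:   # bulk defect density (cm^-3)
    tau = 1.0 / (sigma * v_th * N_t)        # capture lifetime (s)
    U = (n * p - ni**2) / (tau * (n + n1) + tau * (p + p1))
    print(f"N_t = {N_t:.0e} cm^-3 -> tau = {tau:.1e} s, U = {U:.2e} cm^-3 s^-1")
```

Because τ ∝ 1/N_t, the rate U rises linearly with the defect density once the defects dominate the lifetime, which is the regime discussed next.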
Taking into account the amorphous nature, the intrinsic layer can be considered to be heavily doped; therefore, the combined effects of these defects remain negligible in this range. On the other hand, we observe that, for N_t taken in the range [10^15 cm^-3, 10^19 cm^-3], all the electrical parameters are greatly affected by the variation of the bulk defect density; there is an abrupt decrease of all these parameters (Figure 5). This decrease reflects the fact that the increase of these defects creates localized states in the bandgap [29,30]. These localized states influence the intrinsic Fermi level by creating tail states and by inducing additional charges, which are taken into account in the ρ_def term of equation (1). Thus, the recombination of the photogenerated carriers in this layer predominates over generation. The current density J_SC decreases from 16.61 mA/cm^2 to 8.79 mA/cm^2, the open-circuit voltage V_OC from 0.910 V to 0.845 V, the fill factor FF from 73.0% to 41.33%, and the efficiency from 11.04% to 3.07%. The presence of these defects causes carrier loss through a high Shockley-Read-Hall recombination rate, resulting in a sharp decline in all electrical parameters. These results are in agreement with those of the work of Ghahremani and Fathy [31].

3.4. Effect of the State Density of the Buffer Layer/Absorber Interface.
Defect states at the interface of two layers can cause strong interface recombination in a solar cell. A low interface state density in the midgap can be achieved by the insertion of intrinsic hydrogenated amorphous silicon in the a-Si/c-Si passivated contact (a-PC) solar cell [15]. Modelling and optimizing a solar cell require controlling the interface states between the different layers that constitute it, in order to ensure the passage of charge carriers through a junction. The previous results were obtained by neglecting the interface properties between the different layers. The buffer layer/absorber interface plays a crucial role in the charge transport mechanism in a-Si:H solar cells [32]. In this section, we explore the influence of the density of surface defects D_it (Table 3) at the buffer layer/i-(a-Si:H) interface, using the data from Table 1 and N_t = 10^16 cm^-3 in the intrinsic layer, on the electrical parameters (J_SC, V_OC, FF, and η) of the a-Si:H-based solar cell, as presented in Figure 6. This density of surface defects (D_it) varies from 10^10 cm^-2 to 5 × 10^16 cm^-2. For surface defect densities ranging from 10^10 cm^-2 to 10^13 cm^-2, the electrical parameters are almost insensitive. Moreover, when the density of the surface defects increases from 10^13 cm^-2 to 5 × 10^16 cm^-2, the short-circuit current density decreases drastically from 16.43 mA/cm^2 to 11.47 mA/cm^2 (Figure 6(a)). This decrease is due to increased recombination centers at the buffer layer/i-(a-Si:H) interface, favouring electron traps, but also to the loss by optical absorption of the incident light [33]. Figure 6(b) shows that, for D_it taken in the range [10^14 cm^-2, 5 × 10^15 cm^-2], the open-circuit voltage decreases considerably from 0.908 V to 0.896 V. This decrease is the consequence of dangling bonds in the absorber layer, which cause recombination phenomena, and of atomic interdiffusion at the buffer layer/i-(a-Si:H) interface. Figure 6(c) shows the variation of the fill factor as a function of the density of the surface defects. For surface defect densities greater than 10^13 cm^-2, the fill factor decreases from 70.56% to 68.09%.
FF is affected by surface recombination at the buffer layer/i-(a-Si:H) interface. This decrease can also be explained by an empirical expression of FF as a function of V_OC (equation (9)) [32,34]:

FF = (v_OC − ln(v_OC + 0.72)) / (v_OC + 1),    (9)

where v_OC = V_OC/(k_B T/q) is the normalized open-circuit voltage. These three electrical parameters (J_SC, V_OC, and FF) contribute to the decrease of the conversion efficiency (Figure 6(d)) given by equation (8); these results are in agreement with the work of Rached and Rahal [35]. The trend of the influence of interface state density on cell performance is similar to that obtained in the work of Zhou et al. [36].

3.5. Optimized a-Si:H Solar Cell.
Optimizing a solar cell consists of finding the values of the parameters that make it the most efficient. In this paper, it means determining the optimal electrical parameters of our model ultrathin-film hydrogenated amorphous silicon solar cell. Our approach in this subsection consists in using the optimal values of the parameters studied in the previous subsections in order to simulate the electrical parameters of the optimized solar cell. The optimal parameters, previously determined, are 1 μm for the thickness of the intrinsic layer, 10^9 cm^-3 for its bulk defect density, and 10^10 cm^-2 for the density of surface defects of the buffer layer/i-(a-Si:H) interface. The optimized cell thus reaches a conversion efficiency of 12.71%; indeed, by equation (8), J_SC × V_OC × FF = 18.95 mA·cm^-2 × 0.973 V × 0.6895 ≈ 12.71 mW·cm^-2, i.e., 12.71% under the 100 mW·cm^-2 illumination used here. Figure 7 gives the current-voltage characteristics of the three structures used in our work, and Table 4 summarizes the electrical parameters of the experimental and optimized solar cells. The decrease in FF from the experimental solar cell to the optimized one is due to the phenomenon of surface recombination at the buffer layer/i-(a-Si:H) interface and to the increase in the series resistance of the solar cell, which grows with the thickness of the intrinsic layer.

Conclusion
In this work, we present the factors that can be at the origin of the electrical losses in the ultrathin a-Si:H-based solar cell with an ITO BRL, through a study by numerical simulations using the SCAPS-1D software. From the p-(a-SiO_x)/buffer layer/i-(a-Si:H)/n-(a-Si:H)/ITO/Al structure of the solar cell, we simulated the current-voltage characteristic, whose electrical parameters (J_SC = 16.55 mA·cm^-2, V_OC = 0.905 V, FF = 70.62%, and η = 10.58%) reproduce the experimental data, and we have shown how the thickness and bulk defects of the absorber and the density of the surface defects of the buffer layer/i-(a-Si:H) interface affect the electrical parameters of the ultrathin a-Si:H-based solar cell. These properties are crucial for high-performance solar cells. Increasing the thickness of the intrinsic layer contributes strongly to the generation of photocarriers, which increases the performance of the solar cell. We have also observed that increasing the density of the bulk defects of the intrinsic layer and of the surface defects at the buffer layer/i-(a-Si:H) interface increases the recombination phenomena, which contribute to the reduction of the a-Si:H-based solar cell performance. The optimized structure of the ultrathin a-Si:H-based solar cell gives a conversion efficiency of 12.71% for a thickness of 1 μm and a bulk defect density of 10^9 cm^-3 of the intrinsic layer and a surface defect density of 10^10 cm^-2 at the buffer layer/i-(a-Si:H) interface.

Data Availability
All data used in our work have been cited in the reference list.
The capillary bistable switch constrained by pinning/wetting angles as a sensor of pressure

A droplet deposited onto an orifice in a flat smooth plate can remain in a bistable state. Such a system (a droplet and a flat surface) behaves as a bistable capillary switch, which can be used for threshold detection of some of the system's properties, e.g., the difference in pressure on both sides of the plate. The droplet morphology changes abruptly at a certain value of the pressure difference and shows hysteresis. The specific behavior of the system is a result of the geometrical constraints defining the curvature of the liquid droplet on both sides of the surface. These constraints can be represented either by pinning angles on both sides of the plate or by the pinning angle on one side and the contact angle on the other. The dependence of the details of the droplet morphology and energy on the pressure difference across the plate is calculated by means of a semi-analytical model and Surface Evolver simulations.

Introduction
A capillary switch is a bistable system of liquid/gas or immiscible liquid/liquid interfaces with a trigger to toggle back and forth between the two or more stable equilibrium states [1]. To switch from one stable state to the other, an energy barrier must be overcome. The energy barrier can be tuned by the droplet and surrounding morphology. A capillary system becomes a real switch only when toggling is achieved [2]. The toggling trigger can be based on the pressure difference [3], liquid surface tension, inertia force, electric field (e.g., when the electrostatic potential acts to oxidize the surfactant on one part of the droplet surface and to reduce it on the other part) [3], magnetic field (applied to ferrofluids) [4,5], etc. The droplet properties determining the behavior of the capillary switch include the liquid's physical properties (e.g., zeta potential, surface tension and contact angles) [6,7]. A droplet realizing the bistable switch shows an unusual bifurcation dependence of the droplet morphology on its volume; hence, at a volume large enough, the morphology vs. stimulant dependence forms a hysteresis loop. The geometrical constraints on the morphology of the liquid surface are usually reduced to the pinning angle at the orifice/pipe edges. However, there are systems in which the motion possibilities of the triple line can be switched from free shift on the solid surface to immobilization on linear inhomogeneities (edges), as found for the vertically submerged cylinder [8], the liquid between fiber and substrate [9], and droplets in Wenzel or Cassie wetting regimes [10]. Moreover, the viscous dissipation by the movement of the triple contact line, where a fluid/fluid interface meets a solid, is also discussed; e.g., Lea [11] demonstrated a bistable switch in geometries with a moving contact line, which is dissipative [12]. The main aim of the study is to answer the question whether the morphology of the surface of the liquid drop constituting the capillary switch depends on details of the solid at the site at which the droplet is settled (flat plate, outlet of a tube). Namely, the surface morphologies determined by the pinning angle and by the contact angle are studied. Sections 2 and 3 contain the semi-analytical and simulation approaches to the problem. In sect. 4, the influence of the gravitational field on the liquid morphology is analyzed.
The obtained results are discussed in terms of the adjustment and calibration of bistable switches based on different constraints of droplet settlement in practical applications.

The effect of wettability details on the droplet morphology: a semi-analytical approach
In order to analyze the influence of geometric constraints on the behavior of the capillary switch, let us consider a simplified system in which a droplet is settled on an orifice of radius R_0 in a flat smooth plate of negligibly small thickness. Additionally, let us assume that the droplet is small enough for the system to be in the pure capillary regime (Bond number Bo ≪ 1 [13]). In such a case, both droplet surfaces form spherical caps of mean curvature radii equal to R_1 and R_2, respectively. If the gas pressure is the same on both sides of the plate, both cap radii take the same value (R_1 = R_2) as a result of the Laplace law [14]. However, the symmetrical morphology of two identical spherical caps (see fig. 1(a)) is metastable and tends, with equal probability, to one of the two equilibrium states of smaller surface/interface energy (fig. 1(b)). Thus, the system's evolution shows bifurcation. The initial symmetrical state as well as the resulting equilibrium states depend on the wettability of the plate surfaces and can be governed by the pinning or contact angles, as shown in fig. 1. Let us notice that the droplet morphology of types 1A and 1B is determined by the pinning of the droplet surface to the edge of the orifice. As a consequence, the angles φ between the tangent to the liquid surface and the solid surface can vary in the way fulfilling inequality (1). The corresponding angles between the surfaces in types 2A and 2B can either take a value limited by inequality (1) or are strictly determined by the contact angle θ. Changes in the pressure difference on both sides of the plate affect the curvature radii of both spherical caps according to the Laplace law (eqs. (2) and (3)),

p − p_1 = 2γ/R_1,    p − p_2 = 2γ/R_2,

where p_1, p_2 and p are the pressures on both sides of the plate and the internal pressure in the droplet, respectively. Hence, the difference in pressures defines the difference in the curvature of both spherical caps (eq. (4)),

Δp = 2γ (1/R_1 − 1/R_2).

For the upper and lower pinned caps of the type 1A or 1B constraints, the curvature radius can be related to the orifice radius, R_i = R_0/sin φ_i (eq. (5)), where A is the aspect ratio equal to the ratio of the orifice radius R_0 to the droplet radius R, so that R_0 = AR (eq. (6)). In the case of the type 2A constraint, eq. (5) is still valid, but, in eq. (6), the orifice radius should be replaced by the radius of the droplet bottom at the constant contact angle θ (eq. (7)), where R_θ is the radius of the triple-line circle. By combining eqs. (4)-(7), the pinning angle φ_1 can be related to Δp by eq. (8) for type 1 (when Δp changes, R_0 = AR remains constant), and by the analogous relation for type 2 (when Δp changes, θ remains constant). The sum of the volumes of both caps is equal to the volume of the whole droplet V, and the volume of a single spherical cap reads

V_cap = (π R_x^3/3) F(α),    F(α) = (2 − 3 cos α + cos^3 α)/sin^3 α,

where R_x is the radius of the cap basis, equal to AR (pinned cap) or R_θ (wetting cap), and α is the pinning or contact angle. The total volume of the droplet is given by eq. (13) in the case of the type 1 constraint and by eq. (14) for the type 2 constraint. Analysis of eq. (8) indicates that the expression governing the variability of the pinning angle is ΔpA/γ; hence one can expect that A and 1/γ will linearly scale the effect of Δp. Let us also notice that, in the wide range of applied Δp, one of the components of the total volume of the droplet, V_1 or V_2, can take negative values when the surface bends in the direction opposite to that shown in fig. 1.
A comparison of eqs. (13) and (14) shows that they are almost identical except for the difference in the argument of the F function associated with one of the caps, which raises the suspicion that the behavior of systems of types 1 and 2 can differ. Equations (13) and (14) can be treated as φ_1 = f(Δp) functions. However, the functions are implicit and can be solved only by numerical methods. This solution also leads to the determination of all geometric parameters of the droplet morphology, based on the equation set (2)-(9). However, because of the high number of roots of eqs. (13) and (14), it is quite difficult to calculate the parameters of interest, and in several cases the obtained results were doubtful. Moreover, the solution is limited to non-gravity systems. In such a case, the approximate calculations made for systems under gravity with the finite element method seem to give less precise but more reliable results.

The capillary switch: the finite element method simulation
This part of the study was performed by means of simulations with the finite element method using the Surface Evolver (SE) program [15,16]. The surface modeled by SE is represented by a mesh network of triangles whose vertices make the network nodes. The system studied was a droplet settled on a plate of negligibly small thickness (see fig. 2). The study was performed for cap edges forced to be pinned to the orifice edge (types 1A and 1B) and for cap edges freely moving on the surface (types 2A and 2B). In the latter case, the pinning effect can be caused spontaneously by the system geometry. Each simulation started with the two equilibrated caps placed symmetrically on both sides of the plate around the orifice at Δp = 0. Then the initial value of Δp was applied and the droplet morphology was immediately switched to the asymmetrical state of minimum energy, further modified by subsequent pressure changes. In the course of a simulation, Δp was changed step by step by the value of 20 Pa (or 5 Pa when gravity was switched on). Then the free energy of the modelled system was minimized in defined steps, including the procedures of mesh refinement, vertex averaging, polishing up the triangulation and energy minimization by means of the conjugate gradient descent method. The range of Δp studied depended on the system parameters (up to −10^3 to 10^3 Pa). The other parameters, the droplet volume V = 1 × 10^-9 m^3, the liquid surface tension γ = 0.072 J/m^2, and the liquid density ρ = 1 × 10^3 kg/m^3 (H2O), were kept constant. Separate simulations were performed for different values of the droplet volume at the acceleration due to gravity of 9.81 m/s^2, the contact angle θ = 120° and the aspect ratio A = 0.3. The values of the pinning/contact angle α were calculated, from eq. (15), on the basis of the inclination, with respect to the plane of the solid surface, of the mesh triangles representing the free liquid surface and being in contact with the triple line (marked in red in fig. 2), where z and S_T denote the vertical coordinate and the length of the normal vector of each triangle used for the calculation of the surface slope near the triple line (the length S_T of the SE representation of the normal vector is not unitary but equals the area of the triangle).
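A toy version of this angle evaluation (not the actual SE script; the vertex coordinates are invented and the exact form of eq. (15) is not reproduced here) illustrates the geometric idea for a single surface triangle adjacent to the triple line, with the plate lying in the z = 0 plane:

```python
import numpy as np

# Estimate the inclination of one surface triangle adjacent to the
# triple line, for a plate lying in the z = 0 plane. The facet normal
# from the cross product of edge vectors is not unitary (its length is
# proportional to the triangle area); only its direction is used here.
v0 = np.array([1.00, 0.00, 0.000])   # vertex on the triple line
v1 = np.array([0.98, 0.12, 0.015])   # vertices slightly above the plate
v2 = np.array([0.95, 0.05, 0.020])

n_vec = np.cross(v1 - v0, v2 - v0)
n_hat = n_vec / np.linalg.norm(n_vec)

# Angle between the facet and the plate: cos(tilt) = |n_z| for the unit normal.
tilt = np.degrees(np.arccos(abs(n_hat[2])))
print(f"facet inclination to the plate: {tilt:.1f} deg")
# Depending on whether the liquid surface leans outward or overhangs,
# the pinning/contact angle alpha equals this inclination or its
# supplement (180 deg - tilt); the facet orientation tracked by SE
# resolves which case applies.
```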
The exemplary evolution of the contact/pinning angle (denoted as α) on both sides of the plate in the course of one and a half cycles of changes in Δp at zero gravity is shown in fig. 3. Both curves are symmetrical: when, on one side of the plate, only a variation in the size of the cap is observed (at constant contact angle), on the other side changes in the shape of the cap occur with a varying pinning angle. The unsymmetrical line in the figure refers to the starting steps of the simulation. The slightly inflated values of α presented in fig. 3 and the next drawings are a result of the assumed method of calculation and are related to the fact that the triangles used for the calculation of the liquid surface slope have a finite size. The width of the loops is limited by two critical pressure differences switching the droplet morphology from one to the other, Δp_crit,I and Δp_crit,II. The influence of the aspect ratio on the α variation for the droplet located on a plate with the contact angle θ = 120°, for both types of pinning/wetting phenomena, is shown in fig. 4 (Fig. 4: evolution of the pinning/contact angle α caused by changes in the pressure difference, (a) in the system where the droplet is pinned to the edge of the orifice (type 1) and (b) where the droplet partly wets the surface with the contact angle 120° (type 2); the lines between both parts of the figure connect the points of morphology switching). All the hysteresis curves presented are similar to that shown in fig. 3 and can be interpreted in the same way; thus, for simplicity, only the dependencies of α_1 (further referred to as α) on Δp on a single side of the plate are shown. As can be seen, the curves obtained for the type 1 constraint slightly differ from those for type 2. The difference concerns the α angle at the side of the plate at which the liquid volume is currently larger: for half of the cycle of pressure changes, the α angle for the type 1 constraint tries to reach a very high value of about α = 160°, whereas for the type 2 constraint, α = θ. For the other half of the cycle, the α angle is not constant but limited by the pinning requirements (inequality (1)) and takes smaller values. At the same time, α_2 = 160° (type 1) or α_2 = θ (type 2) on the other side of the plate. In the first case (α ≈ 160°), in a real experiment the spreading of the droplet over the plate with θ = 120° should be observed (type 2) instead of an increase in α to a very high value (160°). However, for selected geometries of the solid surface for which inequality (1) can still be fulfilled (on the edge of a thin-walled tube instead of a flat plate), one can expect the α = f(Δp) behavior shown in fig. 4. The bistable switches based on the different constraints represented by types 1 and 2 work differently: the maximum volume of the liquid droplet switched to the equilibrium state (eq. (11)) is different in both cases, so the detection method should be precisely adjusted to the type of constraint. More important, however, is the fact that the threshold pressures at which the droplet morphology switches from one state to the other (Δp_crit,I and Δp_crit,II) also differ between the two types of constraints, as shown by the lines connecting the points of morphology switching in figs. 4(a) and (b). The range of Δp not causing a jump in droplet morphology (the plateau region) is wider for the type 1 constraint. Figure 5 shows the influence of the aspect ratio on the α variation for the droplet located on a plate with the contact angle θ = 60°. The number of available results is significantly smaller (especially for the type 2 constraint) as a result of the relatively poor convergence of the calculations.
However, both curve families (for type 1 and 2 constraints) demonstrate analogous properties and indicate the same problems with adjusting and calibrating bistable switches of both types.

The capillary switch under gravity
A large drop in the presence of gravity, for relatively large Bond numbers (Bo > 1), with the Bond number defined by the equation

Bo = ρ g V^(2/3)/γ,    (16)

is unstable, detaches itself from the plate and falls down (e.g., for V = 30 mm^3, Bo = 1.316). The behavior of capillary switches based on smaller droplets depends on the droplet volume. Figure 6 shows the α angle hysteresis loops obtained for two droplets of volume V = 5 and 20 mm^3 (Bo = 0.398 and 1.004, respectively). The dependencies were obtained at the same aspect ratio A = 0.3. As shown, at V = 5 mm^3 gravity practically does not influence the shape of the type 1 loop and lowers the higher critical switching pressure (Δp_crit,II) of type 2. For a volume very close to Bo = 1, both loops are apparently affected: a regular decrease in both critical switching pressures (Δp_crit,I and Δp_crit,II) is observed for type 1, whereas the type 2 dependence shows unstable behavior of the droplet and its detachment from the plate.

Conclusions
The semi-analytical method as well as SE simulations were used to study the behavior of a droplet settled on an orifice in a flat plate under different constraints imposed on the droplet edge: the immobile pinned edge (type 1) or the freely moving triple line at constant contact angle (type 2). The methods indicate that droplets of both types behave similarly: they abruptly switch their morphologies at certain differences in pressure Δp on both sides of the plate. However, the angle at the triple line α differs between the two types of systems, and the critical switching pressures for all studied aspect ratios are different. Consequently, one can state that the operation of the switch is highly dependent on the precision of the orifice fabrication, especially on the sharpness of its edge. Rounded or damaged edges can facilitate detachment of the cap edges and wetting of the surface (equivalent to a transition from type 1 to type 2). Capillary switches can work even at high wettability of the surface, for which the droplet constraint of type 2 is applicable. Gravitation influences both critical switching pressures Δp_crit and shifts the hysteresis loop along the pressure difference axis.
Organizational Control via ERP System in Tunisia: The State of the Art and Progress

Abstract
This research provides a framework for the analysis; it allows identifying the organizational impact of ERP adoption on the management controllers' work. As a first step, the purpose is to recognize the factors that affect the adoption decision, in order to study, in a second step, the impact of this decision on the skills development of management controllers. The conceptual framework used for this research is inspired by work in the field of the contingency theory of Weill and Olson (1989) and the theory of innovation diffusion of Rogers (1995). The empirical validation is based on a survey carried out on about 100 Tunisian firms that have adopted different ERP models since 2002. The main findings of the empirical study express the relevance of the conceptual model as a key global instrument for the approach; the results confirmed the managerial impact on the adoption decision, a decision that highlights the management controller's skills in reporting and information systems through the use of new methods and indicators of participation in the organization's management.

Introduction
The company undergoes several changes in order to follow the evolution of its interactions; it must be in constant vigil with its macro and micro environment. The use of information and communication technologies is not new for Tunisian companies but is still an increasing priority to be prepared for new needs. The company has initially looked to develop its Information System (IS) within its organizational structure, and then it has adopted various technologies that allow it to thrive and shine in its external environment. The adoption of information and communication technologies in general, and the Enterprise Resource Planning (ERP) system in particular, facilitates organizational change. The transformation of the organization's management procedures leads us to reflect on the organizational change caused by the introduction of the ERP system. Based on these elements of the aforementioned context, this work was carried out to determine the adequacy between organizational control and the ERP system. To what extent does the adoption of ERP improve the organizational control of Tunisian companies?

Organizational Impact of ERP System Adoption
The literature includes several works [e.g., De Rongé (2000), Anthony (1988)] that study the organizational impact of the adoption of information technologies in general, and the ERP system in particular, on the function of management controllers. Through this review of the literature, hypotheses have been formulated, and their interaction allowed conceiving a theoretical proposition (a model). The hypotheses and theoretical proposition will be tested for empirical validation. These models and theories form the theoretical basis of the work carried out and from which this work draws inspiration.
To adapt to the new requirements of the technological environment, any organization seeks to modernize its information system by integrating a multitude of systems, including the ERP system. The implementation of this system entails several changes, namely a change in work procedures, a change in communication habits and in the organization of processes, and a change in the organizational impact on the company. Reix (2004) stated that "information systems are organization-related". The adoption of this system causes repercussions at the organizational level. Davenport (1998) pointed out that "an ERP system, by its intrinsic nature, imposes its own logic on the strategy, organization and culture of the enterprise". Assuming this definition, it follows that the introduction of an ERP system will cause a change by imposing an organizational logic that adapts to the needs of the system. Two major elements will be affected, namely the information systems and the organizational processes of the company. In this sense, Davenport (1998) stated that "the consideration of organizational aspects is one of the key success factors of an ERP system". Davenport (1998) and Granlund and Malmi (2002) defined the ERP system as "a set of integrated resource control tools that circulate throughout the organization". The ERP system has an effect on the organization; it affects the decision process and therefore causes a change in the control processes of the company. Indeed, as a result of opening up to the outside world, companies are increasingly looking for solutions that allow them to respond to the needs of the market by adopting powerful organizational control tools that help them better manage the relationships with their different partners at reasonable costs (Baglin, G., Lamouri, S., Thomas, A., 2015). The ERP system has been successful in companies; Davenport (1998) defined "an ERP system as a system that allows a better flow of information, centralization and control of this information, allowing productivity gains to be achieved and competitive advantages to be offered." It is also in this context that Rowe (1999) stated that "the implementation of the ERP system is accredited with certain essential qualities that improve the management control function".

Theoretical Foundations
The evaluation of the determinants of ERP system adoption and its impact on organizational control has been addressed in the literature by authors describing the determinants of the adoption of an innovation and of the ERP system; other writings have demonstrated the impact of adoption on organizational control. Among these works, this research has examined Rogers' theory of innovation diffusion (1983 and 1995), which presented the determinants of an innovation's adoption, the contingency theory of Kast and Rosenzweig (1973), and Davis's technology acceptance model (1989). These models and theories have been applied to the domain of information systems by several researchers; they constitute the foundations of the theoretical base of the works carried out and from which this work draws inspiration.

Models of Innovation Adoption
The analysis seeks to explore the determinants affecting adoption in information technology in order to identify the determinants of business adoption of the ERP system.
Adoption of innovation
"The decision-making process of adopting an innovation is the process by which an individual, or any other unit of analysis, moves from an initial knowledge of innovation to the formation of an attitude toward that innovation; then the decision to adopt or reject, the introduction of the new idea and, finally, the confirmation of that decision" (Rogers, 1995). The adoption process based on the individual dimension of Rogers' work (1995) has five stages. Rogers (1995) defined adoption as "a decision-making process in which an individual or organization agrees to use an innovation to achieve a goal." In her research, Pelletier (2005) illustrates the company's ability to innovate from an organizational point of view according to Rogers (1995). In the decision-making process at the organizational level, Rogers (1995) recognizes two major phases consisting of five stages. Inspired by the decision-making process at the organizational level, Rogers (1995) proposes three categories of variables favoring the organizational capacity to innovate. Three characteristics were identified by Rogers (1995): the first presents the individual characteristics of the leader; the second illustrates the characteristics related to the internal structure of the company; and the third reflects the external characteristics of the company. This illustration allows us to specify that the process of adopting an innovation or technology is driven by external pressures that subsequently materialize at the organizational level.

A related study examined the relationship between ERP implementation and supply chain management (SCM) competences. The results show a close interrelationship between the advantages of implementing ERP systems and the skills required in SCM. The results show that the beneficial effects of ERP on SCM do not necessarily lead to a better overall SCM competence. The study confirms that operational benefits, business process benefits and strategic benefits can enhance the company's skills in terms of business process integration, customer relationships and the improvement of system control and planning. Although the model has been verified and validated, some limitations seem to exist. Indeed, the authors recommend checking the model on other samples, including companies from other countries such as Korea, China and Singapore.

The study of Kallunki, Laitinen and Silvola
The study by Kallunki, Laitinen and Silvola (2011) examines the effects of the adoption of the ERP system on the financial and non-financial performance of the company. The authors study the role of the formal system as well as the informal system as a mediating mechanism between the adoption of the ERP system and future performance. "With the adoption of the ERP system companies can achieve progress by improving productivity." The results of the study demonstrate that formal management control systems can act positively as an intermediate mediating variable between the adoption of the ERP system and the non-financial performance of the enterprise. "However, the informal system of management control does not have the effects of mediation" (Kallunki, Laitinen and Silvola, 2011). The study also demonstrates a close relationship between financial performance and non-financial performance. The authors state that these results are very important for "the improvement of future performance with the adoption of the ERP system, given the shortcomings of the previous literature, which is very limited on this subject" (Kallunki, Laitinen and Silvola, 2011).
The results also show that the impact of adopting the ERP system will "affect the future performance of the company over the long term, and it is through the formal and informal management control systems that companies can achieve their goals" (Kallunki, Laitinen and Silvola, 2011). Some limitations can be noted in the research of Kallunki, Laitinen and Silvola (2011). Indeed, several variables were introduced in the model; it is within this framework that they recommend limiting the number of variables in a later study. A second limitation is that it is imperative to actually check the modules adopted by the companies surveyed before administering the questionnaire. Indeed, some surveyed companies may have introduced only a few modules, which can skew the result of the impact of ERP system adoption on the performance of the company. Despite the limitations of this research, it provides a conceptual framework that suggests using the formal and informal management control system for financial and non-financial performance. The results support the findings of Chapman and Kihn (2009) and Kallunki, Laitinen and Silvola (2011), which show that better management satisfaction can be achieved with the adoption of the ERP system, by allowing better financial and non-financial performance, even if performance is not directly improved. The study also confirms the results of Nicolaou (2004) illustrating the improvement of the company's performance following the adoption of the ERP system by introducing the parameters of the formal and informal management control system.

Specific conceptual framework
The purpose of this article is to formulate a framework illustrating the impact of the adoption of the ERP system on organizational control.

Variables and hypotheses of research
Once the variables are justified and the relationships are identified, it is necessary to summarize the hypotheses used in this research. The following table lists the different hypotheses that will be analyzed.

Table 1: Research hypotheses
H1: The relative advantage positively impacts the adoption of the ERP system.
H2: Operational and technological compatibility positively impacts the adoption of the ERP system.
H3: The complexity of the system positively impacts the adoption of the ERP system.
H4: The company (business) environment positively impacts the adoption of the ERP system.
H5: Innovation capacity positively impacts the adoption of the ERP system.
H6: Management commitment has a positive influence on the adoption of the ERP system.
H7: User satisfaction has a positive impact on the adoption of the ERP system.
H8: Skills in reporting and the use of new methods of information processing are essential for management controllers using ERP.
H9: Skills in the implementation of indicators and participation in piloting are essential for management controllers using ERP.
H10: Skills in budget management, transparency and flexibility of information are paramount and essential for management controllers who use ERP.
H11: The experience of supervisors is a particularly important skill.
H12: Information system skills in management procedures are particularly important conditions for management controllers who use ERP.
H13: The management controller who uses ERP has general missions not related to the traditional management controllers' activities.
Conceptual model
The theoretical developments presented in the preceding sections contribute to presenting the general context of the effect of ERP system adoption on the management control function and to circumscribing a specific framework to study all the aspects relating to the research problem, in particular the organizational changes observed in the management control function following the adoption of the ERP system. The proposed research model aims to reduce and optimize the theoretical models presented in the literature by a certain number of researchers and to present the variables appropriate to the chosen problem.

Methodology and field of research
To address the selected research problem, the parent population includes all the Tunisian companies that have adopted the ERP system. The objective of the research is to establish a concordance with the assumptions used in the proposed conceptual model. The choice of the target population is made in two phases. The first phase is to identify the population of Tunisian companies that have already adopted the ERP system. To explore these companies, our key sources of information are the main ERP system vendors present in Tunisia, namely the companies "DISCOVERY" and "TIMSOFT", as well as other companies related to our own network of contacts, mainly composed of the Poulina group, the company "GRANUPHOS", the Tunisian telecommunication operator "OOREDOO", "HENKEL" and the "TUNISAIR Company". The vendors who contributed to the exploration of the selected sample of companies are service companies and computer engineering firms. Indeed, the company "DISCOVERY" was created in 1993 by a shareholding of companies, and its main task was to contribute to the implementation of integrated IT management solutions for medium and large companies. The second vendor chosen is the company "TIMSOFT", also a service company. Its wide functional coverage and the ease of implementation of the proposed solutions are among the assets of this company. The authors were able to make contacts with this company through its office located in the Tunis metropolitan area, as well as through seminars that it organized. A second phase consisted in selecting a database grouping the top 100 companies in Tunisia and adapting it in order to keep the companies that have adopted the ERP system, especially the financial and management control module. This choice allowed compiling a list of seventy companies as a sample for the survey. The results of the cross-sectional analysis of the definition and representativeness criteria of the sample show that the majority of respondents are management controllers and ERP project managers. Most of the questioned companies belong to a group; they are large companies with a turnover exceeding 6 million dinars. It should also be noted that 46.7% of the companies surveyed have more than 5 modules and 31% of the companies acquired the ERP between 2000 and 2001. Before presenting the tools and methods for analyzing the data collected, it is now necessary to describe the technique adopted and the scales of measurement of the research variables.

Research Results
Several variables are identified as having an organizational impact on the management control function. The scale chosen for measuring items in this research is the Likert scale. A questionnaire was presented to a panel of experts operating in the field for final approval before conducting the survey.
The panel of judges consists of two academics, a specialist from the company "TIMSOFT", a specialist from the company "DISCOVERY", and a general manager of a Tunisian company that has already adopted the ERP system. Exploratory and Confirmatory Analysis The review of the literature allowed us to identify several scales reflecting the different levels of the research. The scales used were the subject of an exploratory factor analysis. In this part of the analysis, principal component analysis (PCA) is used. Given the large number of items selected, a method is needed to purify the items and retain only those most significant for the research; data were collected through a questionnaire survey. Validating the measurement scales requires studying their dimensionality, their reliability, and their validity. The study of dimensionality is carried out by means of exploratory factor analysis. "The reliability or internal consistency of our measurement scales is examined through the Cronbach alpha coefficient" (Gavard et al., 2008, p. 218; Churchill, 1979). The results of the study of the dimensionality, reliability, and convergent validity of each measurement scale are presented through an exploratory factor analysis of each scale together with an examination of its reliability. Such an analysis requires verifying a number of conditions, including the Kaiser-Meyer-Olkin (KMO) index, defined as "a generalized measure of the partial correlation between the variables of the study" (Stafford and Bodson, 2006, p. 80), Bartlett's test of sphericity, and the quality of item representation. After collecting the questionnaires during the exploratory phase, each measuring instrument was processed with the SPSS 21 data-processing software. The approach followed was to first examine the data, then carry out a principal component analysis, and finally validate the results by checking the Cronbach's alpha value of each variable revealed by the PCA. For data analysis, the structural equations approach was used through the AMOS software, in order to test the conceptual model chosen for the research. A further step is the confirmation of the model using confirmatory factor analysis (CFA). Following the exploratory factor analysis and the reliability analysis, we try to show "to what extent alternative models explain the relationships between items in a scale". The empirical study of the conceptual model consists of direct relationships between independent and dependent variables. Adopting structural equation modeling made it possible to test the relationships between the structural model variables derived from the conceptualization. The authors first purified the measurement scales, then performed exploratory and confirmatory factor analyses, and finally applied the structural equation method to test the hypotheses.
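To make the scale-purification procedure concrete, the sketch below illustrates the statistics named above on synthetic data. It is a minimal illustration, not the authors' actual SPSS/AMOS workflow: the sample size, number of items, and generated responses are all hypothetical, and only Bartlett's test, the PCA eigenvalue rule, and Cronbach's alpha are shown (the KMO index would typically be computed with a dedicated package).

```python
# A minimal sketch, assuming a hypothetical single Likert scale of 6 items
# answered by 70 companies; it reproduces the textbook formulas for
# Bartlett's test of sphericity, the Kaiser eigenvalue-greater-than-one
# retention rule after PCA, and Cronbach's alpha.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical five-point Likert responses: 70 respondents x 6 items.
items = rng.integers(1, 6, size=(70, 6)).astype(float)

# Bartlett's test of sphericity: H0 = the correlation matrix is the
# identity (items uncorrelated), in which case factoring is pointless.
n, p = items.shape
R = np.corrcoef(items, rowvar=False)
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
dof = p * (p - 1) / 2
print(f"Bartlett chi2 = {chi2:.2f}, p = {stats.chi2.sf(chi2, dof):.4f}")

# PCA on standardized items; retain components with eigenvalue > 1
# (Kaiser criterion), the usual rule when purifying measurement scales.
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
eigenvalues = PCA().fit(z).explained_variance_
print("Components with eigenvalue > 1:", int((eigenvalues > 1).sum()))

# Cronbach's alpha for internal consistency:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.3f}")  # 0.7 is the usual threshold
```

In a real study these statistics would be computed for each scale, weakly loading items dropped, and alpha re-checked before moving on to CFA and structural equation modeling in AMOS.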
Conclusion Several companies, notably Tunisian ones, have had to restructure their organization by introducing an upgrading plan. These companies have invested in the reengineering of their information processes and have adopted state-of-the-art technologies, including the ERP system, which meets the new requirements. The purpose of this article was to study the organizational impact of ERP adoption on the evolution of the management controller's function in Tunisian companies. The analysis first dealt with the determinants of adoption of an ERP system and, secondly, with the organizational effects of this adoption (how the introduction of the ERP system changes the organization). This research is also devoted to analyzing the interactivity of the ERP system and organizational control: how will this system contribute to strengthening the control system and improving the management control function in the organization? It also seeks to verify whether the logic of management control changes following the adoption of the ERP system, while underlining the possible new role to be attributed to management controllers in the organization. The analysis of the sample's definition and representativeness criteria shows that the majority of respondents are management controllers and ERP project managers. Most of the surveyed companies belong to a group and are for the most part large companies with a turnover exceeding 6 million dinars. The results highlight the importance of the managerial prerequisites of management commitment and user satisfaction, and the importance of technological skills in terms of new reporting methods and the generation of dashboards for management controllers. The implementation of new indicators for the participation of management controllers in the steering of operational and strategic decisions has improved significantly with the appropriate use of information systems, especially ERP. At this stage of the research, it is confirmed that a large proportion of Tunisian companies practice traditional management control, adopting operational rather than strategic steering and reporting techniques. This can be explained by managers' strategic choices based on very specific visions of the general context. Despite a strong willingness of the managers interviewed, especially management controllers, to improve the control system in their respective services, conflicts were detected, mainly due to an inability to keep pace with a change that goes beyond technological change to an organizational change involving the entire organization. The results reveal the contribution of the management controllers of the surveyed companies to operational objectives through active and real participation in piloting decisions, operational planning programs, and some strategic operations. Changing the controllers' roles will lead to a shift in responsibilities and will require a greater willingness to integrate them fully into the organizational and technological change process. The implementation of the ERP is part of a transversal logic, creating a new vision that tends to break down organizational boundaries. New relationships and practices can be developed between different staff members. This article helps to highlight the importance of management's managerial mobilization, which made it possible to follow the organizational change induced by the ERP system.
Despite the resistance to change of some users, ERP managers and project managers have managed to achieve their goal of integrating ERP and strengthening the control system in the organization; control has become easier with ERP and the opportunities offered by this system. From an operational point of view, the research helped to emphasize empirically the main job of the management controller, through the methods adopted for processing information, the personal and professional skills required of management controllers, and their commitment and involvement in the use of information technologies, especially ERP. The literature links the strengthening of control through the adoption of ERP especially to large companies, and our study confirms this result: 48.9% of the surveyed companies are large. This confirms the importance of contingency factors, which vary from one company to another and create differences between controllers across companies. From a strategic point of view, our research has important implications for managers and management controllers by specifying the missions of participation in piloting and reporting. The results offer the latter the possibility of positioning the different practices required by the profession within the field of possibilities offered by ERP, and possibly of strengthening their missions for better control in Tunisian companies. This reflection leads the authors to confirm the results of the studies conducted by Su and Yang (2010), which state that the strategic advantages of ERP make it possible to improve companies' control and planning systems. This study proposes to consider the management controller as an adviser in charge of consulting missions, with all the skills necessary for the success of his mission, and not as a simple technician (Fornerino, Deglaine, and Godener, 2003). Thus, his role is no longer limited to the reporting of information; rather, he represents an agent of change (Scapens and Jazayeri, 2003). The study by Boitier (2004) showed that "the adoption of ERP is part of a logic of interactive control. This interactivity brings together the development of interactive control in the organization (Fahy and Lynch, 1999; Scapens and Jazayeri, 2003; Boitier, 2004) and the orientation and support of strategic decisions by establishing a control system closer to the operational directors" (Boitier, 2004; Granlund and Malmi, 2002). The results of this study prove the existence of a positive relationship between the adoption of the ERP and its contribution to strengthening the control system in the surveyed Tunisian companies. Management commitment is a strategic managerial decision involving ERP users for enhanced interactive control across the organization and between its different departments. Such a system requires a deep adaptation of the organizational culture of the different actors, with a strong internal will for change and evolution. Research Perspectives This work has finally allowed the authors to obtain significant results that can be exploited and verified in the field. Hence, other perspectives can be offered for future research, and limitations of this study can be identified. At the theoretical level, a study of the financial impact of the indicators on the performance of management controllers could have been undertaken; indeed, this research focused mainly on organizational impacts and neglected the financial impact of indicators.
Since setting up an ERP represents a considerable financial burden for a company, it would also be wise, in another research framework, to study funding schemes or even a reorganization of investments for ERP adoption. A second limitation is the absence of the retroactive effect of ERP adoption on the organizational structures of management controllers. In general, the adoption of ERP has an impact on the organization as a whole in terms of organizational mode, and in particular on the structure and strategy of the management control department; this study, however, does not capture the retroactive effect of ERP adoption on these structures, which constitutes the second limit of this research. Regarding the adoption of the ERP system, its utility remains the same; however, each system uses its own approach, logic, and architecture at the design level in the case of a statutory grouping of multiple entities. How, then, can such entities successfully migrate to a common ERP, or, more simply, share one? With such a system, it would be interesting to study the managerial behavior of the research and development division with regard to the recommendations of the management controllers with a view to investing in innovation. Highlighting these limits leads us to propose new and relevant research directions. In order to give more consistency to our results, this research could be expanded by providing reliable management control through the creation of a universal virtual controller based on a set of parameters inspired by a tested conceptual model. This computerized tool would have the role of checking the collected statistics against the actual state of affairs and measuring the efficiency of one ERP relative to another. The adoption of an ERP is a long process that must be studied in a framework of evolution and growth. Given each company's particular way of managing itself internally, a specific module may need to be addressed. A longitudinal study of the ERP system, in particular QPR (Quality Processes Results), could be considered, adding more appropriate modules to implement the strategy adopted by managers and to improve process performance. It will also be interesting in the future to focus on another aspect of this subject, namely the managerial part, studying the evolution of the professions of managers and management controllers in interaction with the ERP system.
RELATIONAL EGALITARIANISM AND THE GROUNDS OF ENTITLEMENTS TO HEALTHCARE BRIAN BERKEY ASSISTANT PROFESSOR, DEPARTMENT OF LEGAL STUDIES AND BUSINESS ETHICS, UNIVERSITY OF PENNSYLVANIA ABSTRACT: In recent years, a number of philosophers have argued that much theorizing about the value of equality, and about justice more generally, has focused unduly on distributive issues and neglected the importance of egalitarian social relationships. As a result, relational egalitarian views, according to which the value of egalitarian social relations provides the grounds of the commitment that we ought to have to equality, have gained prominence as alternatives to more fundamentally distributive accounts of the basis of egalitarianism, and of justice-based entitlements. In this paper, I will suggest that reflecting on the kind of explanation of a certain class of our justice-based entitlements that relational egalitarian considerations can offer raises doubts about the project, endorsed by at least some relational egalitarians, of attempting to ground all entitlements of justice in the value of egalitarian social relationships. I will use the entitlement to healthcare provision as my central example. The central claim that I will defend is that even if relational egalitarian accounts can avoid implausible implications regarding the extension of justice-based entitlements to health care, it is more difficult to see how they can avoid what seem to me to be implausible explanations of why individuals have the justice-based entitlements that they do. To the extent that I am correct that relational egalitarian views are committed to offering implausible explanations of the grounds of justice-based entitlements to healthcare, this seems to me to provide at least some support for a more fundamentally distributive approach to thinking about justice in healthcare provision. INTRODUCTION In recent years, a number of philosophers have argued that much theorizing about the value of equality, and about justice more generally, has focused unduly on distributive issues and neglected the importance of egalitarian social relationships. 1 The distributive theorists that these "relational egalitarians" criticize typically begin from an account of the currency of justice (for example, welfare, resources, primary social goods, or capabilities), and proceed to articulate principles to govern the distribution of that currency (for example, equal distribution, priority for the worse off, equal opportunity, or sufficiency). 2 Egalitarian distributive theorists typically hold that equal distribution of the currency of justice is a baseline that can be deviated from only given a sufficient justification. 3 For my purposes in this paper, the most important feature of distributive views is that they explain individuals' entitlements to particular resources and socially provided services, at least in part, in terms of more general entitlements to shares in the currency of justice. And since entitlements to shares in whatever currency a theorist favours are, on distributive views, themselves grounded in whatever more general interests of individuals are thought to support that currency over alternatives, distributive views ultimately ground at least some entitlements to resources and socially provided services in the justice-relevant interests that those resources or services might promote.
Relational egalitarians claim that distributive theorists have failed to appreciate the role that an ideal of egalitarian social relationships should play in an appropriate conception of the value of equality. Though some who embrace this criticism of prominent distributive approaches do not view relational egalitarianism as a competitor to distributive views, 4 many of the most prominent relational egalitarians do see their approach as an alternative to such views, rather than as a complement to them. 5 My focus in this paper is on relational egalitarian views conceived of as competitors to distributive approaches to equality and justice; none of my arguments applies against the view that distributive approaches should be complemented by a concern for relational equality. 6 For ease of presentation, I will, in the remainder of the paper, use the label "relational egalitarianism" to refer only to views that constitute alternatives to distributive approaches, and "relational egalitarians" to refer only to proponents of such views. Relational egalitarian views that constitute alternatives to distributive approaches hold that the fundamental value that grounds entitlements of justice is egalitarian social relationships, rather than the kinds of interests that might be taken to support one view about the currency of justice over others. On these relational egalitarian views, entitlements of justice, including distributive entitlements, should be understood as grounded, in some way or other, in the value of egalitarian social relations. For relational egalitarians, then, it is ultimately the value of egalitarian social relationships that explains why individuals have whatever particular entitlements of justice that they do, including entitlements to a share of society's resources, to opportunities, and to the provision of services such as healthcare. My aim in this paper is to suggest that reflecting on the kind of explanation that relational egalitarians are committed to offering of a certain class of our justice-based entitlements raises doubts about the relational egalitarian project of attempting to ground all entitlements of justice in the value of egalitarian social relationships, rather than allowing that at least some such entitlements might be grounded in the kinds of values underlying distributive approaches. I will use the entitlement to healthcare provision as my central example, since I think that this case highlights the challenge facing relational egalitarians in a particularly striking way. 7 The central claim that I will defend is that even if relational egalitarian views can avoid implausible implications regarding the extension of justice-based entitlements to healthcare, it is more difficult to see how they can avoid what seem to me to be implausible explanations of why individuals have the justice-based entitlements that they do. To put this point another way, I will argue that, even if relational egalitarians can give a plausible answer to the question "Who is entitled to what, when it comes to the social provision of healthcare?," it is less clear that they can offer an equally plausible answer to the question "Why are individuals entitled to the socially provided health care that they are?"
To the extent that I am correct that relational egalitarian views are committed to offering implausible explanations of the grounds of justice-based entitlements to healthcare, this seems to me to provide at least some support for a more fundamentally distributive approach to thinking about justice in healthcare provision, since plausible distributive approaches are consistent with quite intuitive explanations of the grounds of justice-based entitlements to healthcare. More generally, the success of my challenge to relational egalitarian explanations of justice-based entitlements to health care would suggest that relational egalitarians will struggle to provide plausible explanations for a number of other widely endorsed entitlements of justice. The force of the concerns that I will raise for relational egalitarian approaches to justice in healthcare provision, however, does not by itself generate support for any particular more fundamentally distributive theory. The success of my argument, then, will not necessarily lead us in the direction of what has, in recent years, been the main competitor to relational egalitarianism, both in discussions of health and healthcare justice, and in discussions of egalitarian justice more generally, namely luck egalitarianism. 8 Luck egalitarianism offers a distinctive type of answer to the question of why individuals are entitled to the socially provided healthcare that they are. That answer is, roughly, that such care is necessary to remedy inequalities in health that are the result of brute luck, rather than the result of option luck, or, in other words, the result of choices for which individuals can be held responsible. And although I am inclined to think that this luck egalitarian answer is at least more plausible than what relational egalitarians can offer, I do not think that it is necessarily the most plausible answer available. 9 I hope, then, that reflecting on the question about the grounds of entitlements of justice in healthcare that I will focus on in this paper can help to lead egalitarian discussions of health and healthcare justice in new directions. I will not, however, attempt to pursue any of those directions here. I will proceed in the remainder of the paper as follows. In section 1, I will describe the key features of relational egalitarianism, drawing primarily on Elizabeth Anderson's development of the view. In particular, I will highlight the kind of explanations that relational egalitarians are committed to offering for justice-based entitlements to resources, opportunities, and service provision. In section 2, I will examine the explanations available to relational egalitarians for entitlements to healthcare provision, and argue that, at least in certain kinds of cases, these explanations seem unsatisfying. The difficulty of providing satisfying explanations for entitlements to healthcare provision within a relational egalitarian framework, I will suggest, provides some reason to favour a more fundamentally distributive approach to justice in health and healthcare provision. I will conclude, in section 3, by briefly highlighting the limits of the argument developed in section 2, and by suggesting how it might inform our thinking about the divide between relational and distributive approaches to justice going forward.
RELATIONAL EGALITARIANISM While some views that can be described as versions of relational egalitarianism claim only that the value of equality is best understood in relational egalitarian terms, and allow that justice may be an entirely distinct value that can at times compete with equality, my concern in this paper is with relational egalitarian approaches that aim to offer alternatives to distributive approaches to justice. 10 Relational egalitarianism, insofar as it constitutes an alternative to such distributive approaches, is both a view about how the value of equality is best understood, and a view about the basis of entitlements of justice, including distributive entitlements. Relational egalitarian views, then, constitute a type of egalitarian view about justice that can be contrasted with the type represented by the distributive views that relational egalitarians have aimed to challenge. Several prominent relational egalitarians clearly conceive of their views as offering alternatives to distributive approaches to justice, in addition to offering an account of the value of equality. Anderson, for example, explicitly contrasts the view that she develops with luck egalitarian approaches to justice. She says that, contrary to what is implied by luck egalitarianism, on her relational egalitarian view, "the proper negative aim of egalitarian justice is not to eliminate the impact of brute luck from human affairs, but to end oppression" (1999, p. 288). Elsewhere, she makes it clear that, on her view, it is relational egalitarian principles that explain when inequality in the distribution of "non-relational goods" is and is not unjust. 11 She says, for example, that while "luck egalitarians claim that inequality is unjust when it is accidental… [,] relational egalitarians claim that inequality is unjust when it disadvantages people: when it reflects, embodies, or causes inequality of authority, status, or standing" (2010, p. 1-2, italics in original). 12 Samuel Scheffler endorses a slightly weaker view than Anderson's about the connection between relational equality and the requirements of distributive justice. On his view, the content of principles of distributive justice is explained by a range of values, including, but not limited to, equality as understood in relational terms (2015, p. 42). Like Anderson, however, he insists that relational egalitarianism is "a genuine alternative to the distributive view" of egalitarian justice, as opposed to a version of such a view (2015, p. 23). He adds that "if we accept the relational view, this will affect the way we think about the content of distributive justice" (ibid). Specifically, the relational approach that Scheffler favours "asks what the broader [relational] ideal of equality implies about distributive questions" (ibid). Like Anderson, then, Scheffler believes that relational egalitarianism will play an important role at least in explaining a range of distributive entitlements, and that the explanations offered for such entitlements by distributive views should be rejected. 13 Christian Schemmel is, among self-described relational egalitarians, perhaps the most explicit about understanding relational egalitarianism as a view about justice, in addition to a view about how we should understand the value of equality. Relational egalitarianism, he says, "is a view about social justice" (2011, p. 366). He notes that "it is unclear what social justice as relational equality demands in distributive terms" (ibid, p. 365).
He aims to argue that "a relational egalitarian conception of social justice yields powerful intrinsic and instrumental reasons of justice to care about distributive inequality in socially produced goods - despite its according center stage to just social relationships and not to the distribution of goods per se" (ibid). On Schemmel's view, then, the requirements of distributive justice are explained by the requirements of just social relationships, which are, on the relational egalitarian view of justice that he endorses, the fundamental justice-relevant value. 14 It is clear, then, that at least some prominent relational egalitarians hold that the value of egalitarian social relationships provides the ground-level explanation for entitlements of justice, including distributive entitlements. This should not be surprising, since relational egalitarianism was developed by its early proponents as an alternative to distributive approaches to equality and justice, and in particular to luck egalitarianism. 15 Before I move on to consider the kinds of explanations that can be given in relational-egalitarian terms for entitlements of justice to socially provided healthcare, it is worth highlighting some further key features of relational egalitarian views. This will serve as additional background for thinking about the distributive implications of relational egalitarianism, and the kinds of explanations that can be offered within the relational egalitarian framework for distributive entitlements. According to Anderson, a central, minimal aim of relational egalitarianism is to eliminate relations of oppression, including domination, exploitation, and marginalization (1999, p. 313; see also Schemmel 2011, p. 366). Opposing these hierarchical relations, relational egalitarians "seek a social order in which persons stand in relations of equality" (Anderson 1999, p. 313; see also Anderson 2012, p. 40 and Scheffler 2015, p. 21-23). Achieving relational equality, according to Anderson, requires eliminating at least three types of hierarchy, which are "typically based on ascriptive group identities such as race, ethnicity, caste, class, gender, religion, language, citizenship status, marital status, age, and sexuality" (2012, p. 42). The first are "hierarchies of domination or command," in which some are "subject to the arbitrary, unaccountable authority of social superiors and thereby made powerless" (2012, p. 42-43). The second are "hierarchies of esteem," in which "those occupying inferior positions are stigmatized - subject to publicly authoritative stereotypes that represent them as proper objects of dishonor, contempt, disgust, fear, or hatred on the basis of their group identities" (2012, p. 43; see also Schemmel 2011, p. 380-385). And the third are "hierarchies of standing," in which the interests of those favoured are "given special weight in the deliberations of others and in the normal (habitual, unconscious, often automatic) operation of social institutions" (2012, p. 43; see also Scheffler 2015, p. 35, 37-38 and Schemmel 2012). In virtue of their concern to eliminate these forms of hierarchy, relational egalitarians are committed to democratic norms according to which everyone is entitled to participate in open discussion as part of a project of collective self-determination, and everyone's claim to be heard and treated with equal respect is to be acknowledged. Relational egalitarians, then, are committed to a requirement of political equality (Anderson 2012, p. 46-47; Scheffler 2015, p. 37).
Standing in relations of political equality requires that all citizens have the capabilities that are necessary to function as equal citizens in a democratic state (Anderson 1999, p. 316). The value of relations of political equality, then, will ground entitlements of justice to whatever is necessary for citizens to function as equals in a democratic state, such as a sufficient level of socially provided education. Anderson's view is not, however, concerned only with the way in which the various types of hierarchy described might undermine political equality. Equal political rights, along with social provision of all of the necessary conditions for individuals to exercise those rights, are, at least in principle, consistent with private relations of domination and exploitation. But Anderson takes these inegalitarian private relations to be unjust as well, and so holds that the capabilities necessary to avoid private oppression must be socially provided. More generally, she accepts a broad view of social equality, according to which individuals must be capable of relating to each other as equals not only within the political arena, but also in civil society more broadly, including in market transactions and in the range of activities that constitute the broader social life of a society. 16 There is, I think, quite a bit that is appealing about Anderson's characterization of her view and about the claim that egalitarian social relationships are a fundamental concern of justice. And the view does seem to be able to incorporate a wide range of the entitlements to resources, services, and opportunities that egalitarians of all types are typically committed to endorsing. For example, having the capability to function as an equal citizen clearly requires having access to adequate food, clothing, and shelter, as well as sufficient education. It also plausibly requires, as Anderson points out (1999, p. 317), effective access to medical care. The ideal of social equality seems clearly capable of grounding entitlements to a sufficient income, to equal opportunity in the pursuit of desirable careers, and to a wide range of familiar social and political rights. The unique feature of relational egalitarianism that is important for my purposes in this paper is not the content of the entitlements that it entails (though these will differ from the entitlements entailed by at least some alternative egalitarian views), but rather the fact that these entitlements are taken to be grounded in the more fundamental value of egalitarian social relationships. Here is how Anderson puts this point with respect to the distribution of resources: "Certain patterns in the distribution of goods may be instrumental to securing [egalitarian social] relationships, follow from them, or even be constitutive of them. But [relational] egalitarians are fundamentally concerned with the relationships within which goods are distributed" (1999, p. 313-314; see also Scheffler 2003, p. 23 and Schemmel 2011, p. 365). 17 In other words, on relational egalitarian views, any distributive entitlements of justice that individuals have must be explained by their status as a means to egalitarian social relationships, as a necessary consequence of egalitarian social relationships, or as an essential feature of egalitarian relationships themselves. More generally, entitlements of justice must be explained in terms of the value of egalitarian social relationships. 18
Egalitarian social relationships are, then, something of a master value within relational egalitarian views. Individuals' fundamental entitlement of justice is to be capable of standing in egalitarian relations with all of their fellow citizens; and they are derivatively entitled to anything that is a necessary means to, a necessary consequence of, or a constitutive element of such relations. It is clear that distributive entitlements will sometimes be necessary means to egalitarian social relationships. For example, access to adequate education is surely a necessary condition of becoming capable of functioning as an equal citizen in a democratic society. It also seems at least plausible that certain distributive entitlements might follow as a consequence of the fact that citizens in fact stand in egalitarian social relationships. For example, if a society's economic structure is designed in a way that fosters fair equality of opportunity 19 and the egalitarian social relations that can plausibly be thought to be encouraged in conditions in which individuals engage in economic activity on fair terms, it seems plausible that the distributive outcomes of voluntary transactions generate entitlements of justice. 20 It is at least somewhat less clear what it might mean for a distributive pattern or set of entitlements to be constitutive of egalitarian social relationships. One approach to developing this possibility, which will be relevant to the discussion of entitlements to healthcare provision, is to claim that social provision of certain goods or services is an essential expression, via social institutions, of citizens' equal status. 21 The central idea behind this approach is that part of what it is to stand in egalitarian relationships with one's fellow citizens is to live under shared institutions whose policies properly express the equal status of all. If it can then be argued that, in the absence of policies ensuring the provision of certain goods or services to all, the relevant institutions could not possibly be taken to properly express the equal status of all citizens, then we could conclude that those policies are a necessary condition of egalitarian social relations, not because they are a necessary means of bringing about some other state of affairs that is important from the perspective of relational equality, but instead because they constitute the only available way of expressing the equal status of all in policy. RELATIONAL EGALITARIANISM AND ENTITLEMENTS TO HEALTHCARE What do the central features of relational egalitarian views noted in the previous section imply about justice-based entitlements to healthcare? One thing that they imply is that, on a relational egalitarian view, the content of individuals' entitlements to healthcare will depend on what, in the way of healthcare, is necessary to ensure that they are capable of standing in egalitarian social relations to their fellow citizens. In addition, the explanation of why individuals are entitled to what they are, and why they are not entitled to other things, will be that the things to which they are entitled are necessary to ensure that they are capable of standing in egalitarian social relations to their fellow citizens, while the lack of other things from which they might benefit is at least consistent with the development and maintenance of egalitarian social relations. 22
One possible concern about a relational egalitarian account of entitlements to healthcare is that it will not be able to account for all of the entitlements that we intuitively think people have as a matter of justice. In other words, we might worry that relational egalitarianism has implausible implications regarding the extension of entitlements to healthcare. We might worry about this because there seem to be cases in which we think that people are entitled to socially provided healthcare, but in which it is at best unclear whether the care to which we think they are entitled can plausibly be understood as necessary to the development or maintenance of egalitarian relationships, constitutive of such relationships, or an essential expression, via health policy, of citizens' equal status. Consider the following case: Valerie suffers from condition X, which flares up occasionally. When it flares up, it makes it quite painful for Valerie to walk more than a short distance. Nonetheless, she remains capable of getting anywhere that she wants to go, and the condition does not prevent her from performing any essential tasks at her job. No one treats her any differently as a result of her condition, and having it in no way undermines the bases of her self-respect. Still, her life would be significantly better if she were able to avoid the pain that the condition causes. In order to see why relational egalitarianism might face a problem regarding cases like Valerie's, it will be helpful to consider, first, what we should say if it turns out that her condition is entirely untreatable. Would we think that she simply could not stand to her fellow citizens in an egalitarian relationship of the kind that Anderson and other relational egalitarians have in mind? Surely this cannot be the case. Those with untreatable chronic pain, and many other untreatable conditions, are clearly capable of standing in egalitarian relations to their fellow citizens. It would, I think, be an obviously unacceptable implication of a conception of the egalitarian relationships that ground entitlements of justice if it turned out that Valerie, or, for example, someone with an untreatable physical disability requiring the use of a wheelchair to get around, simply cannot stand in the sort of relations to her (or his) fellow citizens that ground entitlements of justice. Now consider what a relational egalitarian can say about Valerie's entitlement to socially provided treatment for condition X in a case in which such treatment is available. I assume that relational egalitarians will want to hold that, at least as long as the treatment is not extremely expensive, and as long as there are not many more urgent justice-relevant concerns that need to be addressed and ought to take priority, Valerie will be entitled to socially provided treatment. But if her pain is not a barrier to her ability to stand in egalitarian relations to her fellow citizens when it is untreatable, then at least certain ways of accounting for her entitlement to treatment are not going to be available to the relational egalitarian. Specifically, it cannot be claimed that alleviating pain of the kind that she experiences is necessary for the development or maintenance of egalitarian social relations between those who suffer from that kind of pain and their fellow citizens. After all, the pain is not itself a barrier to such relations, as we saw from considering the case in which it is untreatable.
This may not seem like a significant problem, since, as I noted earlier, relational egalitarians can claim, of some entitlements of justice, that social provision is an essential expression, via social institutions, of citizens' equal status. And it may seem quite plausible to say that providing treatment for pain like Valerie's, when it is available, is such an essential expression. Failure to provide it, we might think, would amount to the community expressing that she has an inferior status within society, since viewing her as an equal would seem to require the sort of concern about her pain that would generate social provision of available treatment. This seems to me to be the kind of explanation that a relational egalitarian will likely have to offer for entitlements to treatment in cases like Valerie's, 23 which I assume they will generally want to endorse. But I think that there are reasons to be concerned about explanations of this kind. One reason for concern is that it is far from clear that the appeal to the need for policy to express the equal status of citizens is distinctive of relational egalitarianism. 24 This, of course, does not provide any reason to reject a relational egalitarian approach. It does, however, prevent relational egalitarians from appealing to the fact that their view allows for this kind of explanation in order to provide support for their approach as against alternatives. A second reason for concern is that it is not clear that the appeal to the need for policy to express the equal status of citizens avoids implicit commitment to claims that, it seems to me, relational egalitarians are committed to rejecting, and which are endorsed by proponents of more fundamentally distributive approaches. First, a wide variety of egalitarian views, including luck egalitarian views, hold that policy must reflect and express the equal status of citizens. Of course, there is disagreement about exactly which policies properly do this, since there is also disagreement about which fundamental values must inform policy if it is to have the appropriate expressive content. What is supposed to be distinctive about relational egalitarianism is that it holds that the value of egalitarian social relationships, not other values, must ground policy in order to properly reflect and express citizens' equal status. In order to be a distinctive view, relational egalitarianism requires an independent account of the content and requirements of egalitarian social relationships, which can then serve as a criterion for assessing candidate entitlements of justice. On such a view, in order for something to be an entitlement of justice, it must be necessary for the promotion or maintenance of egalitarian relationships as defined by the relevant view, or else constitutive of such relationships. If something is neither necessary as a means to nor constitutive of egalitarian social relations, then it is difficult to see how proponents of the view that such relations are the fundamental value that grounds entitlements of justice can claim that providing that thing is necessary to express citizens' equal status. In the absence of an argument that appeals to an independent account of the content of egalitarian social relations for the claim that providing treatment for Valerie's pain is either necessary as a means to or constitutive of such relations, then, it seems ad hoc for a relational egalitarian to claim that the provision of treatment is a necessary expression of her equal status. 25
Since her condition is not itself a barrier to egalitarian social relations (as was shown by considering the case in which it is untreatable), the explanation of why the claim that providing treatment is an essential expression of her equal status is true cannot be that providing the treatment is a necessary means to bringing about, or is constitutive of, the conditions for egalitarian social relations. Instead, if it is true that providing treatment for her condition is the only way that the community can properly express her equal status, the explanation for this would seem to be that alleviating her pain matters in itself, in a way that is relevant to justice; that is, it matters even though the presence of the pain is not itself a barrier to egalitarian social relations between her and her fellow citizens. But this is something that, it seems to me, a relational egalitarian cannot say. What is supposed to be distinctive of relational egalitarianism is that it holds that our fundamental justice-relevant interest is in egalitarian social relationships with our fellow citizens, and that any other justice-relevant interest that we have is derivative of that fundamental interest. On this view, to the extent that we have a justice-relevant interest in, say, the alleviation of pain, which grounds entitlements to things like medical care, this has to be explained, ultimately, in terms of our fundamental justice-relevant interest in egalitarian social relationships. Where an interest that people have is not connected in the right way to their interest in egalitarian social relations, relational egalitarians have to accept that it is not a justice-relevant interest that can ground justice-based entitlements. And trying to avoid this implication, where it seems intuitively implausible, by claiming that providing for the interest is an essential expression of a person's equal status, seems objectionably ad hoc. 26 Note that more explicitly distributive views seem to be able to handle cases like Valerie's quite a bit more easily. Many such views accept that avoidance of pain is itself a fundamental justice-relevant interest, 27 while others accept that our justice-based entitlements to resources and services are themselves explained by our broader interests, including the interest in avoiding pain. 28 I suspect that the best response on behalf of relational egalitarianism is to argue that if the community were to fail to provide available treatment for Valerie's condition, this would in fact undermine what could otherwise be egalitarian social relations between her and her fellow citizens. This could not be because her condition itself makes egalitarian relations impossible, but must instead be because the community's failure to provide relief when it could have done so will necessarily affect the way in which Valerie can relate to her fellow citizens. In particular, the thought is that the community's refusal to provide available treatment would make it impossible for her to engage with her fellow citizens on terms of equality, perhaps because the community's chosen policy cannot be plausibly interpreted other than as an indication that she is viewed as having inferior status. On the one hand, it seems to me plausible that the community's failure to provide available treatment to Valerie would, at least in some circumstances, undermine what could otherwise be egalitarian social relations between her and her fellow citizens.
Because of this, it seems true that relational egalitarians can plausibly insist that their view is consistent with the intuition that she is entitled, as a matter of justice, to socially provided treatment. It is, however, difficult to see how the ground-level explanation of her entitlement could lie in the value of egalitarian social relations, as it must for a relational egalitarian. This is because when we ask why it is that failure to provide treatment would undermine the possibility of egalitarian social relations, the answer cannot be that the condition itself is incompatible with egalitarian relations. Instead, it seems to be the failure to alleviate avoidable pain that makes it the case that, in the absence of socially provided treatment, egalitarian social relations would be undermined. We take it that Valerie would be justified in thinking that the community is not treating her as it should, that she is being denied something to which she is entitled as a matter of justice. And it is the fact that she would be justified in objecting to the policy, on independent grounds, that explains why the policy would undermine the possibility of egalitarian social relations. If we did not think that there are good independent grounds for objecting to the policy, then we would not have any reason for thinking that it would undermine egalitarian social relations. Therefore, the fact that the policy would undermine egalitarian social relations cannot explain why Valerie would be justified in objecting to it. Instead, the order of explanation goes the other way. But relational egalitarians cannot accept what seems to be the right direction of explanation here. It seems to be the case that Valerie's independent interest in pain avoidance explains why she would be justified in objecting to a policy that does not include socially provided treatment for her condition, and the fact that she would be justified in objecting to the policy explains why the policy would undermine the possibility of egalitarian social relations between her and her fellow citizens. But this line of explanation attributes to Valerie a fundamental justice-relevant interest in pain avoidance, and that seems to be something that relational egalitarians are committed to rejecting. There is a closely related and, I think, simpler point that we can see in light of the line of reasoning that I have developed. It now seems that there is a way in which the relational egalitarian can get what will seem, at least in many cases, to be the correct answer about Valerie's entitlement to treatment for her painful condition. It does seem true that the community's failure to provide treatment would, in the absence of conditions that would justify this failure, undermine the possibility of egalitarian social relations between her and her fellow citizens. So, relational egalitarianism can, it seems, avoid extension problems in cases like Valerie's. It can, that is, give what appear to be the correct answers to questions about who is entitled to what in the way of healthcare. I suspect that this will be true in at least most cases, so that relational egalitarian views will not face any significant problems regarding the extension of entitlements to healthcare. But in cases like Valerie's, the explanation that relational egalitarians must give of why individuals are entitled to the healthcare that they are seems difficult to accept. 
If we ask why Valerie is entitled as a matter of justice to treatment for her condition, the right explanation seems to be that she has an important interest in the avoidance of pain that the community is obligated to take seriously when making health policy. That is a straightforward and, it seems to me, intuitively compelling answer to the question. The relational egalitarian, on the other hand, must say that she is entitled to treatment because the failure to provide it would, in some way or other, undermine egalitarian social relations. I have acknowledged that when it is true that a person is entitled to treatment, but not provided with it, this is likely to undermine egalitarian social relations. But it simply does not seem as though this fact can constitute the ground-level explanation of why someone like Valerie is entitled to treatment for her condition. To see why, imagine that we are asked whether we think that she is entitled to treatment, and aim to answer this question in a way that is consistent with a commitment to relational egalitarianism. It would appear that what we would have to say is something like the following: Well, of course the condition is quite painful, but what we really need to know in order to determine whether she is entitled to treatment is whether failing to provide it would undermine egalitarian social relations. If it would, then she is entitled to the treatment. Otherwise, justice does not require that it be provided. It may be true that, barring unusual conditions, every failure to treat a treatable painful condition would undermine egalitarian social relations. If this is the case, then relational egalitarianism will not have any particular problems getting the right extension when it comes to healthcare policy. But its explanations of why it is that people are entitled to the treatment that they are strike me as difficult to accept, and certainly less intuitive than the alternative of referring directly to the sort of justice-relevant interest in pain avoidance that more fundamentally distributive views can allow that we have. 29 CONCLUSION: RELATIONAL AND DISTRIBUTIVE APPROACHES TO JUSTICE The fact that relational egalitarian views face the kind of difficulty that I have highlighted when it comes to providing plausible explanations of justice-based entitlements to healthcare seems to me to constitute a significant challenge to the relational egalitarian project of grounding entitlements of justice in the value of egalitarian social relationships. Nevertheless, I do not take the argument that I have offered in this paper to amount to anything like a decisive case against relational egalitarian approaches to justice, or a vindication of a more fundamentally distributive approach. What I have offered is a characterization of a challenge for relational egalitarianism that, it seems to me, has not been fully appreciated in discussions of the view thus far. I take myself, then, to have presented relational egalitarians with a plausible line of objection to their view, which an adequate defence of the view must address. One response that a relational egalitarian might offer to my challenge is to acknowledge that the explanations of entitlements to healthcare that are available on the relational egalitarian approach are indeed counterintuitive, but to claim that we nonetheless ought to accept them, since the more fundamentally distributive approaches that are consistent with more intuitively plausible explanations face even more significant objections. 30
I accept that this is a possibility worth taking seriously, although I am at least cautiously optimistic about the prospects of developing an approach that avoids commitment to the kinds of explanations of entitlements to services such as healthcare provision that I have criticized, while also accommodating what seems to me to be the central valuable insight that relational egalitarian views have brought to recent discussions of justice, namely that individuals have a fundamental justice-relevant interest in standing in egalitarian social relations to their fellow citizens.

One way of attempting to develop such a view is to include egalitarian social relations within a pluralist account of the currency of justice. 31 Although this approach has been suggested by some luck egalitarians (Lippert-Rasmussen 2015b), I suspect that it may be at least somewhat easier to develop within views that include distributive principles that are inconsistent with luck egalitarianism than within views that include central luck egalitarian commitments. For example, the luck egalitarian commitment to permitting distributive inequalities that are the result of choices for which individuals can be held responsible appears to put at least some pressure on a view to permit distributive inequalities that might threaten egalitarian social relations. More generally, the fact that people find themselves on the disadvantaged side of inegalitarian relations with some of their fellow citizens can, in principle, be the result of choices for which they can be held responsible. 32 There appears, then, to be at least some difficulty facing those luck egalitarians who might attempt to incorporate egalitarian social relations directly into the currency of justice and to combine that account of the currency of justice with a luck egalitarian distributive principle.

Consider, alternatively, the relative ease with which it appears possible to combine a pluralist account of the currency of justice that includes egalitarian social relations with, for example, a sufficientarian distributive principle. If we hold that justice requires that everyone be provided with a sufficient share of the elements that make up a pluralist account of the currency of justice, it seems open to us to hold that, with respect to social relations, sufficiency requires equality. We can, on this type of view, also hold that sufficiency with respect to goods and services such as income and healthcare requires that all citizens be provided, insofar as this is possible, with, for example, a share of these goods that allows them to live a pleasant, rich, and satisfying life. 33 And since pain avoidance is clearly a constitutive feature of the values that, on this type of view, ground the entitlement to a sufficient share of goods and services, Valerie's entitlement to treatment for her condition can be explained in a way that is much more intuitively plausible than the explanations available on relational egalitarian views. 34

It is unclear to me what the best version of a view of this general type might look like, and also unclear whether such a view can ultimately be defended. I cannot pursue the matter further here, but must leave it for future work. What I do hope to have accomplished in this paper is to have provided some reasons for those who are attracted to relational egalitarian approaches to justice to take seriously the possibility that at least some entitlements of justice must be grounded in values other than egalitarian social relationships.
If I have succeeded in this aim, then the project of developing a view that takes both egalitarian social relationships and basic interests such as pain avoidance as fundamental justice-relevant interests should become more appealing than it has appeared to be thus far. This would, it seems to me, be a positive development within debates about the fundamental values that ground requirements and entitlements of justice.

1 The seminal contribution is Anderson (1999); see also Anderson (2010 and 2012) and Scheffler (2003, 2005, and 2015).
2 Important discussions within the distributive framework include Dworkin (1981a and 1981b) and Cohen (1989).
3 Both luck egalitarian views (e.g., Cohen 1989) and Rawlsian views (e.g., Rawls 1999) share this feature.
4 See, for example, Wolff and De-Shalit (2007), Fourie (2012), and Lippert-Rasmussen (2012, 2015a, and 2015b).
5 See, for example, Anderson (1999 and 2010), Scheffler (2003 and 2005), and Schemmel (2011 and 2012).
6 Indeed, I am inclined to think that this view is correct.
7 For sympathetic discussion of relational egalitarian approaches to health and healthcare justice, see Voigt and Wester (2015) and Kelleher (2016).
8 For recent discussion of the relationship between luck egalitarianism and relational egalitarianism (or democratic egalitarianism, as it is sometimes called), see Anderson (2010) and Lippert-Rasmussen (2012, 2015a, and 2015b). With regard to health and healthcare, see Kelleher (2016, p. 89-94). For a defence of a luck-egalitarian approach to justice in health and healthcare provision, see Segall (2010).
9 Once again, some who endorse the criticism that prominent distributive approaches are problematic because they have neglected the value of egalitarian social relations do not reject distributive accounts entirely, and so hold that the right kind of commitment to the value of relational equality is not necessarily incompatible with at least some distributive approaches, potentially including luck egalitarian approaches. The contrast that I suggest between luck egalitarian and relational egalitarian answers to the question of why individuals are entitled to the socially provided healthcare that they are applies only to relational egalitarian views that constitute competitors to distributive approaches such as luck egalitarianism.
10 I am grateful to an anonymous reviewer for prompting me to clarify this. Views that include a relational egalitarian component that is treated as separable from, and potentially in competition with, justice can be found in Cohen (2009) and Mason (2012).
11 Presumably Anderson uses the phrase "non-relational goods" to refer to the various kinds of goods that distributive theorists might think constitute part of the proper currency of justice.
12 Further evidence that Anderson conceives of her relational egalitarian view as, at least in part, a view about justice, and about distributive justice in particular, can be found in her claim that "relational egalitarians identify justice with a virtue of agents (including institutions). It is a disposition to treat individuals in accordance with principles that express, embody, and sustain relations of social equality. Distributions of socially allocated goods are just if they are the result of everyone acting in accordance with such principles" (Anderson 2010, p. 2; see also Anderson 2012, p. 44).
13 It is a bit difficult to state precisely to what extent my argument in this paper constitutes a challenge to Scheffler's overall view, since he does not specify which values, apart from relational equality, can contribute to explaining distributive entitlements. It seems to me, however, that Scheffler's insistence that the relational egalitarian view that he endorses constitutes a genuine alternative to distributive views puts at least some pressure on him to reject the kinds of explanations of entitlements to socially provided healthcare that I will argue seem most plausible.
14 See also Schemmel's remarks about the justice-relevance of relational egalitarian considerations in his 2012 contribution (p. 124-125, 128-129, 131, 133-134).
15 This fact about the development of relational egalitarianism is noted by Schemmel (2011, p. 389). It is most explicit in Anderson (1999 and 2010) and Scheffler (2003 and 2005).
16 Anderson discusses what she views as the problematically inegalitarian relationships that exist in contemporary workplaces between superiors and subordinates in her 2017 contribution.
17 Kasper Lippert-Rasmussen describes relational egalitarianism's concern for distributive matters in a somewhat narrower way. Relational egalitarians, he says, "contend that distribution matters only instrumentally in virtue of its impact on social relations and the degree to which these are suitably egalitarian" (2012, p. 118). This description seems to me unduly narrow, since Anderson's claim that some distributive requirements might be constitutive of egalitarian social relations seems at least plausible. I am grateful to an anonymous reviewer for helping me to clarify the relationship between Anderson's and Lippert-Rasmussen's descriptions of relational egalitarianism's concern for distributive issues.
18 Schemmel's (2011) argument that range constraints on distributive inequality are required as a matter of justice clearly proceeds on the assumption that this claim is correct.
19 For the ideal of fair equality of opportunity, see Rawls (1999, p. 73-78).
20 It is important that, for relational egalitarians, the conditions in which individuals engage in economic transactions must actually realize egalitarian social relationships in order for the distributive outcomes of voluntary transactions to generate robust entitlements of justice. This requirement will, on at least many views of what egalitarian social relationships consist in and require, rule out entitlements being generated in all of the cases in which, for example, right libertarians will take them to be generated.
21 For an argument that takes this form, but which focuses on range constraints on distributive inequality, rather than on entitlements to socially provided healthcare, see Schemmel (2011, p. 371-375).
22 Voigt and Wester describe the implications of relational egalitarianism for entitlements to healthcare in this way (2015, p. 211), and they note that both Anderson (1999, p. 317) and Scheffler (2003, p. 23) suggest this as well.
23 For discussion, see Voigt and Wester (2015, p. 212-214).
24 As an anonymous reviewer helpfully points out, it seems consistent with Ronald Dworkin's view that coercive institutions must express equal concern, via policy, for those subject to their authority (2000, p. 1).
25 In some circumstances, relational egalitarians (and others) might plausibly deny that Valerie is entitled to treatment for her pain, and so accept that there is no argument that can, or needs to, be made to the effect that providing it is an essential expression, via social institutions, of her equal status. This would plausibly be true in cases in which society faces a shortage of resources and there are more urgent priorities that must be addressed first, or perhaps in cases in which the treatment is, for reasons that cannot be justly remedied by society's institutions, extremely costly. It might also be true in cases in which society has chosen to prioritize providing a variety of other goods and services to Valerie and people like her, and has reasonably left treatment for her particular condition off the list of socially provided services. I am assuming, however, that relational egalitarians will, in at least some cases, want to insist that Valerie is entitled to socially provided treatment, and I am considering what kinds of explanations they can offer for this entitlement in those cases. I am grateful to an anonymous reviewer for prompting me to clarify this.
26 Relational egalitarians might claim that the explanation of Valerie's entitlement to treatment for her pain is that relating as equals within a political community requires that everyone's interests, or at least their justice-relevant interests, are equally taken into account in decisions made on behalf of the community (see Scheffler 2015, p. 35 and 38). While this claim is plausible, for reasons that are given in the remainder of this section, I believe that the structure of the explanation that it allows relational egalitarians to provide for entitlements to socially provided healthcare is less plausible than alternative explanatory structures available on distributive views. I am grateful to an anonymous reviewer for prompting me to consider this type of explanation.
27 All welfarist views clearly have this implication, regardless of their position on the appropriate distributive principles, as do all positions that take welfare to be among the components of the correct currency of justice. For a view of the latter type, see Cohen (1989). At least some distributive views, however, may face greater difficulty offering quite as simple and intuitive an explanation of Valerie's entitlement to socially provided treatment. It seems to me that this provides at least some reason to favour views that include welfare as part of the currency of justice, though I cannot defend that claim here.
28 Consider, for example, a view on which resources are accepted as the currency of justice because of concerns about the implications of views that include welfare as part of the currency in cases involving expensive tastes (Dworkin 1981a, p. 228-240). Proponents of such a view might plausibly hold that a central part of the explanation of our resourcist entitlements is that the resources to which we are entitled will typically serve as means to promote various interests that we have, including, potentially, the interest in avoiding pain.
29 A large issue that arises for views that accept the kind of explanation of entitlements to socially provided healthcare that I claim is plausible is whether they can justify limiting the entitlements to members of a particular political community. Relational egalitarians might claim that it is an advantage of their approach that it can more easily justify this limitation, since it is plausible and widely accepted that the demands of social equality apply only within, and not across, political communities. I obviously cannot address this issue in any detail, but it seems to me that there are two reasons to doubt that relational egalitarians can claim a clear advantage over distributive views here. The first is that there are no obvious grounds for thinking that distributive theorists cannot consistently hold that an individual's interest in pain avoidance grounds entitlements of justice only within their particular community. And the second is that it is not obvious that there are compelling grounds on which relational egalitarians can deny that the value of egalitarian social relations can ground entitlements, and therefore obligations, of justice that apply across the boundaries of political communities.
30 The idea here is that we should judge competing theoretical positions according to a standard of relative plausibility and, at least provisionally, accept whichever of the sufficiently plausible alternatives is most plausible in comparison with the others. This will, at least in many cases, commit us to accepting views that we acknowledge face potentially significant objections, simply because all of the available views face at least some significant objections. For an argument that adopts this notion of relative plausibility as its standard, see Murphy (2000).
31 Lippert-Rasmussen (2015b) develops a view of this kind, on which he includes social standing in the currency of justice within a luck egalitarian framework. G. A. Cohen (2009) suggests that an ideal of "community," which bears strong resemblances to what relational egalitarians typically have in mind when referring to egalitarian social relations, might constitute a set of background conditions within which principles of luck egalitarian distributive justice should operate. Cohen's view does not, strictly speaking, build egalitarian social relations into the currency of justice, as he understands it. A view that incorporates Cohen's set of normative commitments could, however, be described in those terms.
32 Of course, in the actual world, inegalitarian social relations overwhelmingly do not derive from choices for which those on the disadvantaged side can be held responsible.
33 This is, of course, a rather imprecise criterion. It is, however, sufficient for my merely illustrative purposes here. Anderson (1999) suggests that relational egalitarianism might be best interpreted as implying a sufficientarian distributive requirement; for criticism see Schemmel (2011).
34 As an anonymous reviewer points out, it may be that, on some sufficientarian views, Valerie will not be entitled to socially provided treatment for her condition. If we think that the correct view of justice should imply that she is, at least in some cases (e.g., those in which it is not too expensive), entitled to treatment, then we should reject those sufficientarian views. The important point for my purposes is that sufficientarian views that do imply that she is entitled to socially provided treatment can provide what seems to be a quite plausible explanation of her entitlement.
Functional characterization of the murine homolog of the B cell-specific coactivator BOB.1/OBF.1

B cell-specific transcriptional promoter activity mediated by the octamer motif requires the Oct1 or Oct2 protein and additional B cell-restricted cofactors. One such cofactor, BOB.1/OBF.1, was recently isolated from human B cells. Here, we describe the isolation and detailed characterization of the murine homolog. Full-length cDNAs and genomic clones were isolated, and the gene structure was determined. Comparison of the deduced amino acids shows 88% sequence identity between mouse and human BOB.1/OBF.1. The NH2-terminal 126 amino acids of BOB.1/OBF.1 are both essential and sufficient for interaction with the POU domains of either Oct1 or Oct2. This protein-protein interaction does not require the simultaneous binding of Oct proteins to DNA, and high resolution footprinting of the Oct-DNA interaction reveals that binding of BOB.1/OBF.1 to Oct1 or Oct2 does not alter the interaction with DNA. BOB.1/OBF.1 can efficiently activate octamer-dependent promoters in fibroblasts; however, it fails to stimulate octamer-dependent enhancer activity. Fusion of subdomains of BOB.1/OBF.1 with the GAL4 DNA binding domain reveals that both the NH2- and COOH-terminal domains of BOB.1/OBF.1 contribute to full transactivation function, although the COOH-terminal domain is more efficient in this transactivation assay. Consistent with the failure of full-length BOB.1/OBF.1 to stimulate octamer-dependent enhancer elements in non-B cells, the GAL4 fusions likewise only stimulate from a promoter-proximal position.

The octamer motif (ATGCAAAT) or its reverse complement was originally identified as a conserved element present in virtually all immunoglobulin promoters as well as in enhancer elements of immunoglobulin genes (1,2). In addition, it is also conserved in a variety of other genes specifically expressed in B cells (3-5). The role of the octamer motif for mediating B cell-specific transcription was most convincingly demonstrated when it was shown that a single copy of this motif confers B cell specificity onto a minimal heterologous promoter element (6). Likewise, multimerized octamer motifs efficiently functioned as B cell-specific enhancer elements (7). However, functional octamer motifs are also conserved in the regulatory regions of a variety of genes, which show ubiquitous expression patterns (1,8,9). Several transcription factors have been identified in different cell types that specifically interact with the octamer motif (10,11). All mammalian cell types express the Oct1 protein (12,13). B cells, in addition, express a second type of octamer transcription factor, the Oct2 proteins (14-18). Expression of Oct2 is largely confined to the lymphoid lineage, and there it is expressed as a family of isoforms which arise by alternative splicing from a single transcription unit (19). Oct1 and Oct2 belong to a growing family of transcription factors that all share a homologous DNA-binding domain, the POU domain (20,21). This POU domain is a bipartite DNA-binding domain consisting of a POU-specific and a POU-homeo subdomain. Both subdomains are required for efficient DNA binding (22,23), and recent crystallographic studies reveal that the two subdomains interact with opposite major grooves in the DNA double helix (24). In addition to Oct1 and Oct2, many other transcription factors have been identified that share a POU domain (25).
Some of them, like Oct4 and Oct6, which are expressed in the germ line, also efficiently interact with the conserved octamer motif (11). The original hypothesis that Oct2 determines the B cell-specific functions of the octamer motifs, whereas Oct1 would be responsible for its ubiquitous activities (8,25,26), was questioned by a variety of observations. In vitro transcription experiments failed to reveal a significant difference between Oct1 and Oct2 proteins (27,28). In some B cell × T cell hybrid cell lines, octamer-dependent transcriptional activity was extinguished, although Oct2 expression was maintained (29). Moreover, a thorough investigation of gene expression in somatic cell hybrids from myeloma × fibroblast cells showed a variable expression of the Oct2 gene. 1 Finally, stably transfected Oct2 did not activate octamer-dependent regulatory elements in NIH/3T3 fibroblasts, BW5147 T cells, or COS1 cells, whereas octamer-dependent promoter activity was evident in B cells lacking the Oct2 transcription factor (31,32). Likewise, expression of many genes containing octamer motifs in their regulatory elements, like the immunoglobulin genes, B29/Ig-β, and CD20, was unaffected in B cells from mice lacking Oct2 due to a mutation introduced by homologous recombination into the endogenous oct2 gene (33,34). These observations argued in favor of the existence of B cell-specific cofactors, which, upon interaction with Oct1 or Oct2, would determine the transactivation potential of these transcription factors (31,35). Biochemical fractionation of B cell-derived nuclear extracts revealed the presence of an activity (OCA-B = octamer coactivator from B cells) stimulating octamer-dependent immunoglobulin promoter activity (27,36). Indeed, employing a yeast one-hybrid screen, two groups recently reported the successful isolation of cDNAs for such a B cell-specific octamer cofactor from human cDNA libraries. The identified cDNAs encode the same protein, which was designated BOB.1 for B cell Oct-binding protein 1 (37) or OBF.1 for Oct binding factor 1 (38). Here we describe the isolation and detailed functional characterization of the murine homolog of BOB.1/OBF.1. We show that BOB.1/OBF.1 is an efficient octamer coactivator that allows Oct1 and Oct2 to function on promoter-proximal octamer motifs. However, this factor is unable to mediate the activity of octamer motifs from distal enhancer positions.

(This work was supported by Grant SFB229 (to T. W.) from the Deutsche Forschungsgemeinschaft. The nucleotide sequence reported in this paper has been submitted to the GenBank/EMBL Data Bank with accession number Z54283.)

MATERIALS AND METHODS

Cloning of the Murine BOB.1/OBF.1 Homolog-Using the published human sequence (37), the following primers were synthesized: BOB.5: 5′ GTC CTC GAG CAT GCT CTG GCA AAA ACC C 3′; BOB.3: 5′ AGC GGA TCC TAA AAG CCT TCC ACA GAG AG. These primers correspond to the 5′ and 3′ ends of the human BOB.1 coding sequence. 2 µg of poly(A)+ RNA from S194 cells was reverse-transcribed with oligo(dT) primers and SuperScript reverse transcriptase (Life Technologies, Inc.) following the manufacturer's instructions.
5% of the cDNA synthesized was used together with primers BOB.5 and BOB.3 in a polymerase chain reaction amplification protocol as described (19). The resulting fragments were gel-purified, digested with XhoI and BamHI, and cloned into the pBluescript vector (Stratagene). This cloned fragment was then used to screen a S194 cDNA library (39). The sequence of the longest cDNA isolates was determined by standard plasmid double strand sequencing protocols and primer walking (40). For isolation of a genomic clone, 2 × 10^5 cosmid clones were screened with the same BOB.1/OBF.1 polymerase chain reaction fragment. A single cosmid was obtained that contains the complete coding region of murine BOB.1/OBF.1. The genomic map was determined by a combination of restriction enzyme mapping, subcloning, and sequencing.

Construction of BOB.1/OBF.1 Expression Plasmids-The primers used for the original polymerase chain reaction amplification of murine BOB.1/OBF.1 were derived from the human sequence and had introduced an amino acid exchange at the extreme 5′ position of murine BOB.1 (position 6 in the coding sequence is a proline in human BOB.1/OBF.1 and a serine in the murine homolog). Therefore, two new primers were synthesized that allow the precise and correct amplification of the murine coding sequence: BOB.5.N, 5′ CTC TCG AGA GCC ATG CTC TGG CAA AAA T 3′, and BOB.3.N, 5′ GCG GAT CCT AAA AGC CCT CCA CGG A 3′. In addition, for the individual amplification of NH2- and COOH-terminal subdomains, two internal primers were synthesized: BOB.5.1, 5′ TGT CTC GAG CGT GTG CCC CAG CTA CAC G 3′ (corresponding to positions 446 to 465 with respect to the published human BOB.1/OBF.1 cDNA sequence, but introducing an XhoI site), and BOB.3.1, 5′ CTG GGA TCC TAG GGC TGC ACA TAC ATG TC 3′ (corresponding to positions 436 to 419 with respect to the published human BOB.1/OBF.1 cDNA sequence, introducing a stop codon and a BamHI site at the end). The resulting full-length, NH2-, and COOH-terminal fragments were cloned into three types of vectors: pMT/PKA (39), which allows expression as a fusion protein with a hexa-His moiety at the amino terminus; a vector containing the GAL4 DNA-binding domain (41); and a derivative of this GAL4 expression vector that had the GAL4 DNA-binding domain removed, allowing expression of BOB.1/OBF.1 or the individual domains by themselves.

Primer Extension Analysis-The conditions for the primer extension analysis were as described (42). Sequences of the primers used were as follows: P1, 5′ TGA AGC AGA CAG TTT GGC 3′ (corresponding to positions -30 to -47 with respect to the AUG codon); P2, 5′ CAC GCT TTC TTC TCA GTA 3′ (corresponding to positions +107 to +90 with respect to the AUG codon).

Conditions for EMSA and Supershift Experiments-Nuclear extracts from the indicated cells were prepared as described (43). For EMSA experiments, 1-4 µg of nuclear extract was incubated in a 20-µl reaction with 30,000 to 40,000 cpm (0.5-1 ng of DNA fragment) and 0.5 µg of poly(dI-dC) in a buffer containing 50 mM KCl, 20 mM Hepes, pH 7.7, 1 mM EDTA, 0.25 mg/ml bovine serum albumin, and 4% Ficoll 400. 2 µl of in vitro translated BOB.1/OBF.1 protein (or the respective subdomains), generated as a fusion with the hexa-His moiety, or 2 µl of the unprogrammed TNT lysates (Promega) were added where indicated. All reactions were incubated for 10 min at room temperature prior to loading on a prerun 4% polyacrylamide gel (19:1), run in 0.35× TBE.
For antibody supershifts, the mixture of nuclear extract plus reticulocyte lysate was incubated with the appropriate antibody for 10 min before adding the probe mix. Modified conditions for EMSA allowing the detection of the endogenous BOB.1/OBF.1-containing complexes, and conditions for copper-orthophenanthroline footprinting, were as described previously (31,44).

Cell Culture and Transfection-All cells were kept in Dulbecco's modified Eagle's medium/Glutamax (Life Technologies, Inc.) supplemented with 10% fetal bovine serum, penicillin/streptomycin, and 50 µM β-mercaptoethanol. S194 cells were transfected using the DEAE-dextran protocol, and NIH/3T3 cells and stable Oct2 transformants of NIH/3T3 cells were transfected by the calcium phosphate coprecipitation procedure. CMV-lacZ was cotransfected in all experiments. LacZ enzyme activity was used to normalize for different transfection efficiencies in the individual experiments. A minimum of three independent transfections were performed, and standard deviations were calculated.

Coprecipitation Assays-Coprecipitation with nickel-nitrilotriacetic acid-agarose was performed essentially as described (45). Conditions for antibody coprecipitation have also been described before (39,45). 10 µl of Oct2-specific rabbit antibodies coupled to 10 mg of protein A-Sepharose CL4B (Pharmacia) were mixed with 200 µg of HeLa nuclear extract from cells infected with recombinant Oct2-expressing or wild type vaccinia viruses (28). 20 µl of in vitro translated, [35S]methionine-labeled full-length BOB.1/OBF.1 or the individual protein domains were added and incubated for 3 h at 4 °C in 500 µl of interaction buffer A (20 mM Tris/HCl, pH 8.0, 0.1 M NaCl, 1 mM EDTA, 0.5% Nonidet P-40). Reactions were washed four times for 15 min each with 1 ml of interaction buffer A. Where indicated, 200 µg/ml ethidium bromide was present throughout the interaction and washing procedure.

Generation of Polyclonal Rabbit Antibodies-The NH2-terminal domain of murine BOB.1/OBF.1 was expressed as a hexa-His fusion protein in Escherichia coli, purified by denaturing nickel-chelating chromatography followed by renaturation and chromatography on a Macro-Prep S column (Bio-Rad). For some immunizations, the protein was further purified by preparative SDS-polyacrylamide gel electrophoresis and electroelution from the gel.

RESULTS

Using the sequence information of the human BOB.1/OBF.1 cDNA clone, we generated a probe encompassing the complete coding sequence of murine BOB.1/OBF.1 by reverse transcription-polymerase chain reaction and used this probe to screen a cDNA library from the murine S194 plasmacytoma cell line (see "Materials and Methods" for details). The largest cDNA isolates contained about 2550 nucleotides and included the complete putative coding sequence for the murine BOB.1/OBF.1 homolog with about 70 nucleotides of 5′ leader sequence and 1700 nucleotides of 3′-nontranslated sequence (Fig. 1A). Comparison of the deduced amino acid sequence of the murine BOB.1/OBF.1 clone with its human counterpart revealed a high degree of sequence conservation. The two clones are 88% identical at the amino acid level in the coding region. Interestingly, the high degree of conservation also extends into the 5′ leader sequence. Only two nucleotide exchanges can be found in the 60 nucleotides preceding the AUG initiation codon. The cDNA clone was further used to isolate a cosmid clone containing the genomic locus of murine BOB.1/OBF.1.
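As an aside on the 88% identity figure quoted above: percent identity between two aligned protein sequences is a simple pairwise statistic, and a minimal Python sketch of the computation is given below. The two short fragments are invented placeholders for illustration, not the real BOB.1/OBF.1 sequences.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.

    Positions where either sequence has a gap ('-') are excluded.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Invented fragments (NOT the real mouse/human BOB.1/OBF.1 sequences)
mouse_frag = "MLWQKPPSSAQ"
human_frag = "MLWQKPPTSAQ"
print(f"{percent_identity(mouse_frag, human_frag):.1f}% identity")  # 90.9%
```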
A combination of restriction enzyme mapping and sequence analysis revealed that the coding sequence of murine BOB.1/OBF.1 is split over 5 exons (Fig. 1B). The AUG initiation codon is localized in exon 1, which is separated from the remainder of the coding region by a large 17-kilobase pair intron. Exons 2, 3, and 4 are small exons of 131, 43, and 266 base pairs, separated by introns of 492 and 82 base pairs, respectively. Exon 5 is just over 2000 base pairs long and encompasses the 3′ end of the coding sequence as well as the complete 3′-noncoding sequences. The 3′-noncoding sequence contains multiple simple nucleotide repeats, and these repeats show some mouse strain polymorphism, as they differ between the cDNAs isolated from S194 cells (BALB/c mouse strain) and the genomic cosmid clone derived from a 129 mouse strain library (data not shown). With the exception of the splice acceptor site of exon 3, which is divergent from the consensus sequence, all other exon-intron boundaries are in excellent agreement with consensus sequences known for mammalian exons and introns (Fig. 1C). At the 3′ end of the mRNA a consensus poly(A) addition signal is conserved (Fig. 1A).

Inspection of the 5′ leader sequence of BOB.1/OBF.1 of the longest cDNA isolate showed that there is no in-frame stop codon present in this sequence. We therefore determined the 5′ end of the murine BOB.1/OBF.1 RNA by primer extension analysis to test whether there might be a longer upstream reading frame. Two primers were utilized for this analysis: one extending from -30 to -47 with respect to the AUG initiation codon (P1); the second localized in exon 2 at a position extending from +107 to +90, again with respect to the AUG codon (P2) (Fig. 2A). Primer extension analysis was performed on two different murine B cell RNAs, S194 and WEHI 231, representing plasmacytoma and mature B cell lines, respectively.

(Fig. 2 legend, in part: B, primer extension with tRNA or RNA from S194 cells (lanes 5 and 6); extension products for P1 and P2 are indicated by arrows; the arrow with a question mark indicates a longer extension product specifically obtained with primer P2 (see text); the asterisk marks a prematurely terminated primer extension product seen in the reactions with P2; M, size marker (labeled pBR322 DNA digested with MspI). C, comigration of P1 primer extension products with a sequencing ladder derived from sequencing the genomic clone with primer P1; sequencing reactions (A, C, G, T) are loaded on the left half of the figure, alongside extension products for WEHI231, S194, and, as a negative control, tRNA; the arrowhead indicates the position of the P1-specific extension products.)

The size of the extension product with primer P1 was 65 nucleotides for both cell lines; primer P2 gave rise to an extension product of 205 nucleotides (Fig. 2B). An additional extension product of roughly 350 nucleotides was observed with primer P2 (marked by a question mark in Fig. 2B). As no corresponding product could be seen with P1, the significance of this extension product remains elusive. These analyses localize the start site of transcription at a position about 15-20 nucleotides upstream of the end of the longest cDNA isolate. To identify the starting nucleotide of the RNA sequence, the primer extension reaction with the P1 primer was run on a sequence gel together with the sequence of the genomic region using the same primer (Fig. 2C), and the deduced sequence was included in Fig. 1A.
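The start-site arithmetic implicit in these primer extension results can be made explicit. The sketch below is our own illustration, assuming the coordinate convention used in the text (positions relative to the A of the AUG codon, with no position 0): the labeled 5′ end of the primer marks the downstream end of the extension product, so the product length fixes the position of the mRNA 5′ end. The few-nucleotide spread between the two primers is within gel resolution and is consistent with a start site slightly upstream of the 5′ end of the longest cDNA isolate.

```python
def tss_from_primer_extension(primer_5prime_pos: int, product_len: int) -> int:
    """Infer the mRNA 5' end from a primer-extension product length.

    Coordinates are relative to the A of the AUG codon and skip 0
    (..., -2, -1, +1, +2, ...). The labeled primer 5' end is the
    downstream end of the product; the product's other end is the TSS.
    """
    tss = primer_5prime_pos - product_len + 1
    if primer_5prime_pos > 0 >= tss:
        tss -= 1  # the span crosses the missing position 0
    return tss

print(tss_from_primer_extension(-30, 65))    # P1: 65-nt product -> -94
print(tss_from_primer_extension(+107, 205))  # P2: 205-nt product -> -98
```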
Consistent with the presence of a single major initiation site, inspection of the 5′ upstream putative promoter region revealed a sequence element fitting known TATA consensus motifs (TTTAAAAA) at position -22 to -29 relative to the transcriptional initiation site (data not shown).

A hallmark of the human BOB.1/OBF.1 protein is its interaction with the Oct1 and Oct2 transcription factors, which is thought to be a prerequisite for the transactivation of octamer-dependent promoters. Likewise, in vitro translated murine BOB.1/OBF.1 protein interacted with both Oct1 and Oct2 and resulted in supershifts which were detectable in an EMSA experiment (Fig. 3A). The identity of the individual complexes was confirmed using antibodies specific for Oct1, Oct2, or the BOB.1/OBF.1 protein (Fig. 3B). Inspection of the intensities of the various complexes suggests that Oct1 and Oct2 interact with BOB.1/OBF.1 with similar affinities. In an effort to functionally dissect the BOB.1/OBF.1 protein, we expressed the NH2- and COOH-terminal halves of murine BOB.1/OBF.1 individually and tested the interaction with Oct proteins in an EMSA experiment. Only the NH2-terminal domain was able to interact with Oct proteins, resulting in an indicative supershift (Fig. 3C). In contrast, no supershift could be detected with the isolated COOH-terminal domain of BOB.1/OBF.1.

So far, all experiments analyzing the interaction of BOB.1/OBF.1 and Oct1/Oct2 had been performed as EMSA supershifts (37,38, and above). In these experiments, the POU domain of the octamer proteins had been identified as the domain sufficient for interaction; however, interaction of BOB.1/OBF.1 with other domains of the octamer proteins could not be ruled out. We have therefore utilized a coprecipitation assay to study the interaction of various Oct2 domains with BOB.1/OBF.1. Unlabeled in vitro translated BOB.1/OBF.1 was generated as a fusion protein with an NH2-terminal hexa-His moiety. Individual domains of Oct2 fused to the GAL4 DNA binding domain were in vitro translated as labeled proteins, mixed with BOB.1/OBF.1, and precipitated with nickel-nitrilotriacetic acid-agarose. Only the POU domain of Oct2, but not the NH2- and COOH-terminal domains, nor the GAL4 DNA binding domain alone, was coprecipitated efficiently together with BOB.1/OBF.1 (Fig. 4A).

The POU domain is a multifunctional domain important both for specific DNA binding and for protein-protein interactions (39,45). We wanted to determine whether DNA binding of the POU domain was a prerequisite for interaction with BOB.1/OBF.1. We therefore mixed labeled full-length BOB.1/OBF.1 or the individual NH2- and COOH-terminal domains with nuclear extracts from HeLa cells containing ectopically expressed Oct2 protein. We then performed coprecipitation experiments with antibodies specific for Oct2 in the presence and absence of ethidium bromide. As ethidium bromide intercalates into DNA and thereby interferes with protein-DNA interactions, this method has been established as a tool to discriminate between bona fide protein-protein interactions and assembly of proteins on the same piece of DNA (46). Full-length BOB.1/OBF.1 as well as the NH2-terminal domain efficiently coprecipitated with the anti-Oct2 antibody regardless of the presence or absence of ethidium bromide (Fig. 4B). In contrast, the COOH-terminal domain was not recovered in these coprecipitation experiments regardless of the conditions.
This result is in line with the failure of the COOH-terminal domain to interact with Oct proteins in the EMSA experiments (Fig. 3B) and suggests that this domain does not make stable contact with the Oct proteins. No BOB.1/OBF.1 proteins were coprecipitated when Oct2 protein was missing from the nuclear extracts, confirming the specificity of the assay (Fig. 4B).

The fact that Oct2 binding to DNA was not required for interaction with BOB.1/OBF.1 did not exclude the possibility that the interaction with BOB.1/OBF.1 would affect the Oct-DNA interaction. To analyze this possibility, we performed high-resolution in-gel chemical footprinting analyses of the binary Oct-DNA complex as well as the ternary BOB.1/OBF.1-Oct-DNA complex. Identical protection patterns were observed for both complexes, regardless of whether Oct1- or Oct2-containing complexes were analyzed (Fig. 5, A and B).

Using slightly modified conditions for EMSA experiments, we had previously identified a B cell-specific complex migrating slower than the Oct1 complex, but containing the Oct1 protein (31). This complex resembled the ternary BOB.1/OBF.1-Oct1 complex in several respects, such as migration behavior and the fact that no extra contacts were detectable by copper-orthophenanthroline footprinting (31). We therefore investigated whether this complex contained the BOB.1/OBF.1 protein. Whereas the preimmune serum did not affect this ternary complex, the antibody raised against the recombinant BOB.1/OBF.1 protein completely abolished this EMSA complex, suggesting that indeed this complex represents the endogenous BOB.1/OBF.1-Oct1 complex in B cells (Fig. 5C). We do not presently know why this endogenous complex is more difficult to detect in EMSA experiments as compared with the one containing the recombinant BOB.1/OBF.1 protein.

We had previously proposed that B cells contain two types of octamer coactivators (35): one that can interact with either Oct1 or Oct2 to mediate octamer-dependent promoter activity, and a second one that specifically interacts with the carboxyl terminus of Oct2 and is involved in mediating octamer-dependent enhancer activity in B cells. The observation that BOB.1/OBF.1 interacts with both Oct1 and Oct2 suggested that it would be the first type of transcriptional coactivator, namely a specific promoter cofactor. To test this hypothesis, a BOB.1/OBF.1 expression vector was cotransfected with an octamer-dependent promoter reporter into NIH/3T3 fibroblasts. In the absence of cotransfected BOB.1/OBF.1, the wild type octamer reporter showed the same activity as a mutant version bearing point mutations in the octamer motifs (Fig. 6A). BOB.1/OBF.1 stimulated the wild type reporter construct to a level of activity comparable with the activity of this reporter in B cells (Fig. 6A and Ref. 28). Activation depended on the integrity of the octamer motifs because the mutant reporter was not stimulated by BOB.1/OBF.1 cotransfection. When the NH2- and COOH-terminal halves of BOB.1/OBF.1 were tested individually, only the NH2-terminal domain gave a low, but reproducible, activation of the wild type octamer-containing reporter (Fig. 6B). In contrast, the COOH-terminal fragment failed to show any stimulation, consistent with our previous observation that this domain does not interact specifically with the octamer transcription factors.
A slightly different strategy had to be used in order to investigate whether BOB.1/OBF.1 would also stimulate transcription from a distal enhancer position, because we had previously shown that this activity is strictly Oct2-dependent (31,47). We therefore analyzed BOB.1/OBF.1 activity in NIH/3T3 cells that were stably transfected with an Oct2 expression vector (31). These stably transfected fibroblasts express amounts of the Oct2 protein similar to typical B cell lines (31). We first tested the stable transfectants in a BOB.1/OBF.1 cotransfection experiment with the octamer-dependent promoter reporters described before. Interestingly, the stimulation observed by BOB.1/OBF.1 cotransfection was comparable in the parental NIH/3T3 cells and the Oct2-positive NIH/3T3 cells (Fig. 6C).

(Figure legend, in part: ... (lanes 1 and 5), antibodies specific for the Oct1 protein (lanes 2 and 6), antibodies specific for Oct2 (lanes 3 and 7), or antibodies raised against recombinant BOB.1/OBF.1 (lanes 4 and 8).)

To test enhancer activity, constructs bearing a multimerized octamer motif-containing fragment from the murine heavy chain enhancer at a position 3′ of the luciferase gene, driven by an upstream chicken lysozyme promoter, were used (31,47). This reporter has been previously used to detect octamer-dependent enhancer activity in B cells. Interestingly, no stimulation of this reporter could be observed by cotransfection of full-length BOB.1/OBF.1 (Fig. 6C). This result suggests that BOB.1/OBF.1 is in fact a promoter-specific cofactor and unable to mediate the B cell-specific octamer enhancer activities. This inability of BOB.1/OBF.1 to stimulate the enhancer reporter was not due to a failure of functional ternary Oct-BOB.1/OBF.1 complexes to form on the multimerized heavy chain enhancer fragment used. When the same multimerized fragment was moved into the proximity of the TATA box, BOB.1/OBF.1 was again capable of activating this element (Fig. 6C).

The above interaction domain mapping experiments and cotransfection experiments had revealed that the NH2-terminal domain of BOB.1/OBF.1 interacts with the Oct transcription factors and contains a residual transactivation function, yet transactivation by the full-length BOB.1/OBF.1 clone was significantly more prominent. In order to map the potential transactivation functions within the BOB.1/OBF.1 coactivator, we generated fusion proteins with the GAL4 DNA-binding domain and either full-length BOB.1/OBF.1 or the NH2- or COOH-terminal domain of BOB.1/OBF.1 separately (Fig. 7C). These clones were cotransfected with a reporter bearing multimerized binding sites for the GAL4 transcription factors in a promoter-proximal position (41). All three fusion proteins efficiently stimulated this reporter. The COOH-terminal domain of BOB.1/OBF.1, however, showed about 3-fold higher activity than the NH2 terminus (Fig. 7A). This result suggests that the predominant transactivation function of the BOB.1/OBF.1 protein resides in the COOH-terminal 130 amino acids.

Given the above described inability of BOB.1/OBF.1 to stimulate octamer-dependent enhancer elements together with Oct2 in fibroblasts, we wanted to determine whether the GAL4 fusion proteins would be able to stimulate transcription from a distance. To this end, we utilized a reporter bearing the GAL4 binding sites in a distal enhancer position and cotransfected this reporter with the GAL4 fusion protein expression vectors.
Fig. 7B shows that neither of the GAL4 fusions is capable of stimulating this reporter, suggesting that failure to stimulate from a distance is an intrinsic property of the transactivation domains of the BOB.1/OBF.1 protein.

DISCUSSION

One of the hitherto unique features of B cell-specific transactivation mediated by the octamer transcription factors is their requirement for the presence of additional B cell-specific coactivators. Here, we describe the molecular analysis of one such coactivator that allows Oct1 and Oct2 to activate transcription from a promoter-proximal position in B cells. We had previously shown that B cells contain two distinct types of coactivators: one responsible for the B cell-specific activity of octamer-containing promoters, and a second type that confers activity on octamer regulatory elements from distal enhancer positions (31,35,47). The results presented here unequivocally identify BOB.1/OBF.1 as a specific coactivator from promoter-proximal positions. This conclusion is supported by the following lines of evidence. 1) Whereas cotransfection of BOB.1/OBF.1 into NIH/3T3 fibroblasts was sufficient to fully activate octamer-dependent promoter elements, it did not result in enhancer activation, regardless of the presence or absence of Oct2. 2) Fusion proteins of the GAL4 DNA binding domain with full-length BOB.1/OBF.1 or individual domains of the coactivator efficiently stimulated reporters containing GAL4 binding sites in a promoter-proximal position but failed to activate when the binding sites were present in distal enhancer positions. 3) BOB.1/OBF.1 promiscuously interacts with both Oct1 and Oct2, whereas the putative enhancer cofactor specifically requires Oct2 for a functional interaction (31,35,47). Furthermore, our previous experiments suggested that Oct1, if anything, reduced octamer-dependent enhancer activity (31,47). 4) Finally, we had previously identified the COOH-terminal transactivation domain of Oct2 as a prerequisite for stimulating octamer-dependent enhancer activity in B cells. This result suggested that the enhancer cofactor might specifically interact with this domain rather than the POU domain. In our protein-protein interaction analysis presented here, we failed to detect any evidence for specific interaction between BOB.1/OBF.1 and the COOH-terminal domain of Oct2. In summary, these results suggest that BOB.1/OBF.1 is the, or one of the, B cell-specific coactivator(s) responsible for the B cell-specific function of octamer-containing promoters. Furthermore, they suggest that additional, distinct cofactors exist in B cells that are required for octamer-dependent enhancer activity.

At present, the molecular mechanism responsible for the observed coactivation by BOB.1/OBF.1 is largely obscure. When transactivation properties of the NH2- and COOH-terminal transactivation domains of the Oct2 transcription factor were measured in GAL4 fusion experiments, significant activity of the individual domains could be scored, which was not significantly lower than the transactivation observed for GAL4 fusion proteins containing full-length BOB.1/OBF.1 or individual domains thereof (41,48). A possible explanation for this apparent paradox could be that, due to an intramolecular masking process, the transactivation domains of Oct1 and Oct2 might not be accessible for interaction with general transcription factors, most likely the transcription factor IID (TFIID) complex (49).
The main function of the coactivator then would be to unfold and unmask these transactivation domains through its physical interaction with the Oct1 and Oct2 transcription factors. Intramolecular masking of the transactivation domain has been suggested for the MyoD transcription factor (50). A specific conformational change induced by binding of MyoD to DNA is hypothesized to be responsible for the release of the masked transactivation domain (50,51). From our results it is unclear whether a similar masking/unmasking process is responsible for the activation of the octamer transcription factors in B cells. Clearly, a more complex situation than for MyoD has to be envisaged, as an additional cofactor, namely BOB.1/OBF.1, is required for the activation to take place. Interestingly, involvement of additional cofactors could also not be ruled out in the MyoD activation process (51). Although our results demonstrate that Oct2 and BOB.1/OBF.1 can interact off DNA, these experiments do not exclude a role for DNA binding in the activation process. In that respect it is of interest to note that in our previous GAL4 fusion experiments with different Oct2 domains we failed to detect transcriptional activity for a fusion protein that contained the POU domain fused to the GAL4 DNA-binding domain (the identical protein used for the coprecipitation experiment in Fig. 4A), even in B cells (41). This result could be interpreted in two ways: (i) a functional complex requires DNA binding of the POU domain and/or (ii) the transactivation domains of the Oct proteins are essential for BOB.1/OBF.1-mediated transactivation.

(Figure legend, in part: The SV40-based expression vector was derived from pSG5 (Stratagene). The different reporter plasmids have been described previously (28,31,47). Briefly, 4× wt.TATA contains four copies of a synthetic octamer motif upstream of the HSV-thymidine kinase TATA box (T); the octamer motifs contain a single point mutation in 4× mut.TATA. CL(ED) contains the chicken lysozyme promoter and six copies of a 50-base pair fragment derived from the murine heavy chain intronic enhancer element comprising the E4 and Oct motifs; the octamer motif contains several point mutations in the CL(Ed) construct. The same hexameric wild type and mutant enhancer multimers were cloned upstream of the minimal HSV-TATA region in (ED)/(Ed).TATA.)

Significantly more work has been performed to elucidate another coactivation pathway in which octamer proteins are involved. The Oct1 protein is a critical component for the activation of several viral promoters after infection of cells with herpes simplex virus. There, a complex between the viral VP16 protein and Oct1 on specific promoter motifs containing a TAATGARAT consensus is responsible for efficient coactivation (52,53). The functional complex formed on the DNA is actually composed of Oct1, VP16, and an additional cellular protein, HCF (54-58). However, as Oct1 is the only DNA-binding component in this system, it was unclear how specific octamer motifs (the GARAT-containing motifs) were selected for VP16-mediated transactivation. A recent investigation of Oct1 binding to different octamer motifs by high resolution chemical footprinting suggested that the Oct1 POU domain adopts a specific conformation when binding to GARAT-containing motifs as compared with two other octamer motifs. It was suggested that this specific conformation would then be recognized by the viral coactivator (59).
However, the additional binding of the coactivator to Oct1 did not affect the POU domain contacts to DNA as measured by chemical footprinting. In agreement with these findings, we also failed to detect any alterations of Oct1 or Oct2 DNA contacts upon binding of the BOB.1/OBF.1 cofactor. This does not exclude the possibility, however, that, upon interaction with the cofactor, the overall conformation of the Oct1 and Oct2 proteins is changed. Could different conformations of Oct proteins on different octamer binding sites be responsible for the differences observed with respect to promoter and enhancer activation by BOB.1/OBF.1? This interpretation is highly unlikely for the following two reasons. First, supershifted EMSA complexes containing Oct1 or Oct2 plus the BOB.1/OBF.1 coactivator could be observed on the various binding sites used for the promoter and the enhancer reporter constructs (compare, for example, Figs. 3 and 5). Furthermore, we have shown that the very same elements that failed to function at a distance were efficiently activated by BOB.1/OBF.1 when placed in a promoter-proximal position (Fig. 6D). These results, together with the evidence discussed above, argue for an independent, distinct enhancer coactivator.

The observation that BOB.1/OBF.1 only contacts the POU domain of the octamer transcription factors further supports the dual function of the POU domain in these proteins. In addition to being responsible for specific DNA binding, its role in orchestrating protein-protein interactions has become more and more apparent over the last years. In addition to viral proteins such as the described VP16 herpes simplex virus coactivator or the adenovirus E1A protein (30), interaction with cellular proteins has also been shown to be mediated by the POU domain of Oct1 and Oct2. We could previously show that the POU domains of Oct1 and Oct2 specifically interact with TBP, the TATA-binding protein component of transcription factor IID (45). More recently, using the POU domain as probe in a protein-protein interaction screening protocol, we were able to isolate HMG2, an abundant non-histone nucleoprotein, as an interacting partner protein (39). In contrast to the interaction with BOB.1/OBF.1, which has been described to be specific for Oct1 and Oct2 (37,38), HMG2-POU domain interactions are more promiscuous, as the Oct6 POU domain was also shown to interact with HMG2 (39).
Concentration invariant odor coding

Humans can identify visual objects independently of view angle and lighting, words independently of volume and pitch, and smells independently of concentration. The computational principles underlying invariant object recognition remain mostly unknown. Here we propose that, in olfaction, a small and relatively stable set made of the earliest activated receptors forms a code for concentration invariant odor identity. One prediction of this "primacy coding" scheme is that decisions based on odor identity can be made solely using early odor-evoked neural activity. Using an optogenetic masking paradigm, we define the sensory integration time necessary for odor identification and demonstrate that animals can use information occurring <100 ms after inhalation onset to identify odors. Using multi-electrode array recordings of odor responses in the olfactory bulb, we find that concentration invariant units respond earliest and at latencies that are within this behaviorally-defined time window. We propose a computational model demonstrating how such a code can be read by neural circuits of the olfactory system.

The first few glomeruli activated during a sniff are those receiving input from the most sensitive ORs for a given odorant. We propose that solely the members of this small set of early glomeruli encode odor identity. While this primary set varies between odorants, we assume that it is mostly preserved across concentrations of the same odorant. As concentration is increased, responses of less sensitive glomeruli are recruited later than the primary set, thereby maintaining the members of the early set and preserving encoded odor identity information (Fig. 1B, C).

One of the central predictions of primacy coding is that animals use early "slices" of odor-evoked neural activity to define odor identity independently of the remainder of the pattern of evoked activity. To test this hypothesis, we developed an optogenetic masking paradigm in which we could create a temporally controlled masking stimulus during an odor discrimination task (Fig. 1D). Through delayed triggering of this optogenetic masking stimulus relative to inhalation onset, we could preserve early epochs of odor-evoked information while making the overall combinatorial code unreliable through the activation of a large, heterogeneous subset of OSNs. We reasoned that if odor identity can be defined using only a small subset of early-responding primary glomeruli, our mask should not impair odor identification as long as it is initiated after this identity-defining subset. Conversely, activation of the mask before this initial subset of glomeruli becomes active should impair odor discrimination.

To produce the mask stimulus, we delivered 2 light pulses (25 mW, 2 ms duration, 10 ms interpulse interval) to the olfactory epithelium in both nostrils of the transgenic mouse expressing Channelrhodopsin-2 (ChR2) in all OSNs 22. The light was delivered via optical fiber stubs implanted above the olfactory epithelium. To characterize the neural response to the mask stimulus, we measured light- and odor-evoked activity of mitral-tufted (MT) cells (n=119: 39 single-unit, 80 multi-unit) in the OB, which are the first recipients of information from OSNs (Fig. 2A). The masking stimulus generated a response occurring after a short delay following light onset (mean = 11.5 ms, mode = 8 ms) (Fig. 2C).
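To make the primacy idea concrete, here is a toy simulation of our own (not the computational model referenced in the abstract; every number in it is an arbitrary assumption). Receptors are assigned log-normally distributed sensitivities, response latency is taken to fall with sensitivity times concentration, and the identity code is the set of the p earliest responders; with modest trial-to-trial noise, this primacy set overlaps strongly across a 100-fold concentration change even though many more receptors are eventually recruited at the high concentration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RECEPTORS = 200
P = 10  # assumed size of the primacy set
# Hypothetical receptor sensitivities for one odorant (arbitrary units)
sensitivity = rng.lognormal(mean=0.0, sigma=1.0, size=N_RECEPTORS)

def primacy_set(concentration: float, noise_sd: float = 0.1) -> set:
    """Indices of the P earliest-responding receptors on one sniff.

    Toy latency model: latency ~ 1 / (effective sensitivity x concentration),
    with multiplicative trial-to-trial noise on sensitivity.
    """
    eff = sensitivity * rng.lognormal(0.0, noise_sd, N_RECEPTORS)
    latency = 1.0 / (eff * concentration)
    return set(np.argsort(latency)[:P])

low, high = primacy_set(1.0), primacy_set(100.0)
print(f"primacy-set overlap across a 100-fold change: {len(low & high)}/{P}")
```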
The overall mask excitatory response lasts approximately 50 ms, followed by a prolonged inhibitory response until approximately 200 ms (Fig. 2B, Extended Data Fig. 2E). (Figure 2D caption: Linear classifier cross-validation results for unmasked data for two odor presentations (limonene, pinene). Responses to unmasked odor presentations were time-binned (10 ms bins) and used to train support vector machine (SVM) classifiers. For each time point, SVMs were trained on the response vector inclusive of bins from t=0 to that time. Each classifier's performance is described by cross-validation on unmasked trials (blue) and testing on masked trials (green). Shaded areas indicate 95% confidence intervals (Clopper-Pearson method).) How effective is the mask in eliminating odorant responses? Out of 119 recorded units, 29 responded to one of two odorants (pinene, limonene). The mask, presented at 20 ms latency post inhalation onset, modified most of the odor responses, even though odor responses typically occur much later than the mask stimulus (Fig. 2A). To characterize the effect of the mask on the information available for discrimination, a support vector machine (SVM) was trained to classify MT cell responses to two odorants (limonene vs pinene) with and without the masking stimulus. Based only on the odor responses of these 29 units, classification performance on individual trials rose from chance level to 100% within the first 260 ms from inhalation onset. On masked trials, performance of the classifier stayed at chance level until ~180 ms post inhalation onset and never exceeded 68% (Fig. 2D). We may assume that the mask is efficient in eliminating odor information for a duration of at least 100 ms following its onset. To test the effect of the mask on behavior, mice (n=3) were trained in a head-fixed 2-alternative choice paradigm to discriminate between two odorants (eugenol and 2-hydroxyacetophenone) for a water reward (Fig. 3). To ensure that decisions were based on odor identity and not intensity, we scrambled the odorant concentrations by presenting five concentrations within a two-order-of-magnitude range. On probe trials within the session, the optogenetic mask was presented with the target odor. The probe trial structure was used to prevent animals from adopting a novel strategy to overcome the effect of the mask. The animals' performance was strongly affected by the masking stimulus when it was initiated between 0 and 50 ms after inhalation onset (Fig 3B, S1E). The presence of the mask lowered the mean performance at these early latencies to an almost chance level of 56%, compared with the unmasked performance of 92% (at odorant concentration 1 µM). As the onset latency of the mask is increased beyond 50 ms, performance in the odor identification task recovers and approximates the unmasked asymptotic performance at approximately 100 ms. Changes in odorant concentration should change the absolute timing of OSN recruitment and, thus, affect the timing of odor percept formation in our task. We tested this prediction by fitting a Weibull generalized linear model (see Methods) to masking data for two concentrations of odorant and comparing the thresholds of these fits. A 10-fold decrease in concentration delays the recovery of performance in masking trials by 13.3 ms (100.3 vs. 87.0 ms, bootstrapped 95% confidence intervals [94.6, 106.3] and [82.5, 92.1], one-sided bootstrap test p = 0.0007). As expected, odor percepts are formed later for lower concentration odorants, likely due to delayed recruitment of OSN activity 12.
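A minimal sketch of this threshold analysis, assuming the Weibull parameterization given in the Methods (performance rising from the guess rate γ toward the asymptotic performance λ) and illustrative fixed values and starting points:

```python
import numpy as np
from scipy.optimize import minimize

def weibull_perf(t, alpha, beta, gamma, lam):
    # psi(t) = gamma + (lam - gamma) * (1 - exp(-(t/alpha)^beta));
    # gamma = asymptote at short latencies, lam = asymptote at long latencies.
    return gamma + (lam - gamma) * (1.0 - np.exp(-(t / alpha) ** beta))

def fit_threshold(latencies, correct, gamma=0.5, lam=0.92):
    """ML fit of (alpha, beta) to per-trial outcomes; alpha is the threshold.
    gamma and lam are fixed from early-mask and unmasked data, respectively."""
    def nll(params):
        alpha, beta = np.abs(params)          # keep parameters positive
        p = np.clip(weibull_perf(latencies, alpha, beta, gamma, lam),
                    1e-9, 1.0 - 1e-9)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1.0 - p))
    return abs(minimize(nll, x0=[80.0, 2.0], method="Nelder-Mead").x[0])

def bootstrap_thresholds(latencies, correct, n_boot=10_000, **kw):
    """Refit thresholds on trials resampled with replacement."""
    rng = np.random.default_rng(1)
    n = len(latencies)
    out = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)             # resample trials
        out[b] = fit_threshold(latencies[i], correct[i], **kw)
    return out
```

The 2.5th/97.5th percentiles of the returned array give the confidence interval, and a one-sided test between two conditions can be read off the bootstrap distribution of their threshold difference.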
Very similar dependencies have been observed for other odor pairs and 5 other mice (Fig. S1D). Reaction time (RT) has historically been used to determine the timing of sensory processing and decision-making based on sensory stimuli. The dependences of performance on mask latency and on RT are qualitatively similar, except that the reaction-time curve is shifted by approximately 66 ms. This relationship is similar for both concentrations tested here (65.5 and 66.2 ms, Fig. 3C, Inset). The concentration-dependent shift in RT vs performance can be wholly explained by the shift observed in our masking paradigm, indicating that peripheral encoding of odor information limits the timing of olfactory decision making. The observed delay between the RT and the mask can be ascribed to a motor delay that is constant across concentrations. (Figure 3 caption:
A. Behavioral task schematic: Mice were trained to respond to two odors, A and B, at five different concentrations by licking the left or right water spout. A mask timed to the onset of the first inhalation after odor delivery was presented during a subset of trials for two of the concentrations (asterisks).
B. Discrimination performance versus mask latency for odors (2-hydroxyacetophenone and eugenol) at two concentrations. Mask stimulus presentation was initiated on the first inhalation of odorant, after the mask onset latency, t_mask, had elapsed. High concentration is annotated with black markers, low concentration in grey. Error bars indicate 95% confidence interval estimates. Weibull fits to the data are indicated with thick lines. Markers above the performance curves indicate Weibull threshold latency values for each fit, with 95% confidence interval estimates.
C. Mouse reaction time vs. performance for unmasked stimuli of two concentrations (as above). Dots indicate data binned into bins of 125 trials. Lines indicate Weibull fits. Inset: difference between mask and reaction-time threshold latencies.
D. Mouse reaction time vs. performance for unmasked trials compared with late masked trials (t >= 100 ms) for the same dataset. Points represent bins of 50 trials each.
E. Performance versus mask latency for discrimination of pure carvone enantiomers (blue) versus mixtures made with those odorants (green).)
Is it possible that the presence of mask trials makes an animal adopt a decision strategy that amplifies the significance of the initial time interval? To address this issue, we performed two analyses. First, we compared reaction times in unmasked trials and in trials where the mask was presented late (t ≥ 100 ms) and had minimal effect on performance (Fig. 3D and S1G). The Weibull fits to the dependences of performance on reaction time in both cases are nearly identical. Second, we see no effects of learning on mask trials between early and late behavioral sessions, comparing the performance on the first and last 20 masked trials across all sessions (Fisher exact test, p > 0.05; Fig. S1F). Together, these results provide evidence that randomly presented masking trials do not change the animals' decision strategy and that animals typically use early evoked information in odor discrimination tasks. Does the primacy coding strategy apply only to simple tasks? To make the task more difficult, we trained animals to discriminate between mixtures of carvone enantiomers (60:40 vs 40:60). Using mixtures to reduce the contrast between stimuli decreases discrimination performance relative to pure carvones (83.3% vs 90.8%) and delays the mask threshold time (118 ms vs 101 ms) (Fig. 3E).
As with the latency shift between concentrations of odorant, the shift due to decreased contrast is consistent with the measure of reaction time vs performance (Fig. S1E). To generate an equivalent level of performance in the more difficult mixture discrimination task, sensory information must be integrated by the subject over a longer period of time. However, the extension in integration time is relatively small on the scale of the length of a sniff, demonstrating that animals are still using early information to make this more difficult discrimination. Our behavioral results define the temporal window in which odor information is integrated. To determine if primacy codes exist in the OB at timescales consistent with our behavior, we recorded responses of 338 MT cells to 3 odorant concentrations spanning a range of two orders of magnitude. The primacy coding model predicts the existence of MT neurons that display excitatory responses to odor across a wide range of concentrations in a narrow temporal window at the beginning of the sniff. Among 119 units which exhibited excitatory responses to the odorant α-pinene, 15 responded to all three concentrations, a subset that we called "concentration-stable" (Fig. 4A). The remaining 104 units responded only to a subset of concentrations, and we called these "concentration-unstable" (Fig. 4B). As expected, the number of responsive units grew as concentration increased (Fig 4C). While the population response at low concentration was dominated by concentration-stable units, unstable units were 3.7 times more numerous than stable units at the highest concentration tested. According to our primacy coding model, the identity of the early, stable units that are responsive across concentrations represents odor identity. The stable population's mean response latency was shorter than the unstable population's at all concentrations (conc 10^-3: 86.8 ms vs 177.2 ms, p=0.0003; conc 10^-2: 82.7 ms vs 148.2 ms, p=0.0071; and conc 10^-1: 85.7 ms vs 149.0 ms, p=0.0249; one-sided KS test). While the latencies of the unstable and stable populations overlapped, a subset of stable units responded earlier than all unstable units at the highest concentration, consistent with the primacy coding model (Fig 4D). The latencies of these stable units scaled with concentration, as expected from the model and the behavioral results. The mean latency shift between concentrations for this early, concentration-stable subset was 15.5 ms (σ: 5.5 ms). This is comparable to the timing shifts in behavioral identification across concentrations (13.3 ms). The stable population's activity encoded odor-identity information and was not representative of non-specific odor responses; only one of these concentration-stable units was responsive to another odorant tested, limonene. Importantly, we find that latencies, and even latency relationships, are not preserved across concentrations in awake animals, although this has been observed in anesthetized animals (Fig. 4D) 20. Instead, both the absolute and relative latencies of stable units' responses shift dramatically as concentration increases, clustering into early and late subsets at the highest concentrations. We observed that the excitatory responses of the late set of stable cells were preceded by transient inhibition at high concentrations (Fig. 4A lower panel, 4E). This transient odor-evoked inhibition was observed widely in our recorded population (Fig. S3F, S3H), and we hypothesize that it is due to lateral and feedback inhibition in the OB.
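Before turning to these late stable units, here is a compact sketch of the stable/unstable split and the latency comparison just described. The NaN convention for non-responses and the direction of the one-sided scipy KS test are our assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def split_units(latencies):
    """latencies: (n_conc, n_units) response latencies in ms, with NaN where a
    unit had no significant excitatory response at that concentration."""
    responds = ~np.isnan(latencies)
    stable = responds.all(axis=0)               # responsive at every concentration
    unstable = responds.any(axis=0) & ~stable   # responsive at only some
    return stable, unstable

def compare_latencies(lat_at_conc, stable, unstable):
    """One-sided KS test that stable units respond earlier: if stable latencies
    are smaller, their empirical CDF lies above that of the unstable units,
    which corresponds to alternative='greater' for the first sample."""
    s = lat_at_conc[stable]
    u = lat_at_conc[unstable]
    return ks_2samp(s, u[~np.isnan(u)], alternative="greater")
```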
For late stable units, we predict that earlier members of the MT population drive inhibition arriving earlier than, or coincident with, feedforward excitation. This shunts the neuron's initial excitatory response, causing an apparent increase in response latency. As a result, we conclude that these units are not in the primary set, as earlier members of the population must precede them to generate the network activity responsible for this inhibition. So, while short latency is predictive of concentration invariance, the relationships of response latencies within our recorded population as a whole are not conserved across concentrations and are unlikely to encode odor identity. What elements of olfactory networks are sufficient to process primacy information? How does the mask affect odor recognition? Our behavioral data show an incomplete suppression of animal performance at small masking delays (Fig. 3B). In the more difficult discrimination task, the effect of the mask is slightly delayed compared to easier tasks (Fig. 3E). Are these features expected within the primacy coding mechanism? To address these questions, we developed a computational primacy decoding model based on known features of the OB and piriform cortical (PC) circuits. First, our model includes random feedforward connectivity between the OB and PC, which provides the basis for coincidence detection of MT cell activity arriving early in the sniff cycle. Second, our model includes random recurrent inhibitory circuits in the PC (blue in Fig. 5A). The role of this connectivity is to suppress late-arriving, "non-primary" input to PC neurons. This architecture is consistent with the observed global and broadly tuned inhibition in the PC 23 and has been used as a feature of network models for the processing of fine temporal information in the PC, including the proposed mechanism for coincidence detection 24. Finally, we assumed that PC neurons have a memory property such that, once activated, they maintain persistent activity in order to retain the odorant identity in the network until the initiation of action. Although our computational model does not explicitly specify the mechanism of persistence, it has been hypothesized to emerge from voltage-dependent synaptic channels, such as NMDA receptors, or from GABA-B receptors activating KIR channels 24. Overall, our network included 300 MT and 1000 PC neurons, randomly connected. Our model provides insights into the mechanism of optogenetic suppression of the animal's performance for early delivered masks. We simulated odor-evoked activity in MT cells as a random spatiotemporal pattern and the mask as synchronous activity independent of odor (Fig. 5B,C). When the mask follows the initial odor-evoked activity, it does not affect odorant-dependent activity patterns in PC (Fig. 5B), due to the broad inhibitory network, which suppresses excitatory inputs from the late mask or late odorant-dependent inputs. In contrast, when the mask precedes odor activation, the pattern of activity emerging in the PC is not odorant-dependent and is different from those of the odorants alone (Fig. 5C). This light-evoked pattern can be viewed as a new percept that is unrelated to the original odors. Our model qualitatively explains the incomplete suppression of discrimination for early masks (t<0.2) (Fig. 2B and Fig. 5D, black and gray lines). This occurs due to the presence of noise: on certain trials, odor-dependent OB responses are more robust than average, causing failure of the mask on these trials (Fig. S4).
In the case of a complex mixture, the behavioral response is delayed compared to that for an easy stimulus (Fig. 5D, blue vs. black lines), consistent with the experimental results (Fig. 2E). This occurs because the differences between the pools of primary glomeruli emerge slightly later in the sniff cycle, as both stimuli have similar sets of initially activated glomeruli. This result implies that the primacy readout mechanism does not take the sequence of glomerular activation into account, since such differences are expected to occur early. In a version of the model sensitive to the activation sequence, the delay in performance for complex tasks is strongly reduced (Fig. S5). Overall, our computational model confirms that the experimental data are consistent with the identity of primary MT cells, rather than their recruitment order, being important for the coding of odor identity. We demonstrate here that the earliest evoked neural activity can be used to make olfactory decisions and show that neural activity in this time window may encode odor identity across concentrations through primacy coding. Previous behavioral studies demonstrated that early epochs of odor-evoked activity are sufficient for the detection of odorants 25,26 and concentration discrimination 27 in rodents. However, because these experiments did not scramble odor concentrations during testing, it is impossible to rule out that animals were solely using intensity cues to guide their decisions. In earlier imaging studies it was proposed that the most sensitive glomeruli may encode individual odors 15,28; however, to our knowledge, our work provides the first behavioral evidence and computational support for this hypothesis. While primacy coding relies on the timing of activation of glomeruli, it has distinct features and predictions compared to the latency coding schemes previously proposed in olfaction and other sensory modalities 14,24,29-31. In fact, we find in our recordings that both the absolute and relative latencies within the population of odor-responsive MT units are not maintained across concentrations in awake animals. Rather, primacy coding suggests a unique role for the earliest activated glomeruli, which we find animals can use, and which appear to be stable across concentrations of odorant. Second, the primacy model emphasizes the role of the set of early responding neurons rather than the sequence of glomerular activation. Our experiments with mixtures, in combination with computational modeling, provide indirect evidence for this prediction. Further experiments with precise temporal control of early activated glomeruli may confirm this prediction. Third, the primacy model sets limits on the information capacity of the olfactory code. The upper-bound estimate for the number of odorant identities represented by the primacy code is ~(N choose p), the number of ways to choose p primary receptors out of N, where N is the number of different OR types and p is the size of the subset used to define identity. With p = 5-6 and the N = 350 OR genes found in the human genome, the coding scheme can represent ~10^10-10^12 different odors. Fourth, the primacy model emphasizes the role of individual glomeruli for odor coding. In mice, deletion of a single receptor, TAAR4, is sufficient to abolish aversive behavior to a specific odor 32. Studies of human genetic variability lend evidence that only highly sensitive receptors predominate in defining odor quality.
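Before continuing with the evidence from human receptor variability, the capacity bound above is quick to verify numerically; math.comb is standard-library Python, and N = 350, p = 5-6 are the values quoted in the text:

```python
from math import comb

N = 350            # OR gene count quoted above for the human genome
for p in (5, 6):   # size of the primary (identity-defining) receptor set
    print(f"p={p}: C({N},{p}) = {comb(N, p):.2e}")
# p=5: ~4.3e10; p=6: ~2.4e12; matching the ~1e10-1e12 range in the text
```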
Subjects with different alleles of a single OR report differences in perceptual qualities for strong ligands of the OR, while they are likely to report similar qualities for weaker ligands 33. Fifth, primacy coding makes specific predictions about mixtures of odorants, because the earliest activity should dominate the perceptual qualities of the mixture. Perceptual masking of slowly perceived odors by fast odors (temporal suppression) has been observed in human psychophysics, but warrants more attention 34. If both odorants evoke early activity within the primary set, we predict that this combination will cooperate to synthesize a new odor percept. Finally, primacy coding does not make explicit provision for parallel processing of components in mixtures, a task where humans demonstrate poor performance 35. Primacy coding suggests a relatively simple solution to the complex computational problem of robust concentration-invariant representation of odorant identity in olfaction. It provides inherently rapid odor identification and a vast coding capacity, and it can be implemented by the architecture of the olfactory system. Materials and Methods. Mice. Behavioral concentration series data were collected in 4 OMP-ChR2-YFP heterozygous mice (2 female, 2 male). Mixture data were collected in a separate cohort of 5 OMP-ChR2-YFP heterozygous mice (2 female, 3 male). Electrophysiological data to characterize masking were collected from a separate cohort of mice (n=2). Five male C57BL/6 mice (Jackson Labs) were used for concentration series electrophysiology. Subjects were 8-12 weeks old at implantation and were maintained on a 12 hr light-dark cycle in isolated cages after implantation. All procedures were approved by the IACUC of NYULMC in compliance with the NIH guidelines for the care and use of laboratory animals. Sniff recording. Sniffing was monitored via intra-nasal pressure. An 8 mm long, 21-gauge cannula was implanted into the anterior dorsal recess. Total insertion depth from the surface of the nasal bone was 1.5 mm. Pressure change relative to atmospheric pressure was measured using a pressure sensor (24PCEFJ6G, Honeywell) and amplified (AD620, Analog Devices). A Schmitt (dual-threshold) trigger was used to define inhalation and exhalation onsets in real time on an Arduino microcontroller. For concentration series electrophysiology, respiration was measured using an externally mounted pressure sensor placed in front of the subjects' nares. Surgery. Mice were anesthetized using isoflurane gas during surgery. The head bar, pressure cannula, and optical fibers were implanted in a single surgery. The nasal cannula was implanted in a small hole in the anterior nasal bone and affixed with glue and dental cement. The optical fibers were implanted bilaterally in two holes drilled in the posterior nasal bones and affixed using the same technique. Two small screws (size #000-120 x 0.0625", Small Parts, Inc.), used to stabilize the implant and provide an electrical connection for lick detection, were implanted in the skull at a location approximately corresponding to S1 cortex. Animals were allowed to recover for at least 3 days before water deprivation. Stimulus delivery and behavioral control. Behavioral control and data acquisition were computer-controlled using the custom Voyeur software 36. For odor stimulus delivery, we used an 8-odor olfactometer (Extended Data Fig. 1B). Odorants were diluted in mineral oil and stored in amber volatile organic analysis vials.
Olfactometry manifolds, valves, and tubing contacting odorized air consisted entirely of PTFE to minimize cross-contamination of odorants. During odor presentation, nitrogen carrier gas is diverted through a single vial and enters the main air stream, resulting in an adjustable dilution in a range between 10x and 100x. Carrier and main airflow rates were controlled using two mass flow controllers (Bronkhorst). During periods between stimuli, animals were presented with 1 L/min background air, and olfactometer air was directed to exhaust using a four-way PTFE final valve (NResearch). Odorant concentrations were controlled using a combination of gas- and liquid-phase dilution. Through manipulation of the ratio of odorized and clean air flow rates, we were able to achieve a dilution range of 10-1% odorized air (a factor of 10). In experiments where more than 10-fold dilution was required, we used liquid dilution to increase our range. Because liquid dilution ratios do not accurately predict headspace (gas-phase) concentrations, liquid dilutions were assayed using a photoionization detector (Aurora Scientific) to determine relative concentrations of odorant in the gas phase. For concentration-series electrophysiology, all dilutions were made in the air phase by diluting odorized air with unodorized air. Mixtures of carvone enantiomers were made in the liquid phase. Enantiomers have identical vapor pressures and solvent interactions, thereby allowing accurate prediction of component ratios in the gas phase from liquid mixture ratios. The light masking stimulus was provided by two 473 nm, 105 µm ID fiber-coupled diode lasers (Blue Sky Research, PN: FTEC2471) terminated in ceramic ferrules. During behavioral sessions, the laser source ferrule was mated to a ferrule permanently implanted on the mouse. Implanted ferrules (MM-FER2007-304-4050, Precision Fiber Products, Milpitas, CA) were fabricated with 400 µm ID, 0.39 NA fiber (FT400UMT, Thorlabs, Newton, NJ) and etched using hydrofluoric acid to provide diffuse light within the nasal cavity. Laser power was calibrated at the ferrule tip using a light power meter (Thorlabs) prior to behavioral sessions. The water reward stimulus was delivered through two 21-gauge stainless steel lick tubes (Small Parts) and controlled using pinch valves (BioChem Fluidics). Licks were detected by measuring the change in resistance at the lick port when animals made contact with the lick tube, using custom lick detectors (Janelia, HHMI). Behavioral procedure and training. Animals were water deprived for at least 5 days prior to the start of behavioral training. Animals were housed on a 12:12 light-dark cycle and were tested between ZT 15 and ZT 24. To acclimatize animals to head fixation and the behavioral apparatus, animals were shaped by giving water through a single lick tube until they received their entire 1 ml water ration during a session. In subsequent sessions, a second lick tube was introduced. To encourage exploratory behavior in subsequent training, animals were rewarded for alternating licks between the left and right lick tubes. Two-lick shaping sessions persisted until animals successfully received their entire 1 ml water ration in a session. Odor discrimination was trained in subsequent sessions with only slight modifications to the paradigm used in testing sessions. A trial began after a variable inter-trial interval (12-15 seconds), with the condition that 1 second had elapsed since the last recorded licking activity.
The odor stimulus was delivered for 500 ms and was initiated at the start of exhalation so that the odor stimulus was stable prior to inhalation (Extended Data Fig. 1C-D). For each trial, the odor concentration was drawn randomly. After stimulus onset, a "grace" period was enforced during which licks were not scored, to reduce impulsive licking prior to odor sampling. To eliminate stereotypic response bias, trials were chosen using a bias correction algorithm during training and testing 37. For initial training, this grace period extended until 500 ms after the first inhalation of odorant. After criterion performance was met, this grace period was shortened in the following session to 300 ms, and then to 150 ms for testing. Randomization was performed using the Mersenne Twister RNG (NumPy). Testing sessions were conducted only after animals reached criterion performance (>80%) on the odor discrimination task. Mask trials were randomly interleaved in sessions for only 2 of the 5 concentrations presented (Fig. 2A). Trial ordering within a session was computer controlled, and the investigator was blind to the conditions of each trial. For masking data at multiple concentrations, masking trials were presented in every other session, with both concentrations masked in the same sessions, at a rate of 17% of total trials. For masking data for carvone mixtures, masking trials were interleaved throughout each session at a rate of 8.3% of total trials. Data were excluded from animals that did not complete training and testing due to loss of sniff signal, illness, or loss of implant. Two animals were excluded from the mixture experiment due to faulty fiber implantation. Data from the experiments represented in Fig 3B and S2D were collected from the same animals. Behavioral analysis. All behavioral analysis was conducted using custom scripts on the Anaconda Python distribution (NumPy 1.9.2, SciPy 0.15.1) 38. Binomial proportion confidence intervals were calculated using the Clopper-Pearson "exact" method. Trials from sessions in which animals performed at a level of <80% correct responses were excluded from analysis. Data were fit with the Weibull psychometric function, ψ(t) = γ + (λ − γ)(1 − exp(−(t/α)^β)), using the maximum likelihood estimation method. For masking data, γ (the guess rate: the asymptotic performance at short latencies) was fixed using the average from masking at time points < 60 ms, and λ (the lapse parameter: the asymptotic performance at long latencies) was fixed based on the data obtained in unmasked trials within these sessions. For reaction time analysis, all parameters were fit. Confidence intervals for fit parameters (thresholds) were estimated using the 2.5th and 97.5th percentiles of distributions created by fitting each of 10,000 bootstrap simulations for each experimental condition. To bootstrap, trials were randomly drawn with replacement using the Mersenne Twister RNG (NumPy). Reaction time data were taken only from trials in which no mask stimuli were presented. These sessions were interleaved with mask sessions. Reaction time performance data and timing were taken using the first responses following odor stimulus onset, irrespective of the grace period. These data were fit using the same techniques as above, but with all parameters free. Trials with very long reaction times (>= 300 ms) were truncated from this analysis, as performance was not monotonic after approximately 300 ms. Masking electrophysiology. Electrophysiology was conducted in awake animals using acute recording techniques.
Six-shank, 64-channel silicon probes (Buzsaki 64sp, NeuroNexus) were used to record neural activity. Neural data were acquired using the "Whisper" acquisition system (Janelia, HHMI) at 20833 Hz using SpikeGL software (Janelia, HHMI). Action potentials were detected and clustered using SpikeDetekt2 and KlustaKwik, with manual clustering performed using KlustaViewa 39. All basic analysis was done using the Anaconda Python distribution. Mask response latency was determined by comparing the baseline (no mask) activity distribution to the mask response. To construct the baseline sample distribution, PSTHs for the 7 sniffs prior to every mask trial were sampled. From these baseline samples, 100,000 samples of the same size as the number of mask trials were drawn with replacement to create simulated PSTHs from the same number of trials as the masked PSTH. Finally, the PSTH from masked trials was compared with the baseline PSTH distribution. Latency to mask response was defined as the first bin where the firing rate was >3-fold greater than the baseline PSTH and the bin was at the 0.0001th percentile of the bootstrapped baseline distribution. Linear SVM classifier. Population vectors were assembled from the spike trains of recorded units (n=29) that responded to one of the odors presented. For each cell and for each trial, a 35-dimensional vector was created by binning action potential events into 10 ms bins from 0 ms to 350 ms after the first inhalation onset of the odorant. Activity vectors from cells were concatenated and standardized to make a population vector for training and testing. Linear classifiers were created using the Scikit-learn v0.16.1 LinearSVC class 40. To assess the performance of the classifiers on unmasked trials, a leave-one-out cross-validation strategy was used. To classify masked trials, classifiers were built using all available unmasked trials (N=19), and the classifier was scored on its performance at classifying all masked trials. Concentration series electrophysiology. NeuroNexus A64 Poly5 2x32 probes were used to record acutely from awake animals. Units were detected using Spyking Circus software (v0.3.0) 41. Significant odor responses were defined by comparing inhalation-aligned blank odor response distributions with responses to odor presentations (Fig. S3A-D). To establish these distributions, blank and odor responses were bootstrapped across trials, summed across trials, and smoothed with a 3-sigma Gaussian kernel of width 30 ms. For each time point, the bootstrapped firing rates were fitted with a Gaussian, and the Cohen's D (discriminability) score was calculated for each time bin. Response latencies were defined as the first time bin with a D-score above 5, where the overlap is < 0.01 (Fig. S3E). Computational model. Our model was based on random and sparse connectivity between the MT cells and the PC cells, as well as within the cortex, as detailed below. Our simulation included 300 MT cells and 1000 cells in the PC. Neurons in PC formed random sparse inhibitory recurrent connections to other cells in PC with 50% probability. Non-zero inhibitory recurrent connection weights within PC were. The state of each PC neuron was defined by the input that this neuron receives, u_i, which satisfied the equation τ du_i/dt = −u_i + I_i(t), where I_i(t) is the total input to the neuron (feedforward excitation from the MT cells minus recurrent inhibition from other PC cells). Here τ = 0.05 is the time constant, measured as a fraction of the sniffing cycle. The activation state of each PC neuron had a hysteretic dependence on its inputs, f_i = F±(u_i).
The activation function F± was single-valued for values of the input variable u satisfying u > u+ = 0.2 or u < u− = −150; for these values, the activation function F± was equal to 1 and 0, respectively. Within the bistable range, i.e., for u− ≤ u ≤ u+, F± was bistable and remained constant, depending on prior history. Therefore, if a neuron had been activated, the activation function within the bistable range remained equal to 1, whereas for an inactivated neuron the activation function was 0. Activation occurred when inputs exceeded u+, and inactivation happened when inputs fell below u−. Each simulation was carried out over 700 time steps of Δt = 0.002, using the Runge-Kutta method. The simulation started at t = -0.2 and lasted until t = 1.2; t = 0 corresponds to the onset of inhalation, and t = 1 is the end of the early stage of the sniff cycle. The masking stimulus was presented between t = 0 and t = 1. This time interval was expected to reproduce the early part of the sniff cycle, during which odorant identity is established. To model MT cell responses to odorants, we generated random spatiotemporal patterns of MT cell activity. For the case of pure odorants (black and gray in Fig. 5D), we used a different random pattern for each of the two odorants. The response of each responsive MT cell was represented by a transient that lasted 0.5 time units (in fractions of the early part of the sniff cycle). The transient response consisted of an increase of the output of a mitral cell from 0 to 1, followed by a reset back to 0. For the low/high concentration conditions, the earliest transients began at t = 0.4/0.25, respectively, and the following MTC transients were distributed at random times with a time step of 0.02 (one MTC was recruited every 0.02). This feature was intended to replicate the tendency of MT cells to respond later in the sniff cycle to lower odorant concentrations 42. For the mixture case (Fig. 5D), we generated two random recruitment orders, one for each of the pure binary compounds within the mixture, and combined them into a single sequence by offsetting one of them by 2 positions, depending on which component concentration was larger (60% + 40% vs. 40% + 60% composition). Following the earliest transient onset at t = 0.25, as in the high concentration case, one MTC was recruited every time interval of 0.02, with the recruitment order as described above. In each trial, MTC transients had a finite probability of being emitted, to mimic the experimentally observed transient event reliability 43. The probability was p = 0.8/0.9 for the low/high concentration conditions. To simulate the ChR2 stimulation, we added a pulse that began at the time indicated in Figure 4 and lasted 0.1. The amplitude of the pulse was 0.18. The pulse was present in a randomly chosen 75% of MTCs. We added normally distributed white noise with a standard deviation of 0.1 to the activity of MTCs. This was done to reproduce observed features of psychophysical performance. We tested our simulations over a range of parameters and verified that the qualitative conclusions are not sensitive to the exact set of parameters chosen. We ran 500 trials for each of the two odorants at each concentration. After each set of 10 trials, we reset the randomly selected weights and parameters in the model to mimic trials performed by different animals.
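To make these dynamics concrete, here is a minimal, single-trial sketch of the network in NumPy. It uses forward Euler rather than Runge-Kutta for brevity; the leaky-integrator input equation, the feedforward connection density, the inhibitory weight scale, and the number of responsive MT cells are our assumptions (the feedforward weight 0.15 is the value cited for the order-independent model below):

```python
import numpy as np

rng = np.random.default_rng(0)
N_MT, N_PC = 300, 1000
tau, dt = 0.05, 0.002
u_plus, u_minus = 0.2, -150.0

# Connectivity: sparse random MT->PC excitation and 50%-dense recurrent
# inhibition within PC (magnitudes are illustrative assumptions).
W_ff = (rng.random((N_PC, N_MT)) < 0.1).astype(float) * 0.15
W_inh = (rng.random((N_PC, N_PC)) < 0.5).astype(float) * (2.0 / N_PC)
np.fill_diagonal(W_inh, 0.0)

# Odor: responsive MT cells are recruited one per 0.02 time units starting at
# t = 0.25 (high-concentration case); each transient lasts 0.5 time units.
n_responsive = 40                                    # illustrative count
onsets = np.full(N_MT, np.inf)
onsets[rng.choice(N_MT, n_responsive, replace=False)] = (
    0.25 + 0.02 * np.arange(n_responsive))

u = np.zeros(N_PC)                                   # PC input state
f = np.zeros(N_PC)                                   # hysteretic activation F(u)
for step in range(int(round(1.4 / dt))):             # t runs from -0.2 to 1.2
    t = -0.2 + step * dt
    x = ((t >= onsets) & (t < onsets + 0.5)).astype(float)
    x += rng.normal(0.0, 0.1, N_MT)                  # white noise on MT activity
    u += (dt / tau) * (-u + W_ff @ x - W_inh @ f)    # Euler step on tau*du/dt
    f = np.where(u > u_plus, 1.0,                    # activate above u_plus,
                 np.where(u < u_minus, 0.0, f))      # deactivate below u_minus

pattern = f   # PC pattern at t = 1.2, compared to noise-free templates below
```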
The perceived identity of the stimulus in each trial was inferred from the activity of the PC cells at the end of the simulation (t = 1.2), using the template pattern of PC response that maximally overlapped the evoked response. To obtain the templates, we ran the simulation once without noise for every condition. Code availability. Code used for data acquisition is available at https://github.com/olfa-lab. Code used for analysis and modeling will be made available upon request. End Notes.
(Supplementary Figure 3 caption, panels E-H:) E. D-score over time. Positive values specify excitation (the odor response distribution is greater than the blank distribution) and negative values specify inhibition relative to the unit's blank response. F. Mean D-score odor response across all units for each concentration tested. Note the early inhibition at the highest concentration. G. Individual unit D-score odor responses for all units. Each axis represents responses at the concentration specified above the axis. The threshold used to determine response latencies is overlaid as a dashed line. The threshold is constant across concentrations. H. D-scored responses of the recorded population for one concentration of odorant. Units are sorted by latency to inhibitory response. Latency is determined by the first time point at which each unit's odor response D-score crossed a threshold of -5.
Supplementary Figure 4. In our computational model, the incomplete suppression of animals' behavior to 50% by the masking stimulus at short masking times is due to sources of variability and noise. Solid red line: same as in Figure 5D. Dotted line: the same with no neural noise and for a single network weight configuration (emulating the conditions of a single animal). The latter modification was made to eliminate the variability due to varying performance in different animals. The behavior as affected by the mask jumps between pure 50% and 100%, supporting the interpretation that, in the model, the deviations from pure 50% performance are due to internal sources of noise and individual variability.
(Supplementary Figure 5 caption:) Order-independent model. The patterns of PC activation are the same for different MT activation orders (letters at the tops of the panels), which differ in the early stages of the sequence (differences underlined). These two glomerular activation sequences were chosen as examples to demonstrate the recruitment-order independence of this instance of the model. C. Glomerular activation sequences in the modelling of the masking response. To test the performance of the model in response to the mask, we presented two different random sequences for pure odorants (top). For the case of mixtures, the recruitment order was obtained by mixing two pure odor sequences with a temporal offset equal to two positions. The sequence corresponding to the lower concentration component (40%) was delayed with respect to the higher concentration sequence (60%). Arrows show differences between the pools of activated glomeruli. They occur later in the sequence (4th and 6th positions), delaying the effect of the mask in the case of similar mixtures. D. The order-independent model displays a small delay in the masking response for the case of mixtures (blue vs. black dotted lines). E-F. The order-dependent model displays differing patterns of PC activation when the order of MT cell activation is different. To implement the sensitivity of the model to the order of the activation sequence, we increased the strength of the OB->PC weights from 0.15 to 0.3. This makes the PC start detecting coincidences earlier in the sniff cycle. G.
The activation sequences in this model were taken to be the same as in the order-independent model. Yet the differences in the activation sequence emerge earlier in the sniff cycle (arrows), leading to earlier changes in psychophysical performance. H. The order-dependent model shows a much smaller difference in the timing of behavioral responses to the mask. The comparison of the order-independent and order-dependent models suggests that the slight delay observed in the performance for similar mixtures of enantiomers (Fig. 3H) may be due to a form of order-independence in the decoding of the primacy sequence.
9,140.6
2017-04-06T00:00:00.000
[ "Biology" ]
Max Weber's Methodology: The Method of Falsification Applied to Text Interpretations
Text interpretations usually lead to ambiguous results. This is especially the case for the interpretation of Max Weber's methodology. I discuss Thomas Burger's interpretation that Weber applied Rickert's methodology and that he never developed his own standpoint regarding the methodological problems. In contrast to this view, I propose an alternative interpretation based on the Methodenstreit in economics and the philosophies of Kant and Rickert. In my opinion, Weber offered a very unique solution to an old philosophical problem, which resulted in a complete break with the Platonic and Aristotelian tradition. His solution is what I call the postulate of internal consistency (a logically consistent application of an arbitrary scheme of interpretation). I will use Popper's method of deductive falsification to decide whether Burger's or my interpretation produces fewer contradictions.
Introduction
Many sociologists interpret Max Weber's methodology in a specific direction, with the aim of using his methodology for their own problems. But they usually ignore aspects which seem unimportant to them. This is in principle not problematic. But a selective interpretation becomes a problem if, with Weber's authority, one's own methodological approach is claimed to be the only acceptable one in sociology (Scaff, 1984: p. 191). In such a case it is no longer justified to overlook some aspects of Weber's methodology or the problem which Weber wanted to solve (cf. Burger, 1976: p. ix). But is it possible to achieve an interpretation which unquestionably gives an accurate description of his methodology? In the past, sociologists clearly failed to provide such an interpretation. Is it therefore impossible? Are interpretations necessarily arbitrary and relative? I do not believe so. I propose here the method of falsification as an alternative method for text interpretations, in the hope that we will get closer to the truth.
Thomas Burger's Interpretation of Max Weber's Methodology
Thomas Burger's interpretation of Weber's methodology is based on the hypothesis that Weber never developed his own methodology but just applied the methodology of Heinrich Rickert. Burger states "that this agreement is almost total within the era under consideration, i.e., the theory of concept formation and its epistemological foundation" and that Weber "never changed his views in any relevant way" (Burger, 1976: p. 7). Burger (1976: p. 4) cites Tenbruck's hypothesis that Weber was not really interested in methodology, but was temporarily pushed into this field by the crisis in economics. The sources of Weber's methodological writings are not philosophical problems but the auxiliary methodological considerations of a specialized scientist. Methodological reflection is a means but not an end (Tenbruck, 1959: pp. 582-583). Burger (1976: p. 7) agrees with Tenbruck that Weber was not a methodologist, but he rejects Tenbruck's conclusion that Weber "was not in possession of a coherent and systematic methodological theory." For Burger, Weber had such a systematic methodology, but it was not his own. It was the methodology of Heinrich Rickert. Burger (1976: p.
xv) defines the problem which Max Weber wanted to solve, and which he took from Heinrich Rickert, as follows: B1: "What is it that makes the writing of history a justifiable undertaking?" Burger reformulates this question in a more "general" way. B2: "Of all the possible things that one could want to know, which ones are a legitimate object of investigation?" This reformulation is actually not more general, but much more concrete, because it rejects one of two possible applications of a theory. The application of a theory can have two functions. A theory can be applied to test its validity, or a theory can be applied as a heuristic scheme of interpretation for the understanding of a singular case, without the aim of verifying or falsifying the theory. Burger's formulation therefore implies that Weber's methodology is not about the question "how we can check and improve and expand the [general] knowledge which we have" (cf. also Burger, 1976: pp. 76-77). Burger's strategy for supporting his hypotheses begins with an examination of Heinrich Rickert's methodology. I will give a short summary of Burger's description of Rickert's methodology, but I will also add information about Rickert's relation to Immanuel Kant. In my opinion, Kant is important for the understanding of Rickert and Weber. 3) But Rickert added a value-related perspective to Kant's genetic a priori. Kant showed that human beings have the inborn capacity to choose elements of reality and to bring them into a logical (e.g. causal) form, but he did not explain which elements are chosen. Rickert explained this choice of elements by their relation to values. The values determine the perspective and therefore which parts of the physical world are perceived as "real" (Rickert, 1921a: p. 332; cf. Burger, 1976: pp. 14-16). 4) Rickert rejected the possibility left open in Kant's work that a connection exists between the mind's construction and the real thing (Ding an sich). It is questionable whether, for Kant, actors can force causality on nature because the real things follow causal laws, or whether this can be done independently of the existing or non-existing causality in nature. For Rickert, on the other hand, neither the content of an object nor the causal relation between objects can be perceived; their images are only products of the mind (Rickert, 1921a: pp. 126, 334, 347; Burger, 1976: pp. 15-16, 20). 5) Rickert concluded that the mind therefore produces the objective reality by accepting values or rejecting un-values (Rickert, 1921a: pp. 171, 188, 199, 348; cf. Burger, 1976: p. 15). 6) Rickert finally applied the Kantian problem of "How is science possible?" to the historical sciences (Burger, 1976: p. 13). Do the historical sciences belong to science or to metaphysics? This is the problem, formulated by Burger as questions B1 and B2, which Rickert and Weber tried to solve. However, out of Rickert's assumptions arose a new problem, which had to be solved first, before the historical sciences could be declared a science. If every interpretation is based on subjective value choices, and therefore on a subjective perspective, then the objectivity of knowledge in the historical sciences is no longer guaranteed (Burger, 1976: pp. 17-18). Rickert's solution to this problem was that an ultimate value, accepted by everyone, is used as a normative criterion to produce intersubjectively valid historical interpretations (Rickert, 1921a: pp. 175-176, 346; Burger, 1976: pp.
16, 18, 41-42, 49-51). He furthermore stated that objective knowledge can be produced in the form of concept definitions. Concepts are the result of an inductive abstraction. As a next step, Rickert accepted Windelband's definitions of two different types of sciences: nomological and idiographic sciences. Nomological sciences use abstractions to build general concepts in the form of laws, like physics, whereas idiographic sciences use abstractions to build individual concepts, like the historical sciences (Burger, 1976: pp. 22-23). An individual concept can be described as the sum of several general concepts in their unique combination. A singular phenomenon is therefore the result of a specific constellation of several causal effects (Burger, 1976: pp. 48-49). In this sense, general laws are not important for the historical sciences. General concepts are a means but not an aim. They are important for constructing individual concepts (Burger, 1976: pp. 33-34). But laws of history cannot be constructed, because individual (historical) concepts cannot refer to something general (Burger, 1976: p. 47). After describing Rickert's methodology, Burger argues that Weber agreed in all relevant aspects with Rickert: he accepted 1) the differences between generalizing and individualizing sciences, 2) the idea of individual concepts as the sum of a unique formation of general concepts, 3) that concept definitions are objective representations of reality, 4) an ultimate value as the basis for all researchers' perspectives, and 5) finally, Rickert's problem of how to choose the essential elements for the description of a singular phenomenon. Related to the first point, Burger remarks that Weber rejected the thesis that nomological knowledge is the only legitimate form of scientific knowledge. And he shared Rickert's view that the individualizing method is as scientific as the formulation of general laws. Both methods of abstraction fulfill their specific tasks and they cannot replace each other (Weber, 1985: p. 187; Burger, 1976: pp. 60, 74). These methods of abstraction are a "process of selecting the essential phenomena from concrete reality" (Burger, 1976: pp. 73-74; cf. Weber, 1985: pp. 4, 13, 13 n.1), and "to abstract" means "to take away" or to ignore unimportant parts of reality (Burger, 1976: p. 74). The method of generalizing focuses on one element, or one relation between two elements, in all cases, and all other elements are ignored (Weber, 1985: p. 5; Burger, 1976: p. 69). The method of individualizing deals with all elements and their relations in one case in connection to the previous case, and all other cases are ignored. Burger furthermore stated that Weber agreed with Rickert that applying one of these two methods would solve the problem of the infinite multiplicity of empirical reality (Burger, 1976: p. 70). I find this statement astonishing, because Hume showed that a method of generalizing in the sense of inductive verification fails, whereas Kant showed that the method of inductive intuition also does not overcome the infinite multiplicity of reality for general statements. Both methods fail because it is impossible to know all cases and to infer from the known to the unknown cases. And in the same way, the method of individualizing fails, as Rickert and Weber emphasized, because it is impossible to perceive all elements in one case. This is the content of their statement of the quantitative and qualitative infiniteness of reality (Rickert, 1921a: p. 121; Rickert, 1921b: p. 29; Rickert, 1921c: p.
33; Weber, 1985: p. 4). Therefore the acceptance of two different methods of abstraction is rather the definition of the problem than its solution. The problem is to guarantee the truth of our statements although they are based on incomplete experience, which led Rickert and Weber to the problem of selecting elements of the infinite reality. However, this is a problem for both types of sciences: for the generalizing as well as for the individualizing sciences. Referring to the second point, Burger states that for Weber, as for Rickert, an individual historical phenomenon in the sense of a unique constellation is causally explained by the previous individual historical phenomenon, and so on (Weber, 1985: p. 172; Burger, 1976: p. 86). In detail, an individual phenomenon can be described as the result of a specific combination of general causal effects (Weber, 1985: pp. 201-202; Burger, 1976: p. 132), random effects, and individual effects in the sense of human actors' motives in the preceding situation (cf. Burger, 1976: p. 86). General concepts are, in the historical sciences, a means but not an end. They are important for the understanding of phenomena "which are significant from concrete individual viewpoints" (Weber, 1985: pp. 208-209; Burger, 1976: p. 69, cf. p. 167). However, historical phenomena cannot be deduced completely from general concepts or causal laws (Weber, 1985: pp. 13, 18, 28, 174; Burger, 1976: p. 76), simply because the unique constellation of the effects of these general laws which constitutes the phenomenon cannot itself be explained by an all-determining holistic law. Concerning the third point, scientific knowledge was for Weber an "ordering in thought of empirical reality" (denkende Ordnung der empirischen Wirklichkeit) or an "intellectual mastery of empirical data" (Weber, 1985: pp. 150, 208; Burger, 1976: p. 61). The emphasis on "empirical" suggests that Weber did not agree with Rickert's rejection of Kant's Ding an sich, and that for him the mental constructions of reality were dependent on the real facts. However, Burger argues against this interpretation, because he tries to avoid any connection between Max Weber and positivism. And this can be done most successfully if Weber was not interested in reality at all but only in the mental representation of reality in the mind (Burger, 1976: p. 61). Burger therefore describes "concrete facts" as "mental images", which are the material of concept formation for Rickert and Weber (Burger, 1976: p. 62). And he explicitly states that the material of concept formation is a "fact" and not an "experienced sensation." 1 However, Burger admits that sensations are not completely unimportant, because the mind transforms them, through a categorical judgment, into facts (Burger, 1976: p. 63; Weber, 1985: pp. 72-73, 109-110). Nevertheless, Burger (1976: p. 68) maintains that concepts are the aim of scientific knowledge and not a means to acquire information about reality 2. The objectivity of the concepts is guaranteed, in Burger's interpretation, by the application of categories, for example the category of "causality" (Burger, 1976: pp. 64-65, 70; cf. Weber, 1985: pp. 89, 212-213). A concept is an objective representation of the empirical reality if it was formed correctly (Burger, 1976: p. 69). It is scientific knowledge because its content is facts and its form is conceptual (whereas the content of factual knowledge is ideas of sensations, and its form is categorical) (Burger, 1976: p. 70). Then Burger (1976: pp.
67-68) suggests that Weber applied an essentialist method for selecting the facts which are worth knowing, although he states clearly in a footnote that "it hardly needs emphasis that 'essential', as Weber uses it, has nothing to do with the 'nature' of things" (Burger, 1976: p. 191 n. 12). Of course, Burger does not say directly that Weber is an essentialist, but he translates "wesentlich" as "essential", although in the context of Weber's work "important" would be the far better translation (cf. Weber, 1985, pp. 5, 86, 171, cited by Burger, 1976, p. 67). This gives the impression that Weber, like Carl Menger, had an Aristotelian philosophy in mind for his concept formation, a position which he clearly rejected. It is my impression that Burger deliberately chose this translation, because he seems to be well aware of the impossibility of verifying a general concept inductively, since the future cannot be known (Weber, 1985, pp. 4, 14, 75 n. 2, 92 n. 1, 237; Burger, 1976: pp. 32, 65). But he states that Weber applied a method of induction (Burger, 1976: p. 205). Therefore the gap between the known cases and all cases, including the unknown cases, can only be bridged inductively by the method of intuition. But this would mean that Weber was a methodological essentialist 3. All efforts to acquire intellectual knowledge of the infinite reality through the finite human mind, therefore, rest on the tacit assumption that in each case only a finite part of this reality can be scientifically grasped, and that it alone is "essential" in the sense of "worth knowing" (Burger, 1976: p. 73). The problem now arises for Burger of explaining why Weber called the propositions of classical economic theory "ideal types" and not genuine general concepts or "laws of nature" (Weber, 1985, pp. 140, 179, 189-190), if they were for him objectively valid. For Burger, Weber regarded the economic propositions, when applied to all the concrete empirical phenomena to which they are taken to refer, as only approximately correct. "Ideal types" were for Weber heuristic schemes of interpretation and not hypotheses about the causal connections in reality (Burger, 1976: pp. 120-121), and they are therefore not rejected if they fail to describe a concrete instance accurately (Burger, 1976: p. 204 n. 31). However, Burger's solution is that the objectively valid "ideal types" will describe a concrete instance correctly "if the world would function according to certain specified principles" (Burger, 1976: p. 133). In other words, the concepts as constructions of the mind are objectively valid, but they can only be called laws if in reality the same causal laws determine the processes as they were constructed by human minds. And because we cannot know anything about reality, Weber called these concepts "ideal types" (cf. Burger, 1976: pp. 165-166) 4. Burger then concludes, in relation to the fourth point, that Weber applied Rickert's solution to the problem of selecting elements of reality. "Cultural interests" based on values determine what is significant for the researcher (Weber, 1985: p. 181; Burger, 1976: p. 80). And the term "cultural interests" implies for Burger (1976: p.
80) that every member of a society has values relating to common concerns. "Otherwise he would not be interested in those phenomena in which other persons' values relating to collective concerns are embodied." However, Burger has to admit that Weber did not state this explicitly, but he assumes that Rickert's argumentation was widely known and that Weber regarded it as unnecessary to make his point clear. A historical description is therefore objective if the selection of the essential elements was based on a general value of the researcher's culture (Burger, 1976: p. 80). In other words, Burger suggests that the general values of the historian's society determine the selection of the essential elements in the description of history. And because all historians of a specific society share this same general value, the result of writing history will be objective. However, the problem is that only Burger (1976: pp. 81-82) speaks of "general cultural values", whereas Weber was just talking of "cultural values". To close this gap, Burger states that a great number of general cultural values exist and that they "are too many to allow the description of all the constellations of phenomena to which they are attached, and the historical developments of all these phenomena, in one single historical account" (Burger, 1976: p. 81). The historian is therefore "free" to choose the guiding values (Weber, 1985: p. 124 n. 1; Burger, 1976: p. 81). But is the result of historical writing in this case still objective? Burger has to admit that for Weber the historical sciences are not objective in this sense, because a plurality of viewpoints must be accepted (Weber, 1985: pp. 170, 184; Burger, 1976: pp. 82, 84). Burger (1976: p. 84) concludes therefore that the objectivity of the historical sciences can be achieved if history is written from all viewpoints related to the general cultural values of the researchers' society. Cultural values, and with them the viewpoints of history, are therefore infinite, because different societies, or the same society at different times, may have different cultural values. But at the same time they are finite for a society at a "specific moment or period in time" (Burger, 1976: p. 87). And the objectivity of historical writings is only given for historians of the same society in the same time period. Burger finally discusses the point that "there is unanimous agreement on the opinion that Rickert postulated the existence of absolutely valid values which can serve as viewpoints for historians, whereas Weber maintained that they are 'subjective'" (Weber, 1985: p. 183; cf. von Schelting, 1934: p. 228; Fleischmann, 1964: p. 198). Burger rejects this interpretation and states that Weber's text is in this case ambiguous (Burger, 1976: p. 87). He argues that Weber did accept, for the natural sciences, concepts which contain potential judgments of general validity (Weber, 1985: p. 4), and that he therefore did accept "the idea of absolutely universal knowledge", although the method of induction is based on "an empirically limited number of facts only"; in other words, Weber believed in essential definitions of concepts based on intuition. And if Weber was an essentialist in relation to the natural sciences, it would be illogical to assume something else for the historical sciences (Burger, 1976: p.
88). And when Weber was talking about "subjective" viewpoints, he was not clearly separating his belief in general cultural values from "the fact that different people are attracted to different topics and viewpoints, and the fact that viewpoints change over time" (Burger, 1976: p. 89).

Concerning the fifth point, Burger (1976: p. 64; Weber, 1985: pp. 134-135) points out that Weber separated two different concepts of causality: one typical of the generalizing ("natural") sciences and one typical of the individualizing ("historical") sciences. For the generalizing sciences, the concept of causality means a causal law, the regular appearance of a cause and an effect, like fire and heat. In the individualizing sciences, the meaning of causality refers to the idea that one singular phenomenon was caused by a specific preceding singular phenomenon. The idea of regularity loses its importance if the researcher looks at a singular phenomenon as a unique constellation of causal regularities that was caused by other singular phenomena with their own unique constellations of causal regularities (Weber, 1985: pp. 134-135; Burger, 1976: p. 64).

Burger finally claims that Weber rejected the equation of "causal law" and "causality", but in all the references he gives to support this interpretation, Weber stated only his rejection of holistic causal laws (Weber, 1985: pp. 144, 178, 186). Holistic causal laws could not be the aim of the historical sciences, since Weber regarded history as an individualizing science, which uses causal laws only as a tool to describe some aspects of a singular phenomenon. Burger furthermore criticizes Julien Freund for his interpretation that Weber accepted and used the two different concepts of causality as "the idea of rational action, a sort of dynamic between two qualitatively different phenomena, on the one hand; and [as] the idea of subsumption under a general rule, on the other" (Freund, 1968: pp. 49-50). It is essential for Burger's argumentation to dismiss any interest of Weber in causal regularities (Burger, 1976: p. 93), because otherwise he cannot justify his hypothesis B2 that Weber's problem was the selection of important elements of reality and not the checking, improvement, and expansion of knowledge. The latter problem would lead directly to the problem of the truth of our general knowledge, which is the problem of positivism.

As a consequence, Burger concludes that Weber advocated the method of understanding (Verstehen) the "inner states" of human actors in contrast to the grasping (Begreifen) of meaningless facts. In this sense Weber was striving for knowledge of the human actors' motives and plans, "which cause their actions and thereby give these actions their subjective meanings". He was therefore not interested in the causal forces which might have an impact on the actions as well. Weber was interested in "culture" and not in "nature" (Burger, 1976: p. 103).

Finally, Burger comes to the conclusion that Weber's sociological program "to formulate type concepts and generalized uniformities of empirical processes" (Weber, 1968: p. 19) should not be confused with "the formulation of universal laws and their subsequent testing in order to confirm or falsify them" (Burger, 1976: pp.
137-138). In Burger's interpretation, sociology is only a complementary science for the historical sciences, and its only task is to construct "ideal worlds", that is, concepts and schemes of interpretation as heuristic tools. Burger's hypothesis B2, that Weber's problem was the selection of essential elements out of the infinite reality, therefore seems to be justified.

Critique of Burger's Interpretation of Weber's Methodology

I do not agree with Burger's interpretation of Weber's methodology, although I found his analysis in large parts important for a better understanding of Weber's work. Burger's main thesis, that Weber simply applied Heinrich Rickert's methodology, is not acceptable to me. Weber shared Rickert's problem concerning the objectivity of the historical sciences, but he solved the problem in a different way. In my opinion, the weak point of Burger's interpretation is the almost exclusive reference to Rickert's methodology, although he states that Weber was not a methodologist and was only pushed into this field because of the methodological problems in economics. It is therefore just as important to analyze the problems in economics in Weber's time in order to understand what he wanted to solve with Rickert's methodology.

In Weber's time, economics in the German-speaking area was divided into two schools: the German Historical School of Gustav Schmoller and the Austrian School of Economics of Carl Menger. These two schools were in a quarrel-the so-called Methodenstreit-about the correct approach in economics. Schmoller advocated holistic historical analyses of the economy and society, whereas Menger demanded a deductive approach based on general economic laws combined with methodological individualism. The Methodenstreit can therefore be characterized as a conflict between holism and individualism on the one hand, and between an inductive historical approach-where historical cases are the starting point and general laws the aim-and a deductive theoretical approach-where general laws are the starting point and the description of historical cases is the aim-on the other (Hansen, 1968: p. 139; Schluchter, 1989: p. 4). However, this is only the foreground of the quarrel. In fact, the methodological core of the two schools was not that different. Schmoller wanted to form general laws by the method of inductive verification based on extensive studies of historical cases. It was his aim to formulate general statements which do not contradict the collected knowledge, but to justify the validity of these statements he demanded a large number of historical cases (Hansen, 1968: pp. 151-152). Menger shortened this procedure by the method of inductive intuition. He formed abstract general laws based on his knowledge and stated that these laws are a priori true (Hansen, 1968: p. 162). It is important to realize that from a Kantian or Neo-Kantian point of view, both methodological approaches are fundamentally wrong. Immanuel Kant accepted David Hume's proof that a logically consistent method of inductive verification is impossible. It is impossible because we cannot know the future and are not allowed to draw any inferences from the limited known cases to all cases, including the unknown ones (Hume, 1978: pp. 89, 139). Inferences from the known to the unknown would only be reasonable if nothing changed in reality, but this is not the case (Hume, 1978: p.
89). Kant, on the other hand, showed that the method of inductive intuition also produces no valid results. For Kant, intuition leads to metaphysics, not to empirical science. This is the topic of Kant's Critique of Pure Reason. My first conclusion is therefore that Weber had a profound knowledge of methodology and philosophy: he would never have realized the importance of Rickert's methodology for the Methodenstreit if he had not known Kant's philosophy very well, since the main problem of both economic schools was not clearly discussed in the Methodenstreit (cf. Turner & Factor, 1984: p. 39; Honigsheim, 1968: pp. 28-29).

A second aspect is important in relation to the Methodenstreit, one which contemporary sociologists do not often realize, but which is probably the key to understanding Max Weber's methodology. It is the question why Menger replaced the holistic inductive approach of the dominant German Historical School with a subjective deductive approach. The reason for this step was the failure of the German Historical School to explain one of the most important aspects of economics: price changes (Hayek, 1973: pp. 3, 5). Prices were regarded in the Historical School as objective values depending only on the good itself. Karl Marx, for example, said that prices are equivalent to the working time needed to produce the good. Such definitions, however, can only describe fixed prices; they fail to explain changes of prices. It was the marginal revolution led by Menger, Jevons, and Walras which solved this problem. Menger explained prices as the result of the subjective needs of the actors and the objective conditions for satisfying these needs (Hayek, 1973: p. 6).

These two elements-the subjective motives of the actors and the objective restrictions-are still today the formative elements of modern economic theory. They are the foundation of the economic logic of choice, or the economic calculus (Hayek, 1973: p. 7). And they are the formulation of the basic idea of methodological individualism (Hayek, 1973: p. 8). But the problem of formulating true statements about the observed actors' motives led Menger to postulate that all actors behave rationally. To avoid a psychological reductionism, Menger stated that the rationality principle is a priori true (Menger, 1883: p. 42 [1985: p. 62]). Menger's non-psychological solution to the problem of explaining price changes therefore forced him to give up the method of inductive verification and the holistic approach.

Max Weber's position in the Methodenstreit can now be described as follows: Weber had the same scientific interests as Gustav Schmoller (Hennis, 1994: pp. 115-117). Weber's aim was the description of historical processes, especially the phenomenon of Western modernity. But he accepted Menger's solution to the problem of price changes, and therefore he accepted methodological individualism (cf. Burger, 1994: p. 94) and a priori given general concepts as a starting point (cf. Hennis, 1994: p.
113). Furthermore, the distinction between individual subjective motives and the objective forces of the situation became an important aspect of both his theoretical and his historical writings. In fact, Weber's historical writings can be seen as an application of Menger's marginal utility theory to historical problems. However, Weber did not accept Menger's rationality principle and replaced it with his classification system of action types. And finally, Weber rejected the methodologies of both Schmoller and Menger, because both economic schools were unable to guarantee the truth of their statements, although they claimed to produce objective results (cf. Hennis, 1994: p. 110).

To solve the methodological problems, he turned to the Neo-Kantian philosophy of Heinrich Rickert. I agree with Burger that Weber had no objections to the distinction between generalizing and individualizing sciences, and that for him sociology belonged to the generalizing or "natural" sciences (Burger, 1976: p. 68; cf. Weber, 1980: p. 9). Like Rickert, he also regarded an individual phenomenon as a unique combination of general causal laws. I can therefore only conclude that Weber demanded nomological knowledge as a prerequisite for the description of singular historical phenomena. And the fact that neither sociology nor economics was able to provide such general laws and concepts in Weber's time forced him to develop these general concepts himself. This is the reason for his later change of interest from historical economics to theoretical sociology. The point, however, is that Weber was interested in historical as well as nomological knowledge, and this led him to the problem of the truth of general statements, because descriptions of historical events must be wrong if the elements-the general causal laws-in the construction of the individual type are wrong. This conclusion contradicts Burger's hypothesis B2 that the only scientific procedure Weber demanded was the selection of relevant elements out of the infinite reality, because in my interpretation the validity of the individualizing method forced him to deal also with the validity of the generalizing method. Regarding the generalizing sciences, Weber rejected, under the influence of Hume and Kant, both the method of inductive verification and that of intuition. As a consequence, he called the general concepts "ideal types", because the truth of these general concepts cannot be guaranteed. Regarding the individualizing sciences, Weber rejected Rickert's solution of objective concepts based on an ultimate cultural value, because Weber did not believe in the existence of such an objective value. As a result, he also called the individual concepts "ideal types", because an arbitrary selection from the infinite elements cannot be avoided (cf. Weber, 1985: pp. 4, 14, 75 n. 2, 92 n. 1, 171, 177, 237; Burger, 1976: p. 65). Weber's "ideal types" are in this sense a solution to the problem of the truth of scientific statements. His solution is a radical break with a several-thousand-year-old philosophical tradition (Voegelin, 1952: p. 20) and a consistent application of the critical results of Hume and Kant (cf. Turner & Factor, 1984: p. 38). If we cannot assure the truth of our scientific statements, then we must stop claiming their objectivity (cf. Turner & Factor, 1984: pp.
34, 36-37). Weber therefore regarded "ideal types" as a priori chosen but not a priori true. This is a clear distinction from Menger's methodological essentialism. Ideal types are based on the subjective values and interests of the researcher. They are intersubjectively understandable but not necessarily accepted by all researchers. For example, a sociologist of the Critical Theory can understand Weber's ideal types, but he will probably reject them, including Weber's approach, because for him Weber's values lack a "critical" attitude. This is obviously a distinction from Rickert's methodology based on objective values.

But Weber also declared that an objective historical science is possible, although every history is written from a specific viewpoint determined by the subjective values of the researcher. Weber's solution demands of every researcher clear definitions of the a priori and arbitrarily chosen general concepts. The clear definitions are necessary for other researchers to understand the perspective taken. And the objectivity is guaranteed by the logically consistent application of this viewpoint, because every researcher who applies this understandable perspective will see the described phenomenon in the same way. The results of the historical sciences can therefore be objective, although they necessarily depend on subjective values and viewpoints. I will call this solution of logical deductions from subjectively chosen perspectives the postulate of internal consistency. Once the subjective viewpoint is taken, the scientific procedure has to be applied objectively, and this means logically consistently. This is what Weber had in mind when he demanded the absence of value judgements in the application of general concepts, although obviously every research approach is based on them.

My interpretation of Weber's methodology thus contradicts Burger's interpretation in several points. First of all, I agree with Burger that Weber accepted the Neo-Kantian distinction of generalizing and individualizing sciences, and that general laws are used to describe individual historical phenomena. However, I disagree with Burger that Weber regarded "ideal types" as objectively valid, research perspectives as determined by objective general cultural values, and the problem of the validity of nomological knowledge as unimportant. In my opinion, Weber's position differed in the following points: 1) He rejected the idea that ideal types are objectively true, because he saw no solution for the method of abstraction in the generalizing and individualizing sciences which could guarantee the truth of general and individual concepts. 2) He therefore rejected every form of methodological essentialism. 3) Weber never accepted Rickert's objective values as a foundation for his methodology. 4) Weber defined two concepts of causality: on the one hand, causality as causal regularities in the sense of general laws, and on the other hand, causality as the causation of a phenomenon by the subjective motives of human actors; both concepts were equally important for him. 5) Sociology was therefore for Weber not just a complementary science for history, but the necessary foundation of an objective historical science. 6) And finally, the objectivity of the historical sciences is for Weber guaranteed by the postulate of internal consistency and not by the construction of objective ideal types based on categories. Weber's unique solution to the methodological problems depends on the logically deductive application of general concepts and not on their inductive construction.
The Method of Deductive Falsification for Text Interpretations

Despite the differences between Thomas Burger's interpretation and mine, I think that Burger (1976: p. 58) offers a sufficient solution to the problem of truth in the individualizing sciences: not only the internal consistency of the deductions from the chosen perspective but also the external consistency between the interpretation and the facts (the whole text or work) is important.

What Burger calls internal consistency here is what I defined as the concept of external consistency. Internal consistency refers to the function of a theory as a heuristic tool. The aim is not to test or confirm a theory but to apply it as a guide for an interpretation. An interpretation is internally consistent if all conclusions drawn from the premises are logically, and therefore objectively, understandable. An interpretation is externally consistent, on the other hand, if its conclusions do not contradict the facts. The aim here is to test the interpretation against the sentences of the text; if contradictions occur, then the interpretation and its founding premises can be rejected. Burger's proposal is therefore nothing other than Karl Popper's method of deductive falsification. And I agree with Burger that this solution is better than Weber's postulate of internal consistency, because in this approach wrong perspectives can also be falsified. However, it is important to realize that although this solution is Popper's method of deductive falsification, it is not Popper's methodology, because Popper regarded the problem of truth in the individualizing sciences as unsolvable. History was for him just a collection of stories and therefore no science at all. Furthermore, he never considered applying the method of deductive falsification to text interpretations, because for him the meaning problem-as it was described in Logical Positivism-was only an illusory problem (Popper, 1989: p. 253).

Karl Raimund Popper's solution to the problem of truth in the natural sciences is the "testability" or "falsifiability" criterion. Popper's methodology starts with Hume's and Kant's insight that an inductive strategy-a strategy of inducing more and more abstract theories out of subjective experiences-can never prove the truth of a theory (Popper, 1989: p. 3). But Popper realized that theories can still be proven wrong by the method of deductive falsification, even though the verification approach is not logically justified. A theory is falsified if a prognosis deduced from the theory does not stand the test against reality (Popper, 1987: p. 104). But if a theory survives the test, no conclusion about the truth of this theory is allowed, because it could still fail in the future (Popper, 1989: p. 15). An objective causal explanation consists of two types of sentences: general sentences-hypothetical laws-and specific sentences-initial conditions-which are only given in a concrete situation (Popper, 1989: pp. 31-32). From a combination of general and specific sentences, a prognosis can be derived, which can be tested against the basic sentences-statements about singular facts. If contradictions occur between the projected results and the basic sentences, then a general statement must be wrong.
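Schematically, this deductive structure can be written as follows; the notation below is a standard modus tollens rendering supplied for illustration, not Popper's or Weber's own formalization:

```latex
% A prognosis P follows deductively from hypothetical laws L_i
% together with initial conditions C_j:
\[
  (L_1 \land \dots \land L_k) \land (C_1 \land \dots \land C_m) \;\vdash\; P
\]
% If the basic sentences report \neg P, modus tollens falsifies the conjunction:
\[
  \neg P \;\Rightarrow\; \neg\bigl((L_1 \land \dots \land L_k) \land (C_1 \land \dots \land C_m)\bigr)
\]
% With the initial conditions accepted as true, at least one law L_i must be false.
```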
In other words, Popper's solution is an extended version of Weber's: Popper demanded not only the internal consistency of theoretical systems but also their external consistency. More generally formulated, Popper said that contradictions are a bad sign in every case, because they indicate that something is wrong. I suppose that most scientists can accept this sentence, and I have no doubt that this insight can also be applied to the individualizing sciences and to text interpretations. In relation to text interpretations, the method of deductive falsification therefore means that a hypothetical scheme of interpretation is falsified if it leads to conclusions (the prognosis of the empirical sciences) which contradict some (basic) statements of the interpreted text.

However, the application of the method of deductive falsification to text interpretations leads to some problems. First of all, we can apply this method only if we assume that the interpreted text is logically consistent. This assumption implies that contradictions are the result of a wrong scheme of interpretation and not of the illogical thoughts of the author or of his insufficient, ambiguous writing style. Obviously, this is a very strong assumption, which I am not willing to make in every case. This approach is therefore restricted to specific logical scientific texts. However, I do not think that this point renders the method of deductive falsification for text interpretations irrelevant, because I see no reason to study a scientific text which is illogical nonsense or merely tautological. From such a text we could learn nothing about reality (the reality in this case being the author's ideas5).

The second problematic point is related to Popper's demand for repeatable falsifications. Contradictions alone are for Popper a necessary but not a sufficient criterion. Accepted basic sentences falsify a theory only if the singular facts stated in the basic sentences can be repeated and can be causally explained by an alternative hypothesis (Popper, 1989: pp. 54-55). This is a problem for text interpretations, because texts are singular phenomena with a finite number of sentences and are therefore not repeatable6. This means that falsifications of text interpretations will be based on much less evidence than falsifications of general statements in the natural sciences, where in ambiguous cases a series of empirical tests can help to clarify the evidence. This leads to the problem that in some cases it will be impossible to decide which scheme of interpretation produces fewer contradictions. A point can be reached where a further development of an interpretation cannot be accomplished.

However, besides these difficulties, in at least one respect the application of the method of deductive falsification to text interpretations has an advantage over its application in the empirical sciences. The facts stated in the basic sentences are unquestionably given in the form of written sentences in the texts, whereas the basic sentences of the empirical sciences are only conjectures based on observations. These conjectures are accepted if the researchers can reach a consensus (Popper, 1989: p. 73). It is obvious that this gives the falsification approach in the empirical sciences a weak basis.

My conclusion is that although the application of the method of deductive falsification to text interpretations is not unproblematic, it is necessary, in accordance with Burger, that schemes of interpretation which produce contradictions be avoided. And this task is most effectively accomplished with the method of deductive falsification.

5 Alfred Schutz's distinction between first-order and second-order constructs is important here. In the natural sciences, the researcher constructs first-order models of reality (the causal regularities in nature). But in the social sciences, two types of realities coexist. Some social scientists (like Marx and Parsons) construct first-order models of the social reality (the causal regularities in society, or the regularities of the social system). Others (like Weber and Schutz) also model second-order constructions of the actors' constructions of the social reality (the actors' knowledge, ideas, and beliefs about the social and natural reality). In the case of text interpretations, only the second-order constructs are important. The aim is to find out what the author wanted to say and how he saw reality, not whether his beliefs or ideas about reality were correct.

6 Texts are not repeatable, but they are expandable. The author can add other texts to clarify or to change his position.

The Method of Falsification Applied to the Interpretation of Weber's Methodology

I will now follow Burger's suggestion of "testing" his interpretation and mine of Max Weber's methodology. I will regard the scheme of interpretation that leads to more contradictions as falsified, and the surviving hypothetical scheme as corroborated as long as no other evidence is presented. My hypothesis is that Max Weber was facing the problem of the objectivity of the historical sciences as defined by Burger. But the point that Weber constructed individual concepts out of a unique constellation of general concepts forced him to deal with the problem of truth in the individualizing as well as the generalizing sciences. Weber's problem can therefore be formulated as an extended version of Kant's problem:

E1 How are "natural" (generalizing) and "historical" (individualizing) sciences possible?

And Weber solved this problem in a unique way: by accepting the subjectivity and arbitrariness of the general and individual concepts and by demanding the internal logical consistency of the derivations from these a priori given concepts.

I will propose a list of basic sentences to evaluate the potential of the two given schemes of interpretation.

e1 The concepts and "laws" of pure economic theory are examples of this kind of ideal type. (Weber, 1968: p. 9 [1980: p. 4])

e2 Developmental sequences too can be constructed into ideal types [•••]. (Weber, 1949: p. 101; cf. p. 103 [1985: p. 203; cf. p. 205])

The first two statements e1 and e2 of Weber indicate that for him micro- and macro-regularities, or causal laws, are ideal types.

b1 The ideal-typical concept will help to develop our skill in imputation in research: it is no "hypothesis" but it offers guidance to the construction of hypotheses. It is not a description of reality but it aims to give unambiguous means of expression to such a description. (Weber, 1985: p. 190; proposed and translated by Burger, 1976: p. 121)
b2 Those interpretative schemas [TB: ideal types] [•••] are not only "hypotheses" in analogy to the hypothetical "laws" of natural science. When concrete processes are heuristically interpreted, they can function as hypotheses. But in contrast to the hypotheses of the natural sciences, the insight that in a concrete instance they do not contain a valid interpretation does not affect their usefulness for the establishment of knowledge [•••]. (Weber, 1985: p. 131; proposed and translated by Burger, 1976: p. 204 n. 31)

e3 In order to penetrate to the real causal interrelationships, we construct unreal ones. (Weber, 1949: pp. 185-186 [1985: p. 287])

Combined with the statements b1, b2, and e3, this means that for Weber general causal laws as well as individual historical concepts are not real-not even hypothetical-and that it makes no sense to falsify them. Ideal types are nothing more than a heuristic tool. From these statements I can only conclude that ideal types are not objectively true. This supports my interpretation that Weber regarded no solution for the method of abstraction in the generalizing and individualizing sciences as logically valid, and that therefore the truth of general and individual concepts cannot be justified. Burger has severe problems making sense of these statements in his interpretation. He solves the problem-without giving a reference to Weber-by stating that ideal types are objectively true, but that they would only be causal laws if reality were as we imagine it (Burger, 1976: p. 133). Obviously, Burger's definition of "objective" and mine seem to differ somewhat. I am only willing to call something "objective" if it corresponds to reality. I am not certain how Burger defines "objectivity", but I guess that he probably had something like "intersubjectivity", in the sense of shared beliefs about reality, in mind. However, I cannot see how this should support Burger's interpretation that for Weber ideal types are objective as the result of the application of an inductive procedure.

On the other hand, Burger proposed another statement, b3, which clearly supports his interpretation and which produces problems for mine.

b3 The objective validity of all empirical knowledge rests exclusively upon the ordering of the given reality according to categories which are subjective in a specific sense, namely in that they present the presuppositions of our knowledge and are based on the presupposition of the value of those truths which empirical knowledge alone is able to give us. (Weber, 1985: pp. 212-213; proposed and translated by Burger, 1976: p. 70)
The first part of this statement is Kant's solution to the question of how science-or, more concretely, how Newton's theory-is possible. He avoided the invalid methods of inductive verification and intuition by claiming that our mind forces order on the chaotic reality7. And the pre-empirical categories-for example the category of causality-produce this order. The second part is Rickert's extension of Kant's solution: not only the biologically given categories but also the cultural values are pre-empirical. I can only assume that Weber adopted here the language of Rickert, although he was well aware that "objective" in this sentence does not mean that the application of categories and values produces true knowledge. Marianne Weber reports Weber's scepticism against Rickert's terminology in statement e4.

e4 I have finished Rickert. He is very good; in large part I find in him the thoughts that I have had myself, though not in logically finished form. I have reservations about his terminology. (Weber, 1975: p. 260 [1984: p. 273])

I think that Weber refers in this comment to Rickert's usage of the word "objectivity". In this sense he would not have a problem with Rickert's methodology as long as Rickert does not claim that his method of defining concepts can guarantee the truth of general or individual statements. Therefore Weber consistently uses the term "ideal type" instead of the terms "concept" or "law". However, I need an auxiliary assumption here, which I cannot support further with Weber's statements, to avoid a contradiction with my interpretation. As a result, the previous statements support neither Burger's interpretation nor mine.

But regarding the next point, all the statements b1, b2, e1-e4 corroborate my interpretation that Weber rejected every form of methodological essentialism, because the basic idea of methodological essentialism is that essences are true. The first statements clearly contradict this idea. And even statement e4 does not support this view, because Kant's categories are applied to the observable reality (physics) and not to the essences in the background (metaphysics). This point confirms my interpretation and contradicts the impression of Weber given by Burger.

The following statements refer to the main question of whether Weber did or did not accept Rickert's objective values as the foundation for his methodology.

b4 The historian is "free" as far as the choice of the guiding values is concerned which in turn determine the selection and formation of the "historical individual" [•••] which is to be explained. From here on, however, he is absolutely bound to the principles governing the establishment of causal interdependences. He is "free" in a certain sense only as far as the inclusion of logically "accidental" elements is concerned. (Weber, 1985: p. 124 n. 1; proposed and translated by Burger, 1976: p. 81)

b5 There is no absolutely "objective" scientific analysis of culture-or put perhaps more narrowly but certainly not essentially differently for our purposes-of "social phenomena" independent of special and "one-sided" viewpoints according to which-expressly or tacitly, consciously or unconsciously-they are selected, analyzed and organized for expository purposes. (Weber, 1985: p. 170; proposed and translated by Burger, 1976: p. 88)

b6 Now, without any question, all those value-ideas are "subjective". (Weber, 1985: p. 183; proposed and translated by Burger, 1976: p. 87)
The basic sentences b4-b6 support only the common interpretation that Weber did not accept Rickert's objective values. Values are chosen freely; they are one-sided and subjective. Nothing indicates Burger's main assumption that Weber used Rickert's methodology without any critical distance. However, Burger tries to corroborate his interpretation with the following statement b7, where Weber mentioned "universal cultural values".

b7 When we require from the historian and the social research worker as an elementary presupposition that he distinguish the important from the unimportant and that he should have the necessary "point of view" for this distinction, we merely mean that he must understand how to relate the events of reality-consciously or unconsciously-to universal "cultural values" and then to select those relationships which are significant for us. (Weber, 1985: p. 181; proposed and translated by Burger, 1976: p. 82)

But is this really Weber's standpoint? It is directly followed by the passage e5, where he criticizes researchers who believe in the objective determination of a correct perspective. And he finally stated in the basic sentence b8 that it is a personal matter which perspective is chosen.

e5 If the notion that those standpoints can be derived from the "facts themselves" continually recurs, it is due to the naïve self-deception of the specialist who is unaware that it is due to the evaluative ideas with which he unconsciously approaches his subject matter, that he has selected from an absolute infinity a tiny portion with the study of which he concerns himself. (Weber, 1949: p. 82 [1985: p. 181])

b8 [•••] the refraction of values in the prism of his mind gives direction to his work. (Weber, 1985: p. 182; proposed and translated by Burger, 1976: p. 85; in braces is the translation by Shils and Finch [Weber, 1949: p. 82])

Burger can only avoid a contradiction by formulating the auxiliary assumption that an objective history is possible if a complete history is written from all cultural viewpoints, although different cultural values exist and the researchers are free to choose between them. But Burger has to admit that Weber never said so. As a result, it is much easier to interpret the statements b4-b8 and e5 in my scheme than in Burger's. My interpretation that Weber rejected objective values is therefore supported.

The next point concerns the question of what Weber understood by the term "causality". Statement e6 shows that Weber made a distinction between two concepts of "causality": the first concept is based on the idea of a temporal connection between two individual phenomena (something unique is "caused" by something previously unique), and the second concept derives from the idea of a regular connection between two phenomena (cause and effect as a law). The first concept, as a temporal connection between individual phenomena, can be thought of as a macro-connection between historical situations (e.g.
the Treaty of Versailles leading to Hitler's rise) or as a micro-connection between the actors' motives and the results of their actions. Statement e7 indicates that Weber regarded the micro-connection as the relevant temporal cause of individual phenomena.

e6 In its full-one might say: its "original"-sense, [the category of causality] has two components: on the one hand, the idea of "having an effect", as a (so to speak) dynamic link between phenomena that are qualitatively different from one another; on the other, the idea of being bound by "rules". (Weber, 2012: pp. 86-87 [1985: pp. 134-135])

e7 A causal approach will also embrace the "internal" aspect of the process, as well as the idea that an act "has to be brought about", the balancing of the "means" and, finally, the consideration of the "purpose" [of the action]: in a causal investigation, all these phenomena, and not just the "external" ones, will be treated as being strictly determined. (Weber, 2012: p. 227 [1985: p. 361]; emphasis added)

The question is now whether Weber demanded an analysis of the actors' motives or of the forces of the objective situation as important for the social sciences. Burger suggests that Weber's method of Verstehen refers only to the motives of the actors and not to the objective situation in the sense of sociological or economic laws (e.g. the law of decreasing marginal utility8). However, the statements e8 and e9 support my interpretation that both aspects are relevant for Weber, and that his approach is identical with the basic idea of Menger's marginal utility theory without the rationality assumption.

e8 "Meaning" may be of two kinds. The term may refer first to the actual existing meaning in the given concrete case of a particular actor, or to the average or approximate meaning attributable to a given plurality of actors; or secondly to the theoretically conceived pure type of subjective meaning attributed to the hypothetical actor or actors in a given type of action. In no case does it refer to an objectively "correct" meaning or one which is "true" in some metaphysical sense. (Weber, 1968: p. 4 [1980: p. 1])

e9 In all these cases understanding involves the interpretative grasp of the meaning present in one of the following contexts: 1) as in the historical approach, the actually intended meaning for concrete individual action; or 2) as in cases of sociological mass phenomena, the average of, or an approximation to, the actually intended meaning; or 3) the meaning appropriate to a scientifically formulated pure type (an ideal type) of a common phenomenon. (Weber, 1968: p. 9 [1980: p. 4])
Both statements are formulated analogously, each consisting of three parts. Basic sentence e8 refers to the "sense connections" of the actor. The first part describes the subjective (and individual) motive of the actor, the second part concerns the objective regularities of the situation, and the third part refers to the meanings of the ideal types9. Basic sentence e9 concerns the understandable "sense attribution" of the scientist through the construction of general types (the third part) and their application to individual (the first part) and regular or general phenomena (the second part). The method of Verstehen therefore clearly includes the explanation of causal regularities and does not deal only with the subjective motives of the actors. The same attitude is stated in the basic sentences e10 and e11, where the method of Verstehen includes causal explanations.

e10 Thus causal explanation depends on being able to determine that there is a probability, which in the rare ideal case can be numerically stated, but is always in some sense calculable, that a given observable event (overt or subjective) will be followed or accompanied by another event. A correct causal interpretation of a concrete course of action is arrived at when the overt action and the motives have both been correctly apprehended and at the same time their relation has become meaningfully comprehensible. A correct causal interpretation of typical action means that the process that is claimed to be typical is shown to be both adequately grasped on the level of meaning and at the same time the interpretation is to some degree causally adequate. (Weber, 1968: pp. 11-12 [1980: p. 5]; emphases added)

e11 However, sociology would protest against the assumption that [interpretative] "understanding" and causal "explanation" have no relationship with one another. It is true that they begin their work at opposite poles of what happens. [•••] But nevertheless, mental interrelations that can be understood in terms of their meaning, and in particular motivational sequences oriented according to purposive rationality, can certainly qualify as links in a chain of causation that, for instance, begins with "external" circumstances and at the end again leads to "external" behaviour. (Weber, 2012: p. 279 [1985: pp. 436-437]; emphasis added)

And the following statements e12 and e13 show that for Weber neither regularity nor understandable motives alone are enough for sociological rules. Only in their combination can knowledge in the social sciences be produced.

e12 Statistical uniformities constitute understandable types of action, and thus constitute sociological generalizations, only when they can be regarded as manifestations of the understandable subjective meaning of a course of social action. Conversely, formulations of a rational course of subjectively understandable action constitute sociological types of empirical process only when they can be empirically observed with a significant degree of approximation. (Weber, 1968: p. 12 [1980: p. 6]; emphases added)

e13 It is customary to designate various sociological generalizations [•••] as "laws". These are in fact typical probabilities confirmed by observation to the effect that under certain given conditions an expected course of social action will occur, which is understandable in terms of the typical motives and typical subjective intentions of the actors. (Weber, 1968: p. 18 [1980: p. 9]; emphases added)
The basic sentences e6-e13 corroborate my interpretation and contradict Burger's. For Weber, objective causal regularities and the subjective motives of the actors as the causes of actions are equally relevant for the analysis of historical phenomena. And if causal regularities are important for the historical sciences, then it is obvious that a specialized science has to provide such causal laws. It is rather unimportant whether sociology was for Weber a complementary science for the historical sciences or the necessary foundation of an objective historical science; it is unquestionable that Weber developed a great interest in a generalizing sociology.

The last point of discussion is the question of what guaranteed for Weber the objectivity of the historical sciences: the logically consistent deductive application of general concepts, as in my interpretation, or their inductive construction based on categories, as in Burger's interpretation.

b9 Undoubtedly, the value ideas are "subjective". [•••] But obviously, it does not follow from this that research in the cultural sciences can only have results that are "subjective" in the sense of being valid for one person but not for another. Rather, what varies is the degree to which such results interest one person but not another. In other words: what becomes the object of investigation [•••] that is determined by the value ideas that govern the scholar and are dominant in his age. As far as the method of investigation is concerned-how it proceeds-the guiding "point of view" [•••] {determines the formation of the conceptual tools employed by the scholar; but, in applying these tools,} he is obviously, here as everywhere else, bound to the norms governing our thought [•••]. (Weber, 2012: p. 121 [1985: pp. 183-184]; proposed by Burger, 1976: p. 89; the part in braces was missing in Burger's translation; emphases added by CE)

Statement b9 indicates that the construction depends on the values, but since these are subjective and arbitrarily chosen, they cannot guarantee the objectivity of the historical sciences. But the application of the ideal types can justify an objective historical science if it follows the "norms of our thinking". And "norms" in this sentence cannot refer to an "intersubjectively accepted standard of selection" (Burger, 1976: p. 89), because the selection of relevant aspects is a problem of the inductive construction of ideal types and not a problem of their application. I therefore conclude that the "norms of our thinking" are the rules of logic (Turner & Factor, 1984: p. 34), which again supports my interpretation that Weber's solution to the truth problem in the generalizing and individualizing sciences is the postulate of internal consistency, and that all research is based on an arbitrary, subjective point of view.
Which hypothesis, therefore, fits the basic sentences proposed here better? It is my interpretation, although mine too cannot avoid contradictions completely. As a result, Burger's hypotheses B1 and B2, as well as his interpretation that Weber merely used Rickert's methodology without any substantial development, are falsified. However, this does not mean that my interpretation is true. It remains a conjecture, a hypothesis. Other researchers are invited to add other basic sentences and to test alternative hypotheses against mine. A guideline for new contributions can be formulated as follows: 1) Give a new perspective: formulate a new hypothesis about Weber's problem which can guide the interpretation. 2) Add new basic sentences: extend the list of Weber's statements, but do not delete sentences already given. 3) Question basic sentences already given, either on the grounds that a) Weber never wrote this-preference should be given to the last edition authorized by the author or, if the text was published posthumously, to the first edition, and the original language should be preferred over translations-or that b) Weber wrote this, but in a specific context, e.g. when he criticized somebody else. I will regard my hypothesis and my interpretation as falsified if another hypothesis (after adding new basic sentences) produces fewer contradictions in the interpretation. It seems to me that Burger was right to propose a method of deductive falsification for text interpretations. It is a good method for excluding wrong interpretations, although it does not guarantee truth.
Study of Core Competence of Logistics Cluster: The Integration and the Extension of Value Chain

This paper focuses on the study of the core competence of logistics clusters, with solutions on two levels: one is the integration of the supply chain, and the other is the extension of the value chain; both are based on the measurement of the agglomeration level of the logistics cluster and the association level of the cluster's external resources. The MAEI model is hereby proposed, which is used to evaluate the agglomeration and association levels as well as to enhance the core competence of the logistics cluster through the integration and extension of the value chain.

Introduction and Problem Presentation

Logistics clusters are, by definition, geographically concentrated sets of logistics-related business activities, and they have already become one of the most important regional development strategies. The enterprises and business functions involved in such a cluster can share logistics expertise and know-how as well as enjoy cost and service benefits. Particularly for regions with a sound maritime location and hinterland networks, enhancing the core competence of a logistics cluster is a sound method for settling problems such as:

• how to provide systematic services and acquire adequate benefits from the international port and maritime markets;
• how to build a better positive feedback loop by developing cluster cooperation;
• how to enhance the core competence of the region as a whole as well as of the enterprises involved in the cluster;
• how to acquire sustainable driving forces for competitive advantages.

Beyond conventional regional economy assumptions, it is much more significant to foster the core competence of the logistics cluster in order to face the challenge of the global market. This obviously provides numerous competitive advantages for regional development and for the cluster participants involved.

As everything has its presuppositions, a simplistic, low-level cluster does not contribute much to economic growth or business development. The advantages of a logistics cluster mentioned above depend on an efficient and coordinated operation system; this need not be an intensive multi-contract relationship, but it should be a highly integrated operation in terms of agglomeration degree, coordination efficiency, and external effect.
A logistics cluster, like a complex structured ecological system, comprises the logistics-intensive service providers composing the backbone of the cluster, the assistant service groups providing capital and information flows, and the cluster governance authorities in charge of management and collaboration within the cluster. Since multiple actors are involved in the cluster, it certainly needs to form many strengths, such as greater mutual trust, wider coherent knowledge and information adoption, easier business cooperation and collaboration, and more employment opportunities. However, because of the dynamics of cluster formation, the fluctuations of the global maritime market, and non-cooperative game factors, an effective evaluation system for the logistics cluster is necessary. It is therefore necessary, and of positive significance, to develop an evaluation index system to evaluate the agglomeration degree, coordination efficiency, and external effect of the logistics cluster; this can then provide more scientific solutions for developing the core competence of the logistics cluster according to the evaluation results.

On the other hand, logistics clusters widely agglomerate the resources involved; hence the challenges are how to provide the capability for value-added operations associated with the global value chain, and how to make it possible for the participants to share the benefits of lower cost and resource collaboration for creative value activities. A sound evaluation index system can provide a scientific basis and data for answering the following questions:

• What is the status of the internal and external relations of the logistics cluster? Are these relationships at a high level of relevance and dependency? That is the agglomeration of the cluster.
• What is the status of the cooperation among the participants? Is it running more smoothly and efficiently due to the cluster integration? That is the coordination efficiency.
• What is the status of the external connections linked with the logistics cluster? How many external resources can be integrated by the logistics cluster? That is the external effect.

Accordingly, this research concentrates on establishing a set of evaluation indices to answer the above questions and, moreover, on providing solutions of positive scientific value for forming the core competence of a logistics cluster.

Literature Review

The concept of the cluster was first pointed out by Michael Porter [1] of the Harvard Business School (1990). He defined the cluster as a geographic concentration of interconnected businesses, suppliers, and associated institutions in a particular field. In his following research, Porter claims that clusters have the potential to affect competition in three ways:

• by increasing the productivity of the companies in the cluster;
• by driving innovation in the field; and
• by stimulating new businesses in the field (1998) [2].

Reviewing the relevant literature, most research has concentrated on industry clusters, for example studying the reasons for and mechanisms of establishing an industry cluster: Kaugaman [3] and Szilasi and Kalscu [4] focused their studies on the automobile industry. Some scholars have adopted system analysis approaches, analyzing the complexity of the cluster system (Wang, Wei et al., 2007) [5] and the innovation activities of the logistics cluster (Mu Jing et al., 2013) [6].
In recent years, there have been many significant research achievements on logistics clusters. Yossi Sheffi pointed out that logistics-intensive clusters create global competitiveness and regional growth [7]. In his research thesis, and further on, he argues that the logistics cluster delivers value and drives growth [8]. The former research achievements have already answered the questions of what a logistics cluster is, how it is established, and how it can be evaluated. Future research directions should therefore concentrate on how to develop its competence, how to improve its integration of advantages and resources, and so on.

Targets of Building Core Competence of Logistics Cluster

Based on the study of the relevant literature and on practical requirements, this research proposes to focus on four aspects of the study of the core competence of logistics clusters:

• measurement of the agglomeration level of the logistics cluster and the association level of the cluster's external resources by applying the Hirschman-Herfindahl index, the entropy index, and the space Gini coefficient; this measurement can provide the basic data for identifying the status of the logistics cluster and for making policy decisions;
• the key influencing factors in successfully developing logistics clusters, in order to build appropriate models of two important relationships of the logistics cluster as a whole: with the external environment and with the internal system;
• the obstacles to internal resource integration within the cluster, pointing out the integration path of the value chain as a network structure, with multivariate extension, coherent information and knowledge sharing, and proper government support in the form of infrastructure investment as well as regulation or policy guided by sustainable holistic thinking;
• the methods of widening the value chain by the optimal allocation of resources, lower transaction costs, and extending the resource leverage to acquire external value, as well as operating with internet and financing thinking to gain more value in space and time.

Methodology-MAEI Model

Based on the above analysis, we propose the MAEI model for fostering the core competence of a logistics cluster.

On the one hand, it clearly elaborates the procedures for evaluating and, moreover, enhancing the core competence of the logistics cluster. The name MAEI stands for four steps: measurement, analysis, extension, and integration, as elaborated in Table 1. On the other hand, it proposes the key factors in the course of integrating and extending the value chain.

A) Measurement of agglomeration level of logistics cluster

Following the achievements already obtained by many experts in the field of industrial clusters, there are several methods for evaluating the agglomeration level of an industrial cluster. According to the targets of building the core competence of a logistics cluster, the methods involved here should answer three questions:

• What is the agglomeration degree of the logistics industry at the holistic level?
• What is the orderliness degree of the different-scale enterprises involved in the logistics cluster?
• What is the balance degree of the spatial distribution of the logistics cluster?

Therefore, measurement methods should be adopted on the three levels that work best for answering the above questions.
First, the Hirschman-Herfindahl index, HHI [9]. Originally, it is a measure of the size of firms in relation to the industry and an indicator of the amount of competition among them. The result is proportional to the average market share, weighted by market share. As such, it can range from 0 to 1.0, moving from a huge number of very small firms to a single monopolistic enterprise. Here, it is used to measure the agglomeration degree of the logistics industry from a holistic point of view. The formula of the HHI can be defined as follows:

$$\mathrm{HHI} = \sum_{i=1}^{N} S_i^2$$

where $S_i$ is the market share of firm $i$ in the market and $N$ is the number of firms. Thus, in a market with two firms that each have 50 percent market share, the Herfindahl index equals 0.50.

Second, the entropy index (E index) is, to some extent, similar to the HHI. However, because it adopts the reciprocal and the logarithm to weight the market share, it is better suited to measuring the orderliness degree of the different-scale enterprises involved in the logistics cluster, as in the following formula:

$$E = \sum_{i=1}^{N} S_i \ln\frac{1}{S_i}$$

where $S_i$ is the market share of firm $i$ in the market and $N$ is the number of firms.

Third, the space Gini coefficient [10]. Because the Gini coefficient can be used to compare or rank distributions (e.g. probability distributions, frequency distributions, or size distributions), it can serve as a market concentration criterion. Here, the space Gini coefficient is used to weight the balance of the spatial distribution of the logistics cluster, as in the following formula:

$$G = \sum_{i=1}^{N} \left(S_i - X_i\right)^2$$

where $S_i$ is the share of industrial area $i$ (output, employment, sales, total assets) in the national industrial amount, $X_i$ is the share of area $i$ (output, employment, sales, total assets) in the national amount, and $N$ is the number of areas.

As to the inherent defect of the space Gini coefficient, it ignores the negative effects of large-scale enterprises on the result. Therefore, the G index should be applied together with the two other indexes mentioned above.
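To make the three measures concrete, here is a minimal Python sketch computing them from share vectors. The function names and the sample figures are illustrative assumptions; only the formulas themselves come from the text above.

```python
import numpy as np

def hhi(shares):
    """Hirschman-Herfindahl index: sum of squared market shares, in (0, 1]."""
    s = np.asarray(shares, dtype=float)
    return float(np.sum(s ** 2))

def entropy_index(shares):
    """Entropy index: sum of S_i * ln(1/S_i); higher means a more even
    (less concentrated) distribution of firm sizes."""
    s = np.asarray(shares, dtype=float)
    s = s[s > 0]  # ln(1/S_i) is undefined for a zero share
    return float(np.sum(s * np.log(1.0 / s)))

def spatial_gini(industry_shares, area_shares):
    """Space Gini coefficient: sum of squared gaps between the industry's
    regional shares S_i and the regions' shares X_i of the national amount."""
    s = np.asarray(industry_shares, dtype=float)
    x = np.asarray(area_shares, dtype=float)
    return float(np.sum((s - x) ** 2))

# Two firms with 50% market share each -> HHI = 0.50, as in the text.
print(hhi([0.5, 0.5]))                               # 0.5
print(entropy_index([0.5, 0.5]))                     # ln 2 ~= 0.693
print(spatial_gini([0.4, 0.35, 0.25], [0.5, 0.3, 0.2]))
```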
As such, it is extended here to the ratio of a weighted sum of external resource responses to a weighted sum of internal cluster resources and activities, where the ratio reflects the correlation efficiency between the inside and outside of the cluster.

- Evaluation of the external environment, comprising the political system, economy, society, and technology, which indicates the possibility of, and emphasis for, external integration and extension. PEST analysis is used to identify the external environment by analyzing these four dimensions.

C) Extension and integration of the value chain

The concentration of firms in the same industry, with their similar needs and concerns, gives natural rise to joint activities. These include government lobbying, joint cluster development, and joint activities such as procurement. Clusters include, by and large, people with similar backgrounds, language, culture, religion, and customs. It is thus easier to develop trust among organizations and people, leading to lower transaction costs among firms, whether they are trading partners or horizontal collaborators/competitors. In most cases this trust is based on relationships forged outside the work environment [12].

Large-scale companies have their own shortcomings: they are slow to make decisions, bureaucratic, and risk-averse [13]. Consequently, a cluster may be an optimal organizational structure, balancing flexibility and fast decision making on the one hand with reach and resource availability on the other. In Porter's words, "A cluster allows each member to benefit as if it had greater scale or as if it had joined with others formally-without requiring it to sacrifice its flexibility" [14].

The extension of the value chain focuses on a particular type of cluster: a cluster of firms with logistics-intensive operations. This includes mainly three types of companies:
- logistics service providers,
- companies with logistics-intensive operations, and
- the logistics operations of industrial firms.

A good way to extend the value chain is to extend the resource leverage by making use of the cluster's aggregated resources [15]. Based on all the above analysis, we propose the value chain strategy matrix described in Chart 1, which defines the strategies of value chain extension at different agglomeration and correlation levels; the data and ratios are provided by the formulas in the first two steps.

Chart 1. Value chain strategy matrix of logistics cluster.

As described in Chart 1, there are four types of value chain extension strategy.

- Concentration type. This type of logistics cluster generally has a high agglomeration level but limited correlations with external resources, so its extension strategy should focus on three aspects. First, prolong the internal lever arm by innovating the service contents or expanding the service chain within the cluster, and enhance the internal connections among participants through complementary advantages, e.g., providing more comprehensive logistics services through alliance operation. Second, enhance the correlation efficiency between internal participants and potential external resources through identification of the external environment, e.g., financial cooperation opportunities, or joint ventures either in a valuable project or in sharing regional logistics service resources.
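As a rough illustration of how Chart 1 might be operationalized, the sketch below maps agglomeration and correlation scores onto the four strategy types. The 0.5 thresholds and the [0, 1] score normalization are assumptions for illustration only; the paper does not prescribe numeric cut-offs.

```python
from dataclasses import dataclass

@dataclass
class ClusterProfile:
    agglomeration: float  # e.g., HHI-based score in [0, 1]
    correlation: float    # e.g., DEA efficiency score in [0, 1]

def strategy_type(p: ClusterProfile,
                  agg_threshold: float = 0.5,
                  corr_threshold: float = 0.5) -> str:
    """Map a cluster onto one quadrant of the value chain strategy matrix."""
    high_agg = p.agglomeration >= agg_threshold
    high_corr = p.correlation >= corr_threshold
    if high_agg and high_corr:
        return "Mature cluster type: balance integration and extension"
    if high_agg:
        return "Concentration type: build external correlations"
    if high_corr:
        return "Association type: strengthen internal cooperation"
    return "Unconcentration type: organize participants, raise agglomeration"

print(strategy_type(ClusterProfile(agglomeration=0.7, correlation=0.2)))
```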
Third, identify extension opportunities in the external environment by analyzing the political system, economy, society, and technology beyond the logistics cluster, e.g., a positive policy, a shift in the consumption structure, or innovation in the industrial structure, any of which might provide a new chance to enhance the correlation with external resources.

- Unconcentration type. This is the weakest type of logistics cluster, which might not even be recognized as a logistics cluster. The strategy for such a cluster should be to build concentration by efficiently organizing the participants and resources within the cluster, so as to increase its agglomeration level.

- Association type. This type of logistics cluster is also weaker than the concentration type, but its advantage is a better association with external resources, which may become the lever that pries internal development and enhances cooperation within the cluster. The strategy for such a cluster is therefore to emphasize internal cooperation, foster logistics alliances, establish public infrastructure, share logistics information, and increase the agglomeration level within the cluster. Developing this kind of cluster is both risky and promising: it may move in a positive direction by enhancing the agglomeration level and forming a high-quality logistics cluster, but it may also move in a negative direction if the internal participants are not integrated efficiently while the connection with the external environment improves, i.e., its competitiveness in the external market and its chances of cooperation beyond the cluster may decrease.

- Mature cluster type. This is the most ideal model of a logistics cluster: maturely developed and prepared for extension by any means. Such a cluster, possessing great core competence, has already formed a high level of agglomeration and is engaged in a balanced, orderly development process; it also has a good association with external resources in a favorable external environment. The strategy for this type is to give consideration both to balance and efficiency and to integration and extension. That is, while keeping the agglomeration level within the cluster balanced, it should focus on the efficiency of the association with the external environment; while integrating external business with cross-cluster participants, it should extend the supply chain that has already matured within the cluster. Controlling the course of development is another challenge at this stage.
Discussion and Conclusions

In conclusion, this paper provides a framework for evaluating the agglomeration of logistics clusters by measuring the concentration degree at three different levels and analyzing the association between the logistics cluster and external resources. It also proposes the value chain strategy as a solution for fostering the core competence of logistics clusters through the extension and integration of the internal and external resources of the value chain. The paper focuses on three aspects of the core competence of logistics clusters and forms an initial evaluation model.

First, it analyzes the key factors influencing the agglomeration of logistics clusters based on the prior research, building a comprehensive model that can answer the following questions: 1. What is the agglomeration degree of the logistics industry at the holistic level? 2. What is the orderliness of the different-scale enterprises involved in the logistics cluster? 3. What is the balance of the spatial distribution of the logistics cluster?

Second, it analyzes the obstacles to internal resource integration within the cluster, as well as the factors affecting the operating efficiency of the logistics cluster, including coherent information, knowledge sharing, and proper government support in the form of infrastructure investment as well as regulation or policy guided by sustainable, holistic thinking, in order to complement the index system with an efficiency-evaluation function within the logistics cluster.

Third, this research examines the important relationships of the logistics cluster as a whole with the external environment and its internal systems; it thereby widens the evaluating factors with external effects, so as to evaluate the cluster's capability to extend its resource leverage to acquire external value, as well as to operate with internet and financing thinking to gain more value in space and time.
Synaptic plasticity and cognitive impairment consequences of acute kidney injury: protective role of ellagic acid

Objective(s): The goal of the current experiment was to define the efficacy and underlying molecular mechanisms of ellagic acid (EA) on acute kidney injury (AKI)-induced impairment of cognition and synaptic plasticity in rats.

Materials and Methods: Intramuscular administration of 8 ml/kg glycerol was used to establish the AKI model. Injured animals were treated with EA (25, 50, and 100 mg/kg, daily, by gavage) for 14 consecutive days. To confirm the renal injury and the effects of EA on AKI, serum creatinine and blood urea nitrogen (BUN) were measured. Cognitive performance was investigated using the Morris water maze test, and in vivo long-term potentiation (LTP) was recorded from the hippocampus. IL-10 and TNF-α levels were then measured using ELISA kits, and the integrity of the blood-brain barrier (BBB) was assessed by extravasation of Evans blue dye into the brain.

Results: Glycerol injection significantly increased BUN and serum creatinine (Scr) levels in the AKI group, and EA treatment resulted in a significant reduction in BUN levels in all concentration groups. A significant reduction in cerebral EBD concentration was also demonstrated in EA-treated rats. Moreover, the indexes of brain electrophysiology and of spatial learning and memory were improved in the EA-administered groups compared with the AKI rats.

Conclusion: The current experiment demonstrated the efficacy of EA against hippocampal complications and cognitive dysfunction secondary to AKI via alleviation of inflammation.

Introduction

Acute kidney injury (AKI), also known as acute renal failure, is a progressive disorder characterized by rapid accumulation of the end products of nitrogen metabolism, such as serum urea and creatinine, or by decreased urine output, caused by a reduction of the kidney's excretory capacity (1-4). Rhabdomyolysis (RM) is a clinical syndrome defined as a massive breakdown of skeletal muscle that releases damaging intracellular contents into the systemic circulation. In RM, components of muscle fibers such as myoglobin (Mb), aspartate aminotransferase, creatine phosphokinase, alanine aminotransferase, and lactate dehydrogenase may leak into the systemic blood circulation (5-7). A major complication of rhabdomyolysis is AKI. In this context, many experiments have focused on the factors that can affect this destructive process. The animal model of renal injury induced by a single intramuscular injection of glycerol in rats is a well-known method for identifying the underlying cellular and molecular mechanisms of AKI as a secondary complication of RM (5). In the in vivo glycerol model of AKI, myoglobin heme causes increased lipid peroxidation in the proximal tubules. Furthermore, myoglobin heme stimulates the production of inflammatory mediators, including cytokines and chemokines, which can activate leukocytes, resulting in tubular necrosis in the kidney cortex (3). During the kidney injury process, the brain may be affected through intensification of damage induced by inflammatory factors, extravasation of leukocytes, oxidative damage, and dysregulation of membrane channels. Accordingly, episodes of AKI can lead to subsequent progressive kidney and brain injuries (4).
AKI can be accompanied by various brain and hippocampal dysfunctions due to altered permeability of the blood-brain barrier. The hippocampus, as the main structure involved in learning and memory, is very sensitive to systemic inflammation such as renal injury, which can ultimately lead to cognitive impairment in AKI patients (1, 8, 9). Exacerbation of free radicals and cytokine factors in brain tissue secondary to AKI has been reported to correlate with neuronal cytotoxicity and apoptosis. Systemic free radical accumulation together with BBB impairment can lead to the spread of free radicals into various parts of the brain, including the hippocampus (8).

Ellagic acid (EA), a natural polyphenolic compound, is found in most soft and hard-shelled fruits and in plant extracts (10-12). In vitro and in vivo studies suggest that EA has antioxidant, antibacterial, anti-inflammatory, and anticancer properties (13, 14). Furthermore, EA has been reported to be neuroprotective and to improve learning and memory in rat models of neurodegenerative disorders (12, 13, 15-17). Regarding its potential therapeutic value, EA has been reported to protect against cisplatin-induced nephrotoxicity (18), against carbon tetrachloride (CCl4)-induced renal oxidative damage and inflammation (19), against renal injury in diabetic rats (20), and against nicotine-induced oxidative damage and apoptotic changes in the rat kidney (21). However, the molecular pathway of AKI-induced brain malfunction and its mechanistic basis are poorly understood (22). Therefore, in this study, we examined memory performance, synaptic plasticity of hippocampal neurons, microvascular permeability, and biochemical changes in the brain after 14 days of treatment with different doses of EA in a rat model of glycerol-induced AKI.

Chemicals and drugs

Glycerol and EA were procured from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA). All other chemicals and reagents used in the present experiment were of analytical grade.

Animal care and ethics

Sixty adult healthy male Wistar rats (approximately 3 months old, weighing 200-250 g) were housed at a controlled temperature (22 ± 2 °C) under a 12-hr light/dark cycle with free access to food and water. All procedures were approved by the Ethics of Experimental Animal Committee (Ahvaz Jundishapur University of Medical Sciences, Iran; ethics code IR.AJUMS.ABHC.REC.1399.051) and performed in accordance with the principles outlined in the NIH Guide for the Care and Use of Laboratory Animals.

Experimental design

All rats were maintained in standard conditions for 1 week and then deprived of water for 24 hr before the beginning of the study. The animals were divided into five equal groups: 1) Control, which received oral administration of 5% DMSO in saline as vehicle for two consecutive weeks; 2) AKI, which received an intramuscular injection of 50% glycerol (8 ml/kg) (23), followed by oral administration of 5% DMSO in saline as vehicle for two consecutive weeks; 3-5) Treated groups (AKI+EA1, EA2, and EA3), which received different doses of EA (25, 50, and 100 mg/kg/day, orally) for two consecutive weeks after glycerol administration.

Assessment of kidney function

At the end of the experiments, all rats were sacrificed under deep, irreversible anesthesia with sodium thiopental (80 mg/kg, IP).
Blood samples were collected from the heart, allowed to clot, and centrifuged to obtain serum. Blood urea nitrogen (BUN) and creatinine (Cr) levels were measured in serum using standard diagnostic kits (Span Diagnostics, Gujarat, India).

Assessment of blood-brain barrier permeability

BBB integrity was assessed via the Evans blue dye (EBD) method. Briefly, five animals from each group were anesthetized, and 20 mg/kg of EBD solution (2% Evans blue in normal saline; Sigma, Germany) was injected through the tail vein. Sixty minutes later, the thoracic cavity of the anesthetized rat was opened and the animal was perfused with 300 ml of normal saline through the left ventricle until colorless fluid emerged, to eliminate intravascular EBD. The brain was then collected, weighed, and homogenized in trichloroacetic acid. The homogenates were kept at 4 °C for 3 min and centrifuged (4000 rpm for 30 min), and the supernatant was collected. Finally, the EBD concentration was assessed by reading the absorbance at 620 nm by spectrophotometry (Biochrome, Cambridge, UK). The amount of dye within the brain tissue increases with increasing vascular permeability and blood-brain barrier dysfunction (22, 24, 25).

Evaluation of Morris water maze (MWM) indexes

Spatial memory function was evaluated using the MWM test, which included four consecutive daily acquisition sessions and a probe trial of memory retention on day 5, performed 24 hr after the final session. The MWM apparatus consisted of a circular, black-painted metal pool (diameter 150 cm, height 60 cm) filled to a depth of 40 cm with water at 25 ± 1 °C. The apparatus was located inside a room surrounded by various spatial cues, and the pool was divided into north, south, west, and east quadrants. The animals' behavior, including swim speed, latency, navigation path, and path length, was monitored via a video camera positioned directly above the center of the pool. First, the rats learned to locate a platform (diameter 12 cm) submerged invisibly 2 cm below the water surface and fixed in one of the four quadrants, over four 60-sec trials per day. In each trial, a rat was released randomly from one of the four pool quadrants and given time to find and climb onto the platform, where it remained for 30 sec between trials. Latency to reach the escape platform was measured as the acquisition index. In the probe test, 24 hr after the last acquisition session (on the 5th day), the platform was removed and the rats were allowed to swim for a single 60-sec probe trial to assess memory retention. The escape latency during spatial learning and the percentage of time spent in the target quadrant were recorded with Ethovision software (ver. 7). Swimming speed was evaluated in all rats to identify differences in motor ability during swimming (26, 27).

Electrophysiological studies

Electrical activity of the dentate gyrus (DG) area of the hippocampus was recorded at the end of the Morris water maze test in rats anesthetized with ketamine (90 mg/kg) mixed with xylazine (10 mg/kg). To record field potentials for long-term potentiation (LTP), the animal's head was fixed in a stereotaxic apparatus and the skull surface was exposed for implantation of the microelectrodes.
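The probe-trial read-out (percentage of time in the target quadrant) can be illustrated with a short sketch. The coordinate convention, quadrant labels, and sampling rate below are assumptions for illustration; in the study this quantity was produced by the Ethovision software.

```python
import numpy as np

def percent_time_in_target(x, y, target="NE", pool_radius=75.0):
    """Percentage of in-pool probe-trial samples falling in the target
    quadrant. x, y: tracked positions (cm) with the origin at the pool
    centre, sampled at a fixed rate."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    inside = np.hypot(x, y) <= pool_radius  # discard samples outside the pool
    quadrant = {
        "NE": (x > 0) & (y > 0),
        "NW": (x < 0) & (y > 0),
        "SW": (x < 0) & (y < 0),
        "SE": (x > 0) & (y < 0),
    }[target]
    return 100.0 * np.sum(quadrant & inside) / max(np.sum(inside), 1)

# 60 s of synthetic 25 Hz tracking data; chance level is ~25%
rng = np.random.default_rng(0)
x, y = rng.uniform(-75, 75, 1500), rng.uniform(-75, 75, 1500)
print(percent_time_in_target(x, y))
```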
A non-electric heating pad was used to maintain normal body temperature (36.5 ± 0.5 °C). A bipolar stainless steel stimulating microelectrode was placed in the left perforant path (PP), and a tungsten recording microelectrode was implanted ipsilaterally in the hippocampal DG area according to the coordinates of the atlas of Paxinos and Watson (28). To reduce traumatic injury to the brain tissue, the microelectrodes were lowered gradually from the dura to the PP and positioned at the site of the highest fEPSP (29).

LTP induction

Dentate gyrus field potentials were evoked by stimulation of the PP. At the beginning of the experiments, test stimuli were applied to elicit fEPSPs, recorded at eight current intensities (50, 100, 150, 200, 250, 300, 350, and 400 µA). After recording the baseline trace, a high-frequency stimulation (HFS) protocol was applied to induce LTP (29). The intensity of the HFS stimulus was set to evoke a fEPSP slope and population spike (PS) amplitude at 80% of their maximum values. The voltage difference between the first negative deflection and the following positive wave was measured and reported as the PS amplitude after tetanic stimulation, and the fEPSP slope was documented as the maximum slope between the initial point of the fEPSP and its first positive peak, in order to assess synaptic efficacy. Extracellular field potentials were amplified ×100 and filtered at 0.1 Hz-3 kHz (12, 30).

ELISA assay on collected tissues

To determine cytokine levels in the hippocampus, 5 rats from each group were deeply anesthetized and perfused through the heart with normal saline. The brains were then collected, and the hippocampi were dissected on ice, washed with saline, and preserved at -80 °C for subsequent ELISA analysis. One hundred milligrams of hippocampal tissue was homogenized in cold PBS containing a protease inhibitor cocktail and centrifuged (10,000×g, 4 °C) for 20 min. The supernatants were collected, and protein concentration was assessed by the Bradford method. TNF-α and IL-10 levels were measured with ELISA kits and reported as picograms per milligram of protein (pg/mg) (31-33).

Statistical analysis

All data are shown as mean ± SEM. The MWM and LTP data were assessed by repeated-measures two-way ANOVA followed by Tukey's post hoc test; one-way ANOVA with Tukey's post hoc test was used for the other data. Values of P<0.05 were considered statistically significant. Statistical analyses were performed with GraphPad Prism 6 software.

Biochemical analysis

Glycerol reduced renal function, as shown by high serum concentrations of creatinine and urea. The effects of EA on kidney function in rats are shown in Figure 1. Glycerol injection in the AKI group significantly increased serum creatinine (Scr) and blood urea nitrogen (BUN) concentrations compared with the control rats (P<0.001; Figure 1A, B). EA treatment resulted in a significant reduction in BUN levels in the AKI+EA50 and AKI+EA100 groups compared with the AKI model rats (P<0.05 and P<0.001, respectively; Figure 1A, F(4,20) = 13.52). EA treatment also significantly reduced Scr levels in the AKI+EA25, AKI+EA50, and AKI+EA100 groups compared with the AKI group (P<0.01, P<0.001, and P<0.001, respectively; Figure 1B, F(4,20) = 14.08).
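The mixed-design analysis described above can be sketched on synthetic latency data shaped like this experiment. The study itself used GraphPad Prism; the pingouin library, group sizes, and effect magnitudes here are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic escape-latency data: 5 groups x 7 rats, 4 training days each
rng = np.random.default_rng(1)
groups = ["Control", "AKI", "AKI+EA25", "AKI+EA50", "AKI+EA100"]
rows = []
for g in groups:
    for rat in range(7):
        base = 50 if g == "AKI" else 35  # assumed group effect, for demo only
        for day in range(1, 5):
            rows.append({"group": g, "rat": f"{g}-{rat}", "day": day,
                         "latency": base - 5 * day + rng.normal(0, 4)})
df = pd.DataFrame(rows)

# Mixed-design (repeated-measures) two-way ANOVA: between = group, within = day
aov = pg.mixed_anova(df, dv="latency", within="day",
                     subject="rat", between="group")
print(aov[["Source", "F", "p-unc"]])

# Tukey-style pairwise comparisons between groups
print(pg.pairwise_tukey(df, dv="latency", between="group"))
```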
BBB permeability

The brain tissue content of EBD, as an index of BBB integrity, is shown in Figure 2. Glycerol injection increased the cerebral EBD content in the AKI group, whereas EA treatment reduced it (Figure 2).

Spatial learning and memory

As shown in Figure 3A, the latency time in the MWM task diminished in all experimental groups over the four days of acquisition sessions, demonstrating that all rats learned to locate the hidden platform. Analysis of the latency to find the hidden platform demonstrated significant effects of EA treatment (F(4,24) = 41.28; P<0.001) and of days (F(3,18) = 29.73; P<0.001). However, there was no significant days × treatment interaction (F(12,72) = 0.172; P>0.05) on any training day during the study period. The data also showed that glycerol injection prolonged the latency to find the hidden platform in AKI rats compared with the control group on days 1-4 of the training period (P<0.001), while two continuous weeks of EA treatment significantly decreased the escape latency in the AKI+EA50 (P<0.05) and AKI+EA100 (P<0.05 and P<0.001) groups compared with the AKI group. In the probe trial, the percentage of time spent in the target quadrant was used to assess memory in the trained rats. As shown in Figure 3B, the AKI rats spent less time in the target quadrant than the controls (F(4,30) = 7.296; P<0.001); however, EA treatment in the AKI+EA50 and AKI+EA100 groups significantly increased the percentage of time spent in the target quadrant compared with the AKI group (P<0.01).

Hippocampal electrical activity

Sample traces obtained from the DG area before and after 400 Hz stimulation are shown in Figure 4A. PS amplitude measurements demonstrated significant effects of EA treatment (F(4,16) = 9.115; P<0.001) and time (F(6,168) = 41.82; P<0.001) (Figure 4B), as well as a significant time × treatment interaction (F(24,96) = 6.404; P<0.001) for PS amplitude recorded at the various times. Glycerol injection significantly decreased PS amplitude at all LTP recording times in the AKI group compared with the control group (P<0.001), whereas 14 days of EA treatment significantly enhanced PS amplitude in the AKI+EA50 (P<0.05 and P<0.01) and AKI+EA100 (P<0.001) groups compared with the AKI group. According to Figure 4C, analysis of the fEPSP slope revealed significant effects of EA treatment (F(4,16) = 14.65; P<0.001) and time (F(6,24) = 12.60; P<0.001), as well as a significant time × treatment interaction (F(24,96) = 5.925; P<0.001) at the various times. The fEPSP slope decreased significantly in the AKI group compared with the control group (P<0.001), while two consecutive weeks of EA treatment significantly increased it in the AKI+EA50 (P<0.05 and P<0.01) and AKI+EA100 (P<0.001) groups compared with the AKI group. Likewise, as shown in Figure 4D, the area under the curve (AUC) was significantly enhanced in the AKI+EA50 (P<0.001) and AKI+EA100 (P<0.001) groups in comparison with the AKI rats. Figure 5A shows the effect of two consecutive weeks of EA treatment on hippocampal TNF-α levels after glycerol-induced AKI (F(4,20) = 9.254; P<0.001).
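For illustration, a simplified heuristic for extracting the two LTP read-outs from a single evoked trace is sketched below. The study's actual analysis pipeline is not given in code, so the peak-picking logic here is an assumption and would need tuning for real recordings.

```python
import numpy as np

def ltp_readouts(t_ms, v_mv):
    """Simplified extraction of the two read-outs used above: the PS
    amplitude, taken as the voltage difference between the deepest
    negative deflection and the following positive wave, and the fEPSP
    slope, taken as the steepest rising dV/dt before that deflection."""
    t = np.asarray(t_ms, float)
    v = np.asarray(v_mv, float)
    i_neg = int(np.argmin(v))                  # deepest negative deflection
    i_pos = i_neg + int(np.argmax(v[i_neg:]))  # following positive wave
    ps_amplitude = v[i_pos] - v[i_neg]
    dvdt = np.gradient(v, t)
    fepsp_slope = dvdt[:i_neg].max() if i_neg > 0 else float("nan")
    return ps_amplitude, fepsp_slope

# Toy trace: rising fEPSP interrupted by a population spike at ~8 ms
t = np.linspace(0, 20, 2000)
v = 0.4 * t - 5.0 * np.exp(-((t - 8) / 1.0) ** 2)
print(ltp_readouts(t, v))
```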
These data showed that glycerol injection significantly increased the hippocampal level of TNF-α in the AKI group compared with the control (P<0.001), while EA treatment in the AKI+EA50 and AKI+EA100 groups decreased hippocampal TNF-α compared with the AKI group (P<0.05 and P<0.01). As shown in Figure 5B, glycerol injection significantly decreased hippocampal IL-10 levels in the AKI group compared with the control group (F(4,20) = 11.66; P<0.001), but treatment of AKI rats with EA significantly increased IL-10 levels in the AKI+EA50 and AKI+EA100 groups compared with the AKI group (P<0.01 for both).

Discussion

The current experiment documented that AKI rats had marked hippocampal long-term potentiation deficits and memory impairments, as assessed by electrophysiological recording and the Morris water maze. Moreover, BBB permeability, as an index of physiological impairment, was altered in the AKI rats. This study showed that treatment with EA could improve these behavioral and physiological dysfunctions, which may be related to the anti-inflammatory properties of EA.

Rhabdomyolysis (RM) is defined as the breakdown of skeletal muscle, with outflow of muscle enzymes and release of toxic compounds into the blood circulation, leading to AKI (3, 34, 35). These compounds are filtered by the glomeruli and increase the risk of development of AKI, which can raise mortality rates worldwide. This condition is related to extra-renal complications that occur secondary to distant-organ involvement, with a special and distinct profile of injurious mediators (36-38). After AKI, neurologic complications are a major cause of mortality (39). Accordingly, the use of safe agents to prevent, treat, or minimize AKI is essential. This experiment was carried out to identify the potential efficacy of EA against cognitive impairment in the glycerol-induced RM model of AKI in rats. The data demonstrate an increase in renal function indexes such as Scr and BUN; these increases underscore their role as critical markers of renal injury (6, 40). This study showed that EA treatment ameliorated the alterations in kidney function parameters in RM-induced AKI by decreasing Scr and BUN. EA, a natural polyphenolic compound, exhibits various pharmacological and biological activities, and several studies have documented that administration of EA provides remarkable protection against cisplatin-induced nephrotoxicity by decreasing plasma creatinine and urea levels (18, 41, 42).

In vivo AKI models have shown that the inflammatory process intensifies adverse effects on remote organs such as the brain (43), and increased levels of inflammatory factors have been shown in hippocampal tissue during the AKI process (9). Consistent with this, we found that AKI also leads to brain inflammatory changes. In this study, TNF-α was elevated in the hippocampus of the glycerol group, and glycerol injection decreased hippocampal IL-10 levels. With EA treatment, in a concentration-dependent manner, hippocampal TNF-α levels dropped significantly and IL-10 concentrations increased, which may underlie the improvement of cognitive function and hippocampal LTP in association with BBB repair. These results demonstrate that EA, probably via its anti-inflammatory properties, ameliorates the consequences of glycerol-induced AKI.
In the brain, several studies have documented the important role of the blood-brain barrier (BBB) in brain homeostasis (44). The BBB is a defensive physical and metabolic barrier between the CNS and the peripheral circulation that helps preserve the microenvironment of the brain. The BBB is made up of non-fenestrated endothelial cells with tight junctions that are responsible for the barrier's low permeability (45, 46). It is now believed that disruption of the BBB is the major event in brain injury after AKI (4, 9). In response to AKI, inflammation occurs due to endothelial dysfunction, oxidative stress, and other risk factors (4). Subsequent to inflammation, cytokines are released through activation of inflammatory cells, astrocytes, microglia, and the BBB endothelium (47, 48). The chemotactic action of cytokines can alter the construction of the endothelial tight junctions, which are vital elements of the BBB (49). Cytokines also activate microglial cells and contribute to dysregulation of water channels (4). An increase in BBB permeability alters the CNS microenvironment, causing CNS dysfunction (50). Accordingly, in the current experiment we assessed brain microvascular function by the Evans blue dye extravasation method and showed that AKI leads to enhanced brain microvascular permeability.

Figure 5. Inflammatory parameters (TNF-α, IL-10) in the hippocampus of AKI rats. A) Hippocampal TNF-α content. B) Hippocampal IL-10 content. Cont: control; AKI: acute kidney injury; EA: ellagic acid. Results are expressed as mean ± SEM. ***P<0.001 versus the control group; #P<0.05 and ##P<0.01 versus the AKI group.

Our data are in line with studies documenting BBB derangement in both acute and chronic uremic encephalopathy (9). Moreover, the data of the current animal experiment showed that BBB function was improved in EA-administered animals, which may be mediated by the protective properties of EA against BBB disruption through inhibition of inflammatory pathways. Mashhadizadeh et al. showed that the BBB permeability disrupted after TBI was improved by EA (13). Although numerous studies have documented various properties of EA, including neuroprotective, anti-inflammatory, antioxidant, and free-radical scavenging effects (51, 52), the main mechanism by which EA produces these effects has not been fully identified and requires further investigation.

As seen in the AKI group in the present study, the hippocampus, as a critical region responsible for learning and memory, may be involved in the AKI process, leading to cognitive and synaptic plasticity dysfunction. We found that memory function was significantly reduced in AKI rats, as shown by their MWM performance. These data support other studies of memory loss due to AKI in animal models (9, 53, 54). Furthermore, we showed that EA treatment promotes brain plasticity and prevents memory loss related to hippocampal function. LTP, a molecular mechanism of memory, is a long-lasting increase in synaptic strength, and much of the work on LTP has focused on hippocampal neural circuits (55-57). Hippocampal LTP is a well-documented bioindex for assessing learning and memory function. Our electrophysiological study showed that administration of EA enhanced the PS amplitude, fEPSP slope, and AUC following HFS in the AKI rats.
These data indicate that EA treatment enhances synaptic plasticity in the DG area of the hippocampus. An earlier study showed that EA attenuated synaptic plasticity impairment in a rat model of traumatic brain injury (13). It has been reported that an enhanced level of TNF-α inhibits LTP induction in the CA1 and DG regions of the rat hippocampus at pathophysiological values (58, 59). Notably, it has been reported that inflammation in peripheral organs, as shown for kidney tissue, can affect hippocampal synaptic transmission (60). Parallel changes in AMPA receptor subunits and NMDA receptors, such as GluR and the NMDA subtype 2B receptor, are suggested to be involved in this destructive process. Enhanced glutamate release, associated with down-regulation of cannabinoid receptors (61), can potentiate transmitter release secondary to systemic inflammation. Increased biosynthesis of systemic pro-inflammatory cytokines can stimulate BBB receptors on sensory afferents or within the circumventricular organs, which mediates the inflammatory cascade in the brain (8). AKI can affect the hippocampus as a major site of cellular inflammation through the synthesis of soluble inflammatory components and the destruction of the BBB. Disruption of BBB permeability secondary to AKI allows cytokines to penetrate the brain, leading to edema and inflammation; moreover, activation of microglia in response to cytokines can accelerate BBB disruption and brain dysfunction (62). EA treatment has the potential to ameliorate these changes via its anti-inflammatory efficacy. Taken together, our study presents evidence that EA can be considered a useful strategy to slow the progression of AKI.

Conclusion

Treatment with EA showed a potential dose-dependent neuroprotective action against AKI-induced cognitive impairment and hippocampal electrophysiological deficits; in particular, the 100 mg/kg dose ameliorated all investigated parameters. The ability of EA to suppress inflammatory mediators makes it a potential candidate for inhibiting the progression of AKI.
Investigation of Macroscopic Mechanical Behavior of Magnetorheological Elastomers under Shear Deformation Using Microscale Representative Volume Element Approach

Magnetorheological elastomers (MREs) are a class of smart materials with rubber-like qualities, demonstrating reversible, magnetic field-dependent viscoelastic properties, which makes them an ideal candidate for the development of the next generation of adaptive vibration absorbers. This research study aims at the development of a finite element model using the microscale representative volume element (RVE) approach to predict the field-dependent shear behavior of MREs. MREs with different elastomeric matrices, including silicone rubber Ecoflex 30 and Ecoflex 50, were considered, with carbonyl iron particles (CIPs) as the magnetic particles. The stress-strain characteristics of the pure silicone rubbers were evaluated experimentally to formulate the nonlinear Ogden strain energy function describing the hyper-elastic behavior of the rubbery matrix. The obtained mechanical and magnetic properties of the matrix and inclusions were integrated into COMSOL Multiphysics to develop the RVE for the MREs, in 2D and 3D configurations, with CIP volume fractions varying from 5% to 40%. A periodic boundary condition (PBC) was imposed on the RVE boundaries while the RVE underwent shear deformation under magnetic flux densities of 0-0.4 T. Comparison of the 2D and 3D models of the isotropic MRE-RVE with experimental results from the literature suggests that the 3D MRE-RVE can be used to accurately predict the influence of varying factors, including matrix type, volume fraction of magnetic particles, and applied magnetic field, on the mechanical behavior of MREs.

Introduction

Magnetorheological (MR) materials are a class of smart materials with the unique ability to change their physical or mechanical characteristics rapidly, in less than a few milliseconds, in response to an external magnetic field. These materials are fabricated by dispersing magnetic particles in nonmagnetic host matrices. MR fluid is a commonly used MR material, but it comes with drawbacks such as magnetic particle sedimentation, sealing problems, and environmental contamination [1-5]. These challenges are not encountered by MR elastomers (MREs), because the magnetic particles are bonded by a carrier matrix such as rubber. MREs are multi-functional materials capable of dynamically altering their mechanical properties, including stiffness and damping capacity, in response to an external magnetic field. This characteristic is quantified by the magnetorheological (MR) effect, defined as the ratio of the change in a field-dependent physical or mechanical property to the value of the same property when no field is applied [6-8].

MREs comprise three fundamental components: magnetic particles, a nonmagnetic elastic matrix, and additives [9, 10]. When a magnetic field is applied during the curing process, it is possible to fabricate MREs with an anisotropic, particle-chain microstructure; when no field is applied during curing, the prepared MREs have an isotropic particle microstructure [11, 12]. MREs show great potential for incorporation into the design of intelligent devices in a variety of engineering disciplines [13-15]. Jolly et al.
[16] pioneered the investigation of the mechanical response of anisotropic elastomer composites with embedded CIP under applied magnetic fields, revealing significant changes in the shear modulus of MREs in response to the field. Davis [17] proposed a phenomenological model to predict the shear modulus of isotropic and anisotropic MREs with and without an external magnetic field; his study suggested that a magnetic particle volume fraction of 27% is optimal with respect to the MR effect. Berasategi et al. [18] studied silicone-based isotropic and anisotropic MREs containing CIP concentrations ranging from 5% to 30% by volume, showing changes in storage and loss moduli as the CIP content increased. Vatandoost et al. [19] investigated pre-strain effects on the compression-mode dynamic characteristics of isotropic and anisotropic MREs; their results revealed that pre-strain has a significantly nonlinear impact on the elastic and loss moduli. Syam et al. [20] conducted a finite element analysis of MRE behavior at the microscale using the COMSOL software (v.5.4); however, they used a linear material model for the silicone rubber matrix. Their results showed increased stiffness in both linear and torsional modes under an external magnetic field. Asadi Khanouki et al. [21] examined isotropic and anisotropic silicone rubber-based MREs with various CIP contents and proposed a microscale model that was validated against experimental results. Dargahi et al. [22] fabricated MRE samples with different rubber matrices and ferromagnetic particle contents and conducted static and dynamic shear tests on the samples; their results showed a significant 1672% increase in storage modulus under a 0.45 T magnetic flux density.

Sun et al. [23] conducted a finite element analysis of the shear deformation of isotropic MREs under an external magnetic field using the representative volume element (RVE) concept in a 2D configuration in COMSOL; however, they approximated the nonlinear B-H properties of the magnetic particles with a linear model and assumed a relative permeability of 100 for the CIP. Inspired by their work, Xu et al. [24] performed 3D modeling of isotropic MREs in tensile mode under an applied magnetic field in COMSOL; they used a linear material model to describe the nonlinear hyper-elastic behavior of the host elastomeric matrix and a linear magnetic model for the CIP, thus ignoring saturation. Kiarie et al. [25] adopted a 2D RVE approach in COMSOL to predict the magnetic field-induced strain in MREs and employed the Mooney-Rivlin nonlinear material model for the host rubber; however, their problem did not include any mechanical load or displacement imposed on the RVE. Li et al. [26] studied the magnetic field-induced shear behavior of isotropic and anisotropic MREs using a 2D RVE approach in COMSOL; however, they assumed a very high relative permeability of 5000 for the magnetic particles (hydroxy iron powder) and treated the silicone rubber as a linear elastic matrix.
Hence, given the prohibitively expensive experimental procedures required to characterize the behavior of MREs under various factors, the development of a reliable analytical model to predict the mechanical properties of MREs offers several benefits over experiments, including controlled simulations, broader scenario exploration, cost-effectiveness, and environmental sustainability. To the best of the authors' knowledge, there is a gap in the literature concerning the inclusion of the nonlinear behavior of MRE components in the FE modeling process, and studies so far have struggled to account for this inherent nonlinearity. Therefore, in this research study, the representative volume element (RVE) approach is utilized as an FE modeling scheme to model the shear deformation of MREs under an external magnetic field, taking material and magnetic nonlinearities into account. An appropriate RVE size was defined for modeling the MRE, and a periodic boundary condition (PBC) was imposed on the RVE boundaries. Experiments were then carried out on host rubber samples (silicone rubber) to obtain stress-strain data, which were used to formulate the Ogden strain energy function describing the nonlinear hyper-elastic behavior of the rubber. The other nonlinearity in the MRE is attributed to the magnetic behavior of the CIP, which was described by a B-H curve that accounts for saturation, instead of the high constant relative permeability used in previous studies.

The RVE was then generated in COMSOL Multiphysics (v.6.0) in 2D and 3D configurations, and pure shear deformation was applied incrementally to the RVE while the PBC was imposed on its boundaries. Simultaneously, a homogeneous magnetic field was created in the surrounding air domain, perpendicular to the shear direction, and the Maxwell stress tensor was defined on the CIP inclusions. The influence of factors such as CIP volume fraction, magnetic field intensity, and the host rubber's mechanical behavior on the shear behavior of the MRE was investigated. The results revealed the significant credibility of the developed 3D model in predicting the shear behavior of MREs, exhibiting substantial potential for use as a reliable alternative to costly experiments.

Representative Volume Element (RVE)

A fundamental goal in the physics of heterogeneous materials is to determine their effective mechanical properties. Among the proposed techniques, representative volume element (RVE) homogenization stands out as a method that uses a statistically homogeneous representation of a heterogeneous material at the microscale to derive its effective properties at the macroscale. Researchers have suggested different criteria for evaluating the RVE size; however, these definitions share a core idea: the RVE is the minimal volume element that is small compared with the macroscopic dimensions of the structure, yet large enough to include a substantial amount of information about the microstructure, so that it exhibits a target attribute or behavior equivalent to that of the whole material at the macroscopic scale [27-30].
The RVE Size for Models Containing Hard Particles

To obtain an accurate estimation of the effective properties, it is essential to establish a correlation between the size of the RVE and the various morphological, mechanical, and thermal factors associated with the microstructure [31, 32]. El Moumen et al. [30] adopted a combined numerical-statistical approach to investigate the variation of the RVE size with different microstructure parameters. The geostatistical parameter known as the integral range, A, was used in the context of composite materials to relate the size of the RVE to the other microstructure parameters, as defined by previous researchers [33-35]; its definition for random microstructures with volume fraction φ is given in [32] (Equation (1)).

For a stationary random function z, the variance of z over the volume V can be obtained as a function of the integral range A and the point variance S²_z [30]:

D²_z(V) = S²_z (A/V)    (2)

Consider a stochastic microstructure composed of two distinct phases: F₁, with real characteristic z₁, occupying a volume fraction φ, and F₂, with characteristic z₂, occupying a volume fraction (1 − φ). The point variance S²_z of the random variable z for such a two-phase material may be written as [30]:

S²_z = φ(1 − φ)(z₁ − z₂)²    (3)

In the present context, where the mechanical property is represented by the random variable z, substituting Equation (3) into Equation (2) gives the volume variance:

D²_z(V) = φ(1 − φ)(z₁ − z₂)² (A/V)    (4)

where D²_z(V) is the variance over the volume V and A is the integral range. To determine the RVE parameters, the number of realizations n and the absolute error ε_abs are used to express D²_z(V) as follows [26]:

D²_z(V) = n ε²_abs / 4    (5)

It should be noted that the size of the RVE is defined as the volume at which the number of realizations equals one [36]. Therefore, combining Equations (1), (4), and (5) and setting n(V = V_RVE) = 1, we obtain:

V_RVE = 4 φ(1 − φ)(z₁ − z₂)² A / ε²_abs    (6)

Defining the contrast ratio in mechanical characteristics as c = z₂/z₁, where z₂ represents the matrix property (phase F₂) and z₁ corresponds to the property of phase F₁, the representativity of the estimated characteristics of a random microstructure can be assessed through the volume size with respect to a desired relative error ε_r = ε_abs/z₂. Hence, using Equation (6), the final expression for the RVE size is obtained as Equation (7), which provides a clear relationship between the RVE of a random microstructure and both the volume fraction and the contrast ratio, while also accounting for a fixed, desired relative error.

For MREs, the two phases are a soft elastomeric phase (F₂) impregnated with hard solid spherical inclusions (phase F₁), represented by the micron-sized carbonyl iron powder (CIP). Thus, for MREs, the contrast ratio c, which here represents the ratio of the modulus of the rubbery matrix to that of the solid inclusions, is nearly negligible compared with unity. Therefore, using Equation (7) and assuming c = 0, the RVE size can be determined for different particle volume fractions and the desired relative error. Tuning the aforementioned parameters for a negligible relative error, the equation leads to an RVE with one CIP inclusion in a cubic rubber matrix, which is the same RVE size presented by Davis [17].
Periodic Boundary Conditions

The commonly employed boundary conditions in micromechanics fall into two categories of uniform boundary conditions: Dirichlet, also known as the displacement boundary condition (DBC), and Neumann, also known as the traction boundary condition (TBC). However, TBC tends to overestimate the effective material properties, while DBC underestimates them. Researchers have therefore also developed periodic boundary conditions (PBC), which are typically applied to unit cells when the heterogeneous material exhibits a periodic structure [37-39]. It has been reported that PBC yields more precise estimates of the effective modulus than the other conventional boundary conditions and is less sensitive to the RVE size and to the inclusion position in the unit cell [40]. PBC in 2D and 3D configurations are briefly discussed below.

Periodic Boundary Conditions in 2D

Considering the periodic structure of the macroscopic body and a square unit cell, compatibility is essential for each boundary pair, in line with the periodicity assumption. This implies that the deformation of each boundary pair is identical and that the stress vectors have opposite signs on each pair [37, 38]. Smit et al. [41] derived the suitable displacement boundary conditions (Equation (8)), where u_Γij denotes the displacement vector of any material point situated on the corresponding boundary Γ_ij, while u_vi represents the displacement vector of vertex v_i. To eliminate rigid body motions, it is necessary to impose u_vk = 0 for any k within the set k ∈ {1, 2, 4}. The micro-macro relations for the total stress and strain tensors can be established as:

σ_ij(macro) = σ̄_ij,  ε_ij(macro) = ε̄_ij    (9)

where σ_ij(macro) and ε_ij(macro) signify the macroscopic total stress and strain tensors, respectively, while σ̄_ij and ε̄_ij represent the corresponding microscopic averages over the surface of the unit cell (i.e., σ̄_ij = (1/S) ∫ σ_ij dy, with S = ∫ dy). Utilizing the averaged elastic constitutive equations in Equation (9), the expressions for the effective elastic properties can be obtained as:

E_eff_11 = σ_11(macro)/ε_11(macro),  ν_eff_12 = −ε_22(macro)/ε_11(macro)    (10)

where E_eff_11 is the effective Young's modulus and ν_eff_12 denotes the effective Poisson's ratio.

Periodic Boundary Conditions in 3D

A 3D RVE featuring a periodic microstructure can be considered a cubic structure that contains both fiber and matrix constituents. This cube is bounded by six surfaces, such that any two parallel surfaces always maintain parallel alignment along the x-, y-, or z-axis. Each pair of parallel surfaces is distinguished by assigning an index (POS/NEG) based on its location on the associated coordinate axis. For example, for a cubic RVE of dimension D, the X_POS surface corresponds to the y-z plane situated at the maximum x-coordinate (i.e., x = D), while the X_NEG surface is located at the minimum x-coordinate (i.e., x = 0).
Each of these surfaces consists of nodes; nodes located on the X_POS surface are referred to as X_POS nodes, and similar terminology applies to the other five faces. The set of nodes on a specific surface is denoted S_np, where n represents the reference axis (X, Y, Z) and p indicates either the positive or the negative face along that axis. Consequently, analogous to the PBC concept in the 2D configuration, the mathematical expressions governing the imposition of periodic deformation on all nodes in the three dimensions of the 3D RVE domain are given in Equation (11) [42].

Enforcing periodic boundary conditions requires stress equilibrium across opposite surfaces within the RVE domain. For every surface S_np of the 3D RVE, a specific unit outward normal vector n_np is defined. Assuming that the domain is under stress, the condition for stress equilibrium across opposing pairs of surfaces is:

σ·n|_XPOS(y, z) = −σ·n|_XNEG(y, z),  σ·n|_YPOS(x, z) = −σ·n|_YNEG(x, z),  σ·n|_ZPOS(x, y) = −σ·n|_ZNEG(x, y)    (12)

where σ is the stress tensor. Ultimately, the volume-averaged stress within the periodically deformed RVE domain can be expressed as:

⟨σ⟩ = (1/V) ∫_V σ dV    (13)

where V is the RVE volume and ⟨σ⟩ denotes the volume-averaged stress. Under the assumption of global periodicity within the RVE domain, the overall macroscopic stress and the global strain are expressed as ⟨σ⟩ = σ_macro and ε_macro, respectively. ε_macro is determined from the displacements of the retained nodes (Equation (14)), where u_i = [u_i,x, u_i,y, u_i,z] represents the displacement vector of retained node i in relation to its coordinate position X_i. It should be noted that u_1 is restricted to zero to avoid rigid body motion.

The effective properties can then be evaluated in the same fashion as explained for 2D. For example, the effective shear moduli obtained from simple shear deformation in the XY, XZ, and YZ planes are, respectively:

G_eff_12 = τ_12(macro)/γ_12(macro),  G_eff_13 = τ_13(macro)/γ_13(macro),  G_eff_23 = τ_23(macro)/γ_23(macro)    (15)

where τ and γ are the shear stress and shear strain, respectively.
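The homogenization step, volume-averaging the element stresses (Equation (13)) and forming the effective shear modulus (Equation (15)), can be sketched as post-processing of exported FE results. The function name and data layout below are our assumptions; in this study the averaging is performed inside COMSOL.

```python
import numpy as np

def effective_shear_modulus(stress_elems, volumes, gamma_applied, plane=(0, 1)):
    """Volume-average the element stress tensors of a sheared RVE
    (Equation (13)) and return G_eff = tau_macro / gamma (Equation (15)).

    stress_elems:  (n_elem, 3, 3) Cauchy stress per element/integration point
    volumes:       (n_elem,) element volumes
    gamma_applied: macroscopic engineering shear strain imposed via the PBC
    plane:         tensor indices of the shear component, e.g. (0, 1) for XY
    """
    sigma = np.asarray(stress_elems, float)
    v = np.asarray(volumes, float)
    sigma_macro = np.einsum("e,eij->ij", v, sigma) / v.sum()
    i, j = plane
    return sigma_macro[i, j] / gamma_applied

# Toy check: a homogeneous material under simple shear recovers its own G.
G_true, gamma = 0.6e6, 0.05  # Pa, engineering shear strain
tau = G_true * gamma
sig = np.zeros((8, 3, 3))
sig[:, 0, 1] = sig[:, 1, 0] = tau
print(effective_shear_modulus(sig, np.ones(8), gamma))  # ~6.0e5 Pa
```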
Maxwell's Stress Tensor

The force acting on a point charge q moving with velocity v in the presence of both an electric field E and a magnetic field B is described by the Lorentz force equation, a fundamental relation in electromagnetism [43]:

F = q(E + v × B)    (16)

Using the Maxwell equations of electromagnetism and the Lorentz force, the Maxwell stress tensor can be introduced as [43]:

T_ij = ε₀(E_i E_j − ½ δ_ij E²) + (1/μ₀)(B_i B_j − ½ δ_ij B²)    (17)

where μ₀ and ε₀ denote the vacuum permeability and permittivity, respectively, and δ_ij is the Kronecker delta, defined as δ_ij = 1 for i = j and δ_ij = 0 otherwise (Equation (18)). In this study, the MREs are exposed only to uniform magnetic fields; thus, the first part of Equation (17), containing the electric field, is eliminated, yielding the Maxwell stress tensor:

T_ij = (1/μ₀)(B_i B_j − ½ δ_ij B²)    (19)

Governing Equations

Considering the basic balance principles of continuum mechanics, including the balance of linear and angular momentum, the equation of mechanical equilibrium is [44]:

∇·σ + f = ρ (dv/dt)    (20)

Here, σ represents the stress tensor, ρ the density, f the body force per unit volume, and v the velocity. Under static or quasi-static conditions (dv/dt = 0), the force balance equation simplifies to:

∇·σ + f = 0    (21)

When coupling magnetic and elastic behavior, different methods can be used to define the body forces and stresses. The deformation of the material due to a magnetic field can be incorporated into the force balance equation in terms of a body force, the magnetic force per unit volume f_m. Assuming that the body force due to weight is negligible, Equation (21) becomes:

∇·σ + f_m = 0    (22)

Alternatively, Equation (22) can be expressed in terms of the total stress tensor T:

∇·T = 0    (23)

Here, T is the sum of the mechanical and magnetic stress tensors, T = σ + T^m, where T^m is the Maxwell stress tensor defined in Equation (19).
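A minimal sketch of Equation (19), useful for sanity-checking the magnetic loading, is given below; the function name and the 0.4 T example, matching the upper end of the simulated field range, are our choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def maxwell_stress_magnetic(B):
    """Magnetic part of the Maxwell stress tensor, Equation (19):
    T_ij = (B_i B_j - 0.5 * delta_ij * |B|^2) / mu0."""
    B = np.asarray(B, dtype=float)
    return (np.outer(B, B) - 0.5 * np.eye(3) * np.dot(B, B)) / MU0

# Uniform 0.4 T flux density along z
T = maxwell_stress_magnetic([0.0, 0.0, 0.4])
print(T[2, 2])  # tensile magnetic stress along the field, ~6.37e4 Pa
print(T[0, 0])  # equal compressive stress transverse to the field
```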
Characterization of the Elastomeric Matrix and Magnetic Inclusions in MREs

To generate the RVE for predicting the shear modulus of an MRE, two datasets are required: the mechanical and magnetic properties of the pure rubber material (the matrix) and of the carbonyl iron particles (CIP, the ferromagnetic inclusions). As the rubber material, silicone rubber was chosen and produced in the laboratory using two different grades, Ecoflex 30 and Ecoflex 50 (Smooth-On, Macungie, PA, USA). To fabricate identical samples in terms of shape and dimensions, two rectangular molds of 37 × 6 × 3 mm were fabricated using a 3D printer (Original Prusa i3 MK3S+, Prusa Research, Prague, Czech Republic), as shown in Figure 1a. The platinum-based silicone rubber (Smooth-On, Macungie, PA, USA) comprises two parts to be thoroughly mixed and cured: the rubber part (A) and the catalyst part (B), shown in Figure 1b, which were combined and stirred in a 50-50 weight fraction. The primary mixture was then placed in a conditioning vacuum mixer (THINKY ARV-200, THINKY CORPORATION, Laguna Hills, CA, USA) for 40 s at 2000 rpm to be thoroughly mixed and degassed. The final mixture was then poured into the molds and cured at room temperature for 15 h. Finally, the vulcanized samples were removed from the molds, ready for the tensile test. Figure 1c shows the fabricated samples.
Characterization of the MRE's Elastomeric Matrix Using the Uniaxial Tensile Test

To determine the mechanical properties of the silicone rubber samples, two identical rectangular samples with dimensions of 37 × 6 × 3 mm underwent a uniaxial tensile-to-failure test at a crosshead velocity of 30 mm/min, using an MTS machine (F1505-IM, Mark-10, Copiague, NY, USA), under identical conditions. Figures 2 and 3 illustrate the three significant steps of the conducted test for silicone rubber Ecoflex 30 and Ecoflex 50, respectively.

The extracted force-displacement experimental data for silicone rubber Ecoflex 30 and Ecoflex 50 are compared in Figure 4. For a given force, silicone rubber Ecoflex 30 experiences a larger displacement than Ecoflex 50 due to its lower stiffness. The force-displacement data were then converted to stress-stretch data, which were subsequently used to identify the material parameters of the hyper-elastic Ogden material model [45]. Compared with other hyper-elastic models such as Neo-Hookean and Mooney-Rivlin, the Ogden model has been shown to provide a better prediction [45,46].
To characterize the behavior of the rubber material, the strain energy function of the theory of hyper-elasticity, denoted W, which represents the energy stored in the material during deformation, is employed. This energy function depends on the principal stretches (λ₁, λ₂, and λ₃), stretch ratios defined as the deformed length divided by the original length of unit fibers oriented along the principal directions [47,48]. Taking into consideration that rubber materials can generally be considered incompressible, we have λ₁λ₂λ₃ = 1. The principal Cauchy stresses, denoted σᵢ (i = 1, 2, 3), are connected to the stretches through the derivative of the strain energy function, as expressed by the following equation [46]:

σᵢ = λᵢ ∂W/∂λᵢ − L. (24)

Here, the index i is not a dummy index (there is no summation over it), and L serves as an unknown Lagrange multiplier associated with the aforementioned incompressibility constraint. For the case of the pure-tension uniaxial tensile test, we have σ₂ = σ₃ = 0, and hence, by expressing the stress difference and using Equation (24), we can effectively eliminate the unknown Lagrange multiplier L:

σ₁ = λ₁ ∂W/∂λ₁ − λ₂ ∂W/∂λ₂. (25)

The Ogden strain energy function can accurately describe the nonlinear behavior of hyper-elastic materials; for incompressible materials, it takes the following form [46]:

W = Σ_p (μ_p/α_p)(λ₁^α_p + λ₂^α_p + λ₃^α_p − 3). (26)

Here, each μ_p and α_p represents a material characteristic parameter, to be determined using experimental data. For practical application, the summation in Equation (26) is confined to a finite number of terms. However, to maintain consistency with the classical theory of incompressible isotropic elasticity, these constant parameters must adhere to the following condition:

2G = Σ_{p=1}^{N} μ_p α_p, (27)

where N is a positive integer, and G is the shear modulus of the material in its undeformed, stress-free (natural) configuration, which implies that Σ_{p=1}^{N} μ_p α_p > 0. In the present research study, the three-term Ogden model (N = 3) was adopted due to its better accuracy compared with the one-term and two-term Ogden models [45]. Using the three-term Ogden strain energy function in Equation (26), considering Equations (24) and (25), and noting that for uniaxial tension λ₁ = λ and λ₂ = λ₃ = λ^(−1/2), the principal value of the Cauchy stress is obtained as

σ = Σ_{p=1}^{3} μ_p (λ^α_p − λ^(−α_p/2)). (28)

The data extracted from the pure tensile test were then employed to determine the material parameters of the Ogden strain energy function through a least-squares (LS) optimization technique. Consider a vector Λ = [Λ₁, Λ₂, ..., Λ_m]ᵀ, representing a collection of experimental deformation values, and the associated vector S = [S₁, S₂, ..., S_m]ᵀ of corresponding stress values, in which m represents the number of data points.
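As a concrete rendering of Equation (28), the short Python function below evaluates the incompressible three-term Ogden Cauchy stress for uniaxial tension (λ₁ = λ, λ₂ = λ₃ = λ^(−1/2)). This is a sketch under the stated incompressibility assumption; the parameter values shown are placeholders, not the fitted values of Table 1.

```python
import numpy as np

def ogden_uniaxial_stress(stretch, mu, alpha):
    """Principal Cauchy stress for uniaxial tension of an incompressible
    Ogden solid: sigma = sum_p mu_p * (lam**a_p - lam**(-a_p/2))."""
    lam = np.asarray(stretch, dtype=float)
    return sum(m * (lam**a - lam**(-a / 2.0)) for m, a in zip(mu, alpha))

# Placeholder three-term parameters (NOT the fitted Table 1 values);
# they satisfy sum_p mu_p*alpha_p = 2G > 0, cf. Equation (27).
mu_demo = [10e3, 2e3, -1e3]        # [Pa]
alpha_demo = [1.8, 4.0, -2.0]      # [-]
lam = np.linspace(1.0, 2.0, 5)
print(ogden_uniaxial_stress(lam, mu_demo, alpha_demo))
```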
For the given deformation vector Λ, using the Ogden material model, the principal Cauchy stress in Equation (28) can be expressed as σ(μ_p, α_p), in which the material parameters μ_p and α_p are unknown. Note that, for the three-term Ogden model, p = 1 to 3, and thus the number of unknown material parameters is 6 (i.e., μ₁, μ₂, μ₃, α₁, α₂, α₃). A least-squares minimization problem was subsequently formulated to identify the material parameters by minimizing the error between the experimental and model results. The error function may thus be defined as

E(μ_p, α_p) = Σ_{j=1}^{m} [S_j − σ(Λ_j; μ_p, α_p)]². (29)

Now, considering Equation (28), the optimization problem can be formulated as follows:

Find the design variables μ₁, μ₂, μ₃, α₁, α₂, α₃ that minimize E(μ_p, α_p), (30)
subject to: Σ_{p=1}^{3} (−μ_p α_p) < 0.

The optimization problem in Equation (30) was solved using the stochastic Genetic Algorithm (GA) and a hybrid method based on the combination of GA and the gradient-based Sequential Quadratic Programming (SQP) method. In the hybrid method, the optimal solution from the GA is fed into the SQP as the initial point, in an attempt to accurately identify the global optimum solution. The identified optimal Ogden material parameters using GA and GA + SQP for both Ecoflex 30 and Ecoflex 50 silicone rubbers are provided in Table 1. The basic mechanical and magnetic properties of the silicone rubber are presented in Table 2:

Table 2. Material properties of silicone rubber.
Density ρ: 920 kg/m³
Poisson ratio ν: ~0.5 (incompressible material)
Magnetic relative permeability: 2

Figures 5 and 6, respectively, show the comparison of the stress-stretch response of the silicone rubber Ecoflex 30 and Ecoflex 50 samples extracted from the experiments with that obtained using the Ogden model based on the optimal material parameters identified using GA and GA + SQP. The results clearly show that the Ogden material model, with optimal material parameters identified through GA + SQP, provides reasonable agreement with the experimental data.
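The hybrid GA + SQP strategy described above can be sketched with SciPy: a stochastic global search (here differential evolution, standing in for the GA) provides the starting point for a gradient-based constrained solver (SLSQP, standing in for SQP). The synthetic data, bounds, and parameter values below are placeholders for illustration, not the study's actual settings.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def ogden_stress(lam, p):
    mu, alpha = p[:3], p[3:]
    return sum(m * (lam**a - lam**(-a / 2.0)) for m, a in zip(mu, alpha))

# Placeholder "experimental" stretch/stress data (synthetic, for illustration).
lam_exp = np.linspace(1.0, 2.0, 30)
s_exp = ogden_stress(lam_exp, np.array([9e3, 1e3, -0.5e3, 2.0, 4.0, -2.0]))

def sse(p):
    """Least-squares error of Equation (29)."""
    return np.sum((s_exp - ogden_stress(lam_exp, p)) ** 2)

# Stability constraint sum_p mu_p*alpha_p > 0, cf. Equation (30),
# enforced in the local (SLSQP) stage.
stab = {"type": "ineq", "fun": lambda p: np.dot(p[:3], p[3:])}
bounds = [(-5e4, 5e4)] * 3 + [(-8.0, 8.0)] * 3

global_fit = differential_evolution(sse, bounds, seed=1, tol=1e-10)
local_fit = minimize(sse, global_fit.x, method="SLSQP",
                     bounds=bounds, constraints=[stab])
print(local_fit.x, local_fit.fun)
```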
Magnetic Properties of Carbonyl Iron Particles

The magnetic properties of the CIPs, in the form of a hysteresis B-H curve, were provided by the manufacturer, BASF SE, Ludwigshafen, Germany, as depicted in Figure 7a. Using the experimental B-H data, Equation (31) can be effectively used to predict the B-H response of CIPs up to saturation [49]. In Equation (31), B and H are the magnetic flux density and the magnetic field intensity, respectively, B_s is the magnetic flux density at saturation, a and b are unknown magnetic parameters, and μ₀ = 4π × 10⁻⁷ N/A² is the vacuum permeability. Using Equation (31), the B-H curve shown in Figure 7b was interpolated and extrapolated for the CIPs, using the provided COMSOL Multiphysics plug-in. The identified material properties considered in the modeling, along with the optimized parameters of the interpolation and extrapolation of the B-H curve of the CIPs using Equation (31), are provided in Table 3.

Modeling the 2D Isotropic MRE-RVE in COMSOL

The 2D MRE-RVE was generated in COMSOL as a simple square domain containing one CIP inclusion. The mechanical and magnetic data associated with each part (matrix, inclusion, and the surrounding air domain) were defined precisely, according to the previous section. In order to validate the model, the first simulation was conducted on silicone rubber Ecoflex 50 with a 15% volume fraction of CIP, for comparison with experimental results from the literature [21].

As for the meshing pattern, a user-defined mesh approach was employed to discretize the matrix, CIP, and the surrounding air. This methodology ensures precise control over meshing details, allowing a finer mesh in specific regions, such as boundaries, and coarsening where acceptable, especially within the air domain. A mesh sensitivity analysis was performed to determine the most efficient number of elements, balancing computational cost against the attainment of accurate results. Results of the mesh sensitivity analysis for the 2D MRE-RVE with 15% volume fraction of CIP, exposed to a magnetic flux density of 0.2 T, are provided for Ecoflex 50 in Figure 8 as an example. A relative error between the shear modulus obtained from the MRE-RVE modeling and the experimental results [21] is defined as

relative error = |G_RVE − G_exp| / G_exp × 100%.

It can be observed that the decrease in the relative error becomes negligible once the number of elements exceeds 1500.
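The mesh-sensitivity criterion above reduces to simple bookkeeping: compute the relative error of each mesh's predicted shear modulus against the experimental value and flag the first mesh after which refinement no longer changes the error appreciably. The element counts and moduli below are illustrative placeholders, not the values of Figure 8.

```python
import numpy as np

def relative_error(g_model, g_exp):
    """Relative error (%) between modeled and experimental shear modulus."""
    return abs(g_model - g_exp) / g_exp * 100.0

def converged_mesh(n_elements, g_model, g_exp, delta_tol=0.1):
    """Return the first element count after which further refinement
    changes the relative error by less than delta_tol percentage points."""
    err = np.array([relative_error(g, g_exp) for g in g_model])
    for i in range(1, len(err)):
        if abs(err[i] - err[i - 1]) < delta_tol:
            return n_elements[i], err[i]
    return n_elements[-1], err[-1]

# Placeholder refinement sweep (illustrative numbers only):
n_el = [400, 800, 1500, 3348, 6000]
g_fe = [66.1e3, 62.4e3, 60.0e3, 59.95e3, 59.9e3]   # Pa
print(converged_mesh(n_el, g_fe, g_exp=54.43e3))
```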
Figure 9 shows the FE model of the 2D MRE-RVE. As depicted in Figure 9a and as previously discussed, the mesh employed in the air domain becomes coarser with distance from the RVE boundaries, in anticipation of the minimal mechanical or magnetic loading and displacements expected within this region. In contrast, the mesh is finely dispersed around the inclusion, using four boundary layers to ensure the necessary precision there. This is essential due to the concentrated interaction of magnetic and mechanical forces within this area. Figure 9b illustrates the boundary layers surrounding the inclusion. It is noteworthy that a total of 3348 triangular elements were used to discretize the entire MRE-RVE, including the surrounding air domain.

Shear Deformation of the Isotropic MRE-RVE

Once the MRE-RVE is constructed, a shear deformation is incrementally applied on the top face of the RVE up to 30% shear strain, while periodic boundary conditions are enforced on the edges. The shear deformation is conducted under magnetic flux densities ranging from 0 to 0.7 T, applied perpendicular to the shear direction. Figure 10 illustrates the applied magnetic field on the RVE and the distortion of the magnetic field around the CIP inclusion as it is absorbed by the inclusion. The uniform induced magnetic flux density inside the inclusion is also evident in this figure. The Maxwell stress tensor is applied on the inclusion boundaries and, in combination with the mechanical stress, the total shear stress generated in the RVE is calculated. Figure 11 presents the Maxwell stress distribution at the CIP boundaries under a magnetic field of 0.4 T, when the RVE undergoes 30% shear strain. Figure 12 shows the shear deformation experienced by the MRE-RVE under 30% shear strain, while the periodic boundary conditions are applied on the RVE boundaries. Note that these results are provided for the silicone rubber Ecoflex 50 MRE-RVE with 15% volume fraction of CIP under an applied magnetic flux density of 0.4 T, as an example. In Figures 10-12, the smaller square indicates the RVE boundaries, while the bigger one represents the air domain boundaries.
Finally, the pure shear analysis was conducted to obtain the shear stress-shear strain response of the MRE-RVE under different applied magnetic flux densities. Figure 13 presents the homogenized shear stress versus shear strain behavior of the silicone rubber Ecoflex 50 MRE-RVE containing 15% CIP by volume, under an external magnetic field ranging from 0 to 0.7 T. Examination of the results in Figure 13 reveals that the shear modulus, representing the slope of the shear stress-shear strain curves, increases substantially with increasing magnetic field intensity. For instance, at nearly 30% shear strain, the generated shear stress increases by over 60%, from nearly 19 kPa to almost 31 kPa, as the magnetic flux density is increased from 0 to 0.7 T. The results also show that the change in modulus decreases as the magnetic flux density increases, confirming the saturation phenomenon. The variation in the shear modulus with respect to the magnetic flux density obtained from the 2D isotropic MRE-RVE, and its comparison with the reported experimental results, is shown in Figure 14.

The results show that the zero-field shear modulus of the MRE-RVE obtained from the COMSOL FE modeling is 59.9 kPa, which is 10% higher than the 54.43 kPa zero-field shear modulus of the MRE given by the experimental results. The results from the 2D MRE-RVE model deviate substantially from the experimental results as the magnetic flux density increases beyond 0.2 T. Moreover, Figure 14 illustrates that the field-induced shear modulus of the MRE-RVE reaches saturation at a magnetic flux density of nearly 0.65 T, as also evident from Figure 13, while that of the experiment keeps increasing up to the 0.8 T maximum applied field. The differences between the modeling and experimental results are quantified in Table 4.
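For reference, the post-processing implied in this section, taking the shear modulus as the slope of a homogenized stress-strain curve and the relative MR effect as the field-induced modulus change, can be sketched as follows. The two-point demo data echo the ~19 kPa and ~31 kPa stresses quoted above; they are not the full Figure 13 curves.

```python
import numpy as np

def shear_modulus(strain, stress):
    """Least-squares slope of the shear stress-strain curve (zero intercept)."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    return strain.dot(stress) / strain.dot(strain)

def mr_effect(g_zero_field, g_field):
    """Relative MR effect (%) = (G_B - G_0)/G_0 * 100."""
    return (g_field - g_zero_field) / g_zero_field * 100.0

# Minimal demo consistent with the quoted stresses at 30% strain:
g0 = shear_modulus([0.0, 0.30], [0.0, 19e3])   # 0 T
g7 = shear_modulus([0.0, 0.30], [0.0, 31e3])   # 0.7 T
print(g0, g7, mr_effect(g0, g7))               # MR effect ~63%
```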
The results show that while the 2D isotropic MRE-RVE model may provide acceptable shear modulus results at lower magnetic fields, it cannot capture the magneto-mechanical behavior of MREs at higher magnetic fields. For example, the differences between the shear moduli from the 2D FE modeling and the experiment are 10% at 0 T, −11% at 0.4 T, and −20% at 0.7 T.

The developed 2D isotropic MRE-RVE FE model was subsequently used to qualitatively investigate the effect of CIP volume fraction on the shear modulus. Figure 15a-f shows the shear stress-shear strain response of MREs under different magnetic flux densities for CIP volume fractions ranging from 5% to 40%. The results show that increasing the volume fraction of CIP yields a higher field-induced shear modulus. For instance, when increasing the magnetic flux density from 0 to 0.7 T under a shear strain of 30%, the shear stress increases from nearly 16 kPa to almost 23 kPa (44%) for a CIP volume fraction of 5%, and from 35 kPa to 65 kPa (86%) for 40%. As the CIP volume fraction increases, the gap between two subsequent curves in each panel of Figure 15a-f widens, implying that the influence of the magnetic field on the shear modulus, and consequently the MR effect, increases with volume fraction. The variation in the MR effect with respect to CIP volume fraction is shown in Figure 16. The results suggest that the MR effect increases with increasing CIP volume fraction. Although the MR effect is expected to reach a maximum at around φ = 27% and then drop [17], the 2D model cannot capture this behavior.

The same FE modeling procedure used for the silicone rubber Ecoflex 50 MRE-RVE was also conducted on the silicone rubber Ecoflex 30 MRE-RVE. The influence of different magnetic flux densities, ranging from 0 to 0.7 T, was likewise studied on the shear stress-shear strain response of silicone rubber Ecoflex 30 MRE-RVEs containing various CIP contents. Figure 17 shows the MR effect with respect to CIP volume fraction for the 2D MRE-RVE with the different matrix materials (Ecoflex 30 and Ecoflex 50). The results clearly show that the MR effect in the MRE-RVE with the softer matrix (Ecoflex 30) reaches a maximum of 154%, while that in the Ecoflex 50 MRE-RVE reaches a maximum of 100%, both containing 40% CIP by volume. Although a higher relative MR effect in MREs with a softer matrix is anticipated from experimental data in the literature [21,22], the relative MR effect is expected to reach a peak at an optimum CIP volume fraction and then decrease as the volume fraction increases further [17]; the 2D MRE-RVE modeling cannot capture this behavior.
The difference in the results could be attributed to the inability of the 2D model to capture the full physical phenomenon. Note that, in a 2D RVE model, an extruded depth must be assigned to the plane geometry. Thus, the inclusion is in fact treated as a short cylindrical fiber, which differs from the nearly spherical geometry of the real inclusions.

Modeling the 3D Isotropic MRE-RVE in COMSOL

As suggested in the previous section, the 2D MRE-RVE was not able to properly capture the coupled magneto-mechanical response of MREs in shear deformation. Hence, the modeling approach was extended to 3D. The 3D MRE-RVE was generated in COMSOL in the same fashion as in the 2D modeling. One CIP inclusion was generated and placed inside a simple cube of matrix material, surrounded by a larger cube of air. The mechanical and magnetic data associated with each part (matrix, magnetic particle inclusion, and the surrounding air domain) were again defined precisely, as explained before. To validate the model, the modeling process was initiated for silicone rubber Ecoflex 50 containing a 15% volume fraction of CIP, and a comparison was subsequently conducted with the experimental data reported in the literature [21].

In order to handle this complex and customized structure, the RVE underwent a detailed meshing procedure. The "user-defined mesh" method was utilized, similar to the approach used in the 2D configuration. A tetrahedral mesh type was used, as it provides more flexibility for meshing curved boundaries, here the spherical magnetic particle. A methodical mesh sensitivity analysis was then systematically performed to reach the optimal mesh pattern, ensuring that computational resources were not needlessly burdened. Results of the relative error between the shear modulus obtained from the 3D MRE-RVE modeling and the experimental results [21], defined as |G_RVE − G_exp|/G_exp × 100%, are provided in Figure 18 for different numbers of elements. Just as in the 2D modeling section, various meshing configurations were explored while creating the 3D model. The results in Figure 18 show that the relative error between the shear modulus obtained from the 3D MRE-RVE and the experiments decreases as the total number of tetrahedral elements increases, indicating convergence of the shear modulus. Hence, based on this finding, and by evaluating the computational cost, a mesh configuration consisting of 30,492 tetrahedral elements was chosen to balance computational cost and accuracy.
A visual representation of the mesh pattern applied in the 3D MRE-RVE modeling is provided in Figure 19a. As previously explained, the mesh density in the air domain progressively coarsens away from the RVE boundaries, in anticipation of minimal mechanical or magnetic loading and displacements in this zone. Conversely, the mesh is finely refined in the vicinity of the inclusion to ensure the required precision in that area, due to the intensified interaction of magnetic and mechanical forces. Figure 19b further illustrates the mesh quality in all regions.

As for the shear analysis, the RVE was systematically subjected to incremental pure shear deformation, gradually reaching a shear strain of 30%. To maintain consistency, the periodic boundary conditions were imposed along all surface boundaries. Concurrently, a magnetic field was applied perpendicular to the shear direction, spanning magnitudes from 0 to 0.4 T. The visual representation in Figure 20 clearly portrays the magnetic field's interaction with the RVE, particularly highlighting the distortion of the field as it encounters the CIP inclusion. The high induced magnetic flux density within the inclusion is clearly visible. In the same fashion as in the 2D modeling, the Maxwell stress tensor was applied to the boundaries of the CIP inclusion, calculated at each step of the analysis, and simultaneously passed as a mechanical load into the solid-mechanics physics, interpreting the whole coupled problem as a solid-mechanics problem with periodic boundary conditions applied accordingly. Integrating this stress with the mechanical stress developed by the shear deformation, the overall shear stress generated within the 3D MRE-RVE was determined. Figure 21a,b illustrate the Maxwell stress distribution at the CIP boundaries under a magnetic field of 0.1 T, when the RVE undergoes 30% shear strain.
Figure 22 illustrates the shear deformation of the MRE-RVE under a shear strain of 30% and the shear stress distribution throughout the entire MRE-RVE. The RVE boundaries are consistently depicted by the smaller cube, while the larger cube delineates the boundaries of the air domain. It is worth noting that Figures 20-22 feature the silicone rubber Ecoflex 50 MRE-RVE with a 15% volume fraction of CIP, subjected to a magnetic field of 0.1 T.

MRE-RVE: Silicone Rubber Ecoflex 50

The pure shear analysis was subsequently conducted on the MRE-RVE with silicone rubber Ecoflex 50 as the matrix material, under applied magnetic flux densities ranging from 0 to 0.4 T. Results for the homogenized shear stress versus shear strain for the MRE-RVE containing 15% CIP by volume under varied magnetic flux densities are shown in Figure 23. Examination of the results in Figure 23 reveals that, as expected, the shear modulus increases with increasing magnetic field intensity. In other words, as the magnetic field increases, the MRE-RVE experiences higher shear stress while undergoing the same amount of shear strain. For example, at 30% shear strain, the MRE experiences roughly 18 kPa of shear stress when no magnetic field is applied; by increasing the magnetic field up to 0.4 T, the shear stress reaches almost 30 kPa, indicating a 66% increase. The variation in the predicted field-dependent shear modulus with respect to the applied magnetic flux density, and its comparison with the reported experimental results [21], are shown in Figure 24. It is observed that, unlike the 2D MRE-RVE model, the 3D MRE-RVE can accurately predict the field-dependent shear modulus of the MRE up to 0.4 T. For instance, the zero-field shear modulus of the MRE-RVE obtained from the COMSOL 3D FE modeling is 55.15 kPa, which is only 1.3% higher than the 54.43 kPa zero-field shear modulus of the MRE obtained experimentally.
In order to verify that the overall behavior of the MRE is captured accurately by the FE modeling, the coefficient of determination (R²) was determined, defined as R² = 1 − SS_res/SS_tot. SS_res represents the sum of squared residuals (the differences between the predicted values and the actual values), and SS_tot is the total sum of squares, which measures the total variance of the predicted variable. An R² close to one describes perfect agreement between the predicted and actual values. R² was determined between the results from the modeling and those obtained from the experiments [21], and was found to be 0.99.

We also attempted to evaluate the shear response behavior of the 3D RVE model at higher magnetic flux densities, beyond 0.4 T. However, the model fails to converge there due to the complex interaction between the mechanical and magnetic loads. Analyzing the results, we found that the issue is likely due to the abrupt change in material properties between an extremely soft rubber and a rigid inclusion, along with the accumulated mechanical and magnetic nonlinearity associated with the stress and material behavior.
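The goodness-of-fit measure quoted above is standard; a minimal implementation of R² = 1 − SS_res/SS_tot is shown below with placeholder arrays (not the paper's actual curves).

```python
import numpy as np

def r_squared(y_actual, y_predicted):
    """Coefficient of determination R^2 = 1 - SS_res/SS_tot."""
    y = np.asarray(y_actual, dtype=float)
    f = np.asarray(y_predicted, dtype=float)
    ss_res = np.sum((y - f) ** 2)             # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Placeholder shear-modulus data (illustrative values only):
g_exp = np.array([54.4, 58.0, 63.5, 70.1, 78.2])   # kPa, experiment
g_fem = np.array([55.2, 58.4, 63.1, 69.8, 77.5])   # kPa, FE model
print(r_squared(g_exp, g_fem))
```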
The 3D MRE-RVE model was then effectively utilized to investigate the influence of CIP volume fraction, magnetic field, and matrix stiffness on the shear deformation response of MREs under varying magnetic flux densities. Results for the shear stress-shear strain response of MREs with an Ecoflex 50 matrix for different CIP volume fractions, ranging from 5% to 40%, are illustrated in Figure 25a-f.

As suggested by the results in Figure 25a-f, the nonlinearity in the stress-strain curves increases with increasing CIP volume fraction and applied magnetic field. The results also show that increasing the volume fraction of CIP yields a substantial increase in the field-induced shear stress at a given shear strain. For instance, at 30% shear strain and under a magnetic flux density of 0.4 T, the MRE-RVE containing 5% CIP experiences a shear stress of roughly 17 kPa, while the shear stress reaches 110 kPa for the MRE-RVE containing 40% CIP. From a broader perspective, as the CIP volume fraction increases, the gaps between the stress-strain curves widen, indicating an enhancement of the relative MR effect. However, the widest gaps are observed in the MRE-RVE with 27% CIP, indicating that the maximum relative MR effect occurs at a volume fraction of 27%. To better understand the influence of CIP volume fraction, the MR effect of the 3D MRE-RVEs with different CIP volume fractions was evaluated. Note that, in determining the MR effect, the maximum shear modulus was evaluated at a magnetic flux density of 0.4 T. The results, shown in Figure 26, suggest that the relative MR effect initially increases with CIP content, reaching a maximum of 92% at 27% volume fraction, as anticipated above, and then decreases with a further increase in the volume fraction of CIP. This is in agreement with the results reported by Davis [17].
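The peak-finding step implied by Figure 26 (maximum relative MR effect at 27% CIP) reduces to a few lines; the arrays below are placeholders shaped to echo the quoted 92% peak, not the actual Figure 26 data.

```python
import numpy as np

phi = np.array([0.05, 0.15, 0.27, 0.35, 0.40])   # CIP volume fractions
mr = np.array([31.0, 64.0, 92.0, 80.0, 71.0])    # relative MR effect [%], placeholders

i = int(np.argmax(mr))                           # locate the optimum fraction
print(f"peak MR effect {mr[i]:.0f}% at phi = {phi[i]:.0%}")
```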
Silicone Rubber Ecoflex 30 MRE-RVE

The same FE modeling procedure used for the silicone rubber Ecoflex 50 MRE-RVE was conducted on the MRE-RVE with silicone rubber Ecoflex 30 as the matrix material. The influence of different magnetic flux densities, ranging from 0 to 0.4 T, was likewise studied on the shear stress-shear strain response of silicone rubber Ecoflex 30 MRE-RVEs containing various CIP contents. Figure 27 shows the shear stress-shear strain behavior of Ecoflex 30 MRE-RVEs with 15% volume fraction of CIP under varied magnetic flux densities ranging from 0 to 0.4 T. The results in Figure 27 clearly suggest that the Ecoflex 30 MRE-RVE exhibits lower stiffness and less nonlinear behavior compared with the Ecoflex 50 MRE-RVE in Figure 23. Moreover, a closer look at the gaps between two subsequent curves in Figures 23 and 27 reveals that, in the Ecoflex 30 MRE-RVE, the stress-strain curves are more widely spaced than in the Ecoflex 50 MRE-RVE, indicating a more pronounced relative MR effect for the MRE with the softer matrix material under identical conditions. Similar to the Ecoflex 50 MRE-RVE, the effect of CIP volume fraction on the shear stress-shear strain response of the Ecoflex 30 MRE-RVE was also investigated; the results are shown in Figure 28a-f. The results indicate an increase in the shear modulus and the relative MR effect as the CIP content increases. The effect of CIP volume fraction on the relative MR effect can be better understood from Figure 29. The results show that the relative MR effect for the Ecoflex 30 MRE-RVE reaches its peak at a CIP volume fraction of nearly 35%, while the Ecoflex 50 MRE-RVE experienced its maximum relative MR effect at 27% CIP content (Figure 26).

Figure 30 compares the results for the MR effect with respect to CIP content for MRE-RVEs with Ecoflex 50 and Ecoflex 30 as the host matrices. The results show that the maximum MR effect obtained with the softer matrix (Ecoflex 30) is nearly 166% at a 35% volume fraction of CIP, compared with nearly 92% at a 27% volume fraction of CIP for silicone rubber Ecoflex 50. The maximum relative MR effect in MREs with the softer matrix is thus noticeably higher than that of MREs with silicone rubber Ecoflex 50, which has also been confirmed by other studies [21,22].
Conclusions

In this study, an FE model based on the representative volume element (RVE) approach was proposed to model the shear deformation of MREs under the influence of an external magnetic field, while taking all the nonlinearities into account. Experiments were carried out on host rubber samples (silicone rubber) to formulate the highly nonlinear Ogden strain energy function describing the hyper-elastic behavior of the rubbery matrix. The magnetic behavior of CIP was described through a nonlinear B-H curve. The appropriate RVE size for modeling the MREs was defined, and the MRE-RVE was generated in COMSOL Multiphysics in 2D and 3D configurations. The MRE-RVE underwent incremental pure shear deformation, while periodic boundary conditions (PBCs) were imposed on the RVE boundaries. Simultaneously, a homogeneous magnetic field was applied perpendicular to the shear direction, and the Maxwell stress tensor was defined on the CIP inclusion. The study focused on isotropic MREs, investigating the influence of varied magnetic flux densities, CIP content, and the host rubber's hyper-elastic behavior on the shear modulus of the MREs. The MR effect of the MRE-RVEs was also studied and compared. The analysis started with a comprehensive study of the 2D MRE-RVEs, generated as a simple square with one circular inclusion inside, surrounded by air. The MRE-RVE underwent incremental pure shear deformation up to 30%, and the impact of magnetic flux densities ranging from 0 to 0.7 T was
investigated. The results showed that the shear modulus of the MRE-RVE keeps a positive correlation with the magnetic flux density and the CIP content. Comparing the 2D results for Ecoflex 50 with the experimental results in the literature [21] revealed that, although the 2D modeling can predict the MRE's behavior within a ±20% difference from the experimental results and reproduces the saturation effect, it cannot accurately predict the variation in the MR effect with respect to CIP volume fraction: the relative MR effect in the 2D MRE-RVE keeps increasing with CIP volume fraction without exhibiting a peak. These differences between the modeling and experimental results could be attributed to the inability of the 2D model to capture the physical shape of the inclusions.

Subsequently, the 3D MRE-RVE was developed in the COMSOL environment. The modeling was conducted on a simple cubic MRE-RVE with one spherical inclusion inside, the whole RVE being surrounded by a larger cube of air. The study was conducted on MRE-RVEs with varied magnetic flux densities (ranging from 0 to 0.4 T), CIP contents, and different host elastomers (silicone rubber Ecoflex 30 and Ecoflex 50). As expected, a positive correlation of the shear modulus with both the magnetic flux density and the CIP content was found for both silicone rubber based MREs. Comparing the 3D results for Ecoflex 50 with the experimental results in the literature [21] revealed excellent agreement, with a coefficient of determination (R²) of 0.99. Exploring the MR effect in the 3D MRE-RVE showed that the relative MR effect in the MRE-RVE with the softer matrix material (Ecoflex 30) is higher than that of the Ecoflex 50 MRE-RVE. The 3D modeling was also able to accurately predict the variation in the MR effect with respect to CIP volume fraction. The results suggested that the relative MR effect in the Ecoflex 30 MRE-RVE keeps increasing up to a 35% volume fraction of CIP, peaking at 166%, while the MR effect in the MRE-RVE with Ecoflex 50 reaches a maximum of 92% at 27% CIP content. Overall, using a softer matrix material leads to a higher relative MR effect with a higher optimum volume fraction.
Figure 1. (a) The identical molds fabricated by the 3D printer; (b) parts A and B for fabricating silicone rubber Ecoflex 30, to be mixed and cured; and (c) the cured final samples (30 indicates Ecoflex 30, and 50 refers to Ecoflex 50).
Figure 2. (a) Silicone rubber sample (Ecoflex 30) assembled on the MTS machine prior to the tensile test; (b) the sample during the final steps of the tensile test; and (c) the failed sample.
Figure 3. (a) Silicone rubber sample (Ecoflex 50) assembled on the MTS machine prior to the tensile test; (b) the sample during the final steps of the tensile test; and (c) the failed sample.
Figure 4. The extracted raw data of the conducted pure tensile-to-failure test for silicone rubber Ecoflex 30 and Ecoflex 50.
Figure 5. Curve-fitted plots for silicone rubber Ecoflex 30 obtained with the least-squares method using GA and the hybrid GA + SQP method.
Figure 6. Curve-fitted plots for silicone rubber Ecoflex 50 obtained with the least-squares method using GA and the hybrid GA + SQP method.
Figure 7. (a) B-H curve for CIP, provided by the manufacturer (BASF SE, Ludwigshafen, Germany), and (b) the extrapolated B-H curve of CIP. The vertical axis represents the magnetic flux density (B) in Tesla; the horizontal axis is the field intensity in A/m.
Figure 9. (a) The mesh pattern of the MRE-RVE (the grey square containing the circle) surrounded by the air domain (the purple square), and (b) the boundary layers (the grey discretized circles) implemented to enhance precision around the inclusion (the blue circle). The cut-out section is magnified by a factor of 3.5.
Figure 10. The magnetic field distortion around the CIP inclusion; the small square indicates the RVE boundaries, and the big square is the air domain. The color bar depicts the magnetic flux density in Tesla.
Figure 11. The Maxwell stress distribution generated at the CIP boundaries under a magnetic flux density of 0.4 T, at 30% shear strain. The color bar depicts the Maxwell stress in Pa.
Figure 12. The shear deformation of the MRE-RVE under 30% shear strain, with periodic boundary conditions applied on the RVE boundaries. The small square frame shows the RVE boundaries before deformation; the deformed RVE is colored. The color bar depicts the displacement magnitude in µm.
Figure 14. Comparison of the shear modulus versus magnetic flux density obtained from COMSOL FE modeling of the 2D isotropic MRE-RVE with the experimental results [21] for silicone rubber Ecoflex 50 MRE with 15% volume fraction of CIP.
Figure 16. MR effect behavior in 2D modeling of the silicone rubber (SR) Ecoflex 50 MRE-RVE with respect to CIP volume fraction.
Figure 17. Comparison of the MR effect behavior in 2D isotropic MRE-RVEs with different matrix materials (Ecoflex 50 and Ecoflex 30) with respect to CIP volume fraction.
Figure 19. (a) The mesh pattern of the MRE-RVE (the blue cubic matrix containing the green spherical inclusion) surrounded by the air domain (the grey cube), and (b) the mesh quality in all regions, with the color bar representing the quality of the mesh on a scale of 0 to 1.
Figure 20. The magnetic field distortion around the CIP inclusion inside the 3D RVE; red arrows represent the magnetic field intensity and direction (0.1 T, upward); the color bar describes the magnetic flux density (T) in the air and MRE-RVE domains, on a hypothetical cut-out surface in the middle of the model.
Figure 21. The Maxwell stress distribution generated at the CIP boundaries under a magnetic flux density of 0.1 T, at 30% shear strain: (a) x-z plane view and (b) spatial 3D view. The arrows represent the Maxwell stress, and the color bar depicts the Maxwell stress in Pa.
Figure 22. The shear deformation of the MRE-RVE under 30% shear strain, with periodic boundary conditions applied on the RVE boundaries, subjected to a magnetic field of 0.1 T. The smaller cubic frame shows the RVE boundaries before deformation; the deformed RVE is colored. The color bar depicts the Tresca stress in Pa.
Figure 27. Shear stress-strain behavior of the MRE-RVE (silicone rubber (SR) Ecoflex 30) under the application of different magnetic fields.
Figure 30. Comparison of the MR effect versus CIP volume fraction obtained from the 3D isotropic MRE-RVE for silicone rubber Ecoflex 30 and silicone rubber Ecoflex 50.
Table 1. The optimized parameters obtained by curve-fitting the experimental data with the Ogden strain energy function.
Table 2. Material properties of silicone rubber.
Table 3. Material properties of air and CIP.
Table 4. Comparison of the results of the 2D MRE-RVE and the experiments.
17,425.8
2024-05-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Model-independent determinations of the electron EDM and the role of diamagnetic atoms We perform model-independent analyses extracting limits for the electric dipole moment of the electron and the P,T-odd scalar-pseudoscalar (S-PS) nucleon-electron coupling from the most recent measurements with atoms and molecules. The analysis using paramagnetic systems only is improved substantially by the inclusion of the recent measurement on HfF+ ions, but complicated by the fact that the corresponding constraints are largely aligned, owing to a general relation between the coefficients for the two contributions. Since this same relation does not hold in diamagnetic systems, it is possible to find atoms that provide essentially orthogonal constraints to those from paramagnetic ones. However, the coefficients are suppressed in closed-shell systems, and enhancements of P,T-odd effects are only prevalent in the presence of hyperfine interactions. We formulate the hyperfine-induced time-reversal-symmetry-breaking S-PS nucleon-electron interaction in general atoms in a mixed perturbative and variational approach, based on electronic Dirac wavefunctions including the effects of electron correlations. The method is applied to the Hg atom, yielding the first direct calculation of the coefficient of the S-PS nucleon-electron coupling in a diamagnetic system. This results in additionally improved model-independent limits for both the electron EDM and the nucleon-electron coupling from the global fit. Finally, we employ this fit to provide indirect limits for several paramagnetic systems under investigation. I. INTRODUCTION Electric dipole moments (EDMs) provide a competitive means to search for new physics (NP), complementary to strategies like direct searches at hadron colliders, but also to other indirect searches, for instance using flavour-changing processes. The exceptional sensitivity is due to the combination of experimental precision with a tiny Standard Model (SM) background. Experimental tests for EDMs typically involve rather complex systems like atoms or molecules. The discovery of a finite EDM in any of these systems would be a major discovery, independent of its specific source. However, reliably interpreting these measurements in terms of fundamental parameters of a given NP model requires precise knowledge of their relations. These are established proceeding via a series of effective field theories, rendering a large part of the analysis model- and system-independent, see e.g. Refs. [1][2][3][4][5][6][7] for recent reviews. The corresponding complex matrix elements on the atomic, nuclear and QCD levels often involve large uncertainties, which sometimes prohibit fully exploiting the experimental information, see Refs. [5,8] for recent detailed discussions. This article presents a new method for the rigorous calculation of the coefficient of the scalar-pseudoscalar nucleon-electron (S-PS-ne) interaction in diamagnetic systems. For this contribution only rough estimates exist so far, due to the fact that it vanishes to leading order in the electromagnetic interaction, even in the presence of an external electric field. In this paper we consider Mercury (Hg), which provides the strongest experimental limit on an EDM so far [9]. The determination of this coefficient provides a competitive limit on the (NP-induced) strength of the corresponding interaction.
It is also of special interest for the model-independent extraction of the electron EDM: in principle, paramagnetic systems can be used to obtain both coefficients, taking into account potential cancellations [10,11]; however, a problem arises from the fact that all paramagnetic systems constrain a similar combination of these two contributions [10]. Diamagnetic systems generally give independent constraints, thereby improving the model-independent extraction of both coefficients significantly [11]. Our results can therefore be used to constrain different classes of NP models, requiring less restrictive assumptions. This article proceeds as follows: In the following section we present a method for the direct calculation of S-PS-ne enhancements in closed-shell atoms and molecules. Section III describes its application to the Hg atom, and in Section IV we investigate the phenomenological consequences of the present study. In the final section we conclude and discuss the implications of our findings for future work. II. THEORETICAL FRAMEWORK The calculation of the dominant contribution induced by the S-PS-ne interaction in diamagnetic systems requires the inclusion of the hyperfine interaction on top of the corresponding calculation in paramagnetic systems, since its expectation value vanishes to leading order in a closed-shell atom, due to a vanishing spin density near its nucleus [12,13]. The nuclear current at the origin, corresponding to the magnetic moment of the nucleus, polarizes the closed atomic shells and leads to non-zero values. In a traditional setup this would require a three-fold expansion in the S-PS-ne interaction, the external electric field and the hyperfine interaction. Instead, we here start from a 0th-order electronic-structure problem where $\hat H^{(0)}$ is the atomic Dirac-Coulomb Hamiltonian including the perturbation due to a homogeneous external electric field $\mathbf E_{\rm ext}$, with the nucleus placed at the origin:
$$\hat H^{(0)} = \sum_{j=1}^{N}\Big[c\,\boldsymbol\alpha_j\cdot\mathbf p_j + \beta_j c^2 - \frac{Z}{r_j} + \mathbf E_{\rm ext}\cdot\mathbf r_j\Big] + \sum_{j<k}^{N}\frac{1}{r_{jk}}, \qquad (1)$$
where the indices $j, k$ run over the $N$ electrons, $Z$ is the proton number ($N = Z$ for neutral atoms), and $\boldsymbol\alpha, \beta$ are standard Dirac matrices. We use atomic units (a.u.) throughout ($e = m_0 = \hbar = 1$). Since we solve Eq. (1) variationally (i.e., by diagonalization), the effect of the external electric field in $\psi^{(0)}_K$ is taken into account to all orders in perturbation theory. These states are technically electronic configuration interaction (CI) vectors [14]. The first-order perturbed wavefunction due to the magnetic hyperfine interaction can be written as
$$|\psi^{(1)}_K\rangle = \sum_{N\neq K}\frac{\langle\psi^{(0)}_N|\hat H_{\rm HF}|\psi^{(0)}_K\rangle}{\varepsilon^{(0)}_K - \varepsilon^{(0)}_N}\,|\psi^{(0)}_N\rangle,$$
where $\boldsymbol\mu = g\mathbf I$ is the nuclear magnetic moment, $g$ the nuclear g-factor, $m_p$ the proton mass and $\mathbf I$ the nuclear spin. The minus sign in Eq. (4) relates to the charge of an electron in a.u. The hyperfine Hamiltonian can also be written as
$$\hat H_{\rm HF} = \mathbf I\cdot\mathbf A\cdot\mathbf J,$$
where $\mathbf A$ is the rank-2 Cartesian hyperfine interaction tensor and $\mathbf J$ is the total electronic angular momentum. It is, therefore, generally a sum of nine terms which, due to $\mu := \langle I, M_I{=}I|\hat\mu_z|I, M_I{=}I\rangle$ and $\boldsymbol\mu\propto\mathbf I$, reduces to a single effective term. The required matrix elements are defined componentwise, where $k$ is a Cartesian component and the nuclear magnetic moment enters in units of the nuclear magneton $\mu_N = \frac{1}{2cm_p}$ (in a.u.). For evaluating the S-PS-ne enhancement in the atom we use the effective Hamiltonian operator [15]
$$\hat H_{\text{S-PS-ne}} = \frac{i\,G_F}{\sqrt 2}\,A\,C_S\sum_{j=1}^{N}\beta_j\gamma^5_j\,\rho(\mathbf r_j),$$
where $G_F$ is the Fermi constant, $A$ the nucleon number, $C_S$ the dimensionless S-PS-ne coupling constant, $\rho$ the normalized nuclear charge density, and $\gamma^\mu$ are standard Dirac matrices.
Given the smallness of this interaction, even compared to the hyperfine interaction, higher-order perturbative corrections are clearly negligible. Given, furthermore, the CP-conserving nature of the hyperfine interaction, the energy shift of a given atomic state indicating CP violation can to leading order be written as
$$\Delta\varepsilon_K = 2\,\mathrm{Re}\,\langle\psi^{(0)}_K|\hat H_{\text{S-PS-ne}}|\psi^{(1)}_K\rangle. \qquad (8)$$
The atomic EDM in terms of the S-PS-ne interaction is a function of the polarizing external electric field $E_{\rm ext}$, and so
$$d_A(E_{\rm ext}) = -\frac{\Delta\varepsilon_K}{E_{\rm ext}} \approx \alpha_{C_S}\,C_S, \qquad (9)$$
where the approximation holds in the linear regime, which is assured by external fields chosen significantly smaller than the internal ones, and we have introduced $\alpha_{C_S}$, the atomic S-PS-ne enhancement factor. In the present case $E_{\rm ext}({\rm Hg}) = 0.00024$ a.u. This leads to shifts of the energies $\varepsilon^{(0)}_K$ (see Eq. (1)) on the order of $10^{-6}$ a.u. for Hg. CI vectors are consequently optimized such that the energies $\varepsilon^{(0)}_K$ are converged to at least $10^{-9}$ a.u. We now focus on the evaluation of the normalized expectation value entering the right-hand side of Eq. (8),
$$\frac{\langle\psi_K|\hat H_{\text{S-PS-ne}}|\psi_K\rangle}{\langle\psi_K|\psi_K\rangle}, \qquad |\psi_K\rangle = |\psi^{(0)}_K\rangle + |\psi^{(1)}_K\rangle. \qquad (10)$$
For convenience, we use in the following also the S-PS-ne enhancement $S$ (in analogy to the electron EDM enhancement $R$, and not to be confused with the nuclear Schiff moment, also denoted $S$ in the literature), defined in Eq. (11). In order to facilitate comparison with the literature, we note that the states $\psi^{(0)}_K$ can be considered as wavefunctions perturbed to infinite order by $\mathbf E$, and so the expression in Eq. (9) contains terms of third order of the type
$$\sum_{M,N}\frac{\langle\psi^{(0)}_K|\hat H_{\text{S-PS-ne}}|\psi^{(0)}_M\rangle\langle\psi^{(0)}_M|\hat H_{\rm HF}|\psi^{(0)}_N\rangle\langle\psi^{(0)}_N|\mathbf E\cdot\mathbf r|\psi^{(0)}_K\rangle}{(\varepsilon_K-\varepsilon_M)(\varepsilon_K-\varepsilon_N)} \qquad (12)$$
plus higher-order contributions in $\mathbf E$, where $\psi^{(0)}_N$ is now an unperturbed eigenstate of the plain atomic Dirac-Coulomb Hamiltonian without external electric field. The terms in Eq. (12) are just the equivalent of the electron EDM contribution via the magnetic hyperfine interaction to an atomic EDM, as given by Flambaum and Khriplovich in reference [15], Eq. (17). These third-order terms, declared important but left untreated in reference [18], are taken into account in the present approach. Moreover, the higher-order contributions in $\mathbf E$ are included automatically in the present approach. III. NE-SPS ENHANCEMENT IN ATOMIC MERCURY For our zeroth-order atomic wavefunctions the quantum number $M_J$, corresponding to the projection of the total angular momentum onto the axis defined by the external electric field, is an exact quantum number and characterizes an irreducible representation of the axial double point group. Since the external perturbation is small, the quantum number $J$ is still approximately valid and we denote CI states in the approximate Russell-Saunders picture as $^{M}L_{J,M_J}$, where $M$ is the spin multiplicity. The S-PS-ne interaction Hamiltonian in Eq. (6) is rotationally invariant; as a consequence, $\langle M_J|\hat H_{\text{S-PS-ne}}|M'_J\rangle = 0$ for $M_J \neq M'_J$, which reduces the perturbation sum in Eq. (9) to states from the irreducible representation $M_J = 0$, a computational advantage which we exploit. Applying the framework developed in the last section to Mercury, a consistent finding in all our calculations is that among the 35 energetically lowest-lying excited states of symmetry $M_J = 0$ only three states contribute sizably to the perturbation sum, Eq. (9), determining $\alpha_{C_S}$. This finding can be understood qualitatively by analyzing the product of matrix elements in Eq. (9): for contributions of the first type, the off-diagonal S-PS-ne matrix element is large due to the parity-odd excitation $6s \to np$ characterizing the excited state, and the off-diagonal hyperfine matrix element is non-negligible due to sp-mixing via the external electric field.
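To make the structure of this perturbation sum concrete, the following minimal numpy sketch evaluates a sum of the type in Eq. (9) for randomly generated toy matrix elements; all energies and matrix elements are placeholders, not real Hg data, and the overall scale is meaningless.

```python
import numpy as np

# Toy evaluation of a perturbation sum of the type in Eq. (9):
#   alpha ~ 2 * sum_K <0|H_SPS|K> <K|H_HF|0> / (E_0 - E_K),
# restricted to one symmetry block (M_J = 0), as exploited in the text.
# All inputs are random placeholders, NOT real atomic data.
rng = np.random.default_rng(0)
n_states = 36
E = np.sort(rng.uniform(0.1, 2.0, n_states))              # state energies (a.u.)
H_sps = rng.normal(scale=1e-8, size=(n_states, n_states))  # S-PS-ne elements
H_hf = rng.normal(scale=1e-4, size=(n_states, n_states))   # hyperfine elements
H_sps = 0.5 * (H_sps + H_sps.T)                            # real symmetric toys
H_hf = 0.5 * (H_hf + H_hf.T)

terms = np.array([2.0 * H_sps[0, K] * H_hf[K, 0] / (E[0] - E[K])
                  for K in range(1, n_states)])
partial = np.cumsum(terms)
# In Hg only a few low-lying states dominate; here we just inspect how the
# toy sum builds up with the number of intermediate states retained.
print("sum over first 3 / 12 / all intermediate states:",
      partial[2], partial[11], partial[-1])
```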
For the other leading type of contribution, the off-diagonal S-PS-ne matrix element is now two orders of magnitude smaller than in the above case (for obvious reasons related to symmetry), but the off-diagonal hyperfine matrix element becomes almost three orders of magnitude larger than for the previous mechanism. This is explained by the fact that the excited state $^3S_1$ exhibits a non-vanishing spin density near the nucleus. Results from many-body models of different sophistication are compiled in Table I. The S-PS-ne enhancement is largely converged when at least the 12 lowest-lying $M_J = 0$ states are included in the perturbation sum, since then the three main contributors are covered. The models allow double excitations into the remaining virtual space (SDT12-X-SD12; M20: S8-SDT12-X-SD20). DZ and TZ denote Dyall's Gaussian atomic basis sets [20,21] including 1f,1g valence- and core-correlating exponents (DZ) and 2f,4g,1h valence- and core-correlating and valence-polarizing exponents (TZ), resulting in a total of 24s,19p,12d,8f,1g functions for DZ and 30s,24p,15d,11f,4g,1h functions for TZ. The mean deviation concerns the difference of the calculated excited-state energies from experiment [22]. The Hg nucleus is described by a Gaussian charge distribution [23] with exponent $\zeta = 1.4011788914\times10^8$. It is furthermore important that the extent of the active spinor space is sufficient, as can be seen from the results for different values of X, the parameter defining the atomic functions constituting the space into which triple excitations are allowed. The remaining virtual spinors up to the cutoff threshold are allowed to be up to doubly occupied, in order to include dynamic electron correlation effects for all states described to lowest order by the structure of the active space. Correlation effects between 5s, 5p and valence electrons are tested through the model including 20 electrons and are seen to be small. For the purpose of estimating the contribution from higher-lying excited states we use a larger basis set, denoted QZ and consisting of 34s,30p,19d,13f,4g,2h functions. Due to the computational demand, the model M12 is limited to X-SDT12 with X set to the value 7p7s8p9p8s10p9s, with reference to Table I. This means that correlation effects are largely neglected for a large set of small contributions, ≈ 100 states with $M_J = 0$. We observe that only two notable contributions occur, and only in the energetically lower half, indicating that the contributions, as expected, fall off as the energy and principal quantum number of the involved states increase. With the resulting enhancement correction $\Delta S({\rm QZ})$, where $S$ is defined in Eq. (11), our final value is obtained as follows. The uncertainty of this value is estimated by linearly adding the errors from the energy denominator (6.2%, "mean deviation" in Table I) and the uncertainties from the atomic basis set (3.5%), outer-core correlations (1.5%), and higher excitation ranks (5%, estimated from comparable previous calculations of S-PS-ne enhancements, see Refs. [24,25]). To this uncertainty of 16% on the base value $S({\rm TZ})$ we add an uncertainty of 30% times the relative weight (0.24) of the correction $\Delta S({\rm QZ})$, i.e. 7.2%, resulting in a total uncertainty of 23% for $\alpha_{C_S}$, which we consider very conservative. Note that adding the individual terms in quadrature, as commonly done in the literature, would result in an uncertainty of 11%. From these considerations, we finally obtain from Eq.
(13) the S-PS-ne interaction constant given in Eq. (14). An indirect determination of $\alpha_{C_S}$ is obtained via the coefficient of the (P,T)-odd tensor interaction, using the phenomenological relation of Refs. [13,15,26] (Eq. (15)), where $\sigma C_T \equiv \sum_{N=n,p} C^N_T\langle\sigma_N\rangle$ ($\langle\cdots\rangle$ denoting the expectation value over a nuclear state with spin $I$), $\mu_A$ denotes the magnetic moment of the atom's nucleus (in units of the nuclear magneton), and the coefficients $C^N_T$ parametrize the tensorial (P,T)-odd electron-nucleon interaction. To further facilitate the comparison with other works, we note that the coefficient of the tensor interaction is typically parametrized via $d_A = 10^{-20}\,C_{C_T}\,\sigma C_T\;e\,{\rm cm}$, implying the values compiled in Table II. We note that effects of interelectron correlations reduce $C_{C_T}$ by about a factor of two. Due to relations (15) and (17) these effects are expected to be qualitatively similar for the coefficient $\alpha_{C_S}$. In our result from the direct calculation, electron correlation effects among the outermost 20 electrons of the Hg atom have been taken into consideration. There are two main sources for a potential difference between our value and the Coupled Cluster (CC) results via the phenomenological relation: 1) our correlation model differs from the correlation models used in the CC calculations; 2) the phenomenological relation employs a uniform nuclear charge density, whereas in our calculations a more realistic Gaussian charge distribution is used (see Table I) [35]. Since correlation effects tend to reduce the absolute value of $\alpha_{C_S}$ and our value is already about 15% below the CC results, it is reasonable to assume that no major correlation effects have been missed in our final computational model. The present difference is furthermore within the expected precision of this relation. IV. PHENOMENOLOGICAL CONSEQUENCES In order to explore the phenomenological consequences of our results, we follow two different strategies: (i) the common method to limit the corresponding Wilson coefficients assuming the absence of cancellations, i.e. setting all other contributions to zero; (ii) limiting both $C_S$ and the electron EDM $d_e$ model-independently, i.e. allowing for cancellations between the two. This is achieved by combining information from the Mercury system with that of paramagnetic ones, following Ref. [11], using the experimental results in Table III. The key point in this strategy is that Mercury constrains a linear combination of $d_e$ and $C_S$ that is approximately orthogonal to the one constrained from paramagnetic systems, specifically ThO. This observation can be used to constrain $C_S$ and $d_e$, following a three-step argument: 1. The EDMs of paramagnetic systems are to good approximation dominated by contributions from $d_e$ and $C_S$ [42][43][44]. While $C_S$ depends in general on the system under consideration, the combination that enters heavy atoms and molecules is to good approximation universal [11]. $C_S$ cannot be neglected model-independently: while NP models exist where the electron EDM clearly gives the leading contribution, this is not true in general. In Two-Higgs-Doublet models (2HDMs), for instance, the dominating Barr-Zee diagram for the electron EDM avoids a second small mass factor in addition to $m_e$, but as a two-loop diagram it competes with a tree contribution to the S-PS-ne coupling that is suppressed by a light-quark mass and contains additional small factors like gauge couplings [8]. Schematically, we have $m_{u,d,s}\times$(tree) vs. two-loop.
2. Both contributions can in principle easily be taken into account once two experiments with comparable sensitivity are available. The problem is that most of the constraints from paramagnetic systems are essentially parallel, so that typically fine-tuned solutions exist where the electron EDM and S-PS-ne contributions both oversaturate the experimental limit but cancel to a large extent in the measured observables. This leads to a situation where the model-independent approach yields a limit on the electron EDM that is about a factor of 10 weaker than the naive limit obtained when setting the S-PS-ne coupling to zero. This situation can be resolved by measurements on systems with different slopes, for example with relatively light atoms like Rb together with very heavy ones like Fr. The recent measurement [36] already improves the situation significantly, as shown below. 3. In diamagnetic systems there are many contributions to a potential EDM; assuming the presence of only electron EDM and S-PS-ne contributions is clearly not a good description of, e.g., the Mercury EDM. However, the different hierarchy in this case can be used to turn the argument around: in diamagnetic systems both contributions are not enhanced but strongly suppressed, because they yield a non-vanishing contribution only in combination with the hyperfine splitting. The sensitivity of Mercury to the electron EDM is about $3\times10^8$ times weaker than that of ThO. The sensitivity to other contributions, like quark (C)EDMs, the theta term, and even tensor electron-nucleon couplings, is much higher. This is why it is conservative to assume that these often neglected contributions saturate the experimental limit. The conditions that have to be met for the resulting limit to be invalid are consequently very specific: • The individual electron EDM and S-PS-ne contributions to the relevant paramagnetic systems would have to be larger than the experimental limits, but cancel in all of them sufficiently well. • The electron EDM and S-PS-ne contributions to Hg would also have to be larger than the experimental limit, despite the massively different sensitivity. • Since in the latter case a cancellation between the two contributions in Hg is not possible simultaneously with the paramagnetic systems, other contributions, each individually expected to be much larger than those from the electron EDM or S-PS-ne couplings, would have to combine in such a way that the net effect on the Hg EDM is again smaller than the experimental limit. It is not impossible that all these things happen simultaneously, but since several cancellations on very different levels and in very different systems are necessary, we consider the limit resulting from our procedure conservative. Assumptions are made only on a subleading level, while in the literature it is very common to make them at the leading level, i.e. by simply neglecting the S-PS-ne coupling. For convenience we provide below also the results without this assumption, i.e. when using the data from paramagnetic systems only. Note that the calculation presented here will remain useful even if the procedure outlined above should become unnecessary because measurements in paramagnetic systems provide sufficiently precise and non-parallel constraints. Ultimately the goal should be a global analysis separating as many sources for EDMs as possible; see Ref. [34] for a first attempt.
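The geometry of this argument can be illustrated with a toy weighted least-squares fit of $(d_e, C_S)$ to three linear constraints: two nearly parallel "paramagnetic" rows and one "diamagnetic" row with a different slope. All slopes, central values and uncertainties below are illustrative placeholders, not the inputs of Tables III and IV.

```python
import numpy as np

def fit(a, b, y, s):
    # Weighted least squares for y_i = a_i*d_e + b_i*C_S with errors s_i.
    A = np.column_stack([a, b]) / s[:, None]   # noise-whitened design matrix
    cov = np.linalg.inv(A.T @ A)               # covariance of (d_e, C_S)
    best = cov @ (A.T @ (y / s))
    return best, cov

# Illustrative slopes: the first two rows are nearly parallel (paramagnetic),
# the third has a clearly different direction (diamagnetic).
a = np.array([1.0, 0.9, 1.0])    # sensitivity to d_e (arbitrary units)
b = np.array([1.1, 1.0, -0.9])   # sensitivity to C_S (arbitrary units)
y = np.zeros(3)                  # central values, all compatible with zero
s = np.array([1.0, 1.5, 1.2])    # 1-sigma uncertainties

for label, sl in [("paramagnetic only", np.array([1.0, 1.5, np.inf])),
                  ("global", s)]:
    best, cov = fit(a, b, y, sl)
    err = np.sqrt(np.diag(cov))
    corr = cov[0, 1] / (err[0] * err[1])
    print(f"{label}: sigma(d_e)={err[0]:.2f}, sigma(C_S)={err[1]:.2f}, "
          f"corr={corr:.3f}")
```

With the third constraint switched off (infinite uncertainty), the near-parallel rows make the covariance almost singular and the individual errors blow up; adding the orthogonal row collapses them to order one, which is the mechanism exploited in the text.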
Should both $d_e$ and $C_S$ be determined/limited from paramagnetic systems alone, the impact of the Mercury measurement on the remaining sources will increase, given a sufficiently precise determination of the corresponding coefficients. Starting with strategy (i), i.e. assuming $C_S$ to give the only contribution to the Mercury EDM, we obtain from Ref. [9] and Eq. (14) a limit on $|C_S|$. This value is significantly larger than the one given in Ref. [9], for two reasons: Heckel et al. used an indirectly obtained value for $\alpha_{C_S}$ [27], in which moreover electron correlation effects had largely been neglected, which is much larger in absolute value than our result (and also larger than newer indirectly obtained results), and they presumably used only the central value of that result. It is also significantly larger than the values obtained from ThO ($|C_S| \le 0.7\times10^{-8}$ (95% CL)) and HfF$^+$ ($|C_S| \le 1.8\times10^{-8}$ (95% CL)); however, as we will see below, the Hg result nevertheless improves the global fit significantly. We perform global fits to the available data in Table III, using the theoretical inputs given in Table IV. The molecular measurements are typically expressed in terms of the angular frequency $\omega_M$, which can for our purposes be written as a linear combination of the $d_e$ and $C_S$ contributions (Eq. (19)), where $E_{\rm eff}$ is the effective electric field, $\Omega = \mathbf J_e\cdot\mathbf n$ is the projection of the total electronic angular momentum $\mathbf J_e$ on the molecule-fixed internuclear axis $\mathbf n$, $\hat z$ is the laboratory-frame z axis defined by the direction of the external electric field, and $A_M$ and $Z_M$ are the nucleon and the proton number of the heavy nucleus in the molecule $M$, respectively. (The coefficient used here is called $W_{T,P}$ in Ref. [45], while $W_S$ in that reference denotes the product $(A/Z)W_{T,P}$ appearing in Eq. (19); for discussions regarding this value see also Refs. [53,54], noting that the global fit is not affected by this discussion, as well as Ref. [55].) The fit results are visualized in Fig. 1, showing the global fit to all systems. The fit to only paramagnetic systems is massively improved by the HfF$^+$ measurement: before this measurement it extended essentially over the whole green area. Our result for Mercury is seen to additionally improve the fit, reducing the model-independent limits for both quantities significantly. This is due to the constraint being essentially orthogonal to those from the paramagnetic systems: we obtain for the paramagnetic systems a range $\alpha^{M,A}_{C_S}/\alpha^{M,A}_{d_e} \in [0.4, 1.5]\times10^{-20}\,e\,{\rm cm}$, while for Mercury we obtain conservatively $\alpha^{\rm Hg}_{C_S}/\alpha^{\rm Hg}_{d_e} < -0.9\times10^{-20}\,e\,{\rm cm}$. The latter ratio will be determined more precisely once the coefficient for the electron EDM in Hg is known better, which is work in progress; here we assumed an uncertainty of 100%, given the unreliable estimate. This improvement will also improve the determination of $d_e$ and $C_S$. In Table V we give the numerical results of both fits (global and paramagnetic only), including the effective correlations between the results for $d_e$ and $C_S$, as well as the corresponding upper limits. While the individual constraints from Hg are weaker than those extracted from ThO and HfF$^+$, its inclusion in the global fit results in model-independent limits about a factor of two stronger than those from the paramagnetic systems alone. V. CONCLUSIONS AND OUTLOOK We performed global fits to the available data constraining the electron EDM and the S-PS nucleon-electron coupling entering heavy atoms and molecules, using up-to-date calculations of the atomic and molecular structures.
The inclusion of the recent result on HfF$^+$ ions drastically improves the fit to paramagnetic systems only. As pointed out in Ref. [11], diamagnetic systems can be used to improve this fit further; while the corresponding contributions are heavily suppressed in this case, diamagnetic systems have the advantage of constraining, in some cases, combinations orthogonal to those accessible in paramagnetic systems. As an illustration we performed the first direct calculation of the coefficient of the S-PS-ne coupling in Mercury, including the effect of electron correlations. In combination with the recently improved experimental limit for this system we obtain limits on both the electron EDM and the S-PS-ne coupling that are about a factor of two stronger than those from paramagnetic systems alone, see Table V. Having in hand a model-independent determination of both quantities determining the EDMs of paramagnetic systems, we proceed to evaluate the impact on ongoing searches. The global fits imply non-trivial upper limits for every paramagnetic system that is not effectively constraining the fits in Fig. 1. These limits, given in Table VI, indicate the necessary precision for a given system to contribute significantly to the global fit or to the fit to paramagnetic systems only (given in parentheses). A significant result above these limits would indicate an experimental problem. A measurement below the limit from the fit to paramagnetic systems, but above the one from the global fit, could in principle also indicate the contrived situation with a series of cancellations described at the beginning of Section IV. In the future, it is to be expected that measurements in paramagnetic systems alone will yield sufficiently precise results to limit or determine the two contributions discussed here by themselves. In that case our calculations will serve to improve the model-independent determination of hadronic contributions to diamagnetic EDMs in the context of a global fit extending over the whole set of P,T-odd interactions. VI. ACKNOWLEDGMENTS This research was supported by the DFG cluster of excellence "Origin and Structure of the Universe". The authors are grateful to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and its partial support during the completion of this work.
5,915
2018-02-06T00:00:00.000
[ "Physics" ]
Numerical Method for Inverse Laplace Transform with Haar Wavelet Operational Matrix Wavelets have been applied successfully in signal and image processing. Many attempts have been made in mathematics to use orthogonal wavelet functions as numerical computational tools. In this work, an orthogonal wavelet function, namely the Haar wavelet function, is considered. We present a numerical method for the inversion of the Laplace transform using the Haar wavelet operational matrix for integration. We prove the method for the cases of irrational transfer functions using the extension of the Riemann-Liouville fractional integral. The proposed method extends the work of J. L. Wu et al. (2001) to cover the whole time domain. Moreover, this work gives an alternative, faster way to find the inversion of the Laplace transform. The numerical Haar operational matrix method is much simpler than the conventional contour integration method and can be easily coded. Numerical results demonstrate good performance of the method in terms of accuracy and competitiveness compared to the analytical solution. Examples of solving differential equations by the Laplace transform method are also given. Keywords: operational matrix | numerical inversion | inversion of Laplace transform | Haar wavelet. © 2012 Ibnu Sina Institute. All rights reserved. http://dx.doi.org/10.11113/mjfas.v8n4.149 INTRODUCTION The Laplace transform is known to be an important tool for solving mathematical equations that arise in engineering problems. Since its discovery by a French mathematician, it has been widely applied and continuously researched by scholars from various fields. These scholars have put enormous effort into finding its inverse numerically and analytically, because finding the inverse of a Laplace transform is considered a difficult task due to the limitations of the inversion table, in the sense that it cannot cater for most engineering problems, which are always associated with mathematically complex equations. The objective of this paper is to propose a numerical inversion of the Laplace transform using the Haar operational matrix. The method proposed in this paper extends the work of J. L. Wu et al. (2001) to cover the whole time domain in finding the inversion of the Laplace transform numerically using the Haar wavelet operational matrix for integration. J. L. Wu et al. proposed a new unified method to derive the operational matrix of any orthogonal functions for integration within the interval $0 \le t < 1$. We derive the Haar operational matrix based on Wu et al.'s work but extend it using the generalised block pulse function operational matrix for integration [2,7] for $0 \le t < \tau$. Before Haar wavelet operational matrices were used to find the inversion of the Laplace transform, other orthogonal functions were used in the literature as well. In 1977, C. F.
Chen et al. used Walsh operational matrices for solving various distributed-parameter systems, such as heat conduction and percolation problems [8]. Later, a more rigorous approach was taken by Wang Chi-Hsu to derive the generalised block pulse operational matrices [7]. According to Wang, the inversion of Laplace transforms for rational and irrational transfer functions using generalised block pulse operational matrices is proven to be more accurate than the previous work by Chen [8]. MATHEMATICAL REVIEW 2.1 Haar Wavelet Function An analytic function $f(t)$ can be expanded in a series
$$f(t) = \sum_{n=0}^{\infty} a_n\,\varphi_n(t),$$
where $\varphi_n(t)$ is a basis of the Hilbert space $L^2(\mathbb R)$ and $a_n$ are the coefficients of the series. The coefficients can be obtained as
$$a_n = \big(f(t), \varphi_n(t)\big),$$
which is convenient, as it fits the expansion in Haar series. For example, for the monomial basis $\psi_n(t) = t^n$ we could expand a function using a power series such as the Taylor expansion; likewise, for a sinusoidal basis we could use the Fourier expansion. In this work the orthogonal Haar wavelet function is considered. This set of functions is a group of square waves on the interval $[0, \tau)$, defined for $i = 2^j + k$ ($j \ge 0$, $0 \le k < 2^j$) as
$$h_i(t) = \begin{cases} 1, & k\,2^{-j}\tau \le t < (k+\tfrac12)\,2^{-j}\tau,\\ -1, & (k+\tfrac12)\,2^{-j}\tau \le t < (k+1)\,2^{-j}\tau,\\ 0, & \text{elsewhere,}\end{cases}$$
where the resolution $J$ is a positive integer and $j$ and $k$ denote the integer decomposition of the index $i$. $h_0(t)$ is defined as a constant and is called the scaling function, while $h_1(t)$ is called the mother wavelet function or fundamental square wave. All other Haar wavelet functions are generated from the mother wavelet $h_1(t)$ by translation and dilation,
$$h_i(t) = h_1\!\big(2^j t - k\tau\big).$$
The Haar wavelet functions are orthogonal, so that
$$\big(h_i(t), h_l(t)\big) = \int_0^{\tau} h_i(t)\,h_l(t)\,dt = 0 \quad \text{for } i \neq l.$$
The orthogonal set of the first four Haar functions ($m = 4$) on the interval $0 \le t < 1$ is shown in Figure 1. Haar Series Expansion The Haar wavelet function is not continuous. In a Haar series expansion, any function $x(t)$ can be decomposed as
$$x(t) = \sum_{i=0}^{\infty} c_i\,h_i(t).$$
If the function $x(t)$ may be approximated as piecewise constant, the sum may be truncated after $m$ terms on the interval $0 \le t < \tau$:
$$x(t) \approx \sum_{i=0}^{m-1} c_i\,h_i(t),$$
where the Haar coefficients $c_i$ are determined by
$$c_i = \frac{2^j}{\tau}\int_0^{\tau} x(t)\,h_i(t)\,dt.$$
Taking the collocation points
$$t_l = \frac{(2l-1)\,\tau}{2m}, \qquad l = 1, 2, \dots, m,$$
we define the $m$-square Haar wavelet matrix $\mathbf H_m$, whose rows are the Haar functions evaluated at the collocation points. For instance, the fourth-order Haar wavelet matrix ($m = 4$) on the interval $0 \le t < 1$ is
$$\mathbf H_4 = \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{pmatrix}.$$
Since the Haar wavelet functions are orthogonal, $\mathbf H_m \mathbf H_m^{\mathsf T}$ is diagonal, and it is convenient to find the coefficients without performing the integration of equation (2.10):
$$\mathbf c^{\mathsf T} = \mathbf x^{\mathsf T}\,\mathbf H_m^{-1},$$
where $\mathbf x$ is the vector of the function $x(t)$ at the collocation points of equation (2.13).
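As a concrete illustration of the construction just described, the following Python sketch samples the Haar functions at the collocation points and checks the orthogonality of $\mathbf H_4$; the unnormalized Chen-Hsiao convention is assumed, with $m$ a power of two.

```python
import numpy as np

def haar_matrix(m, tau=1.0):
    """m x m Haar matrix H_m sampled at collocation points
    t_l = (2l-1)*tau/(2m); unnormalized Chen-Hsiao convention, m = 2^J."""
    t = (2 * np.arange(1, m + 1) - 1) * tau / (2 * m)
    H = np.zeros((m, m))
    H[0, :] = 1.0                       # scaling function h_0
    i = 1
    for j in range(int(np.log2(m))):    # dilation level
        for k in range(2 ** j):         # translation index
            lo = k * tau / 2 ** j
            mid = (k + 0.5) * tau / 2 ** j
            hi = (k + 1) * tau / 2 ** j
            H[i, (t >= lo) & (t < mid)] = 1.0
            H[i, (t >= mid) & (t < hi)] = -1.0
            i += 1
    return H

H4 = haar_matrix(4)
print(H4)
# Orthogonality: H4 @ H4.T is diagonal (here diag(4, 4, 2, 2)), so the
# coefficients c^T = x^T H^{-1} follow without any quadrature.
print(H4 @ H4.T)
```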
Integration of the Haar Wavelet Function and its Operational Matrix Consider the integration of the Haar wavelet vector $\mathbf H(t) = (h_0(t), \dots, h_{m-1}(t))^{\mathsf T}$:
$$\int_0^t \mathbf H(t')\,dt' \approx \mathbf Q_m\,\mathbf H(t),$$
where $\mathbf Q_m$ is the generalised Haar operational matrix for the integration of the Haar wavelet functions. The block pulse functions [2] are defined on the interval $(0, \tau]$ by
$$b_l(t) = \begin{cases} 1, & (l-1)\tau/m \le t < l\tau/m,\\ 0, & \text{elsewhere.}\end{cases}$$
It is known that the integration of the block pulse functions can be calculated as
$$\int_0^t \mathbf B_m(t')\,dt' \approx \mathbf F_\alpha\,\mathbf B_m(t),$$
where $\mathbf F_\alpha$ is taken from the generalised block pulse operational matrix for integration. Besides that, the generalised Haar operational matrix for integration $\mathbf Q_m$ can also be obtained from the recursive formula of Chen and Hsiao et al. [5], after some modifications to cover the interval $[0, \tau)$:
$$\mathbf Q_m = \frac{1}{2m}\begin{pmatrix} 2m\,\mathbf Q_{m/2} & -\tau\,\mathbf H_{m/2}\\ \tau\,\mathbf H_{m/2}^{-1} & \mathbf 0 \end{pmatrix}, \qquad \mathbf Q_1 = \Big(\frac{\tau}{2}\Big).$$
Riemann-Liouville Fractional Integral and Haar Wavelet Function It is known that for integer $n$ the $n$-fold iterated integration of $\mathbf H(t)$ corresponds to $\mathbf Q_m^n\,\mathbf H(t)$; the Riemann-Liouville fractional integral extends this to integrals of order $\alpha > 0$. Some modification is necessary to accommodate the expressions arising later in the inversion of the Laplace transform; first the fractional integral of the Haar wavelet scaling function $h_0(t)$ of order $\alpha > 0$ is considered, and the resulting relation is cross-multiplied. NUMERICAL ANALYSIS OF THE INVERSION OF THE LAPLACE TRANSFORM The Laplace transform of a function $x(t)$, denoted by $X(s)$, is defined by the integral
$$X(s) = \int_0^\infty x(t)\,e^{-st}\,dt,$$
and the Laplace transform of an integral is
$$\mathcal L\Big[\int_0^t x(t')\,dt'\Big] = \frac{X(s)}{s}.$$
The integration in equation (3.2) and in equation (2.18) corresponds to multiplication by $1/s$ in the $s$-domain and by the Haar operational matrix for integration $\mathbf Q_m$ in the time domain, respectively. Thus we can replace each factor $1/s$ by the generalised Haar operational matrix $\mathbf Q_m$. Assuming the irrational transfer function has the form $X(s) = G(1/s)$ with $0 \le \alpha < 1$ and truncated to order $n \in \mathbb N$, we cross-multiply equation (3.3), perform the inverse Laplace transform on both sides, take the collocation points of equation (2.13), and multiply both sides by $\mathbf H_m^{-1}$; the inversion of the Laplace transform is then calculated from equation (3.10) by replacing $1/s$ with the generalised Haar operational matrix $\mathbf Q_m$. Example 1. Consider an irrational transfer function. By this method we first find the expression of $X(s)$ in terms of $1/s$, then replace each term $1/s$ in the expression by $\mathbf Q_m$. In the case of $m = 16$ and $\tau = 4$, the result is shown in Figure 2. In the case of $a = 1$, $m = 16$ and $\tau = 1$, we obtain the numerical inversion from equation (3.10); the exact solution and the result are shown in Figure 4. Example 4. Consider another irrational transfer function; the result is shown in Figure 5. Figure 1. The first four Haar functions. Figure 4. Comparison between the exact solution and the present numerical results for $m = 32$. Figure 5. Comparison between the exact solution and the present numerical results for $m = 32$.
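The whole inversion procedure can be condensed into a short sketch building on the haar_matrix helper above: the operational matrix is generated by the τ-scaled Chen-Hsiao recursion quoted above, and each factor 1/s in X(s) = (1/s)G(1/s) is replaced by Q_m. This is an illustrative reading of the method under the stated conventions, not the authors' exact code; it is verified here on X(s) = 1/(s+a), whose inverse is exp(-at).

```python
import numpy as np
# assumes haar_matrix() from the previous sketch

def operational_matrix(m, tau=1.0):
    """Haar operational matrix Q_m: int_0^t H(t') dt' ~ Q_m H(t)."""
    if m == 1:
        return np.array([[tau / 2.0]])
    h = m // 2
    Qh = operational_matrix(h, tau)
    Hh = haar_matrix(h, tau)
    return (1.0 / (2 * m)) * np.block(
        [[2 * m * Qh, -tau * Hh],
         [tau * np.linalg.inv(Hh), np.zeros((h, h))]])

def invlaplace_haar(G_of_Q, m=64, tau=1.0):
    """Invert X(s) = (1/s) * G(1/s): x(t_l) ~ [e_1^T G(Q_m) H_m]_l,
    since the unit step is e_1^T H and each 1/s maps to Q_m."""
    H = haar_matrix(m, tau)
    Q = operational_matrix(m, tau)
    t = (2 * np.arange(1, m + 1) - 1) * tau / (2 * m)
    return G_of_Q(Q)[0, :] @ H, t

# Test: X(s) = 1/(s+a) = (1/s) * 1/(1 + a/s)  =>  G(Q) = (I + a Q)^{-1}.
a = 1.0
x, t = invlaplace_haar(lambda Q: np.linalg.inv(np.eye(len(Q)) + a * Q),
                       m=64, tau=4.0)
print(np.max(np.abs(x - np.exp(-a * t))))   # small pointwise error
```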
2,275
2014-10-16T00:00:00.000
[ "Mathematics" ]
Correction of the X-ray wavefront from compound refractive lenses using 3D printed refractive structures The knife-edge imaging-based wavefront-sensing technique is used for wavefront error characterization of Be X-ray lenses with and without a corrector plate. Different 3D printed corrector plates are proposed to achieve pseudo-perfect compensation of the wavefront errors of Be X-ray lenses. Introduction Phase error correction in X-ray optics is a fast-evolving area of enabling technology for generating pseudo-perfect optics. The correction introduced by a suitable scheme converts aberrated optics into pseudo-perfect optics, which would otherwise prevent achieving diffraction-limited focusing. A few schemes such as active bimorph mirrors (Mimura et al., 2010), refractive correctors (Sawhney et al., 2016; Seiboth et al., 2017), invariable-multilayer deposition (Matsuyama et al., 2018), diffractive wavefront correction (Probst et al., 2020) and the layer stress controlling method (Cheng & Zhang, 2019) have been demonstrated as tools for phase error correction of different X-ray optical elements. Refraction-based correctors are thin, easy to insert into the beam path, do not change the optical axis and are straightforward to align. X-ray LIGA fabricated SU-8 wavefront correctors were used in the wavefront error compensation of X-ray mirrors (Laundy et al., 2017) and of X-ray LIGA fabricated lenses in one-dimensional (1D) focusing geometry. A silica refractive phase plate manufactured by the laser ablation process was instrumental in reducing the phase error of two-dimensional (2D) focusing Be CRLs (Seiboth et al., 2017). In each case, the wavefront error is reduced by the use of suitable correctors, and our group recently demonstrated r.m.s. wavefront error compensation down to the order of λ/100. Nano- and micro-fabrication have played a pivotal role in the development of novel micro X-ray optical elements, which has led to a significant advance in achieving nano- and micrometre-size focused X-ray beams (Li et al., 2020; Dhamgaye et al., 2014; Lyubomirskiy, Boye et al., 2019; Yan et al., 2014). Highly sensitive X-ray optical measurements, especially sensitive wavefront error measurements, were made possible by the microfabrication of 1D and 2D X-ray gratings (Liu et al., 2018; Weitkamp et al., 2005; Rutishauser et al., 2012). Lithography techniques including Si etching, X-ray lithography and laser ablation were used in the development of nano-focusing lenses and wavefront corrector plates. Additive manufacturing or three-dimensional (3D) printing technology is developing rapidly and is revolutionizing many key areas of industry and research. The 3D printer is based on the two-photon polymerization process (Photonic Professional GT2 datasheet, https://www.nanoscribe.com/fileadmin/Nanoscribe/Solutions/Photonic_Professional_GT2/DataSheet_PPGT2.pdf, Nanoscribe GmbH), which is capable of patterning arbitrary 3D shapes with micrometre or nanometre resolution. This printer has been employed in many state-of-the-art device developments (Dietrich et al., 2018) and was recently used in X-ray optics developments (Sanli et al., 2018; Petrov et al., 2017; Lyubomirskiy, Koch et al., 2017). The same 3D printer is used here for the development of corrector plates. X-ray optical elements, namely X-ray mirrors based on reflection, X-ray lenses based on refraction and X-ray zone plates/multilayer Laue lenses based on diffraction, are used for micro- or nano-focusing of X-rays (Ice et al., 2011).
The refractive index $n$ in the X-ray region for X-rays with energy $E$ is
$$n = 1 - \delta + i\beta, \qquad (1)$$
where $1 - \delta$ is the real term, $\delta$ is the refractive index decrement ($10^{-5}$ to $10^{-7}$) and $\beta$ is the imaginary term that causes absorption ($10^{-7}$ to $10^{-9}$). The real part of $n$ is slightly less than unity in the X-ray region for all materials; thus the shape of X-ray lenses is concave, in contrast to the convex shape used in the visible region. Owing to the weak refraction power, multiple X-ray lenses are used in series, compounding the refraction power of the individual lenses to achieve a reasonable focal length. Such X-ray lens assemblies are known as compound refractive lenses (CRLs) (Snigirev et al., 1996). Parabolic-shaped 2D focusing X-ray lenses are fabricated in Be or Al by the mechanical punching method, and 1D focusing lenses in Si, diamond or polymer are manufactured by lithography techniques. Lens fabrication errors result in a deviation of the X-ray path length in the lens from the ideal parabolic function. This causes a perturbation of the X-ray wavefront which, when propagated to the focal plane, degrades the focus. With respect to Be CRL fabrication, factors such as the mechanical punching (two punching tools with angular or spatial error), density variation, or variation in the chemical composition of the material are responsible for the origin of the wavefront errors. The intensity distribution or wavefront errors of given optics are measured by a suitable wavefront-sensing technique. Zernike polynomial fitting is a useful tool for diagnosing wave aberrations of visible optics over a circular or annular aperture. The Zernike polynomial expansion of the wavefront error map is used to quantify the optics aberrations present in X-ray optics (Celestre et al., 2020; Seiboth et al., 2016; Zhou et al., 2018). Imperfect optics produce blurred images of a source, and the performance improvement of optics by aberration compensation schemes can be expressed in terms of a reduction in the coefficients of the classical primary (Seidel) optics aberrations, closely represented by low-order Zernike polynomials. Recently (Seaberg et al., 2019), a 3D printed phase plate in IP-S resist was used to correct the wavefront errors of 20 Be lenses, and wavefront analysis was carried out using three wavefront reconstruction techniques for X-ray free-electron laser (XFEL) sources. This paper describes the use of the knife-edge imaging-based wavefront-sensing technique to determine wavefront errors from two different stacks of Be lenses. This wavefront-sensing technique is described in our previous work. The optical characterization of a rotationally invariant profiled polymer corrector plate manufactured by 3D printing was carried out at the Diamond Test beamline. After correction with the phase plate, the r.m.s. wavefront error of 98 Be lenses showed a reduction by a factor of six. The knife-edge imaging-based wavefront-sensing technique was originally developed to measure 1D wavefront error profiles, but in the present studies it was adapted to measure the full rotationally variant wavefront error profiles of the Be CRL. The previously reported study (Seiboth et al., 2017) used a rotationally invariant corrector and showed a reduction of the spherical deformation of the Be lenses. The present study reports the existence of a range of lower- and higher-order optics aberrations in Be CRLs, including spherical aberration, astigmatism and coma.
We highlight, particularly, that it is impossible to correct all optics aberrations of X-ray lenses with a rotationally invariant corrector when rotationally variant wavefront errors are present. A case study of the effect of rotationally invariant corrector plates versus rotationally variant corrector plates on the corrected wavefront error is described and evaluated in terms of Zernike polynomials. The r.m.s. wavefront error is used to characterize the aberration level of the optics. X-ray lenses differ from visible-light lenses in having weak refraction and strong absorption; this limits the numerical aperture to of order $10^{-3}$. We present a modified form of the r.m.s. wavefront error with weighting by the transmitted intensity. Optical characterization setup Diamond's Test beamline B16 was used for at-wavelength characterization of the X-ray lenses and the 3D printed corrector plate (Sawhney et al., 2010). A typical experimental setup used for the wavefront error measurement of CRLs is shown in Fig. 1. The monochromatic beam from a Si(111) double-crystal monochromator was focused by the X-ray lenses and observed on an X-ray detector placed at a distance of ∼1-2 m downstream of the lens focus. A corrector plate mounted on an alignment stage was positioned in front of the Be CRLs. The Be CRLs (fabricated by RX Optics), knife-edge (fabricated by X-ray LIGA at the ANKA synchrotron) and X-ray detectors (Mini-FDS from Photonics Science and a PIPS diode) were mounted on stable rigid platforms. A CRL consisting of N = 98 individual Be bi-concave parabolic-shaped lenses, with 200 µm radius of curvature at the apex and theoretical focal length 673 mm (image distance q = 696 mm) at 15 keV, was installed. We will refer to this CRL set as CRL1. A 2D pixel area detector (Mini-FDS, pixel size 6.45 µm) was used to record the images as a function of knife-edge position. The X-ray transmission of the corrector plate was measured using a PIPS diode. A second set of X-ray lenses with N = 24 (referred to here as CRL2) was characterized in the same setup with revised positions of the knife-edge and detectors from the centre of the CRL. The wavefront error measurement involved recording the X-ray intensity at the pixel detector as a knife-edge is translated across the focal plane, intersecting the focus. The polar-coordinate geometry used for knife-edge imaging-based wavefront sensing is shown in Fig. 1(b). The 1D wavefront error along the diameter of the lenses, i.e. the central line of the shaded area inclined at an angle, was measured by orientating the knife-edge at the same angle and collecting intensity data of the recorded image from a narrow strip of pixels tilted at the same angle. The wavefront error measured along the diameter was resolved into two radial functions (0 to r) separated by 180° around the polar axis, i.e. two radial wavefront profiles at 45° and 225° for a 45° knife-edge orientation. The centre of the lenses is located on the detector as the position of maximum intensity transmission. A wavefront error that is constant as a function of radial distance over all polar angles is called a rotationally invariant wavefront error. Similarly, a wavefront error that varies as a function of polar angle as well as radial distance is called a rotationally variant wavefront error. The optimum performance of the 3D printed corrector plate was analysed by comparing the r.m.s. wavefront errors of the Be lenses with and without a corrector plate, and by comparing the focused beam sizes in two orthogonal planes near the optics focal plane.
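As a quick plausibility check of these CRL1 parameters, the thin-lens estimate f ≈ R/(2Nδ) can be evaluated numerically; the decrement δ of Be at 15 keV is an assumed tabulated value, not a number quoted in this paper.

```python
# Thin-lens focal length check for a CRL: f ~ R / (2 N delta).
R = 200e-6        # apex radius of curvature of one lens [m]
N = 98            # number of bi-concave Be lenses in CRL1
delta = 1.52e-6   # assumed decrement of Be at 15 keV (tabulated value)
f = R / (2 * N * delta)
print(f"f = {f * 1e3:.0f} mm")   # ~671 mm, close to the quoted 673 mm
```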
We define the r.m.s. wavefront error for the lenses over an aperture of radius $R_0$, weighted by the transmitted intensity, as
$$\sigma_{\rm rms} = \left[\frac{\int_0^{2\pi}\int_0^{R_0} I(r)\,w(r,\theta)^2\,r\,dr\,d\theta}{\int_0^{2\pi}\int_0^{R_0} I(r)\,r\,dr\,d\theta}\right]^{1/2}, \qquad (2)$$
where the wavefront error $w(r,\theta)$ is weighted by the X-ray intensity $I(r)$ which, for uniform incident intensity $I_0$ over the lens aperture, is given by the linear absorption coefficient $\mu(E)$ as
$$I(r) = I_0\exp\!\big[-\mu(E)\,N\,r^2/R_L\big], \qquad (3)$$
where $R_L$ is the radius of curvature of the lens, $E$ is the X-ray energy, $N$ is the number of lenses, and the wavefront error $w(r,\theta)$ is defined on the aperture $0 \le r \le R_0$ and $0 \le \theta \le 2\pi$. The wavefront error over a circular aperture can be expressed as a series of Zernike polynomials as functions of the normalized radial position $\rho = r/R_0$ and the radial angle from 0 to $2\pi$. These are a complete set of basis functions that are orthogonal over a circle of unit radius and are commonly used to represent optical aberrations (Born & Wolf, 1999). The Zernike polynomials in the Noll notation, which uses a single index $j$, are defined as (Noll, 1976)
$$Z_j = \begin{cases}\sqrt{2(n+1)}\,R_n^m(\rho)\cos(m\theta), & \text{for even } j,\ m\neq0,\\ \sqrt{2(n+1)}\,R_n^m(\rho)\sin(m\theta), & \text{for odd } j,\ m\neq0,\\ \sqrt{n+1}\,R_n^0(\rho), & \text{for } m=0,\end{cases}$$
where $m$ is the azimuthal frequency, $n$ is the radial degree, and
$$R_n^m(\rho) = \sum_{s=0}^{(n-m)/2}\frac{(-1)^s\,(n-s)!}{s!\left(\frac{n+m}{2}-s\right)!\left(\frac{n-m}{2}-s\right)!}\,\rho^{n-2s}.$$
The index $j$ is the mode ordering number, which is expressed in terms of $n$ and $m$. The Zernike polynomial mode ($Z_j$) expansion of an arbitrary Be lens wavefront error is expressed as $w(\rho,\theta) = \sum_j \varepsilon_{Z_j} Z_j(\rho,\theta)$, where $\varepsilon_{Z_j}$ is the Zernike coefficient for each $Z_j$, obtained from the orthogonality relation. The Zernike polynomial Python library provided by Fan (2019) is used for fitting the lens wavefront errors and determining the Zernike coefficients. Corrector plate design Measurement of the figure error distribution in the Be CRLs is required for the design of a corrector plate. An ideal coherent wavefront from the source 47 m upstream was considered at the entrance of the Be CRLs. For ideal lenses, the wavefront emerging at the exit of the lenses will be a converging spherical wavefront centred on the focus. In reality, the wavefront emerging from the Be CRLs is distorted by the variation of the lens thickness from the ideal parabolic profile caused by imperfect manufacturing. Other factors, such as impurities in the lens material or non-uniformly pressed lens material during manufacturing leading to density variations, contribute to the origins of the wavefront errors of the optic. A knife-edge imaging technique is used for the first time for the investigation of the figure error distribution in Be CRLs. This technique gives reproducible measurements for the particular optics, and the wavefront errors recorded for the Be lenses are found to be of a similar order to those measured by other techniques, e.g. ptychography or speckle tracking (Seaberg et al., 2019). The 1D measured wavefront errors along vertical and horizontal lines are shown in Fig. 2 for four different polar angles as a function of radial position. The lenses are randomly oriented in the casing and show different wavefront error functions at different polar angles. An invariant wavefront profile around the polar axis is evident for the polar angles 90° and 270° (green solid and dashed lines in Fig. 2), which was measured whilst the knife-edge was stepped along the horizontal line. However, we considered an average error profile calculated from the error profiles measured at different polar angles (solid blue line), and this same error is converted into the design of a rotationally invariant corrector.
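A direct numerical reading of equations (2) and (3) is sketched below: the weighted r.m.s. error is evaluated on a polar grid for a toy wavefront consisting of a single spherical-aberration term. The absorption coefficient of Be at 15 keV is an assumed value (attenuation length of a few centimetres), and the toy amplitude is arbitrary.

```python
import numpy as np

# Transmission-weighted r.m.s. wavefront error, Eq. (2), on a polar grid.
R0, RL, N = 300e-6, 200e-6, 98     # aperture radius, apex radius, lens count [m]
mu = 40.0                          # assumed Be attenuation at 15 keV [1/m]
r = np.linspace(0.0, R0, 400)
th = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
Rg, Tg = np.meshgrid(r, th, indexing="ij")

I = np.exp(-mu * N * Rg**2 / RL)   # Eq. (3): parabolic-lens transmission
rho = Rg / R0
# Toy wavefront: 5 pm of primary spherical aberration (Noll Z11).
w = 5e-12 * np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1)

num = np.trapz(np.trapz(I * w**2 * Rg, th, axis=1), r)
den = np.trapz(np.trapz(I * Rg, th, axis=1), r)
print(f"weighted r.m.s. wavefront error: {np.sqrt(num / den) * 1e12:.2f} pm")
```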
An optical path length difference $\Delta w$ is introduced by a material of thickness $t$ with phase error $\phi$, where $t = \lambda\phi/(2\pi\delta)$ and $\delta$ is the refractive index decrement of the X-ray refractive index $n$ given in equation (1). The ratio $\delta(E)/\mu(E)$ can be used as a selection criterion for choosing a corrector plate material with a higher ratio of refraction power to X-ray absorption; thus, low-atomic-number materials are preferred over higher-atomic-number materials. Materials such as Be, Al, Si, diamond and polymers composed of carbon, hydrogen and oxygen are commonly used for micro X-ray optical elements. A polymer-based corrector plate is used in the present study, and its thickness required for the compensation of the wavefront error is calculated using equation (7). A typical 3D printable polymer, IP-S, of thickness difference $\Delta t = 10$ µm will produce a phase advance $2\pi\delta\Delta t/\lambda$ and will introduce an optical path difference of 11.74 pm [molecular formula C14H18O7, density = 1.2 g cm-3 (Lyubomirskiy, Koch et al., 2019)]. An estimated thickness profile of the IP-S corrector in 3D symmetry is shown in Fig. 2(b). Many 3D printers based on fused deposition modelling or stereolithography produce structures with feature sizes >> 1 µm and with a high degree of porosity in the fabricated structure. A Nanoscribe 3D printer is an ideal tool for 3D printing of the corrector plate (Nanoscribe GmbH, Germany). It is capable of printing arbitrary features with sub-micrometre precision in three dimensions. The surface finish of the printed structure is ∼20 nm, which is good for the normal-incidence optics used in the X-ray region (Photonic Professional GT2 datasheet, https://www.nanoscribe.com/fileadmin/Nanoscribe/Solutions/Photonic_Professional_GT2/DataSheet_PPGT2.pdf). The design of the corrector plate was prepared in AutoCAD and converted into a 3D CAD STEP file. The corrector plate was fabricated using a Nanoscribe Photonic Professional GT2 (Nanoscribe GmbH, Germany) at Lancaster University, UK. The dip-in lithography mode was used to print the 3D design in IP-S (Nanoscribe GmbH). The laser source used for printing was of femtosecond Ti-sapphire type (800 nm, 80 MHz, 50 fs). IP-S was drop-cast onto ITO-coated (75-100 Ω per square) glass coverslips of thickness N1.5 (Diamond Coatings Ltd, Halesowen, UK). The resist was exposed from bottom to top using femtosecond laser pulses focused into a voxel by a 25× objective with 55% laser power and a writing speed of 200 000 µm s-1. The patterned IP-S resist was developed in PGMEA for 20 min, rinsed in IPA for 5 min and dried using N2-enriched air. Rotationally invariant wavefront errors measurement and its compensation The effectiveness of the corrector plate in the wavefront error compensation depends on various factors, such as the repeatability of the wavefront measurements between successive beam times, the stability of the optics/beam, the design of the corrector plate, fabrication errors in the corrector plate and the alignment of the corrector plate to the CRL optic axis. A rotationally invariant 3D printed corrector was placed upstream of the Be CRL, as shown in Fig. 1, for the figure error corrections of the Be CRL. The wavefront errors of Be CRL1 were measured, and the repeatability of the measurements was confirmed by comparing the measurement with that made during the design of the corrector plate. Good alignment of the centre of the corrector plate relative to the lens optical axis in the beam path is critical in achieving optimum compensation results.
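The conversion from a wavefront error to a corrector thickness implied by this discussion is a one-liner, shown below; the IP-S decrement is inferred from the paper's own numbers (an optical path difference of 11.74 pm per Δt = 10 µm gives δ ≈ 1.17 × 10⁻⁶), so it should be treated as an estimate.

```python
# Corrector thickness from a wavefront (path-length) error: t = dw / delta.
delta_ips = 11.74e-12 / 10e-6   # ~1.17e-6, inferred from the quoted numbers
dw = 10e-12                     # example: 10 pm of wavefront error [m]
t = dw / delta_ips
print(f"required IP-S thickness: {t * 1e6:.2f} um")   # ~8.52 um
```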
With the corrector plate positioned in the nearly plane wavefront before the focusing lenses, the correction is insensitive to the corrector's longitudinal position. The lateral position of the phase plate is more important, with good alignment to the axis of the lens being required. To achieve this, the phase plate was stepped laterally within the lens aperture with a coarser 5 µm and a finer 1 µm step size, and the corresponding r.m.s. wavefront error was determined using equation (2). The best lateral positions for the corrector plate are achieved by minimizing the r.m.s. wavefront error in the respective planes. An average of the CRL1 wavefront errors measured at four different polar angles before and after the corrections is shown in Fig. 3. The r.m.s. wavefront error [equation (2)] is found to be 14.4 pm before the correction and 2.4 pm after the correction, which is an improvement by a factor of six. The expected performance of the designed corrector plate is shown as 'after correction (calculated)' in Fig. 3, which is obtained by subtracting the wavefront error values used for the design of the corrector plate (dashed magenta) from the corresponding error values measured for CRL1 before correction (blue). The r.m.s. wavefront error difference between the designed corrector (discussed in Section 2) and the fabricated corrector is <1 pm. This difference is due to various contributions, such as infidelity in the corrector fabrication, the alignment/stability of the optics and the repeatability of the wavefront measurements. The X-ray absorption of the corrector was determined by measuring the PIPS diode photocurrent for the direct beam and with the corrector plate placed in the beam path. The transmission of the corrector plate was found to be ∼99%. A clear improvement in the focus profiles in the vertical and horizontal directions was observed after the introduction of the corrector [Figs. 4(a) and 4(b)]. The focus profiles, before and after the corrections, were measured at the same focal distance from the centre of the CRL. The corrector plate improved the vertical (horizontal) focus size to 0.9 µm (2.5 µm) from the 2.3 µm (3.7 µm) due to the aberrated wavefront. The focus size of CRLs at a bending-magnet source is limited by the size of the demagnified source. The types of wavefront aberration existing in CRL1 before and after the corrections were quantified using a Zernike polynomial expansion up to order n = 16. Fig. 5 shows the amplitudes of the first 36 Zernike coefficients and of the coefficients corresponding to the higher-order spherical aberrations only (Z37, Z56, Z79, Z106 and Z137), as the values of the remaining higher-order coefficients are either small or zero. The Zernike polynomial coefficients Z1 to Z4 are not aberrations; they describe the surface positioning. Z1 is constant over the whole aberration map and is therefore not considered. The misalignment of the optics is expressed by the system tilts Z2 and Z3 along two orthogonal planes, and the term Z4 defines defocusing. The major optics aberrations observed in the Be CRLs were due to primary (Z11), secondary (Z22), tertiary (Z37) and higher-order spherical aberrations. These spherical deformations were well corrected after the introduction of the corrector plate. The defocus term (Z4) observed was caused by the displacement of the knife-edge from the focal plane in the direction along the optical axis. This study does not show a contribution from non-spherical aberration terms. The square of the r.m.s. wavefront error is given by the sum of the squares of all the Zernike coefficients. The r.m.s.
calculated by considering all Zernike coefficient values except Z1-Z4 is 14.2 pm before correction and 2.7 pm after correction. These values match well with the ones calculated using equation (2). Rotationally variant wavefront errors measurement and its compensation We extended our investigation to another set of lenses: CRL2 (N = 24). We investigated the polar-angle-resolved wavefront error distributions by making wavefront measurements with the knife-edge rotated in angle about the optical axis to obtain the radial wavefront error over the polar angle from 0 to 360°. Figure 3. Rotationally invariant wavefront errors of the Be CRLs before correction and after correction, together with the wavefront error profile of the rotationally invariant corrector plate; the r.m.s. wavefront error calculated in each case is given in square parentheses. The intensity recorded in the 2D pixel detector was processed only for those pixels that lie along a line inclined at the rotated angle. Unfortunately, the knife-edge scan data are not complete for CRL1, as the measurement script failed twice during the experiments. An average radial wavefront error calculated over a complete radial profile was used for the missed measurements at the polar angles (135-165° and 315-345°). Figs. 6(a) and 6(b) show polar plots of the wavefront errors in both CRLs before correction. The wavefront errors of both CRLs are close to being invariant but show anisotropic wavefront error distributions over the polar angles. The distributions are not radially concentric but approximately oval, rotated by 45° and 90° for CRL1 and CRL2, respectively. An analytical approach was used to evaluate the performance of the rotationally invariant corrector plate in compensating for the rotationally variant wavefront errors of CRL1 and CRL2. The remaining wavefront errors after correction by the rotationally invariant corrector plates are shown in Figs. 6(c) and 6(d). The uncorrected wavefront errors of both CRLs were found to be in a similar range. We noticed no per-lens wavefront error accumulation; otherwise the peak-to-peak wavefront errors of CRL1 would be four times higher than those of CRL2 over the whole lens aperture. This observation is true near the optical axis of the lenses, where the maximum transmission of the X-rays is observed. Any rotation of the individual lenses in the lens casing may average out the figure errors, and such averaging is more apparent in CRL1 than in CRL2. The wavefront error surfaces shown in Figs. 6(a)-6(d) were fitted with Zernike polynomials, and the corresponding amplitudes of the Zernike coefficients are shown in the bar chart in Fig. 7. To avoid areas of non-measurement in the fitting and to obtain a good fit, radial distances (R0) of ±186 µm for CRL1 and ±305 µm for CRL2 from the centre of the wavefront error map were chosen. The strengths of the various optics aberrations expressed by the Zernike polynomial expansion before and after the corrections show the existence of lower and higher orders of spherical and non-spherical optics aberrations. As discussed in the previous section, here too the spherical aberrations of both CRLs are compensated well by the rotationally invariant corrector plate. However, the non-spherical aberration terms (including astigmatism, coma, etc.) and higher-frequency terms (trefoil, tetrafoil, pentafoil, hexafoil, etc.) remain uncorrected. Astigmatism in CRL2 contributes significantly to the remaining optics aberration, which cannot be ignored for obtaining diffraction-limited focusing.
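A least-squares Zernike decomposition of a sampled wavefront map, of the kind used for Figs. 5 and 7, can be sketched in a few lines; here a handful of Noll modes are hard-coded instead of using the Fan (2019) library, and the injected coefficients are arbitrary test values.

```python
import numpy as np

# Least-squares Zernike fit of a sampled wavefront map on the unit disc,
# using a few hand-coded Noll modes (Z2..Z6, Z11).
def noll_modes(rho, th):
    return np.stack([
        2 * rho * np.cos(th),                         # Z2 tilt x
        2 * rho * np.sin(th),                         # Z3 tilt y
        np.sqrt(3) * (2 * rho**2 - 1),                # Z4 defocus
        np.sqrt(6) * rho**2 * np.sin(2 * th),         # Z5 astigmatism
        np.sqrt(6) * rho**2 * np.cos(2 * th),         # Z6 astigmatism
        np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),   # Z11 primary spherical
    ], axis=-1)

rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 2000))   # uniform samples on the disc
th = rng.uniform(0, 2 * np.pi, 2000)
Zm = noll_modes(rho, th)
truth = np.array([0.0, 0.0, 1.0, 0.5, 2.0, 3.0])      # injected coefficients
w = Zm @ truth + rng.normal(0, 0.05, 2000)            # noisy synthetic map

coef, *_ = np.linalg.lstsq(Zm, w, rcond=None)
print(np.round(coef, 2))   # recovers the injected coefficients
```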
Figure 5. Zernike polynomial fitting over the measured and averaged wavefront errors of CRL1 before and after correction (measured and calculated corrector-plate contribution). Figure 7. Zernike polynomial fitting over the measured rotationally variant wavefront errors of CRL1 and CRL2 before and after correction (calculated rotationally invariant corrector-plate contribution). We propose two possibilities (case 1 and case 2) for the correction of X-ray optics aberrations in CRLs using customized corrector plates. In the first case, a corrector plate is fabricated with a thickness profile in two dimensions that fully corrects the wavefront over the full aperture of the lens. In the second case, the spherical terms are corrected using a radially invariant corrector, and an additional in-line corrector plate is used to correct selected radially variant higher-order terms in the Zernike expansion, such as the astigmatic terms. 4.2.1. Case 1. This has the advantage that complete correction can be achieved with a single corrective element; however, alignment becomes more difficult, as in addition to transverse alignment the corrective optic must also be aligned in rotation angle about the optical axis. It is also necessary to measure the full 2D wavefront error in order to design the profile of the corrector. We have worked out the designs of such rotationally variant corrector plates exclusively for CRL1 and CRL2. The designs are shown in Figs. 8(a) and 8(b), and they can be fabricated by 3D printing. Such corrector plates are planned to be manufactured in the near future and characterized using CRL1 and CRL2 at the Diamond Test beamline. The proposed rotationally variant corrector plate can be extended to the wavefront corrections of CRLs made from Al or polymer materials. An exclusive 3D corrector plate is feasible to build on the same chip, in line with nano-focusing lenses fabricated by either LIGA or semiconductor manufacturing techniques. The 3D correctors made in IP-S polymer are useful at a bending-magnet source, but this polymer degrades quickly in the higher-intensity beams of undulator or XFEL sources. A search for a robust corrector plate material is therefore necessary in order to deploy rotationally invariant/variant corrector plates with X-ray optics at beamlines operating on diffraction-limited storage rings or XFELs. 4.2.2. Case 2. In CRL2, the low-order astigmatism (Zernike polynomials $Z_5$ and $Z_6$) contributes almost 50% of the total remaining aberrations after the correction introduced by the radially invariant corrector plate. A customized second corrector plate dedicated to the compensation of the low-order astigmatism can be designed in the following way. The wavefront error due to astigmatism is
$$w(\rho,\theta) = \varepsilon_{Z5}\,Z_5(\rho,\theta) + \varepsilon_{Z6}\,Z_6(\rho,\theta) = \sqrt6\,\rho^2\big(\varepsilon_{Z5}\sin2\theta + \varepsilon_{Z6}\cos2\theta\big) \qquad (8)$$
and in Cartesian form
$$w(x,y) = a\,xy + b\,(x^2 - y^2), \qquad (9)$$
where $\varepsilon_{Z5}$ and $\varepsilon_{Z6}$ are the Zernike coefficients for the Zernike modes $Z_5$ and $Z_6$, whose values are extracted from the fitted data (Fig. 7), and $a = 2\sqrt6\,\varepsilon_{Z5}/R_0^2$ and $b = \sqrt6\,\varepsilon_{Z6}/R_0^2$ are constants. The above wavefront definition creates a parabolic surface in the xy plane, and it is possible to manufacture it using a 3D printer. Fabrication of a sequence of correctors would allow a degree of adaptability to be incorporated into the correction. Table 1 summarizes the coefficients of the lower-order Zernike polynomials that closely represent the classical aberrations, for the Be CRL wavefront errors before and after corrections ('Corrector1', rotationally invariant corrector plate; 'Corrector2', rotationally variant corrector plate, as defined in equation (9)).
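For case 2, equation (9) translates directly into a printable thickness map, as in the sketch below; the coefficient values, the fit radius and the IP-S decrement are illustrative assumptions, not the paper's fitted numbers.

```python
import numpy as np

# Case-2 corrector sketch: convert fitted astigmatism coefficients into a
# polymer thickness map t(x, y) = w_astig(x, y) / delta, following Eq. (9).
eps5, eps6 = 2.0e-12, -1.5e-12   # assumed Zernike Z5/Z6 coefficients [m]
R0 = 305e-6                      # fit radius used for CRL2 [m]
delta = 1.17e-6                  # assumed IP-S decrement at 15 keV

x = np.linspace(-R0, R0, 201)
X, Y = np.meshgrid(x, x)
a = 2 * np.sqrt(6) * eps5 / R0**2
b = np.sqrt(6) * eps6 / R0**2
w = a * X * Y + b * (X**2 - Y**2)   # Cartesian astigmatism surface, Eq. (9)
t = w / delta                       # required corrector thickness
t -= t.min()                        # offset to a manufacturable base level
print(f"peak-to-valley thickness: {t.max() * 1e6:.2f} um")
```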
The r.m.s. wavefront error of the optics is reduced from 24.0 pm to 13.3 pm with the first-order corrector plate and finally to 4.69 pm (≈ 0.06λ) with the second-order corrector plate. For the primary aberrations of CRL2, excluding the piston, tilt and defocus terms, the obtained r.m.s. value is ≈ 1 pm.

Conclusions

The knife-edge imaging wavefront-sensing technique was successfully used for wavefront error measurements of X-ray lenses and for the optical characterization of a 3D printed corrector. The use of a rotationally invariant 3D printed wavefront corrector plate to compensate the wavefront errors of 98 Be X-ray lenses was demonstrated. The r.m.s. wavefront error of the rotationally invariant wavefront aberrations in the Be CRLs was reduced by 84% after the introduction of a rotationally invariant 3D printed corrector. Analytical fitting with Zernike polynomials is useful for quantifying the optics aberrations before and after correction of the wavefront errors. All orders of spherical aberration are found to be corrected after the insertion of a rotationally invariant corrector plate, but it is apparent that significant non-spherical aberrations still remain. Thus, a rotationally invariant corrector plate is unable to completely compensate the optics aberrations of CRLs. The knife-edge imaging technique was adapted to measure the full 2D wavefront errors of two sets of X-ray lenses, CRL1 and CRL2. The Zernike polynomial fitting of the measured wavefront error maps of CRL1 and CRL2 showed the existence of lower- and/or higher-order rotationally invariant and variant optics aberrations. We have therefore specified wavefront corrector plates which could approach complete compensation of the wavefront errors. Present 3D printing technology plays an important role in achieving the precision manufacture of rotationally variant corrector plates. This is a possible way to tackle optics aberrations in X-ray optics and to achieve r.m.s. wavefront error compensation below 0.07λ. The present framework of wavefront measurement and correction is useful for X-ray optics in use at third- and fourth-generation synchrotron facilities and XFELs.

Table 1. Primary optics aberrations in terms of Zernike coefficients for a Be CRL before and after the corrections introduced by corrector1 (rotationally invariant corrector) and corrector2 (rotationally variant corrector).
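The quoted r.m.s. values can be related to the ~λ/14 ≈ 0.07λ Maréchal criterion for diffraction-limited performance with a few lines of arithmetic. The photon energy below is our assumption (it is not stated in this excerpt); it is chosen so that 4.69 pm corresponds to the quoted ≈ 0.06λ.

```python
# Quick check (assumptions flagged) of rms error against the Marechal
# criterion. The 17 keV test energy is assumed, not taken from the text.
import math

E_keV = 17.0
lam = 1.2398e-9 / E_keV                  # wavelength in metres (hc/E)
for label, rms_pm in [("before", 24.0), ("corrector1", 13.3),
                      ("corrector1+2", 4.69)]:
    rms = rms_pm * 1e-12
    strehl = math.exp(-(2*math.pi*rms/lam)**2)   # Marechal approximation
    print(f"{label:>12}: {rms/lam:.3f} lambda, Strehl ~ {strehl:.2f}, "
          f"diffraction-limited: {rms/lam < 0.07}")
```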
6,594
2020-10-19T00:00:00.000
[ "Physics" ]
On nonlinear effects in multiphase WKB analysis for the nonlinear Schrödinger equation We consider the Schrödinger equation with an external potential and a cubic nonlinearity, in the semiclassical limit. The initial data are sums of WKB states, with smooth phases and smooth, compactly supported initial amplitudes with disjoint supports. We show that, as in the linear case, a superposition principle holds on some time interval independent of the semiclassical parameter, in several régimes in terms of the size of the initial data with respect to the semiclassical parameter. When nonlinear effects are strong in terms of the semiclassical parameter, we invoke properties of compressible Euler equations. For weaker nonlinear effects, we show that there may be no nonlinear interferences on some time interval independent of the semiclassical parameter, and interferences for later times, thanks to explicit computations available for particular phases. The potential V = V(x) is supposed real-valued, smooth, and at most quadratic: typical examples are V = 0, V linear (V(x) = E·x), V harmonic (V(x) = ω²|x|²), V ∈ S(R^d), or any sum of such potentials. As initial data, we consider the sum of WKB states

Σ_{j=1}^N α_j(x) e^{iϕ_j(x)/ε},

with γ ≥ 0. The value of γ measures the size of the initial data, and thus the importance of nonlinear effects in the semiclassical limit ε → 0. The case γ = 0 is supercritical in terms of WKB analysis: the evolution of the phase describing the rapid oscillation is given by an eikonal equation which involves the leading order amplitude, and a standard application of the WKB asymptotic expansion leads to systems which are not closed, no matter how many correcting terms are considered (see e.g. [5, Chapter 1] or [11]). This work was supported by Centre Henri Lebesgue, program ANR-11-LABX-0020-0. A CC-BY public copyright license has been applied by the author to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission. The case N = 1, referred to as the monokinetic case, is well understood for short time, as we recall below, in the sense that the asymptotic behavior of u^ε as ε → 0 is described precisely, locally in time on some interval independent of ε. The large time behavior is, in general, unknown; the one-dimensional case with V = 0 is an exception, since it is completely integrable, see e.g. [16, 27]. Consider the case γ = 0. When V ≡ 0, the leading order asymptotic description involves the compressible Euler equations

(1.4) ∂_t ρ + div(ρ v) = 0, ∂_t v + v·∇v + ∇ρ = 0.

This system is quasilinear, while (1.2) is semilinear (the nonlinear term is viewed as a perturbation when solving the Cauchy problem). In Section 2, we recall how to justify, in this case, the existence of a WKB approximation of the form
u^ε(t, x) = (a(t, x) + ε a_1(t, x) + … + ε^k a_k(t, x)) e^{iφ(t,x)/ε} + o(ε^k),

for all k ≥ 0, for some T > 0 independent of ε. We choose to measure errors in L² ∩ L^∞ in the spatial variable, in order to avoid introducing ε-dependent norms when derivatives are involved, due to the rapid oscillations. This time T can be taken as the lifespan of the smooth solution to the Euler equations (1.4) with suitable initial data. When N ≥ 2, the new question arising is the nonlinear interaction of the WKB states. As the problem is supercritical, even a formal computation is a delicate issue when we plug in an approximate solution of this form.

Remark 1.2 (Infinitely many states). The case N = ∞ may also be addressed, under suitable assumptions on the growth in space of the phases φ_j compared to the size of the support of α_j, as j → ∞. More precisely, as will be clear from the proof of the main result, we can consider the case N = ∞ provided that we may find suitable cutoff functions χ_j, or at least a weaker analogue if φ_0 ∈ H^s(R^d) for some s > 2 + d/2. Another constraint, in this case, is that we have to find a common lower bound for the lifespans of all the approximate solutions (φ_j, a_j) considered below, an aspect which is obvious when N is finite, since we consider the minimum of a finite set.

1.2. Main results. The nonlinear evolution of each initial WKB state will play a crucial role. Under our assumptions, for fixed initial data, we know that:
• If d ≤ 3, the equation is energy-subcritical, and for fixed ε > 0 there exists a unique global solution, and it is smooth. See e.g. [8].
• If d = 4, the equation is energy-critical: the above conclusion is known to remain true when V = 0 ([29]), when V is an isotropic quadratic potential ([17]), or when V is harmonic at infinity ([15]).
• If d ≥ 5, the equation is energy-supercritical: only a local in time smooth solution is known to exist by classical theory.
In the cases where the global existence of a smooth solution is not known, the local existence time might go to zero as ε → 0, so the existence of a smooth solution on a time interval independent of ε > 0 is already a nontrivial step. The description of the solutions u^ε_j as ε → 0 on some time interval [0, T_j] independent of ε was evoked above, and is recalled in Sections 2 (case V = 0) and 3 (V satisfying (1.1)). Our main result is the following nonlinear superposition principle, for initial data satisfying Assumption 1.1.
Theorem 1.3. There exists T > 0, independent of ε, such that

u^ε − Σ_{j=1}^N u^ε_j → 0 in L^∞([0, T]; L²(R^d) ∩ L^∞(R^d)) as ε → 0,

where u^ε_j is the solution of (1.5).

Let us discuss this result in the supercritical case γ = 0, as it is the case where Theorem 1.3 may be the most surprising. The result follows from a detailed WKB analysis, as well as a property of finite speed of propagation for the compressible Euler equations, discovered initially in [21]. The key feature of our setting is the compact, disjoint supports of the initial amplitudes α_j. In the case V = 0, as long as WKB analysis is valid for each u^ε_j in (1.5), u^ε_j remains supported in (essentially) supp α_j up to O(ε^∞): all the amplitude terms of the WKB expansion (at leading order, as well as correctors at arbitrary order) remain compactly supported, and amplitudes associated with u^ε_{j₁} and u^ε_{j₂}, respectively, with j₁ ≠ j₂, do not interact. In the case V ≢ 0, u^ε_j remains supported, up to O(ε^∞), in supp α_j evolved by the classical flow generated by V. In other words, we recover the same phenomenon, regarding the evolution of supports, as in the linear case (see e.g. [22, 28]), even though the régime associated with (1.2) is strongly nonlinear (of course Theorem 1.3 is trivial in the linear case, as u^ε ≡ Σ_j u^ε_j). In particular, the initial modes cannot interact at a "visible" order before WKB analysis ceases to be valid for at least one of the u^ε_j, that is, before the solution of the corresponding Euler equations (1.4) breaks down (see however Section 5 for a discussion of the influence of our proof strategy on this statement). Recent progress on this precise question, [24, 25, 3] (see also [23] for a relation with the nonlinear Schrödinger equation), suggests that the expected scenario is rather that of an implosion: the conclusion of Theorem 1.3 might remain valid even after WKB analysis has ceased to be valid.

Remark 1.4 (Wigner measures). Since the proof of Theorem 1.3 relies on WKB analysis, it also implies the characterization of Wigner measures. Recall that the Wigner transform of u^ε is defined (with one common normalization) by

w^ε(t, x, ξ) = (2π)^{-d} ∫_{R^d} u^ε(t, x − εη/2) conj(u^ε)(t, x + εη/2) e^{iξ·η} dη.

The position and current densities can be recovered from w^ε. A measure µ is a Wigner measure associated with u^ε (there is no uniqueness in general) if, up to extracting a subsequence, w^ε converges to µ as ε → 0 (see e.g. [12, 19]). In the context of Theorem 1.3, each wave function u^ε_j has a unique Wigner measure, and the sum of these Wigner measures is the Wigner measure of u^ε. For instance, if V = γ = 0,

µ(t, dx, dξ) = Σ_{j=1}^N ρ_j(t, x) dx ⊗ δ(ξ − v_j(t, x)),

where (ρ_j, v_j) solves (1.4) with initial data (ρ_j, v_j)|_{t=0} = (|α_j|², ∇(χ_j ϕ_j)), and χ_j ∈ C_0^∞(R^d) is (any) function such that χ_j ≡ 1 on the support of α_j. See Section 5 for the dependence of this statement upon χ_j.

The next result shows that in the weakly nonlinear case γ = 1, some explicit information is available, in the sense that, indeed, nonlinear interferences are negligible on some time interval [0, T_0] with T_0 > 0 independent of ε, while nonlinear interferences occur later.
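The superposition principle is easy to probe numerically. The toy script below (ours, not from the paper) evolves two disjoint WKB states together and separately with a split-step Fourier scheme, in 1D with V = 0; the placement of the ε^γ factor on the nonlinearity is our assumption about the scaling in (1.2).

```python
# Toy check of superposition for i*eps*u_t + (eps^2/2)*u_xx = eps^g*|u|^2*u.
import numpy as np

eps, gamma = 0.05, 1.0
L, n = 40.0, 4096
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)

def bump(x0):
    """Smooth, compactly supported amplitude centred at x0, support width 2."""
    s = np.abs(x - x0)
    out = np.zeros_like(x)
    m = s < 1
    out[m] = np.exp(-1/(1 - s[m]**2) + 1)
    return out

def evolve(u, T, steps=4000):
    """Strang splitting for u_t = i*(eps/2)*u_xx - i*eps**(gamma-1)*|u|^2*u."""
    dt = T/steps
    kin = np.exp(-1j*(eps/2)*k**2*dt)
    for _ in range(steps):
        u = u*np.exp(-1j*eps**(gamma-1)*np.abs(u)**2*dt/2)
        u = np.fft.ifft(kin*np.fft.fft(u))
        u = u*np.exp(-1j*eps**(gamma-1)*np.abs(u)**2*dt/2)
    return u

# Two WKB states with disjoint supports and linear phases (speeds +1, -1).
u1_0 = bump(-5.0)*np.exp(+1j*x/eps)
u2_0 = bump(+5.0)*np.exp(-1j*x/eps)
T = 1.0   # supports move at speeds +-1 and are still disjoint at t = T
u_sum = evolve(u1_0 + u2_0, T)
u_sep = evolve(u1_0, T) + evolve(u2_0, T)
print(f"sup-norm superposition defect at t={T}: {np.abs(u_sum - u_sep).max():.2e}")
```

While the transported supports stay disjoint, the cross terms in |u|²u vanish up to small dispersive tails, so the defect remains small, in the spirit of Theorem 1.3.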
Proposition 1.5. There exist phases ϕ_1, ϕ_2 and amplitudes α_1, α_2 ∈ C_0^∞(R^d) with disjoint supports, such that the following holds. There exist T_1 > 0 and T_0 ∈ (0, T_1), independent of ε, such that the solution to (1.2)-(1.3) exhibits no nonlinear interference at leading order for t ∈ [0, T_0], and exhibits nonlinear interferences at leading order at suitable times t ∈ (T_0, T_1].

The proof of Proposition 1.5 relies on explicit computations available in this weakly nonlinear case, and on the fact that, for linear oscillations, no caustic appears in the case of a single WKB state: the nature of the nonlinear interferences is exhibited in Section 6, and consists of nonlinear phase modulations. In an appendix, we give an alternative argument illustrating another type of nonlinear interference at leading order, consisting of the creation of a new mode (when d ≥ 2): starting from three WKB states, a fourth one, associated with a new phase, may appear by resonant interaction.

1.3. Content. In Section 2, we recall the WKB construction introduced in [14] for the case γ = 0, and emphasize the finite speed of propagation which appears in our framework. In Section 3, we explain how to adapt the previous approach to the case where V satisfies (1.1), and address the case γ > 0. In Section 4, we complete the proof of Theorem 1.3. Section 5 clarifies the role of the cutoff functions used in the proof of Theorem 1.3. Proposition 1.5 is established in Section 6. In an appendix, we propose an alternative proof of Proposition 1.5, in the case d ≥ 2 with N = 3, showing that there are several sorts of nonlinear interferences in the weakly nonlinear case.

The monokinetic case without potential

In this section, we consider (1.2)-(1.3) in the monokinetic case N = 1, and in the supercritical case γ = 0, with slightly different notations for future reference:

(2.1) iε ∂_t u^ε + (ε²/2) Δu^ε = |u^ε|² u^ε, u^ε(0, x) = a_0(x) e^{iφ_0(x)/ε}.

In view of the setting of this paper, we assume a_0, φ_0 ∈ C_0^∞(R^d). We first consider the case V ≡ 0, then introduce the main ideas that make it possible to incorporate a subquadratic potential V.
We recall the main steps of the construction introduced in [14] (see also [5, Section 4.2]). The idea introduced in [14] consists in writing the solution to (2.1) as

(2.2) u^ε(t, x) = a^ε(t, x) e^{iφ^ε(t,x)/ε},

with a^ε complex-valued and φ^ε real-valued, solving

(2.3) ∂_t φ^ε + ½|∇φ^ε|² + |a^ε|² = 0, φ^ε(0, x) = φ_0(x); ∂_t a^ε + ∇φ^ε·∇a^ε + (a^ε/2) Δφ^ε = i(ε/2) Δa^ε, a^ε(0, x) = a_0(x).

The key remark is that this leads to a symmetric hyperbolic system, perturbed by a skew-symmetric term. The hyperbolic system appears when considering the unknown U^ε = (Re a^ε, Im a^ε, v^ε), where v^ε = ∇φ^ε. Considering the gradient of the first equation in (2.3), the system can be written

(2.4) ∂_t U^ε + Σ_{j=1}^d A_j(U^ε) ∂_j U^ε = (ε/2) L U^ε,

with L skew-symmetric. To be precise, the system is made symmetric thanks to a constant symmetrizer S. Once v^ε is known, one recovers φ^ε by integrating the first equation in (2.3) in time. Assuming that a_0, ∇φ_0 ∈ H^s(R^d) for s large (we will always assume a_0, φ_0 ∈ C_0^∞(R^d) in the forthcoming applications), the limit ε → 0 leads to an asymptotic expansion. The leading order term is obtained by simply setting ε = 0 in (2.4):

(2.5) ∂_t φ + ½|∇φ|² + |a|² = 0, φ(0, x) = φ_0(x); ∂_t a + ∇φ·∇a + (a/2) Δφ = 0, a(0, x) = a_0(x).

Working with the intermediary unknown v = ∇φ, we get a system of the same quasilinear form, and we infer the following result from [21]:

Proposition 2.1. There exist T* > 0 and a unique solution (φ, a) ∈ C([0, T*]; H^∞(R^d))² to (2.5). Moreover, if supp φ_0, supp a_0 ⊂ K, then (φ, a) remains compactly supported for t ∈ [0, T*], with support in K.

The first part of the statement is a consequence of the classical theory for symmetric hyperbolic systems (see e.g. [2, 20]). The property that compactly supported initial data lead to a zero speed of propagation is due to the structure of this hyperbolic system, and is well understood from the simplest model, the Burgers equation ∂_t u + u ∂_x u = 0. Suppose we have a smooth solution on some time interval [0, T*]; in particular, ∂_x u ∈ L^∞ on this interval. We have directly, for all (t, x),

|∂_t u(t, x)| ≤ ‖∂_x u(t)‖_{L^∞} |u(t, x)|.

The Gronwall lemma then shows that if u_0(x_0) = 0, then u(t, x_0) = 0 for all t ∈ [0, T*]; hence the zero speed of propagation for smooth solutions. As the matrix A(U, ξ) is linear in U, the result follows in the setting of (2.5). Note that to prove this zero speed of propagation, we do not invoke the symmetry of A: the symmetry was used in order to get Sobolev estimates (which ensure that U ∈ L¹([0, T*]; W^{1,∞})), and only the fact that A is (at least) linear in U is used at this stage. We then have, for the same T* as in Proposition 2.1:

Proposition 2.2. There exists T* > 0, independent of ε ∈ (0, 1], such that for all s ≥ 0 there exists C = C(s) for which the solution (φ^ε, a^ε) of (2.3) stays at distance at most Cε from (φ, a) in L^∞([0, T*]; H^s).

To infer the pointwise description of u^ε at leading order, we must in addition know φ^ε up to o(ε), which is achieved by considering the linearization of (2.5). At the next step of the WKB expansion, we find that the first corrector (φ^{(1)}, a^{(1)}) solves the system

∂_t φ^{(1)} + ∇φ·∇φ^{(1)} + 2 Re(ā a^{(1)}) = 0; ∂_t a^{(1)} + ∇φ·∇a^{(1)} + ∇φ^{(1)}·∇a + (a^{(1)}/2) Δφ + (a/2) Δφ^{(1)} = (i/2) Δa; (φ^{(1)}, a^{(1)})|_{t=0} = (0, 0).

At step k, the right hand side involves a function F_k which is a polynomial in its arguments, without constant term, and whose precise expression is unimportant here: the left hand side is always the linearization of the left hand side of (2.5) about (φ, a), and the right hand side depends on the previous correctors. We infer (see [14, 5]), by induction:

Proposition 2.3. Let T* > 0 be given by Proposition 2.1. For all k ≥ 1, there exists a unique solution (φ^{(k)}, a^{(k)}) ∈ C([0, T*]; H^∞(R^d))² to the above system, and for all s ≥ 0 there exists C = C(k, s) such that the corresponding error estimate holds. In addition, if supp a_0, supp φ_0 ⊂ K, then (φ^{(k)}, a^{(k)}) remains compactly supported for t ∈ [0, T*], with support in K.

The support property is a consequence of the same argument as in the proof of Proposition 2.1. Using the embedding H^s(R^d) ⊂ L^∞(R^d) for s > d/2, we also deduce from the above error estimate the corresponding bound in L² ∩ L^∞, for k ≥ 1, with the convention (φ^{(0)}, a^{(0)}) = (φ, a). The standard form of WKB expansions, u^ε ≈ (a + εa_1 + …) e^{iφ/ε}, is then obtained by setting a = a^{(0)} e^{iφ^{(1)}}, a_1 = a^{(1)} e^{iφ^{(1)}} + i a^{(0)} φ^{(2)} e^{iφ^{(1)}}, etc.
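The zero-speed-of-propagation argument recalled above for the Burgers equation can be visualized directly, since smooth solutions are constant along the characteristics x(t) = y + t u_0(y). The following small script (ours) checks that the support of u(t, ·) does not move while the solution stays smooth.

```python
# Zero speed of propagation for Burgers u_t + u*u_x = 0, compactly
# supported smooth data, solved along characteristics.
import numpy as np

def u0(y):
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = 0.3*np.exp(-1/(1 - y[inside]**2))
    return out

y = np.linspace(-2, 2, 2001)
for t in (0.0, 0.5, 1.0):
    xt = y + t*u0(y)     # characteristics (invertible while u stays smooth)
    ut = u0(y)           # u is constant along characteristics
    support = xt[ut > 1e-12]
    print(f"t={t}: supp u(t) ~ [{support.min():+.3f}, {support.max():+.3f}]")
# The endpoints do not move: u0 vanishes there, so the characteristics
# emanating from the boundary of the support are vertical.
```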
Remark 2.4 (Higher order nonlinearities). If instead of (2.1) one considers iε ∂_t u^ε + (ε²/2) Δu^ε = |u^ε|^{2σ} u^ε with σ ≥ 2 an integer, then the justification of WKB analysis requires a different approach. We refer to [1, 9] for two different proofs, which show that the conclusions of the propositions stated in this section remain valid.

Remark 2.5 (Focusing nonlinearity). If instead of (2.1) one considers a cubic focusing nonlinearity, then the analogue of (2.5) is no longer hyperbolic, but elliptic. Working with analytic initial data (φ_0, a_0) is then necessary in order to solve (2.5) ([18, 26]), and this is a framework where nonlinear WKB analysis is fully justified ([11, 30]). However, analyticity is incompatible with an initial compact support. On the other hand, in the weakly nonlinear case γ = 1 (and more generally if γ ≥ 1), it is possible to justify WKB analysis with a focusing nonlinearity and compactly supported initial data (see e.g. [4] or [5, Chapter 2]).

The monokinetic case with a potential

In this section, we first recall some elements of WKB analysis in the linear case. We then show how this case can be merged with the analysis presented in the previous section when γ = 0, and we sketch the case of a weaker nonlinearity, 0 < γ < 1. To conclude, we briefly discuss the weakly nonlinear régime γ = 1, and more generally the situation γ ≥ 1.

3.1. Linear case. The eikonal equation associated with the linear equation, that is, without initial rapid oscillation, reads

(3.2) ∂_t φ_eik + ½|∇φ_eik|² + V(x) = 0, φ_eik(0, x) = ϕ_0(x).

In this subsection, we assume that ϕ_0 is smooth and at most quadratic, in the same sense as in (1.1). This eikonal equation is solved by introducing the classical trajectories, solving

(3.3) ẋ(t, y) = ξ(t, y), x(0, y) = y; ξ̇(t, y) = −∇V(x(t, y)), ξ(0, y) = ∇ϕ_0(y).

As V is at most quadratic, from (1.1), the above system has a unique, global, smooth solution, and in addition ∇_y x(t, y) is bounded on bounded time intervals, uniformly in y ∈ R^d, for any matricial norm on R^{d×d}. Therefore the Jacobi determinant J_t(y) = det ∇_y x(t, y) remains non-zero and bounded on some time interval [0, T] with T > 0. Since we also have, by uniqueness in ordinary differential equations, ∇φ_eik(t, x(t, y)) = ξ(t, y) for any smooth solution to (3.2), the global inversion theorem implies the following result (see also [5, Proposition 1.9]):

Lemma 3.1. Let V satisfy (1.1), and ϕ_0 satisfy the same properties. There exist T > 0 and a unique smooth solution φ_eik to (3.2) on [0, T] × R^d.

In the linear case, the leading order amplitude is given by the linear transport equation

(3.4) ∂_t a + ∇φ_eik·∇a + (a/2) Δφ_eik = 0, a(0, x) = a_0(x).

Following the classical trajectories, this transport equation becomes trivial, since A(t, y) := J_t(y)^{1/2} a(t, x(t, y)) satisfies ∂_t A = 0.

3.2. Supercritical case: γ = 0. We consider the same framework as in the previous section, now with a potential. As noticed in [4], it is possible to adapt the above WKB analysis in the presence of an external potential satisfying (1.1) by simply mixing the standard approach followed in the linear case (see e.g. [28]) with Grenier's method.

3.2.1. Introducing the nonlinearity. As noticed in [4], the approach presented in the case V = 0 can be adapted by changing the representation (2.2) to

u^ε(t, x) = a^ε(t, x) e^{iφ_eik(t,x)/ε + iφ^ε(t,x)/ε},

where φ_eik solves (3.2) with ϕ_0 ≡ 0, and requiring that (φ^ε, a^ε) solve the analogue of (2.3), a system denoted by (3.9), in which new transport terms involving ∇φ_eik appear. The new terms compared to (2.3) involve φ_eik, and since φ_eik is at most quadratic in space, it turns out that they can be estimated like (semilinear) perturbative terms (using commutator estimates for the transport part). The natural limit of (3.9) as ε → 0 is the analogue of (2.5), the system (3.7). The following result is a consequence of [4]:

Proposition 3.2. There exists T* ∈ (0, T], independent of ε, such that (3.7) has a unique solution (φ, a) ∈ C([0, T*]; H^∞(R^d))², and for all s ≥ 0 there exists C = C(s) such that the analogue of the error estimate of Proposition 2.2 holds, with T and φ_eik given by Lemma 3.1.
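The local-in-time construction behind Lemma 3.1 is easy to reproduce numerically: one integrates the trajectory ODEs together with the variational equation for the Jacobian, and the eikonal phase exists as long as J_t(y) stays away from zero. A minimal 1D sketch (ours, with an assumed potential and phase) follows.

```python
# Classical trajectories + Jacobian for the eikonal equation, d = 1:
# xdot = xi, xidot = -V'(x), J_t(y) = d x(t,y)/dy; caustic when J_t = 0.
import numpy as np
from scipy.integrate import solve_ivp

Vp = lambda x: x                 # V(x) = x**2/2, so V'' = 1 (at most quadratic)
phi0p = lambda y: np.tanh(y)     # initial phase gradient (illustrative)

def flow(t_eval, y):
    """Integrate (x, xi, J, K) for the trajectory starting at y,
    where K = d xi(t,y)/dy and Jdot = K, Kdot = -V''(x)*J."""
    def rhs(t, s):
        x, xi, J, K = s
        return [xi, -Vp(x), K, -1.0*J]      # V'' = 1 for this potential
    s0 = [y, phi0p(y), 1.0, 1/np.cosh(y)**2]  # J(0) = 1, K(0) = phi0''(y)
    return solve_ivp(rhs, (0, t_eval[-1]), s0, t_eval=t_eval, rtol=1e-9)

ts = np.linspace(0, 3, 301)
for y in (-1.0, 0.0, 1.0):
    J = flow(ts, y).y[2]
    if np.any(J <= 0):
        print(f"y={y:+.1f}: J_t vanishes near t={ts[np.argmax(J <= 0)]:.2f} (caustic)")
    else:
        print(f"y={y:+.1f}: J stays positive up to t=3")
```

For this harmonic potential the Jacobian is J(t) = cos t + φ₀″(y) sin t, so the printed caustic times can be checked in closed form.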
The correctors (φ^{(j)}, a^{(j)})_{j≥1} are obtained in the same fashion as in Section 2; the only difference is that the operator ∂_t is replaced by the transport operator ∂_t + ∇φ_eik·∇.

3.2.2. Finite speed of propagation: following the classical trajectories. In order to prove that if a_0 ∈ C_0^∞(R^d), the solution to (3.1) remains compactly supported in the support of a_0 transported by the classical flow (3.3), it is standard to introduce the following change of unknown function (e.g. [28, 5]):

A(t, y) := J_t(y)^{1/2} a(t, x(t, y)),

where a solves the transport equation (3.4) given by WKB analysis. Indeed, using (3.4), we easily check that A is constant in time, ∂_t A = 0. Correctors (a^{(k)})_{k≥1} in the (linear) WKB analysis solve transport equations of the same form, with source terms depending on the previous correctors, with the convention a^{(0)} = a. Setting A^{(k)}(t, y) := J_t(y)^{1/2} a^{(k)}(t, x(t, y)), we infer that

supp A^{(k)}(t, ·) ⊂ supp a_0, ∀t ∈ [0, T], ∀k ≥ 0,

where T is given by Lemma 3.1. Thus, for t ∈ [0, T] and up to O(ε^∞), u^ε remains compactly supported, in the support of a_0 transported by the classical flow.

In the nonlinear case, we check that the same argument remains valid. Consider φ_eik the solution to (3.2), and (φ, a) solving (3.7). The natural adaptation of the above computation consists in showing that if φ_0, a_0 ∈ C_0^∞(R^d), the new unknown (ψ, A), defined by

(3.8) A(t, y) := J_t(y)^{1/2} a(t, x(t, y)), ψ(t, y) := φ(t, x(t, y)),

enjoys a zero speed of propagation. Note that in view of Proposition 3.2, we already know that φ, a ∈ C([0, T*]; H^∞(R^d)), so it suffices to check that (ψ, A) solves a system for which the argument presented on the toy model of the Burgers equation in Section 2 remains valid. Introducing the Jacobian matrix ∇_y x(t, y), whose determinant is by definition J_t(y), we find the system satisfied by (ψ, A). We do not express the right hand side of the last equation in terms of (ψ, A): differentiating the first equation with respect to y, the bounds stated in Proposition 3.2 make it possible to infer an inequality of Gronwall type. Integrating in time the equation solved by ψ, we conclude the zero speed of propagation for (ψ, A). Arguing as in Section 2 for the correctors, we have:

Proposition 3.3. Let φ_0, a_0 ∈ C_0^∞(R^d) with supp φ_0, supp a_0 ⊂ K. Then supp ψ(t, ·), supp A(t, ·) ⊂ K for any t ∈ [0, T*], where T* is given by Proposition 3.2, and where ψ and A are related to φ and a through (3.8). The same is true for the correctors (ψ^{(k)}, A^{(k)})_{k≥1} corresponding to the next terms (φ^{(k)}, a^{(k)})_{k≥1} in the asymptotic expansion associated with (3.9).

Remark 3.4 (Special potentials). In the case where V is linear in x, or isotropic quadratic, explicit formulas allow one to bypass the above arguments: a change of unknown function reduces the equation to the case V = 0. When this change of unknown is trivial, we recover exactly (2.1); otherwise, a (smooth) time-dependent factor has appeared, which obviously does not change the conclusions of Propositions 2.1 and 2.3. The case of a potential with the opposite sign is obtained by changing ω to iω in the formulas. See e.g. [5, Section 11.2] and references therein regarding these changes of unknown functions. For such potentials, the classical trajectories given by (3.3) can be computed explicitly, and we can check directly the conclusions of Proposition 3.3.

3.3. Weaker nonlinearity.
We now consider the case 0 < γ < 1. This case is still supercritical as far as WKB analysis is concerned, in the sense described in the introduction: a "natural" asymptotic expansion of the solution u^ε still involves a system of equations which is not closed. As noticed in [4], this intermediary case can be handled like the case γ = 0, by replacing (3.9) with its analogue in which the nonlinearity carries a factor ε^γ. The matrices A_j and S now depend on ε, in an explicit way, and the asymptotic expansion of (φ^ε, a^ε) involves more terms. Let N = [1/γ], where [r] denotes the largest integer not larger than r > 0: N new intermediary terms appear compared to the case γ = 0, where the estimate holds in L^∞([0, T], H^s) for any s > 0. This can be seen by setting φ̃^ε = ε^{−γ} φ^ε and expanding the resulting system. The leading order amplitude solves the same transport equation as in the linear case, and it is readily observed that the analogue of Proposition 3.3 remains valid, up to adapting the hierarchy of equations.

3.4. Weakly nonlinear and linearizable cases. We now assume γ ≥ 1. As in [4] (or [5, Chapter 2]), we present a strategy for any γ ≥ 1, and emphasize the fact that the value γ = 1 is specific. In this setting, the coupling between phase and amplitude changes dramatically: rapid oscillations are described by φ_eik only, and the analysis consists in expanding the amplitude a^ε = u^ε e^{−iφ_eik/ε} in powers of ε. As above, the case where γ > 1 is not an integer requires a special asymptotic expansion, and we do not discuss this case. When γ > 1, the leading order amplitude satisfies the same transport equation as in the linear case. When γ = 1, it satisfies

i (∂_t a + ∇φ_eik·∇a + (a/2) Δφ_eik) = |a|² a.

Following the classical trajectories, that is, resuming the change of unknown function (3.8), this equation reads

i ∂_t A = (1/J_t) |A|² A.

In particular, ∂_t |A|² = 0, and the nonlinear effect in a consists of a phase self-modulation. In particular, the support of A(t, ·) is independent of t ∈ [0, T]. The same is true for all correctors in the asymptotic expansion, as can be checked easily.

Separation of states

We complete the proof of Theorem 1.3 by proving the nonlinear superposition. For each j, we set a_0 := α_j and φ_0 := χ_j ϕ_j, where χ_j ∈ C_0^∞(R^d) satisfies χ_j ≡ 1 on the support of α_j. Then a_0, φ_0 ∈ C_0^∞(R^d), φ_0 is real-valued, and u^ε_j(0, ·) = a_0 e^{iφ_0/ε}. We can then resume the analysis of the monokinetic case as presented in Sections 2 and 3, with the same notations. Let φ_eik be given by Lemma 3.1 (it does not depend on the initial data, but only on V).

4.1. Supercritical case. When γ = 0, the WKB analysis for each u^ε_j, solution to (1.5), involves the system (4.1), the analogue of (2.5) (or (3.7) when V ≢ 0) with initial data (φ_0, a_0) = (χ_j ϕ_j, α_j). To simplify the discussion, suppose first that V = 0, hence φ_eik = 0. Each solution to (4.1) remains smooth on some time interval [0, T_j] for some 0 < T_j ≤ T and, on this time interval, enjoys a zero speed of propagation. As a consequence of Proposition 2.1, the amplitudes associated with distinct states do not interact, since nonlinear terms containing two indices j_1 ≠ j_2 involve two functions whose supports are disjoint. Also, for all k ≥ 1, the correctors satisfy the analogous support properties, and we obtain Theorem 1.3 in the case V = 0. In the case where V is not trivial, we just have to resume the above arguments, replacing the functions (φ, a) (possibly with indices and/or superscripts) with (ψ, A), as defined by the change of unknown function (3.8), which involves only V (see (3.3)), and not the initial data.
4.2. Other cases. When γ > 0, we have seen that the leading order amplitude is the same as in the linear case, up to a phase modulation. Leading order oscillations are given by φ_eik, where we now set ϕ_0 = φ_0 in (3.2). The features used in the supercritical case then remain, regarding the evolution of the support of the terms involved in the WKB analysis.

The role of the cutoff functions

For any such function χ, we have u^ε|_{t=0} = a_0 e^{iχφ_0/ε}. However, the eikonal equation now depends on χ, as (3.2) becomes

∂_t φ_eik + ½|∇φ_eik|² + V(x) = 0, φ_eik(0, x) = χ(x) φ_0(x).

As recalled in Section 3.1 (in the case φ_0 = 0), the solution is constructed, locally in time, via the classical trajectories or, equivalently, through characteristic curves. As V is smooth, the slope of the characteristic curves at time t = 0 is uniformly bounded on the support of a_0. By finite speed of propagation, there exists T(χ) > 0 such that φ_eik does not depend on χ for t ∈ [0, T(χ)]. In practice, the introduction of χ may shorten the time interval of validity of WKB analysis, as we now illustrate. Let d = 1, V = 0, and φ_0(x) = x²/2. The solution to the eikonal equation (without the cutoff χ) is given explicitly by

φ_eik(t, x) = x²/(2(1 + t)).

This is a case where there is no singularity for t ≥ 0 (but a caustic reduced to one point at t = −1). Indeed, the classical trajectories, solving

ẋ(t, y) = ξ(t, y), x(0, y) = y; ξ̇(t, y) = 0, ξ(0, y) = φ_0′(y) = y,

are given by x(t, y) = (1 + t)y, obviously invertible for all t ≥ 0, as y = x/(1 + t), and the leading order amplitude in WKB analysis is given by

a(t, x) = (1 + t)^{−1/2} a_0(x/(1 + t)).

For χ a (usual) cutoff function as above, χφ_0 has two humps: in the presence of χ, y ↦ x(t, y) ceases to be invertible for all t ≥ 0 (φ_eik′ solves the Burgers equation), but for short time (independent of ε, but depending on χ), a(t) e^{iφ_eik(t)/ε} does not depend on χ.

Supercritical WKB analysis for the nonlinear Schrödinger equation. In the case addressed in Section 2, the above eikonal equation is replaced by (2.5). By considering the gradient of the phase instead of the phase itself, the Burgers equation (arising in WKB analysis for the linear Schrödinger equation without potential) is replaced by the symmetrized Euler equations. As above, finite speed of propagation implies that the introduction of a cutoff function in the initial phase does not alter the solution to (2.5) on some time interval [0, T(χ)], for some T(χ) > 0 possibly depending on χ. This time is of course independent of ε, as ε is absent from (2.5). This is why, in Remark 1.4, the Wigner measure does not depend on the χ_j's, even though its construction seems to depend on these cutoff functions: the time of validity that we can prove may, on the other hand, depend on the choice of these cutoff functions.

We conclude this discussion with an illustration similar to the one given in the previous subsection. Let a_0 ∈ C_0^∞(R^d), and assume that for s > d/2 + 1, the norm of a_0 in H^s(R^d) is sufficiently small. Suppose also that v_0 = ∇φ_0 satisfies: there exists δ > 0 such that for all x ∈ R^d, dist(Sp(∇v_0(x)), R_−) ≥ δ, where we denote by Sp(M) the spectrum of a matrix M. Then it follows from the main result in [13] that (2.5) has a global (in the future) solution, whose phase gradient stays close to v, where v is the unique, global smooth solution to the (multidimensional) Burgers equation

∂_t v + v·∇v = 0, v(0, x) = v_0(x).

We may for instance consider φ_0(x) = |x|²/2 (see the previous subsection), and then v(t, x) = x/(1 + t).
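The explicit example above is easy to verify symbolically. The short script below (ours) checks that φ = x²/(2(1+t)) solves the eikonal equation, that v = x/(1+t) solves the Burgers equation, and that the transported amplitude solves the transport equation, all in d = 1.

```python
# Symbolic verification of the explicit eikonal/transport example.
import sympy as sp

t, x = sp.symbols("t x", real=True)
phi = x**2 / (2*(1 + t))
print(sp.simplify(sp.diff(phi, t) + sp.diff(phi, x)**2/2))   # -> 0 (eikonal)

v = x/(1 + t)
print(sp.simplify(sp.diff(v, t) + v*sp.diff(v, x)))          # -> 0 (Burgers)

a0 = sp.Function("a0")
a = (1 + t)**sp.Rational(-1, 2) * a0(x/(1 + t))
transport = sp.diff(a, t) + sp.diff(phi, x)*sp.diff(a, x) + a*sp.diff(phi, x, 2)/2
print(sp.simplify(transport))                                # -> 0 (transport)
```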
On the other hand, if φ_0 is multiplied by a cutoff function χ, then the initial data in (2.5) belong to C_0^∞(R^d): it follows from [21] that the corresponding solution develops a singularity in finite time. As in the previous subsection, the introduction of the cutoff χ reduces the lifespan of the solution involved in WKB analysis but, for short time, does not alter the asymptotic description of the solution u^ε.

Weakly nonlinear case

In this section, we prove Proposition 1.5. Instead of (1.2)-(1.3), we consider the weakly nonlinear case γ = 1, with V = 0. When d ≥ 2, the creation of new WKB terms is possible by resonant interactions, provided that N ≥ 3, as recalled in the appendix. The one-dimensional cubic case is special, as there are no nontrivial resonances, see [6]. In order to present an argument including the cubic one-dimensional case, we propose a proof which does not use the creation of, e.g., a fourth term out of three. Consider linear phases,

ϕ_j(x) = k_j·x, k_j ∈ R^d.

The first part of Proposition 1.5 is simply a restatement of Theorem 1.3 in this case. To prove the appearance of nonlinear interferences, we will not consider cutoff functions, and rely on explicit computations. WKB analysis in the monokinetic case N = 1 leads to a hierarchy whose eikonal equation is solved explicitly,

φ_eik(t, x) = k·x − t|k|²/2.

As Δφ_eik = 0, the initial amplitude α is transported along the vector k with a phase self-modulation:

a(t, x) = α(x − tk) e^{−it|α(x−tk)|²}.

In the case N = 2, no new WKB term is created, but interactions between the two modes lead to a modification of the phase modulation. As computed in [6, Section 3], the phase modulation of each mode picks up, in addition to the self-modulation, cross terms involving the other amplitude, through integrals in time along the transported supports. In addition, the corresponding error estimate holds for any T > 0 (independent of ε); see Corollary 5.13 and Theorem 6.5 in [6]. The leading order nonlinear interactions between the two modes correspond to the integrals in time in the exponentials. For small time, though, these integrals vanish on the supports of the transported amplitudes: in other words, there exists T_0 > 0, independent of ε, such that for t ∈ [0, T_0] there is no interference, in agreement with the conclusion of Theorem 1.3 (up to the order of precision). The conclusion of Proposition 1.5 then follows once the interaction integrals become nontrivial at later times. This is possible as soon as the transport of the support of α_1 meets the support of α_2, as transported in the above integral. Let α ∈ C_0^∞(R^d) be supported in the ball centered at the origin, of radius 1, and let α_1 and α_2 be translates of α along e_1 with disjoint supports, where (e_1, …, e_d) is the canonical basis of R^d. Setting k_2 − k_1 = λ e_1 for λ > 0, we see that the above property is satisfied for some T_1 > 0. We also remark that T_1 → 0 as λ → ∞.
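The last remark is pure kinematics, and can be checked in a few lines (ours; the initial centre separation is illustrative): amplitudes supported in unit balls whose centres are transported at relative speed λ along e_1 first overlap at a time inversely proportional to λ.

```python
# Supports transported at speeds k1 and k2 = k1 + lam*e1 first meet at
# T1 = (gap - 2*radius)/lam, so T1 -> 0 as lam -> infinity.
gap, radius = 4.0, 1.0
for lam in (1.0, 2.0, 10.0, 100.0):
    T1 = (gap - 2*radius)/lam
    print(f"lam = {lam:>6}: supports meet at T1 = {T1:.3f}")
```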
Appendix A. Weakly nonlinear case and creation of a new term

In this appendix, we prove that in the weakly nonlinear case, if d ≥ 2, then nonlinear interactions may lead to the creation of a new WKB term, which is a stronger phenomenon than the one used in the proof of Proposition 1.5. Consider initial data Σ_{j=1}^N α_j(x) e^{iϕ_j(x)/ε}, with now N = 3 and d ≥ 2. The one-dimensional cubic case is special, as there are no nontrivial resonances, see [6]. Again, we consider linear phases, ϕ_j(x) = k_j·x. Recall ([10], see also [6, Lemma 2.2]) that the resonant set Res(n) is defined by the phase matching conditions

k_j − k_ℓ + k_m = k_n, |k_j|² − |k_ℓ|² + |k_m|² = |k_n|²,

and is characterized as follows: (k_j, k_ℓ, k_m) ∈ Res(n) when the endpoints of the vectors k_j, k_ℓ, k_m, k_n form the four corners of a nondegenerate rectangle with k_ℓ and k_n opposing each other, or when this quadruplet corresponds to one of the following two degenerate cases: (k_j = k_n, k_m = k_ℓ) or (k_j = k_ℓ, k_m = k_n). Note that we always have

(A.1) {(j, j, n), (n, j, j) : a_j ≢ 0} ⊂ Res(n),

where a_j is the amplitude associated with the phase φ_j(t, x) = k_j·x − t|k_j|²/2. In order for the nonlinearity to create a term associated with a phase φ_4, out of three phases associated with wave numbers k_1, k_2 and k_3, we must have (k_j, k_ℓ, k_m) ∈ Res(4) for some relabeling (j, ℓ, m) of (1, 2, 3). This resonant condition is equivalent to the phase matching conditions above, together with the requirement that the endpoints of k_1, k_2 and k_3 are not aligned (the case of alignment corresponds to the set on the left in (A.1)); this is possible with pairwise different k_1, k_2, k_3 and k_4 ∉ {k_1, k_2, k_3} provided that d ≥ 2, see [6] (or [5, Section 2.6]). For instance, if d = 2, we can choose, for λ > 0,

k_1 = λ(1, 1), k_2 = λ(1, 0), k_3 = λ(0, 1), hence k_4 = (0, 0).

In higher dimension, we simply complete each vector by zero coordinates. Then a new term, associated with the phase φ_4, may be created by nonlinear resonance. Because of the geometric characterization of resonances, no other term can be created apart from this one, since we have completed a rectangle. The creation is effective only if the associated amplitude does not remain zero. The equation for the corresponding amplitude is the transport equation along k_4, with a source term proportional to a_2 ā_1 a_3 and zero initial datum. If we assume that mode 4 is not effectively created, that is a_4 ≡ 0, then the inclusion (A.1) is actually an equality, and the amplitudes solve

i (∂_t a_j + k_j·∇a_j) = (2 Σ_{k≠j} |a_k|² + |a_j|²) a_j, j = 1, 2, 3,

hence a_j(t, x) = α_j(x − tk_j) e^{−iS_j(t,x)}, j = 1, 2, 3, for some explicit real-valued phase whose expression is irrelevant here (see [6, Section 3.1] for the formula). Given any T > 0, we may choose α_1, α_2, α_3 compactly supported, with disjoint supports, and k_1, k_2, k_3 as above, so that a_2 ā_1 a_3|_{t=T/2} ≢ 0. This shows that the term a_4 is actually created, in the sense that a_4 does not remain trivial on [0, T]. The error estimate proved in [7] (see also [5,
8,421
2023-09-07T00:00:00.000
[ "Mathematics" ]
A physiologically based pharmacokinetic model for 2,4-toluenediamine leached from polyurethane foam-covered breast implants. Physiologically based pharmacokinetic (PBPK) modeling was used to assess the low-dose exposure of patients to the carcinogen 2,4-toluenediamine (2,4-TDA) released from the degradation of the polyester urethane foam (PU) used in Meme silicone breast implants. The tissues are represented as five compartments: liver, kidney, gastrointestinal tract, slowly perfused tissues (e.g., fat), and richly perfused tissues (e.g., muscle). The PBPK model was fitted to the plasma and urine concentrations of 2,4-TDA and its metabolite 4-AAT (4-N-acetyl-2-amino toluene) in rats given low doses of 2,4-TDA intravenously and subcutaneously. The rat model was extrapolated to simulate oral and implant routes in rats. After adjusting for human physiological parameters, the model was then used to predict the bioavailability of 2,4-TDA released from a typical 4.87-g polyester urethane foam implant in a 58-kg patient who had carried the Meme breast implant for 10 years. A quantitative risk assessment for 2,4-TDA was performed, and the polyester urethane foam did not present an unreasonable risk to health for the patient.

Polyurethanes have been used in breast implant covers, pacemaker leads, hemodialyzer potting material, and other medical applications. The degradation of polyurethanes in vivo has long been a concern, due both to the release of potentially harmful materials and to mechanical failure. The composition of the polyurethane degradation products depends on the original formulation of the polymer. The polyester urethane (PU) foam cover of the Meme breast implant (or Replicon; Surgitek Corporation, Racine, WI) was made from a polyester resin and an 80:20 mixture of 2,4-toluene diisocyanate and 2,6-toluene diisocyanate (2,4-TDI, 2,6-TDI) (1). One of the degradation products of TDI is 2,4-toluenediamine (2,4-TDA). Data in experimental animals showed that the PU foam cover degraded within 6-12 months of implantation (2). This result was consistent with the clinical observations in patients with Meme breast implants (3,4). Our own in vitro studies indicated that 2,4-TDA was slowly released over time by hydrolytic degradation when PU foam samples were incubated under mild physiological conditions (5,6). The amount and rate of 2,4-TDA production observed in vitro varied with the conditions of PU foam exposure (7-9). The carcinogenicity of 2,4-TDA has been studied extensively in mice, rats, and other species (10-19). Exposure of patients to this rodent carcinogen, one of the degradation products of Meme breast implants, is a potential health risk (20-24). Physiologically based pharmacokinetic (PBPK) modeling uses physiological parameters such as organ volumes, blood flows, and excretion flows, together with chemical-specific parameters such as tissue solubility and biotransformation rates. This type of modeling uses these parameters to complete a material balance on the component of interest in the body. An important feature of this type of model is its ability to predict the absorption, distribution, metabolism, and elimination of the simulated chemical. Animal data using one uptake route can be extrapolated to another route and/or from animal species to humans with minor changes to the PBPK model.
The extrapolative value of this approach in human health risk assessment has long been exploited for many drugs and chemicals (25-28). The objectives of this study were to develop a PBPK model to simulate the fate of low-dose exposure to 2,4-TDA from implant degradation and to assess the potential health risk in patients with Meme PU foam-covered silicone breast implants. The PBPK model was initially developed to fit experimental data obtained in rats injected intravenously (iv) and subcutaneously (sc) with low doses of 2,4-TDA, 0.54 mg/kg and 0.44 mg/kg, respectively (29). Once the PBPK model adequately simulated the fate of 2,4-TDA and its metabolite by different routes of exposure, the model was scaled up to a human PBPK model using appropriate physiological and pharmacokinetic parameters. The model was solved numerically using an adaptive-grid Runge-Kutta method in the Mathcad PLUS 6.0 software package (MathSoft, Cambridge, MA). Using sensitivity analysis, we evaluated the rat model and determined the rate-limiting steps that changed with doses, routes, and species. Statistical validation of the fit of the experimental rat results to the PBPK model simulation results was performed as described by Krishnan and Andersen (25). The kinetic behavior of 2,4-TDA and its metabolite was derived from the implant PBPK model for both rats and humans. The simulated results were compared to the available independent data for validation (10-13,22,29). Finally, a quantitative risk assessment was performed to assess the potential cancer risks in patients with Meme silicone breast implants.

Methods

PBPK model. The model consists of five tissue compartments: two excreting compartments [kidney and gastrointestinal (GI) tract], slowly perfused tissues (e.g., fat), richly perfused tissues (e.g., muscle), and metabolizing tissue (liver). The model is very similar to the styrene and methylene chloride PBPK models (27,28). The hydrolytic PU foam degradation rate constant, Kd, was zero order, with an experimentally determined value of 88 ng 2,4-TDA/g foam/day, as previously reported (5,6). The degradation of the PU foam in vivo was assumed to produce only one product, 2,4-TDA (29). The PBPK model for 2,4-TDA accounts for hepatic metabolism, which is known to produce an active metabolite (18,19). Although there are at least eight known metabolites of 2,4-TDA in the liver, 2,4-TDA was assumed, for simplicity, to have only one metabolite, 4-acetylamino-2-aminotoluene (4-AAT), in this model (19,29). The rate of production of 4-AAT was controlled by an adjustable selectivity in the model (19). The selectivity was defined as the fraction of metabolites of 2,4-TDA that were converted to 4-AAT. The value of the selectivity was set according to the published fraction of metabolites that were 4-AAT (0.32) (19). The metabolite 4-AAT was also distributed, metabolized to an unknown product, and excreted (11). 2,4-TDA was modeled as an equilibrium system between protein-bound and free 2,4-TDA, with the ratio of bound to free kept constant so that 9.7% of the 2,4-TDA was free, as shown in the initial experimental rat plasma samples (29). As shown in Figure 1, blood of uniform composition exiting the mix point flowed through the implant into the arterial circuit. The uniformly mixed arterial blood was then distributed to the other compartments in the body. The blood flow rate to each compartment was determined from published physiological measurements (26).
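A flow-limited PBPK model of this kind is a small system of mass-balance ODEs. The following is a minimal sketch in the spirit of the model described here, not a reproduction of it: five compartments, Michaelis-Menten hepatic metabolism, first-order urinary/fecal loss, and a zero-order implant release term. All parameter values are placeholders, not the paper's fitted values.

```python
# Minimal five-compartment, flow-limited PBPK sketch (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

# tissue: (volume L, blood flow L/h, tissue/blood distribution coefficient D)
tissues = {
    "liver":  (0.012, 0.50, 4.0),
    "kidney": (0.002, 0.35, 3.0),
    "gi":     (0.011, 0.45, 2.5),
    "slow":   (0.150, 0.30, 6.0),   # e.g., fat
    "rich":   (0.050, 1.40, 3.5),   # e.g., muscle
}
names = list(tissues)
V = np.array([tissues[n][0] for n in names])
Q = np.array([tissues[n][1] for n in names])
D = np.array([tissues[n][2] for n in names])
Qtot = Q.sum()
Vmax, Km = 50.0, 10.0            # hepatic metabolism (ng/h, ng/L), placeholders
k_urine, k_feces = 0.05, 0.02    # first-order excretion, kidney / GI (1/h)
Kd = 88.0/24.0 * 4.87            # zero-order release: 88 ng/g/day * 4.87 g foam

def rhs(t, C):
    Cv = C / D                          # venous blood leaving each tissue
    Cart = (Q @ Cv + Kd) / Qtot         # mix point plus implant input
    dC = Q*(Cart - Cv) / V              # flow-limited tissue uptake
    i = names.index("liver")
    dC[i] -= (Vmax*C[i]/(Km + C[i])) / V[i]          # hepatic metabolism
    dC[names.index("kidney")] -= k_urine*C[names.index("kidney")]
    dC[names.index("gi")]     -= k_feces*C[names.index("gi")]
    return dC

sol = solve_ivp(rhs, (0, 24*7), np.zeros(5), rtol=1e-8, max_step=1.0)
print("tissue concentrations after one week (ng/L):")
for n, c in zip(names, sol.y[:, -1]):
    print(f"  {n:>6}: {c:.3g}")
```

The continuous zero-order input at the mix point is what distinguishes the implant route from the iv bolus and oral simulations discussed below.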
The distribution coefficient (D), which is related to the blood/tissue partition coefficient, was defined as the ratio of the concentration of 2,4-TDA or metabolite in the tissue to its emergent venous blood concentration. The 2,4-TDA (and 4-AAT) distribution coefficients and metabolism rate constants were obtained from previously published radioactive distribution studies in rats and mice (11-13). Venous blood exits each compartment at equilibrium with the tissue in that compartment. Unlike the arterial circuit, the blood exiting each compartment in the venous circuit did not have a uniform composition of 2,4-TDA or its metabolite. Before the blood was returned to the implant and the arterial circuit, it was completely mixed at the mix point. All of the 2,4-TDA was metabolized in the liver, and the metabolites were then returned to the venous blood for circulation to the other compartments, for subsequent absorption, further metabolism, or excretion. Metabolites or 2,4-TDA can be excreted in either the urine or the feces (19). The complete model equations are discussed in the Appendix. Table 1 lists the ranges of the parameters used in the model for both rats and humans (26).

Experimental data. Three sets of independent experimental data were used to calibrate the model (29). After iv injection with 0.52 mg/kg 2,4-TDA, blood samples were collected from five female CDF (F-344)/Crl rats at various time intervals (0.05, 0.15, 0.25 hr, etc.) up to 48 hr after dosing. Both the 2,4-TDA and 4-AAT plasma concentrations were measured by a validated GC/MS analytical method with a detection limit of 0.1 ng/ml. These results were used to fit the metabolism parameters for both 2,4-TDA and 4-AAT in the model. As discussed previously, distribution coefficients were assigned from radioactive distribution studies (11,12,16). Urine was obtained for analysis after female CDF (F-344) CRL BR rats (n = 5) received a single sc injection of 2,4-TDA (0.44 mg/kg). The mean fraction of 2,4-TDA excreted unchanged was 0.22% (range 27-51 ng/ml). The corresponding concentration of urinary 4-AAT recovered was less than 1% (range 103-188 ng/ml) within 48 hr after dosing. These results were used to set the urine/feces excretion distribution coefficients. In another study, a ¹⁴C-labeled foam was synthesized from ¹⁴C-TDI, comparable to the foam used in the manufacture of Meme breast implants; this ¹⁴C-labeled foam was implanted sc in young female rats at a dose comparable to 80 mg/kg. Radioactivity was evaluated in urine, feces, and selected tissues at various intervals post implantation (1, 4, 12, and 24 hr, and 7, 15, 21, 28, and 56 days). At 56 days, the cumulative excretion of radioactive material amounted to 2% (urinary 0.9%). The presence of radioactive material in rat urinary and fecal excreta at 56 days indicated a slow and continued degradation of the implanted PU foam in vivo. These results were used to verify our fitted excretion data as discussed above, and served as an independent check of the tissue distribution parameters.

Results

Calibration using iv bolus and sc administration of a low dose of 2,4-TDA in the rat. The fitted experimental plasma concentrations following iv injection of a 0.52-mg/kg dose of 2,4-TDA are shown in Figure 2. 2,4-TDA plasma levels decreased rapidly following iv administration in rats.
Although this dose would be expected to produce a blood concentration of 6,500 ng/ml in the rat, the maximum initial plasma concentration was 633 ng/ml, indicating that 2,4-TDA was rapidly bound to plasma proteins, confirming previous observations (22). The blood flow parameters and compartment volumes were set from published physiological parameters; the tissue distribution coefficients were obtained from published radioactive distribution studies, as discussed earlier (11-13,26). Even with these constraints, the model was robust enough to accurately correlate the experimental results using only the metabolism parameters for both 2,4-TDA and 4-AAT. From the material balance results, the area under the plasma concentration-time curve (AUC) of 2,4-TDA from time zero to infinity was determined to be 62.176 ng·hr/ml, the total body clearance (Cl) was 8,363 ml/hr, and the volume of distribution (Vd) was 1,379 ml/kg (30). The elimination half-life (t½) of free 2,4-TDA in the plasma was 0.074 hr, due to its extensive binding to plasma proteins (22). The protein-bound 2,4-TDA is in equilibrium with the free 2,4-TDA, which is biologically active. The extent to which protein binding influences the delivery of free 2,4-TDA to the target organ may change in certain human disease states (30). After the first 6 hr, the plasma levels of 2,4-TDA dropped to very low values, below the analytical detection limits. As shown, most of the administered dose of 2,4-TDA was rapidly metabolized, and very little was excreted or absorbed (0.22%). The urinary concentration of 4-AAT reached nearly 500 ng/ml, well above detection limits, even though less than 0.3% (by weight) of the original dose was excreted. Thus, the predicted cumulative concentrations of 4-AAT excreted in the urine were well fitted to the urine analysis data obtained from the rat subcutaneous study (29).

Kinetics of 2,4-TDA from implant simulation in the rat. The simulated implant results in the rat are shown in Figures 3 and 4. Unlike the intravenous cases, the implant provided a continuous low-level dose of 2,4-TDA to the tissues. The predicted plasma steady-state concentration (Css) of 2,4-TDA was only 3.4 × 10⁻⁵ ng/ml. Therefore, we do not recommend the use of free plasma 2,4-TDA to monitor the breakdown of PU foam. The plasma level of 4-AAT was also too low to be detected. Table 2 summarizes the calculated pharmacokinetic parameters of free 2,4-TDA and 4-AAT following three different routes of administration of a low dose in the rat.

Kinetics of 2,4-TDA from implant simulation in the human. The physiological and biochemical parameters that were used to adjust the rat model to a human model are summarized in Tables 1 and 3. The tissue distribution coefficients, which were assumed to be the same for both rats and humans, were obtained from the literature (11-13). The metabolism parameter Vmax was extrapolated by scaling on the basis of (body weight)^0.7, where 45.27 was the scaling factor between rats and humans [scaling factor = (58/0.250)^0.7] (27). The results of the human PBPK simulation are shown in Table 2. The model predicted that the Css of free 2,4-TDA and 4-AAT in the plasma of a patient with 4.87 g of Meme breast implant foam and a body weight of 58 kg would be 1.55 × 10⁻⁴ ng/ml and 2.58 × 10⁻⁴ ng/ml, respectively (the corresponding values for the rat would be 3.40 × 10⁻⁵ ng/ml and 5.77 × 10⁻⁵ ng/ml).
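For readers who want to re-derive summary quantities of the kind quoted above, the standard noncompartmental formulas are short. The sketch below (ours, with invented concentration data, not the study's measurements) shows AUC by trapezoids with a log-linear tail, Cl = Dose/AUC, the terminal half-life, and Vd; it also re-checks the allometric scaling factor used for the human Vmax.

```python
# Noncompartmental PK summary from a concentration-time curve (toy data).
import numpy as np

t = np.array([0.05, 0.15, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0])     # hr
c = np.array([633., 480., 360., 200., 80., 15., 1.2, 0.15])   # ng/ml (invented)
dose_ng = 0.52e6 * 0.25          # 0.52 mg/kg in a 0.25 kg rat, in ng

auc = np.sum((c[1:] + c[:-1])/2 * np.diff(t))        # trapezoids, ng*hr/ml
lz = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]       # terminal slope, 1/hr
auc_inf = auc + c[-1]/lz                             # extrapolate to infinity
cl = dose_ng / auc_inf                               # ml/hr
print(f"AUC = {auc_inf:.0f} ng*hr/ml, Cl = {cl:.0f} ml/hr, "
      f"t1/2 = {np.log(2)/lz:.2f} hr, Vd = {cl/lz:.0f} ml")

# Allometric scaling quoted above: (58 kg / 0.250 kg)**0.7 ~ 45.27
print("Vmax scaling factor:", round((58/0.250)**0.7, 2))
```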
The lower steady-state concentration values in the rat were a direct result of higher metabolism, because the distribution coefficients and relative excretions were the same in these calculations. In the human, this corresponds to a plasma level of 16 × 10⁻⁴ ng/ml of total 2,4-TDA (free + conjugated; 9.7% free). As with the rat implant case, we do not recommend using free plasma 2,4-TDA to monitor the breakdown of PU foam from breast implants. However, urinary 4-AAT may be a better biomarker (0.309%, or 0.96 ng/ml). Table 2 also shows a large volume of distribution, indicating high tissue uptake, low clearance, and a small AUC in the human implant case.

Kinetics of 2,4-TDA following low-dose ingestion in the rat. The model was also used to predict the pharmacokinetics of 2,4-TDA introduced by feeding. For this simulation, 0.52 mg/kg 2,4-TDA was delivered to the GI tract and was slowly absorbed into the blood with a first-order rate constant of 0.015 min⁻¹. This rate absorbed about 99% of the 2,4-TDA in about 90 min (16). Along with the results for the iv bolus and the implant, the plasma 2,4-TDA and 4-AAT concentrations are shown in Figures 3 and 4, respectively. At a given low dose, the model predicted lower plasma concentrations of free 2,4-TDA and 4-AAT following the oral route compared to the iv bolus, due to the slower absorption rate. However, these concentrations persisted for a much longer period of time, so the area under the plasma concentration-time curve (AUC = 58.95 ng·hr/ml) was equivalent to that for exposure by the iv route. The 4-AAT urinary concentration reached a maximum of 240 ng/ml after about 1 hr, a level very comparable to the sc result, because the 2,4-TDA was introduced gradually in both cases. The cumulative urinary level of 4-AAT was determined to be less than 0.3% (by weight) of the original dose following an oral bolus dose of 2,4-TDA.

Sensitivity analysis. The PBPK model for 2,4-TDA, as shown in the Appendix, is complex and has several adjustable parameters that give it the flexibility to correlate a large range of possible scenarios. Because the blood volume, blood flow rates, tissue volumes, and excretion flow rates are set from physiological measurements, only the remaining parameters, such as the distribution coefficients and the metabolism rate constants, which were obtained from published animal data (11-13,16), were subsequently fitted to experimental results. Such restraint limited the model to reasonable predictions with physiological implications. The model predictions are sensitive to many parameters, but the model itself is based on the physiology of the system being studied, to keep it realistic. By using a sensitivity analysis, the effect of a change in a parameter on the model prediction can be identified (25). With all models, large changes in some parameters have almost no effect on the model prediction, while small changes in other parameters produce drastic changes in model predictions. Thus, a sensitivity analysis can be used to identify the parameters that most need to be measured by experimental studies. The parameters that are not critical to the model prediction do not require the same degree of experimental effort to quantify for experimental validation of the model. The usefulness of the PBPK approach is to use a mathematical model to predict results that are difficult or impossible to obtain experimentally (25).
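The oral-route figures quoted above follow the standard first-order absorption law, fraction absorbed F(t) = 1 − exp(−ka·t). A quick check (ours) is below; note that reproducing ~99% absorption in ~90 min requires ka ≈ 0.05 min⁻¹, so the 0.015 min⁻¹ printed above and the 99%/90 min figure cannot both be exact as quoted, and one of them may be an extraction artifact.

```python
# First-order GI absorption: F(t) = 1 - exp(-ka*t); t99 = ln(100)/ka.
import math

for ka in (0.015, 0.05):   # min^-1; both values shown for comparison
    f90 = 1 - math.exp(-ka*90)
    t99 = math.log(100)/ka
    print(f"ka={ka:5.3f}/min: absorbed in 90 min = {f90:.0%}, "
          f"99% absorbed after {t99:.0f} min")
```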
Thus, the point of the PBPK model is not to predict results at all points in the human or animal body and compare the results to experimental data at all these points; rather, a few key results are predicted, e.g., plasma concentrations and excretion rates, and these are used for a simple validation (25). The physiology is used to predict the other unmeasurable quantities based on the best available data (blood flows, organ volumes, etc.). This simple validation gives substantially more predictive power than simple one- or two-compartment pharmacokinetic models, which are often validated with the same type of very limited data. Sensitivity analysis identified a few critical model parameters. Both of the metabolism parameters, Vmax and Km, which reflect the activity of the specific liver enzymes that biotransform 2,4-TDA, were found to be the controlling parameters affecting the excretion and accumulation of 2,4-TDA and its metabolites in each compartment. This is consistent with the mode of action of 2,4-TDA reported in the literature (10-19). A 100% increase in Vmax (or decrease in Km) will result in a 10% decrease in the accumulation of 2,4-TDA in each compartment, a minimal change in its excretion, but nearly a 50% decrease in the free plasma concentration. A 50% decrease in Vmax (or increase in Km) will likewise result in only a 10% increase in accumulation in each compartment, a minimal change in excretion, and a large time lag in the reduction of the plasma concentration. In general, as D (a distribution coefficient) increases, the accumulation of 2,4-TDA and its metabolites will increase in that compartment, while accumulation in the other compartments and excretion will decrease to compensate. As expected, because the richly perfused tissue (r) has the largest organ volume as well as the highest percentage of the cardiac output, the value of its distribution coefficient (Dr) also has a significant effect on the model predictions. For example, a 100% increase in Dr will result in nearly 16% reductions in the amounts of 2,4-TDA or its metabolites accumulated in the other compartments, with a minimal effect on excretion. Similarly, a 50% decrease in Dr will result in nearly 8% increases in accumulation, again with a minimal effect on excretion. Similar changes in DL (liver), Ds (slowly perfused tissue), Dk (kidney), and DGI (GI tract) result in very minor changes in the other compartments. These other distribution coefficients have only minor (1-2%) effects on the overall tissue distributions; thus, even large errors in their estimated values will not change the overall predictions of the model. The relative importance of the Dr sensitivity is due to a combination of both organ volume and blood flow: the larger the combination of blood flow and organ volume for a tissue compartment, the more impact changes in its equilibrium distribution coefficient will have on the model output. The uniqueness of the slowly perfused tissue response (e.g., fat tissue) compared to the other compartments is the time lag. This compartment has the highest ratio of tissue volume to blood flow rate; thus, changes in the distribution coefficient of this compartment manifest their effects on a slower time scale (2-3 hr), compared to changes in other compartments, which manifest themselves in about 10 min. The kidney and GI tract are unique, since they are the only compartments in which excretion occurs.
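The perturb-one-parameter procedure described above is mechanical; the sketch below (ours) shows its shape. `run_model` here is a toy surrogate so the snippet is self-contained; in practice it would be the PBPK simulation itself (see the ODE sketch earlier), and the outputs would be compartment accumulations, excretion, and plasma levels.

```python
# Shape of a one-at-a-time sensitivity analysis (toy surrogate model).
def run_model(Vmax=50.0, Km=10.0, Dr=3.5):
    # surrogate: free plasma level falls with metabolic capacity and
    # rises with distribution into richly perfused tissue
    return 1.0 / (Vmax/Km) / Dr

base = run_model()
for name, kwargs in [("Vmax +100%", dict(Vmax=100.0)),
                     ("Km   +100%", dict(Km=20.0)),
                     ("Dr   +100%", dict(Dr=7.0))]:
    rel = (run_model(**kwargs) - base) / base
    print(f"{name}: output changes by {rel:+.0%}")
```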
Although changes in the distribution coefficients have only minor effects on excretion, the values of Durine and Dfeces, along with the flow rates of urine and feces, have a very significant effect on excretion. However, solubility limitations and the physiology of the rat put limits on how high or how low one can set the flow rates of urine and feces.

Statistical analysis. For statistical validation (25) of how well the experimental rat results fitted the PBPK simulation results, the log (base 10) transformed 2,4-TDA rat blood data were shown to follow the same linear model as the log (base 10) transformed 2,4-TDA PBPK model predictions over the first half hour, i.e., there was no statistically significant difference in either slope or intercept between the transformed data and the transformed predictions, even at the 10% significance level. For 4-AAT, the residuals, or differences between the log (base 10) transformed 4-AAT rat blood data and the log (base 10) transformed 4-AAT PBPK model predictions from 0.05 to 0.5 hr, were shown to have a zero mean (no statistically significant difference from zero, even at the 40% significance level), with a 90% tolerance interval with 90% coverage for individual residuals of 43% to 202% of the model prediction for the rat data.

Discussion

In the safety evaluation of PU foam-covered breast implants, it is difficult to assess the carcinogenic potential of the Meme breast implants relative to 2,4-TDA, a mutagen and carcinogen known to be released from the device (20-24). The data available for this purpose are generally high-dose feeding (4.7-10.6 mg/kg) animal data (14-16,18). Therefore, we need pharmacokinetic models that rely on realistic physiological parameters and chemical-specific parameters to be able to extrapolate between high and low doses, between different routes of administration, and from animal to human (25). The objective of this paper is to use the PBPK model for 2,4-TDA in rats to predict the long-term distribution, excretion, and metabolism in the low-dose infusion cases that are characteristic of humans with breast implant degradation. The PBPK model for 2,4-TDA adequately simulated the experimental 2,4-TDA and 4-AAT rat plasma data (Fig. 2). The fit of the experimental rat results to the PBPK model simulation results was validated statistically, as shown earlier (25). Table 2 compares the pharmacokinetic data of 2,4-TDA between the three simulated routes of exposure in rats. The data obtained from the mass balance studies shed light on how the implant route produces a large volume of distribution (Vd = Dose/Cmax = 54,900 ml/kg). Such a large Vd reflects high tissue uptake and accumulation of the initial dose of free 2,4-TDA and metabolites in target tissues such as the liver, kidney, and fat (Table 2). As shown, the simulated data obtained in this study agree well with the distribution studies using radiolabeled 2,4-TDA in animals, which indicated the highest accumulation of 2,4-TDA and metabolite in the richly perfused tissue, slowly perfused tissue, GI tract, kidney, and liver (11-13,16). The model also predicts a slow excretion of 2,4-TDA from an implant (5% excreted in the urine in the first week and 0.94% excreted in the feces) compared to an iv bolus and oral administration at a given dose, confirming the experimental results obtained following implantation of ¹⁴C-labeled foam (80 mg/kg), in which approximately 2% of the total radioactivity was excreted after 56 days (29).
As shown in Table 2, the PBPK model predicts that the continuous low-dose infusion of 2,4-TDA produced by PU foam hydrolysis from breast implants produces a very small AUC (2 × 10⁻⁴ ng·hr/ml). The very small AUC is due both to the slow rate of degradation of the implant, which is assumed in this model to be similar to the rate obtained in vitro (88 ng 2,4-TDA/g of foam per day under mild simulated physiological conditions) (5,6), and to the fact that most 2,4-TDA produced from the breakdown of the PU foam cover is extensively bound to plasma protein (only 9.7% free). The plasma protein binding of 2,4-TDA was obtained experimentally in the rat iv study (90.3% bound). The model-predicted results in this study confirmed that carcinogenic degradation products that are produced from implanted PU foam are stored in target organs and small amounts are excreted in animals and humans (2,20-25). Although there are no data available at this time to show a cause-and-effect relationship between the use of this PU foam and production of cancer in humans, the lifetime duration of exposure to the device, the high tissue uptake, and the slow clearance of 2,4-TDA from breast implants are important factors contributing to the potential hazard of 2,4-TDA released from the PU foam-covered breast implants in humans (30). Indeed, in distribution studies, the levels in every organ tested at 24 hr were reported to be higher in the rat than in the mouse, indicating more toxicity with slower clearance and larger distribution in rats; this is consistent with the higher cancer rate in rats than in mice (11-13). At a given low dose, the iv bolus and the oral administration have about the same clearance and equivalent AUC, as shown in Table 2. In the oral administration, however, a sustained level of 2,4-TDA or metabolite was found in the plasma and tissues, indicating a longer residence time (longer t1/2) and larger tissue distribution (Vd = 12,570 ml/kg) than the iv bolus administration (16). 2,4-TDA persists longer in the body following oral administration of a small dose (0.5 mg/kg), confirming the results obtained by Timchalk et al. (16). Therefore, the model-predicted data confirm earlier findings that suggest the susceptibility to carcinogenic effects of 2,4-TDA is also dependent on the route of administration, which affects 2,4-TDA kinetic behavior and thus the dose to target organs (16). As shown in Figure 4, the predicted steady-state plasma concentration (Css) of free 2,4-TDA in the rat is very low (only 3.4 × 10⁻⁵ ng/ml), too low to be detected by the current state-of-the-art analytical instrumentation (detection limits <0.1 ng/ml). However, the pharmacokinetic data derived from this PBPK model can be used to determine the total daily dose and to estimate exposure. The standard pharmacokinetic equation that relates infusion rate, steady-state plasma concentration, and clearance of a chemical is: Infusion Rate = Css × Clearance. This equation predicts the rate of release of free 2,4-TDA from the implants in vivo by assuming that the release rate is equivalent to a constant slow infusion. Therefore, in the rat model, Release Rate = Css × Clearance = 3.4 × 10⁻⁵ ng/ml × 9,320 ml/hr/kg = 0.317 ng/hr/kg = 0.317 × 24 hr/day ÷ 1,000,000 ng/mg = 7.60 × 10⁻⁶ mg/kg/day. As such, the PBPK model provides a quantitative estimate of exposure that relates both the plasma steady-state level and the clearance rate.
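The rat numbers above reduce to two lines of arithmetic; the constants below are exactly those quoted in the text.

```python
# Arithmetic behind the rat release-rate estimate above: at steady state,
# infusion (release) rate = Css x clearance. Values are taken from the text.
css_rat = 3.4e-5          # steady-state free 2,4-TDA in plasma (ng/ml)
cl_rat = 9_320.0          # clearance (ml/hr/kg)

rate_ng_hr_kg = css_rat * cl_rat             # 0.317 ng/hr/kg
rate_mg_kg_day = rate_ng_hr_kg * 24 / 1e6    # 7.6e-06 mg/kg/day
print(f"{rate_ng_hr_kg:.3f} ng/hr/kg = {rate_mg_kg_day:.2e} mg/kg/day")
```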
Estimated human health risks from breast implants. Essentially, to estimate human risks, we have to derive the human exposure from the release rate of 2,4-TDA from the breast implants, using the results of the human PBPK simulation shown in Table 2. As with the rat, the clearance is calculated using the model by dividing the total administered dose by the AUC (30). The model predicts that the Css of free 2,4-TDA in plasma is 1.55 × 10⁻⁴ ng/ml (9.7% free) in a patient with 4.87 g PU foam in the Meme breast implants and a body weight of 58 kg. This corresponds to a plasma level of 16 × 10⁻⁴ ng/ml of total 2,4-TDA (free + conjugated). Thus, the release rate of 2,4-TDA from the foam is as follows: Release Rate = 16 × 10⁻⁴ ng/ml × 2,175 ml/hr/kg = 3.48 ng/hr/kg = 3.48 × 24 hr/day ÷ 1,000,000 ng/mg = 83.52 × 10⁻⁶ mg/kg/day. This corresponds to a lifetime average daily dose (LADD) of 11.93 × 10⁻⁶ mg/kg/day, assuming that the lifetime of the implant was 10 years (10/70). Therefore, Upper Confidence Limit on Risk = Potency Factor × LADD = 0.21 (mg/kg/day)⁻¹ × 11.93 × 10⁻⁶ mg/kg/day = 2.50 × 10⁻⁶ = 1 in 400,000. A potency factor (or slope factor) of 0.21 (mg/kg/day)⁻¹ was used for 2,4-TDA (23), which results in an estimated excess cancer risk of 1 case in 400,000. After 2,4-TDA was found as a degradation product of the Meme breast implants (2-9), a risk assessment was completed (23,24). A small increase in lifetime risk of breast cancer (5 in 10 million) was reported, based on a limited in vitro rate of release of 2,4-TDA (23,24). Recently, based on the measured levels of TDAs in the blood of patients with the Meme PU-covered breast implants, Sepai et al. (22) estimated an increased lifetime risk of breast cancer of 149 cases in 1 million, which is approximately 60 times more than the value we obtained in this report using a PBPK model. We believe that the discrepancy comes from Sepai et al. (22) calculating the risk using the postimplant plasma levels of total 2,4-TDA (4.4 ng/ml) obtained in patients with the Meme breast implants; these levels were 1,000 times higher than the Css plasma level of 2,4-TDA obtained in this model. Simply by increasing the rate of the in vitro degradation that we used in our simulations by a factor of 1,000, we were able to reproduce the Css reported by Sepai et al. (22). No other changes in parameters are needed. As indicated earlier, to model the release of 2,4-TDA in the rat and human, we used the low rate of degradation of the PU foam obtained in vitro using phosphate-buffered saline, pH 7.4, at 37°C (88 ng/g/day) in our calculation. The higher plasma levels of total TDA that Sepai et al. (22) obtained in patients indicate that the PU foam degrades faster in vivo (2-4). Although the susceptibilities of the PU foam to water, buffer, oxygen, and enzymes at physiological conditions are known (2-9), the actual leach rate of 2,4-TDA in vivo is not well documented. Another factor that would affect the bioavailability of 2,4-TDA is plasma protein binding, which was lower in the clinical study reported by Sepai et al. (22). Thus, PBPK models are limited by available biochemical data such as leaching rate, protein binding, etc.; however, they are effective tools for evaluating the safety of 2,4-TDA released from implants.
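The same chain for the human case, from total-plasma steady-state level to LADD and upper-bound excess risk, can be checked the same way; inputs are the values quoted in the text, and the 10/70 factor encodes the assumed 10-year implant lifetime over a 70-year lifespan.

```python
# Human release rate -> lifetime average daily dose (LADD) -> upper-bound
# excess cancer risk, using the values quoted in the text.
css_total = 16e-4         # total 2,4-TDA in plasma (ng/ml)
cl_human = 2_175.0        # clearance (ml/hr/kg)
potency = 0.21            # slope factor, (mg/kg/day)^-1

rate_mg_kg_day = css_total * cl_human * 24 / 1e6   # 8.352e-05 mg/kg/day
ladd = rate_mg_kg_day * 10 / 70                    # 1.193e-05 mg/kg/day
risk = potency * ladd                              # ~2.5e-06, i.e., 1 in 400,000
print(f"LADD = {ladd:.3e} mg/kg/day, risk = {risk:.2e} (1 in {1/risk:,.0f})")
```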
In general, the risk assessment process has often relied on a number of assumptions that make it imprecise in determining the level of exposure of toxic chemicals (at acceptable risks) to the human population. In this aspect, the PBPK model, which uses both physiological parameters (such as organ volumes, blood flows, etc.) and chemical-specific parameters (such as tissue distribution coefficients and biotransformation rates), can be used to predict the kinetics of chemicals and extrapolate between different routes of compound administration and between species. Conclusions Despite the extreme complexity of the rat, a relatively simple five-compartment PBPK model gives some useful information about the mechanism of toxicity and the route-dependent metabolism of 2,4-TDA. * The model provides an objective mechanism for determining the bioavailable dose of the parent compound and/or its active carcinogenic metabolite(s) at target organs, which are inaccessible experimentally. * The plasma level of free 2,4-TDA should not be used as a biological marker for polyurethane foam degradation because of its extensive protein binding. * The low release rate of 2,4-TDA from implant foam degradation will produce a distribution and excretion pattern similar to a low-dose (0.5 mg/kg) iv bolus or feeding, but with a longer half-life and a larger volume of distribution, which indicates higher accumulation in the body. * Although the model has several adjustable parameters, the only really important parameters are the metabolism rate constants Vmax and Km and the distribution coefficient in the richly perfused tissues. Other parameters, such as the equilibrium distribution coefficients in other compartments, did not affect the 2,4-TDA disposition significantly. Physiological limits put restraints on many other parameters to keep the model predictions realistic. * By identifying Vmax and Km as rate-limiting steps in the biotransformation of 2,4-TDA, the PBPK model facilitates the extrapolation across species and from one route to another. The PBPK results for humans can be used to predict an excess lifetime risk of breast and liver cancer of 1 in 400,000 in patients with the PU foam-covered breast implants. At our present stage, the risk associated with exposure to a chemical cannot be accurately characterized by a single number or even a range of numbers. The use of this type of PBPK method will provide the Food and Drug Administration (FDA) with tools for the interpretation of data within a single species, for comparison of pharmacokinetic results and effects between different species, and to improve the prediction of human effects of many chemicals that are leached or degraded at a low dose rate from medical implants. The model is most useful when the plasma levels of the parent compound (e.g., 2,4-TDA) and/or its main metabolite are below the limit of detection of current analytical methods (0.1 ng/ml) due to plasma protein binding. Further research is needed on the biochemical and physicochemical characteristics of importance to biokinetic modeling.
7,512.4
1998-07-01T00:00:00.000
[ "Medicine", "Biology", "Chemistry" ]
Intracellular accumulation capacity of gadoxetate: initial results for a novel biomarker of liver function Previous studies have shown gadoxetate disodium's potential to represent liver function by its retention in the hepatobiliary phase. Additionally, in cardiac imaging, quantitative characterization of altered parenchyma is established by extracellular volume (ECV) calculation with extracellular contrast agents. Therefore, the purpose of our study was to evaluate whether the intracellular accumulation capacity (IAC) of gadoxetate disodium, derived from the ECV calculation, provides added scientific value in terms of liver function compared to the established parameter reduction rate (RR). After local review board approval, 105 patients undergoing a standard MR examination with gadoxetate disodium were included. Modified Look-Locker sequences were obtained before and 20 min after contrast agent administration. RR and IAC were calculated and correlated with serum albumin as a marker of synthetic liver function. Correlation was higher between IAC and albumin than between RR and albumin. Additionally, the capacity of both RR and IAC to distinguish between patients with or without liver cirrhosis was investigated; both parameters differed significantly in their respective means between patients with cirrhosis and those without. We concluded that the formula to calculate ECV can be transferred to calculate the IAC of gadoxetate disodium in hepatocytes, and, thereby, IAC may possibly qualify as an imaging-based parameter to estimate synthetic liver function. Imaging protocol. MR imaging was performed on a 1.5 T scanner (Avanto, Siemens Medical Solutions, Erlangen, Germany) equipped with a 32-channel body-phased-array coil. The standard liver MR imaging protocol using the hepatocyte-specific contrast agent gadoxetate disodium comprises an axial T1-weighted spin echo sequence, an axial fat-saturated T2-weighted turbo spin echo sequence acquired with a 2D navigator for abdominal imaging, an axial T1-weighted dual echo sequence, axial T1 VIBE (volume-interpolated breath-hold) sequences for dynamic imaging before and 15 s, 55 s and 2, 5, 10 and 20 min after contrast agent administration, and a coronally orientated T1 VIBE sequence for the hepatobiliary phase at least 20 min after contrast agent administration (12). Study sequences. Apart from the clinical routine imaging protocol, patients received steady-state precession readout single-shot Modified Look-Locker inversion recovery (MOLLI) sequences in the axial plane before contrast agent administration and throughout the hepatobiliary phase. T1 maps were calculated automatically on a pixel-by-pixel basis and displayed on a 12-bit lookup table with a visible color-coded map, on which the signal intensity of each pixel reflects its absolute T1 value. The imaging parameters for the study sequence are shown in Table 1. Image analysis. All images were analysed using PACS workstations (Centricity Radiology; GE Healthcare) by a radiologist with 12 years of experience in MR imaging. MOLLI images were assessed by a single reader, blinded to the paraclinical and histopathological findings, if present. Regions of interest (ROIs) of maximal size were placed in the liver parenchyma on native as well as on post-contrast MOLLI images. ROIs were placed avoiding larger vessels and focal lesions to prevent partial volume averaging. Additionally, ROIs were placed in the abdominal aorta both on precontrast and postcontrast images. Clinical and paraclinical parameters.
Patients' electronic medical records were searched for history of liver disease and documented pertinent laboratory data (hematocrit and albumin, where available). Calculation of intracellular accumulation capacity. Calculation of the ECV normalized for hematocrit using an extracellular contrast agent has been described elsewhere (9,11,13,14): ECV = (1 − hematocrit) × ΔR1_tissue/ΔR1_blood, where ΔR1 = 1/T1_post − 1/T1_pre. For hepatocyte-specific contrast agents, one may assume a low-level equilibrium between the intravascular and extracellular compartments, and a mainly intracellular storage. An illustration can be found in Fig. 1. Therefore, the above-mentioned formula may allow for the calculation of IAC, with a minor correction by subtracting the low-level amount of extracellular contrast agent. The reduction rate was calculated as described elsewhere (4): RR = (T1_pre − T1_post)/T1_pre × 100. Statistical analysis. All statistical analyses were performed using SPSS Version 24 (IBM Corporation, New York, USA). For a test of normality, the Shapiro-Wilk test was applied. Accordingly, the t-test and the Mann-Whitney U test were applied. Correlation analyses were performed with Spearman's rho and Pearson's correlation coefficient. Data are expressed as means and standard deviation (SD). Results Hepatic T1 relaxation times varied between 319.67 and 819.00 ms (mean 564.7 ms) for the native sequences, and between 137.67 and 460.00 ms (mean 219.41 ms) for hepatobiliary sequences. The hematocrit varied between 0.238 and 0.492 l/l. Albumin values were available in 69 of the patients and varied between 20 and 48 g/l (mean 39.89 g/l). IAC showed a mean of 3.33 with a minimum of 0.36 and a maximum of 8.37. The reduction rate varied between 21.19 and 78.09 with a mean of 61.06. Correlation between both measures was strong (15), with a Pearson's r of 0.697 (p < 0.001); a visualization of this correlation is given in Fig. 2. Albumin. As a simplified surrogate parameter for liver function, albumin was chosen, and correlations with both RR and IAC were computed. Correlation, as expressed by Spearman's ρ, was higher between IAC and albumin (ρ = 0.364, p = 0.003) than between RR and albumin (ρ = 0.285, p = 0.022) (Fig. 4), although both associations have to be classified as weak (15). Discussion Based on the findings of our exploratory investigation, we wish to propose the intracellular accumulation capacity (IAC) as a novel imaging marker for liver function on Gd-EOB-DTPA-enhanced MR imaging. IAC can be calculated easily from T1 mapping sequence image data, and it correlates slightly better with serum albumin, as a simplified surrogate parameter of liver function, than the reduction rate. Our results show a close correlation between values of the reduction rate and those of IAC. Equally, predictability of cirrhosis is comparable between both parameters. We thereby conclude that the approach to calculate ECV using an extracellular contrast agent can be transferred to the intracellular space in hepatic imaging using the hepatocyte-specific contrast agent gadoxetate disodium. Quantitative measurement of the myocardial tissue T1 time has long been used and is now well established for the assessment of diffuse myocardial fibrosis.
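Read literally, the RR and ECV formulas above fit in a few lines. The sketch below is only a schematic reading of the method: the exact extracellular correction that turns the ECV-style ratio into IAC is described only qualitatively in the text, so it appears here as a hypothetical `ecv_correction` argument, and the input values are invented but lie inside the ranges reported in the Results.

```python
# Sketch of the relaxometry indices discussed above. RR and the ECV-style
# ratio follow the standard T1-mapping formulas; the small extracellular
# correction that turns ECV into IAC is not spelled out in the text, so
# `ecv_correction` is a hypothetical placeholder.
def reduction_rate(t1_liver_pre, t1_liver_post):
    """RR in percent: relative T1 shortening of liver parenchyma."""
    return 100.0 * (t1_liver_pre - t1_liver_post) / t1_liver_pre

def iac(t1_liver_pre, t1_liver_post, t1_blood_pre, t1_blood_post,
        hct, ecv_correction=0.0):
    """ECV-style index from pre/post T1 (ms) of liver and aortic blood."""
    d_r1_liver = 1.0 / t1_liver_post - 1.0 / t1_liver_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    ecv_like = (1.0 - hct) * d_r1_liver / d_r1_blood
    return ecv_like - ecv_correction   # subtract extracellular contribution

# Hypothetical patient: values inside the ranges reported in the Results.
print(reduction_rate(565.0, 219.0))            # ~61, near the reported mean
print(iac(565.0, 219.0, 1500.0, 1050.0, 0.40))
```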
Nevertheless, the accuracy of post-contrast T1 mapping is sensitive to many confounding factors, such as the dose and concentration of the contrast agent, the time delay between contrast agent administration and image acquisition, the wash-out rate, and the hematocrit level, which affects the partition coefficient of the gadolinium contrast agent, as gadolinium resides only in plasma and does not enter intact red blood cells (8,16). Calibrating the T1 maps to blood, including its hematocrit, seemed to be a valuable tool to circumvent at least several of these confounding factors, and led to the description of a new parameter named ECV (extracellular volume) (9,10,17). Calculation of the percentage of the noncellular tissue volume (ECV) is established in cardiac magnetic resonance imaging and is based on the assumption of an equilibrium between the intravascular compartment and the extracellular compartment in organs such as the myocardium or the liver. The myocardial ECV is measured as the percent of tissue comprised of extracellular space, which is a physiologically intuitive unit of measurement and is independent of field strength (10). As most of the aforementioned confounding factors to myocardial T1 relaxation times also apply to hepatic relaxation times, we assumed that description of liver function using gadoxetate disodium might also benefit from a calibration to blood including its hematocrit. Whereas extracellular contrast agents show an equilibrium between the extracellular space and the intravascular compartment without relevant intracellular uptake, with gadoxetate disodium in the hepatobiliary phase one may assume a mainly intracellular storage and a low-level equilibrium between the intravascular and extracellular compartments (18). Therefore, the adaptation of the ECV formula to IAC includes a minor correction by subtracting the low-level amount of extracellular contrast agent. The thereby calculated IAC can range from close to 0, meaning no relevant intracellular accumulation with a gadoxetate retention similar to blood, to around eight, indicating a high intracellular accumulation capacity due to well-functioning hepatocytes, as Gd-EOB-DTPA uptake into hepatocytes depends on the integrity of the hepatic transport proteins (4,5,19,20). There are other approaches to quantify liver function using gadoxetate disodium, most of which refer to signal intensity measurements and their derived measures: liver-to-spleen ratio, liver-to-muscle ratio, and relative enhancement (21-26). Still, apart from field strength, signal intensities in MRI are also dependent on the manufacturer, the device, and the sequence used, and therefore cannot be easily transferred; in direct comparison, relaxation time measurements showed a higher correlation to scintigraphically evaluated liver function than signal intensity-based measurements (21). Most recently, another approach using relaxometry was presented by Liu et al., using the hepatocyte enhancement fraction as a surrogate parameter for liver function, which is calculated from the T1 differences before and after the enhancement of liver and spleen using a double-compartment model (27). This approach appears to be promising, and a direct comparison to the approach presented here would be desirable. Still, it includes a comparison to another organ, the spleen, thereby possibly introducing new confounding factors.
To further evaluate the diagnostic value of IAC, we correlated it to serum albumin, which is one of the parameters that can be measured in peripheral blood and is used to assess liver synthetic function in clinical practice. Compared to the reduction rate, the correlation was higher with IAC, a finding which might support the assumption that calibrating the T1 values to blood circumvents a few of the confounding factors limiting the reproducibility of T1 values (10) and thus might lead to more precise results. This might be of value at an earlier stage of reduced liver function, even though the test is not specific for liver disease: albumin serum levels are also reduced in patients with malnutrition or malabsorption, protein-losing enteropathy, or nephrotic syndrome. Nevertheless, even though we cannot confidently exclude malnutrition as a confounding factor, according to the accessible medical records the other typical preconditions for serum albumin alterations were not present in our study population. Yet, in the present collective, prediction of cirrhosis was not improved by IAC compared to the reduction rate. Due to the small number of patients with cirrhosis (n = 20), as well as the heterogeneity of its etiology and stage, we regard these results as preliminary and cannot draw any final conclusion as to IAC's value in cirrhotic livers. Limitations. There are some limitations to this investigation that need to be addressed: the study, as approved by and registered with the local ethics committee, did not include an additional blood sample drawn for scientific purposes. Therefore, possibilities to measure liver function were limited, as the only parameter which could be obtained in a relevant part of the patient collective was albumin, present in at least about 65% of patients. In a prospective approach, dedicated liver function tests such as the ICG or the LIMAx test could have been applied; additionally, more patients with highly decreased liver function should be examined. Additionally, liver imaging was performed mainly for characterization of focal liver lesions; therefore, the majority of patients included did not have relevantly impaired liver function. However, advantages of the IAC over the reduction rate are to be expected in patient populations with reduced liver function; therefore, further studies are warranted to characterize the benefits of this imaging marker further. Conclusion The formula to calculate ECV can be transferred to calculate the IAC of hepatocytes using the hepatocyte-specific contrast agent gadoxetate disodium. Additionally, IAC shows a higher correlation to albumin, thus possibly qualifying as a new image-based parameter to estimate synthetic liver function.
2,654.4
2020-10-22T00:00:00.000
[ "Medicine", "Biology" ]
Combinatorial Bayesian Dynamic Linear Models of Bridge Monitored Data and Reliability Prediction Considering the uncertainties and randomness of the mass structural health monitored data, the objectives of this paper are to present (a) a procedure for effective incorporation of the monitored data in the reliability prediction of structural components or structures, (b) a transforming method for Bayesian dynamic linear models (BDLMs) based on a 1-order polynomial function, (c) a model monitoring mechanism used to look for possible abnormal data based on BDLMs, (d) combinatorial Bayesian dynamic linear models based on multiple BDLMs and their corresponding weights of prediction precision, and (e) an effective way of taking advantage of combinatorial Bayesian dynamic linear models to incorporate the historical data and real-time data in structural time-variant reliability prediction. Finally, a numerical example is provided to illustrate the application and feasibility of the proposed procedures and concepts. Introduction Long-term ambient environments, such as chemical attack from environmental stressors and continuously increasing traffic volumes, subject the physical quantities of civil infrastructure to changes in both time and space; these changes have serious impacts on the serviceability and the ultimate capacity of structures and, further, on the remaining life of an existing structure [1]. The degradation processes of structural performances (e.g., resistance, reliability indices), which are usually considered as Markov chains, are time-variant and irreversible. The time-variant reliability indices of bridges depend on both the applied loads and the remaining strength of structural components or systems, which can reflect the safety and serviceability of bridges, and the reliability indices can be solved with the first-order second-moment (FOSM) method [2,3]. Therefore, assessing as well as predicting the structural time-dependent reliability indices is crucial for structural safety and serviceability assessment.
Through health monitoring of bridges, the structural basic statuses, including strains, stresses, and deflections of specified structural components or structures, can be obtained. Nowadays, the research on structural health monitoring (SHM) generally experiences two stages. The first stage, now mature, is to install an array of sensors for the observation and collection of data on a bridge structure during a period of time [4-7]. The second stage is mainly the application of health monitoring information. A sound number of studies are mainly focused on modal parameter identification, structural damage detection technology, performance prediction, reliability assessment, and other fields [8,9]. For research on bridge reliability prediction and assessment, some achievements [10-14] have been obtained, such as the reliability assessment of a long-span truss bridge, structural performance prediction based on monitored extreme data, and the use of the statistics of extremes for the reliability assessment and performance prediction of monitored highway bridges. However, due to the uncertainty of the bridges' real-time health monitored data, the research on reliability prediction that directly incorporates such data is still limited. Bayesian Dynamic Linear Models (BDLMs) BDLMs are predicting approaches based on a philosophy of information updating [15,16], which define a dynamic model system of time series processes that can incorporate all useful monitored information into the model to update the prediction model. The BDLMs include a state equation, an observation equation, the initial information, and the time-dependent probability recursion processes based on the Bayesian method. The state equation shows changes of the system with time and reflects inner dynamic changes of the system and random disturbances. The observation equation expresses the relationship between the measured data and the current state parameters of the system. According to the definition of BDLMs [15], for each time t, the general dynamic linear model is characterized by the quadruple {1, 1, V_t, W_t} and formally defined as follows: observation equation: y_t = θ_t + ν_t, ν_t ~ N[0, V_t] (1); state equation: θ_t = θ_{t−1} + ω_t, ω_t ~ N[0, W_t] (2); initial information: (θ_0 | D_0) ~ N[m_0, C_0] (3), where y_t is the observation data at time t. In this paper, the BDLMs mean that the observation equation and the state equation are both linear, as shown in (1) and (2). The 1-order polynomial function model is adopted to build the state equations. Transferred State Equation Based on 1-Order Polynomial Function and Monitored Data. For the mass and random monitored extreme data, especially the monitored data at time t−1 and before, the discretized motion equation and the fitted 1-order polynomial function, which is commonly used for the prediction of trend data, are adopted to predict the future stress data at time t, so the 1-order polynomial function can be applied to properly build the BDLMs. (1) 1-Order Polynomial Function of Monitored Data. Consider θ_t = a + b·t + ω (4), where θ_t is the trend data (state variable) at time t; a and b are coefficients; ω is the state error indicating the model uncertainty; and t is the total monitored time, the unit of which is day.
(2) State Equation Based on (4). The first-order differential of (4) is considered as the discretized motion equation dθ_t/dt = λ_t, where λ_t is the nominal speed of the trend data θ_t, which can be obtained with (4), and an error term is added to account for model uncertainty. For simplicity, we consider a discretization over a small interval of time (t−1, t), as follows: θ_t − θ_{t−1} = λ_t·(t − (t−1)) + ω_t; that is (7), θ_t = θ_{t−1} + λ_t·(t − (t−1)) + ω_t, where it is assumed that the random error ω_t has density N[0, W_t = σ²] and can be estimated with (13). With a further simplification, we take unitary time intervals of one day, namely, t − (t−1) = 1, so that (7) can be rewritten as (8): θ_t = θ_{t−1} + λ_t + ω_t, where (8) will be used to build the state equation of the BDLMs, which will be shown in (10). Transferred BDLMs Based on 1-Order Polynomial Function. Based on Section 2.1, according to the definition of BDLMs [15,17], for each time t, the general and easy forms of the dynamic linear models are defined as follows: observation equation (9): y_t = θ_t + ν_t, ν_t ~ N[0, V_t]; state equation (10): θ_t = θ_{t−1} + λ_t + ω_t, ω_t ~ N[0, W_t]; initial information (11): (θ_{t−1} | D_{t−1}) ~ N[m_{t−1}, C_{t−1}], where y_t is the monitored data at time t; θ_t is the state parameter indicating the level of the monitored data at time t; λ_t is obtained with (4); and ν_t and ω_t are, respectively, the monitored error and the state error at time t, which are all zero-mean normal random variables. For each time t, the BDLMs include the following parameters: V_t is the variance of monitored errors at time t; W_t is the variance of the state error at time t; and ν_t and ω_t are, respectively, monitored errors and state errors. It is assumed that the error sequences ν_t and ω_t are internally independent, mutually independent, and independent of (θ_{t−1} | D_{t−1}). With (9)-(11), the relationships between monitored data and state parameters are shown in (12). It can be known from (12) that the modeling processes of BDLMs can be divided into two key steps, which are shown in Figure 1. The first step is to obtain the a priori probability density function (PDF) of θ_t at time t based on the state equation and the a posteriori PDF of θ_{t−1} at time t−1; the second step is to get the a posteriori PDF of θ_t at time t based on the a priori PDF of the state parameters at time t and the inspection/monitored data at time t. In this paper, the monitored interval period of extreme stress data is one day; V_t is estimated with the variance of the monitored data. According to the research of [15], W_t can be approximately solved with (13): W_t = G·C_{t−1}·G^T·(1 − δ)/δ, where G^T is the transpose of the state regression coefficient G (equal to 1 here) and δ is the discount factor defined by engineering experience, which is usually 0.95-0.98 for BDLMs based on a 1-order polynomial function. Prediction and Monitoring of Monitored Extreme Stress Based on Combinatorial BDLMs 3.1. Assumptions of the BDLMs. BDLMs are presented as a special case of a general state-space model, being linear and Gaussian, so the BDLMs satisfy the assumptions of a state-space model. The basic assumptions [15,18] of a state-space model are as follows: (a) State variables, observation errors, and state errors all follow normal distributions. (b) (θ_t) is a Markov chain [15], which is shown as follows. Dependence Structure for a State-Space Model. Consider p(θ_t | θ_{t−1}, θ_{t−2}, ..., θ_0) = p(θ_t | θ_{t−1}), where p(⋅) is a general PDF. The recursive relation between state variables and inspection/monitored variables is shown in (14). If the initial state data follows another distribution, then the distribution can be approximately obtained as follows: (1) With the kernel density estimation method, the actual distribution function F(θ_{t−1}) of the initial state data is approximated by F̂(θ_{t−1}). (2) Since any set of data can be fitted by a few normal distributions, namely, F̂(θ) ≈ Σ_{i=1}^{n} w_i·Φ((θ − μ_i)/σ_i), where Σ_{i=1}^{n} w_i = 1 and w_i ≥ 0, and Φ(⋅) denotes the cumulative probability distribution function of the standard normal distribution.
(3) The weights and distribution parameters of the fitted normal distributions can be obtained with the least residual error quadratic sum method (OLS); namely, OLS = Σ_j [F̂(θ_j) − Σ_{i=1}^{n} w_i·Φ((θ_j − μ_i)/σ_i)]², where w_i is the weight. The values of the unknown parameters for the fitted distributions can be obtained by optimization computation with the rule of OLS; furthermore, the optimized parameters must be determined so that the value of OLS is the minimum. According to the definition of the highest a posteriori density (HPD) region [15], the predicted interval of the monitored data with a 95% confidence level at time t is [f_t − 1.645√Q_t, f_t + 1.645√Q_t], where f_t − 1.645√Q_t is the predicted lower limit value and f_t + 1.645√Q_t is the predicted upper limit value. (4) The a posteriori distribution at time t: (θ_t | D_t) ~ N[m_t, C_t], with m_t and C_t given by the updating recurrences listed at the end of this section. (5) The predicted probability distribution based on the arithmetic mean at time t is obtained by averaging the one-step predictions of the individual models. (6) The combinatorial one-step prediction: f_{t,c} = Σ_{i=1}^{n} w_{t,i}·f_{t,i}, with w_{t,i} = Q⁻¹_{t,i}/Σ_{j=1}^{n} Q⁻¹_{t,j}, i = 1, 2, ..., n, Σ_{i=1}^{n} w_{t,i} = 1, and Q⁻¹_{t,c} = Σ_{i=1}^{n} Q⁻¹_{t,i}, where Q⁻¹_{t,i} is the predicted precision of the i-th model (the reciprocal of the one-step predicted variance) and Q⁻¹_{t,c} is the predicted precision of the combinatorial prediction model. (7) Comparison of the predicted precisions between the combinatorial prediction model and the prediction model based on the arithmetic mean shows the former to be the higher. In this paper, the main idea of the model monitoring mechanism is to use one or more alternative models to compare and evaluate model performance. According to the research [17], model monitoring is achieved through Bayesian factors under the normality assumption. The main idea is first to build an alternative model and then to combine it with the existing probability model to construct the formula of the Bayesian factors. In this paper, the adopted probability density functions of the alternative model and of the one-step prediction model are, respectively, given in (31) and (32), where u_t = (y_t − f_t)/√Q_t is the standard prediction error and √Q_t is the standard deviation of y_t. The Bayesian factor for p_0(⋅) versus p_1(⋅) based on the observed value of y_t is defined as (33): B_t = p_0(y_t | D_{t−1})/p_1(y_t | D_{t−1}). For integers k = 1, ..., t, the Bayesian factor for p_0(⋅) versus p_1(⋅) based on the sequence of k consecutive observations y_t, y_{t−1}, ..., y_{t−k+1} is built as (35); namely, the cumulative Bayesian factor is H_t(k) = Π_{r=t−k+1}^{t} B_r, where H_t(k) measures the evidence provided by the recent (up to and including time t) k consecutive observations y_t, y_{t−1}, ..., y_{t−k+1}. With (34), the changing curves of the Bayesian factors with u_t are shown in Figure 2. In this paper, according to engineering experience, the adopted monitoring criteria [15,16,21] are as follows: if k = 3 and H_t(k) < 0.15, then the corresponding inspection/monitored data is abnormal and needs to be removed; otherwise, the inspection/monitored data is normal. After removing the abnormal data, the changing curves of the cumulative Bayesian factors show the prediction precision of the Bayesian dynamic model; namely, if the cumulative Bayesian factor is bigger, the prediction precision of the Bayesian dynamic model is better and the uncertainty of the Bayesian dynamic models is smaller.
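The recursions above compress to a short routine. The sketch below is a minimal illustration rather than the authors' implementation: one first-order BDLM updated with a discount factor, two forecasts merged by precision weighting as in item (6), and a cumulative Bayes factor against a level-shift alternative (the shift size h = 2 is a common textbook choice, assumed here); all numeric inputs are hypothetical.

```python
# Minimal sketch of the BDLM recursions described above (West & Harrison-
# style notation); not the authors' code, and all values are hypothetical.
import math

def dlm_step(y, m, C, lam, V, delta=0.95):
    """One update for y_t = theta_t + v_t, theta_t = theta_{t-1} + lam + w_t,
    with the state variance implied by discount factor delta."""
    a = m + lam                  # prior mean
    R = C / delta                # prior variance (discounted)
    f, Q = a, R + V              # one-step forecast mean and variance
    e = y - f                    # one-step forecast error
    A = R / Q                    # adaptive coefficient
    return a + A * e, R - A * A * Q, f, Q   # posterior m_t, C_t, plus f_t, Q_t

def combine(f1, Q1, f2, Q2):
    """Precision-weighted combinatorial forecast of two BDLMs."""
    w1, w2 = 1.0 / Q1, 1.0 / Q2
    return (w1 * f1 + w2 * f2) / (w1 + w2), 1.0 / (w1 + w2)

def bayes_factor(e, Q, h=2.0):
    """B_t for the standard model vs. a level-shift alternative."""
    u = e / math.sqrt(Q)         # standardized forecast error
    return math.exp(h * h / 2.0 - h * u)

m, C = 20.0, 4.0
cum = 1.0
for y in [21.0, 22.5, 30.0]:     # hypothetical daily extreme stresses (MPa)
    m, C, f, Q = dlm_step(y, m, C, lam=0.1, V=2.0)
    cum *= bayes_factor(y - f, Q)

fc, Qc = combine(f, Q, f + 0.5, 1.5 * Q)   # merge with a second (hypothetical) model
print(f"posterior mean {m:.2f}, combined forecast {fc:.2f}, "
      f"cumulative Bayes factor {cum:.3f}")
```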
Reliability Prediction Based on Combinatorial BDLMs 4.1. First-Order Second-Moment (FOSM) Method. In this paper, the FOSM method [2,3] is adopted to predict the reliability indices of bridge structures; namely, only the mean and variance of the predicted data are taken into account. Suppose there are random variables R (generalized resistance) and S (generalized load effects, including dead load effect and live load effect), which are internally and mutually independent, and whose mean values and standard deviations are, respectively, μ_R, σ_R and μ_S, σ_S. With the FOSM method, the formula for the reliability index can be obtained as β = (μ_R − μ_S)/√(σ_R² + σ_S²). Prediction Formula of Reliability Indices Based on FOSM. In this paper, a five-span continuous steel plate girder bridge is taken as an example. The total length of the bridge is 188.81 m. The explicit details about the aim and results of the monitoring program for the whole bridge are given in [11]. The extreme stress data at the beam bottom in the middle part of the second lateral span of the whole bridge is monitored. As far as the actual engineering is concerned, the computation accuracy of the reliability method (first-order second-moment, FOSM) [3] adopted in this paper is sufficient; namely, only the mean value and second-moment value of each variable are used. The limit state function of the beam from the second lateral span is g = f_y − s_st − s_co − k·s_mon, where f_y is the steel yield strength, s_st is the stress caused by the dead weight of steel, s_co is the stress caused by the dead weight of concrete, s_mon is the monitored extreme stress predicted with the combinatorial BDLMs, and k is a factor assigned to the data provided by the sensors. The reliability index (first-order) is (38): β = (μ_fy − μ_st − μ_co − k·μ_mon)/√(σ_fy² + σ_st² + σ_co² + k²·σ_mon²), where μ_fy, σ_fy are the mean and standard deviation of f_y; μ_st, σ_st are the mean and standard deviation of s_st; μ_co, σ_co are the mean and standard deviation of s_co; μ_mon, σ_mon are the mean and standard deviation of s_mon; and k is the factor assigned to the data provided by the sensors. For the real-time monitored reliability indices, the monitored data arrive one by one, so σ_mon = 0, while for the reliability indices predicted with the combinatorial BDLMs in this paper, due to the randomness and the uncertainty of the monitored data, σ_mon ≠ 0. Application to an Existing Bridge The I-39 Northbound Bridge, which was described in Section 4.2, was built in 1961; it is a five-span continuous steel plate girder bridge. The extreme stresses at the beam bottom in the middle part of the second lateral span of the whole bridge were monitored for eighty-three days; the monitored data displayed the variability of the stresses caused by traffic, temperature, shrinkage, creep, and structural changes. The stresses from the dead weight of the steel structure and the concrete deck are not included in the measured data. The day-by-day monitored extreme stress data are shown in Table 1 and Figure 3. In this existing example, the state equation obtained with (4)-(8) is adopted to build the BDLMs; namely, the 1-order polynomial state equation of the monitored data is θ_t = θ_{t−1} + λ_t + ω_t, where θ_t is approximately the state value of the health monitored data at time t. To obtain the distribution parameters of the initial information, the monitored extreme data of the 83 days is smoothly processed, and then the initial information of the monitored data is approximately obtained, which is shown in Figure 4. Through the Kolmogorov-Smirnov (K-S) test for the initial information, the initial a priori PDF is a lognormal PDF or a normal PDF, shown in Figure 5 and (42).
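Returning to the FOSM formulation above: the reliability index of Equation (38) is a one-line computation once the moments are fixed. The sketch below uses hypothetical values (in MPa) and illustrates the effect noted in Section 4.1: a nonzero standard deviation of the predicted extreme stress lowers the index.

```python
# Direct transcription of the FOSM reliability index for the limit state
# g = fy - s_st - s_co - k*s_mon reconstructed above; all numeric values
# below are hypothetical placeholders (MPa).
import math

def fosm_beta(mu_fy, sd_fy, mu_st, sd_st, mu_co, sd_co,
              mu_mon, sd_mon, k=1.0):
    mean_g = mu_fy - mu_st - mu_co - k * mu_mon
    sd_g = math.sqrt(sd_fy**2 + sd_st**2 + sd_co**2 + (k * sd_mon)**2)
    return mean_g / sd_g

# Real-time (deterministic) data: sd_mon = 0; BDLM prediction: sd_mon > 0,
# which yields the smaller, arguably more realistic, reliability index.
print(fosm_beta(380, 25, 60, 5, 45, 4, 85, 0))
print(fosm_beta(380, 25, 60, 5, 45, 4, 85, 12))
```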
BDLMs Based on the Monitored Data. Based on the monitored data, with (4)-(8) and (9)-(11), the built BDLMs are as follows: observation equation: y_t = θ_t + ν_t; state equation: θ_t = θ_{t−1} + λ_t + ω_t. Equation (42) shows that the initial information follows a normal distribution or a lognormal distribution, so the following four cases are discussed to predict the monitored extreme data. Case 1. The initial information follows a normal distribution, and the BDLMs are built to predict the monitored extreme data. Case 2. The initial information follows a lognormal distribution; firstly, the lognormal distribution must be transformed into a quasinormal distribution [21,22] with (18)-(19); then the BDLMs are built based on the quasinormal distribution to predict the monitored extreme data. Case 3. The arithmetic mean of the one-step predicted mean values, respectively obtained with Cases 1 and 2, is considered as the predicted extreme data of the third case. Case 4. The fourth case is to build combinatorial BDLMs from the BDLMs obtained with Cases 1 and 2; the modeling processes of the combinatorial BDLMs are described in Sections 3.2 and 3.3. In this paper, the Bayesian factors are adopted to seek the abnormal data, and the monitoring results are shown in Figures 6 and 7, from which it can be known that the data of the 9th day is abnormal. From Table 1, it can be seen that the data of the 9th day is the biggest, so it may be abnormal. After removal of the abnormal data, the changing cumulative Bayesian factors shown in Figure 8 reflect that the prediction precision of the BDLMs becomes better and better. The predicted extreme stresses and prediction precisions (the reciprocals of the predicted variances) of the above four cases are, respectively, shown in Figures 9-14. From Figures 9-13, it can be noticed that the predicted data and the predicted ranges of the four cases all fit the changing rules of the monitored extreme data; but as far as the prediction precision shown in Figure 14 is concerned, the prediction precision of the combinatorial model is the best. So the combinatorial prediction model of the monitored extreme data is adopted to predict the structural reliability indices. Reliability Prediction Based on the Combinatorial BDLMs. In Figure 14, it can be observed that the prediction precision of the combinatorial model is the best, so the combinatorial model of the extreme data is adopted to predict the structural reliability indices with (38) and (43). The predicted results, which are shown in Figure 15, refer to the bottom of the girder in the middle of the second lateral span, which is shown in [19], where μ_mon, σ_mon are the mean and standard deviation of the monitored extreme stresses predicted with the combinatorial BDLMs. Conclusions In this paper, based on the everyday monitored extreme stresses of a bridge, the structural reliability indices are predicted with combinatorial BDLMs and the FOSM method, and the following conclusions can be reached: (1) The BDLMs, which are used to seek the abnormal data in the mass monitoring information, are obtained with a 1-order polynomial function based on the past information.
(2) The combinatorial BDLMs based on monitored extreme stresses are built for the first time. The predicted extreme stresses and the predicted ranges of the above four cases are almost the same, but as far as the prediction precision is concerned, the combinatorial BDLMs have the best prediction precision. (3) Based on the combinatorial BDLMs of monitored extreme stresses, structural reliability indices are predicted. Compared with reliability indices based on deterministic monitored extreme stresses, this paper considers the randomness and uncertainty of the monitored data, so the predicted reliability indices are smaller. But the smaller predicted reliability indices may better reflect the actual state of the bridge; thus, the smaller reliability indices may be more reasonably used to assess the structural safety and serviceability. In this paper, the proposed reliability prediction method is easy to use and may be widely applied in structural health monitoring. BDLMs make it possible to include subjective judgments with the observed data in order to obtain a more informed and accurate prediction. The numerical applications presented, using the monitored extreme data of an existing bridge, illustrate the application and feasibility of the proposed approaches and concepts. 3.4. Model Monitoring of BDLMs. Model monitoring has three purposes. The first is to identify where the model prediction function declines and in which form the model fault occurred. The second is to cope with the faults and to monitor and update the model. The third is to improve the accuracy of future prediction. In (33), p_0(y_t | D_{t−1}) is the one-step predictive probability density function of the monitored extreme stress data; p_1(y_t | D_{t−1}) is the probability density function of the alternative model, namely, the routine or standard probability density function; B_t is the Bayesian factor for p_0(y_t | D_{t−1}) versus p_1(y_t | D_{t−1}) based on the observed value of y_t. Further, according to (31)-(33), the Bayesian factors can be obtained as in (34). Figure 2: Curves of Bayes factors versus one-step predicted errors. Figure 4: Curves of initial information and the monitored extreme stress data. Figure 6: Curves of time-dependent Bayes factors (the data of the 9th day is abnormal). Figure 7: Curves of time-dependent Bayes factors after eliminating the abnormal extreme data (the data of the 9th day is deleted). Figure 10: Predicted curves of extreme data when initial information follows the lognormal probability distribution (Case 2). Figure 12: Predicted curves of extreme data based on the combinatorial model (Case 4). Figure 13: Comparison among predicted data of the four cases. Figure 14: Prediction precision comparisons among the predicted stresses of the four cases. Figure 15: Curves of reliability indices based on the combinatorial model of monitored extreme data. The mean value m_{t−1} is a point estimate of the level θ_{t−1}, and C_{t−1} measures the associated uncertainty. Each information set D_{t−1} comprises all the information available at time t−1, including D_0, the values of the variances {V_t, W_t: t > 0}, and the values of the observations y_{t−1}, y_{t−2}, ..., y_1. Thus, the only new information becoming available at time t is the observational value y_t, so D_t = {y_t, D_{t−1}}.
In (1)-(3) and (9)-(11), ν_t is the observation error or observation noise; θ_t is the state variable at time t; N[⋅] is the normal probability density function; V_t is the variance which indicates the uncertainty of the observation errors; the regression coefficients of the states are both equal to 1 for the 1-order polynomial model; W_t is the variance which indicates the model uncertainty recursive from time t−1 to time t; ω_t is the state error or state noise at time t; t is the monitored total time; and the initial information is the probabilistic representation of the predictor's belief about the level θ_{t−1} at time t−1. D_{t−1} is the information set at time t−1, including the mean value m_{t−1} and the variance C_{t−1}. In addition, it is supposed that ν_t and ω_t are internally and mutually independent of each other, and that they are independent of θ_{t−1}. If the initial state data follows the lognormal distribution, then the state data can be transformed into a quasinormal distribution N[μ, σ²] [21,22] with (18) and (19), through which the distribution parameters μ and σ² are obtained. For the i-th model of the combinatorial BDLMs, ν_{t,i} is the observational error; θ_{t,i} is the state variable at time t; V_{t,i} is the variance of the monitored errors; W_{t,i} is the variance of the state noise; ω_{t,i} is the state error or state noise; and D_{t,i} is the information set at time t and before, with D_{t,i} = {y_t, D_{t−1,i}}. The one-step updating quantities are m_t = a_t + A_t·e_t and C_t = R_t − A_t²·Q_t, with the adaptive coefficient A_t = R_t/Q_t and the one-step predicted error e_t = y_t − f_t.
5,224.8
2016-02-15T00:00:00.000
[ "Engineering" ]
PLPF‐VSLAM: An indoor visual SLAM with adaptive fusion of point‐line‐plane features Simultaneous localization and mapping (SLAM) is required in many areas, and especially visual-based SLAM (VSLAM) due to its low cost and strong scene recognition capabilities. Conventional VSLAM relies primarily on features of scenarios, such as point features, which can make mapping challenging in scenarios with sparse texture. For instance, in environments with limited (low- or even non-) textures, such as certain indoor scenes, conventional VSLAM may fail due to a lack of sufficient features. To address this issue, this paper proposes a VSLAM system called visual SLAM with adaptive fusion of point-line-plane features (PLPF-VSLAM). As the name implies, it can adaptively employ different fusion strategies on the PLPF for tracking and mapping. In particular, in rich-textured scenes, it utilizes point features, while in non-/low-textured scenarios, it automatically selects the fusion of point, line, and/or plane features. PLPF-VSLAM is evaluated on two RGB-D benchmarks, namely the TUM data sets and the ICL_NUIM data sets. The results demonstrate the superiority of PLPF-VSLAM compared to other commonly used VSLAM systems. When compared to ORB-SLAM2, PLPF-VSLAM achieves an improvement in accuracy of approximately 11.29%. The processing speed of PLPF-VSLAM outperforms PL(P)-VSLAM by approximately 21.57%. Visual sensors offer a simple structure, low cost, strong scene recognition ability, and the ability to capture rich texture information. Consequently, VSLAM has gained significant attention in both academic and industrial fields (Di et al., 2019). Conventional VSLAM typically relies on point features to track the movements of agents and build maps, as this approach is simple and effective. However, because images of nontextured or low-textured environments (Figure 1) lack sufficient point features, conventional VSLAM may suffer from some issues, such as tracking loss (Guo et al., 2021) and failure in the loop detection stage (Tsintotas et al., 2022). To address these challenges, researchers have been exploring alternative approaches. For instance, to deal with tracking loss issues, they attempted to develop point-line-plane (PLP)-based VSLAM systems that can combine line and plane features (Li, Li, et al., 2017). Additionally, some researchers tried to employ deep learning-based techniques for loop detection (An et al., 2022; Arshad & Kim, 2021; Lu et al., 2023). This paper only focuses on the tracking loss issue, because loop detection is not always a necessary step in all scenarios. Considering that current attempts still face limitations in low-/nontextured indoor environments, further investigation and development of VSLAM systems that can effectively operate in such scenarios remain essential.
Inspired by the facts that conventional point-based VSLAM can handle scenes with rich textures, and that the structures of indoor space (such as walls being perpendicular to the floors and ceilings) can be used as effective supplementary features in non-/low-textured areas, we propose a VSLAM system with an adaptive fusion of point-line-plane features (PLPF-VSLAM). It is able to adaptively select proper feature fusion strategies for localization and mapping according to the texture richness of scenes. For scenes with rich texture features, the system will fuse point and line features for tracking, and store such features in the map for mapping. For scenes lacking texture features, it will employ the PLP fusion. This paper is organized as follows. Section 2 provides an overview of the current research on VSLAM. Section 3 presents the PLPF-VSLAM. Section 4 evaluates the PLPF-VSLAM by using two RGB-D benchmarks, the TUM data set and the ICL_NUIM data set. Upon the results, conclusions and future work are drawn in the final section. | RELATED WORK In recent years, many different VSLAM systems have been presented. According to the type of feature utilized, VSLAM can be categorized into three types: (i) point-feature-based VSLAM, P-VSLAM; (ii) line-feature-based VSLAM, L-VSLAM; and (iii) VSLAM based on the fusion of point, line, and/or plane features, PL(P)-VSLAM. It should be noted that it is difficult for a SLAM system to fulfill the accuracy requirements of mapping and tracking solely based on plane features. Thus, plane features are rarely used alone but are usually employed together with the other two types of features. | P-VSLAM P-VSLAM systems primarily rely on point features for tracking and mapping. The commonly used point features in P-VSLAM include scale-invariant feature transform (SIFT) (Lowe, 2004), speeded-up robust features (SURF) (Bay et al., 2006), and oriented FAST and rotated BRIEF (ORB) (Rublee et al., 2011). The processing methods of point features are highly developed, which makes P-VSLAM the current mainstream of VSLAM. There are several classic P-VSLAM systems, such as the PTAM system (Klein & Murray, 2007), MonoSLAM (Davison et al., 2007), and semidirect visual odometry (SVO) (Forster et al., 2014). In general, the PTAM system is regarded as the prototype of P-VSLAM. This system brought three major innovations to SLAM systems: it (i) replaces the traditional Kalman filtering with nonlinear optimization; (ii) employs a keyframe mechanism, that is, the system only needs to process the most representative images, rather than each frame, which greatly improves the efficiency of the calculation; and (iii) to meet real-time requirements, separates the tracking and mapping processes by using a multithreading mechanism. However, because it does not consider global loop closing, the PTAM system is only applicable to small scenarios, and its tracking process is easy to fail. Afterward, an open-source VSLAM based on point features named ORB-SLAM was released by Mur-Artal et al.
(2015). It employs the ORB feature, a loop closing detection mechanism, and the BOW model, which forms a very complete framework of point-feature-based VSLAM. Because ORB-SLAM is prone to tracking loss when the camera rotates violently, on the basis of this version the authors released ORB-SLAM2 two years later (Mur-Artal & Tardós, 2017). This system can support monocular, stereo, and RGB-D cameras. It realizes real-time localization and mapping, in which the accuracy of localization is at the centimeter level. Hence, it is the most typical P-VSLAM system. But it should be mentioned that this system is very sensitive to dynamic objects and is prone to tracking loss in dynamic scenes. | L-VSLAM In response to the limitations of P-VSLAM in non-/low-textured scenarios, researchers began studying L-VSLAM. Such a VSLAM utilizes line features as the primary source of information for tracking and mapping. For instance, Smith et al. (2006) applied the line feature in the SLAM system, in which a line is represented by two endpoints. Yet, this system is only suitable for small scenes where the entire line segment can be fully captured. To address this limitation, Lemaire and Lacroix (2007) applied infinitely long line segments to large scenes. This practice effectively expands the applicable scenarios and further makes the process of matching line segments between frames easier. However, the initialization of line segments in space may fail in scenarios with a large landmark space. Other than that, Zhang et al. (2015) proposed a three-dimensional (3D) line-based stereo VSLAM system, which employs two different representations to parameterize 3D lines to obtain a better result. Inevitably, this system also has shortcomings; in particular, it is time-consuming in the straight-line tracking process, as it is based on the optical flow method. There are several PL-based VSLAM systems, such as line-segment-detector (LSD)-SLAM (Engel et al., 2014), monocular-based PL-SLAM (Pumarola et al., 2017; Yang et al., 2021), point-line fusion (PLF)-SLAM (Fang & Zhan, 2020), and PLI-VINS (Zhao et al., 2022). LSD-SLAM (Engel et al., 2014) applied the direct method to semidense monocular SLAM, which achieved semidense scene reconstruction on the CPU. But this system is prone to tracking loss when the camera moves quickly. The monocular-based PL-SLAM proposed by Pumarola et al. (2017) applied the fusion of point and line features to the entire SLAM process. This system addresses tracking and matching problems of specific line segments by removing outliers based on comparisons of the length and orientation of line features. Similarly, Li et al. (2020) proposed a low-drift monocular SLAM method for indoor scenes. In this system, the estimations of rotation and translation are decoupled to reduce long-term drift in indoor scenarios. In particular, it estimates a drift-free rotation between cameras by using spherical mean-shift clustering and a weak Manhattan world hypothesis (Zhou et al., 2016). Then, the translation between the cameras is calculated based on the features of points and lines. | PL(P)-VSLAM VSLAM systems that use plane features generally include PP-based (Guo et al., 2019; Taguchi et al., 2013; Zhang et al., 2019) and PLP-based VSLAM (Li, Hu, et al., 2017; Li et al., 2021; Xia et al., 2022). One of the typical PP-based VSLAM systems was proposed by Zhang et al.
(2019), which takes the data from an RGB-D camera as the input to do localization and mapping in a low-textured scenario. This system improves its accuracy and robustness by employing structural information in the whole process. However, the system assumes that plane edges should be intersections of vertical planes, limiting its applicability in scenarios with inclined planes. By fusing features of point, line, and plane, the SLAM system presented by Li et al. (2021) decouples rotation and translation and then obtains the rotation of object drift by constructing a Manhattan world. This practice further improves the accuracy of the system. Meanwhile, on the basis of an instance-based meshing strategy, this system constructs dense maps by dividing plane instances independently. However, the initialization of building a Manhattan world needs three pairs of perpendicular planes or lines. Hence, users need to consider whether the scenario meets such a specific condition before using it. PLP-SLAM (Shu et al., 2023) tightly incorporates semantic and geometric features (point, line, and plane features) to boost both frontend pose tracking and backend map optimization. However, this method does not perform well in low-textured environments. UPLP-SLAM (Yang et al., 2023) designed a mutual association scheme for the data association of point, line, and plane features, which not only considers the correspondence of homogeneous features (i.e., point-point, line-line, and plane-plane pairs), but also includes the association of heterogeneous features (i.e., point-line, point-plane, and line-plane pairs). By considering these cross-feature associations, UPLP-SLAM aims to improve the accuracy of the SLAM system, even in low-textured environments. | Tracking In the PLPF-VSLAM system, RGB and depth images are utilized as inputs. The tracking process involves estimating the pose transformation between two frames of images. Initially, point and line features are extracted from the RGB images, while plane features are extracted from the depth images. Meanwhile, incorrect feature matches are eliminated to ensure accuracy. Once the feature extraction is completed, the system constructs various projection error functions based on the matching results and predefined thresholds. These projection error functions capture the differences between the projected and the actually observed features. By optimizing these projection errors, the system obtains the pose estimation results, which represent the transformation between the two frames of images. | Feature extraction and matching ORB features are commonly used in VSLAM systems due to their desirable characteristics, such as invariance to rotation and scale, fast extraction, and efficient matching. These characteristics contribute to improved efficiency and performance in many scenarios. However, in nontextured or low-textured environments, the effectiveness of ORB features may be limited because they struggle to extract sufficient point features for accurate pose estimation. With that in mind, PLPF-VSLAM adds line feature extraction based on the LSD approach (Von Gioi et al., 2008) and uses the line band descriptor (Zhang & Koch, 2013) to describe the feature of line segments. Indoor scenarios have a large number of non-/low-textured planes, but they show many structural features (e.g., vertical, parallel). Such features also can be employed to improve the stability of a VSLAM system. The approach presented in Feng et al.
(2014) is employed to extract plane features (Figure 3), which includes three steps: (i) divide the point cloud in the image into N nodes and remove the nodes with missing or discontinuous depth information; (ii) cluster the eigenvalues of each pixel in the image, grouping continuous blocks whose feature differences lie within the threshold range into the same segmentation block; (iii) carry out iterative optimization for each pixel to output the parameters of the plane features. In this paper, a plane is described in the Hessian form π = (n^T, d)^T, where n is the normal vector of the plane and d is the distance from the camera's optical center to the plane. In the process of matching two plane features, two values are needed: one is the angle between their normal vectors, and the other is the difference between their distances d. Two conditions are used to determine whether two planes can be matched: the angle between the normal vectors of the two planes must be less than a threshold, and the distance between the two planes must be less than a threshold. | Pose estimation During the pose estimation process, the detected 3D points, lines, and planes from the previous frame are projected onto the current frame. This projection is done using the estimated pose transformation between the two frames. By projecting the features, their positions in the current frame can be estimated. To evaluate the accuracy of the pose estimation, a reprojection error is calculated by comparing the projected features with the corresponding features detected directly in the current frame. This reprojection error is used to construct an error function, which represents the differences between the projected and observed features. This error is further minimized during the optimization process to obtain the optimal pose estimation. For point features, the reprojection error function is Equation (1): e_i = u_i − proj(K T_cw P_i), where proj(·) performs the perspective division; u_i is the feature point corresponding to the 3D point in the current frame; P_i represents the 3D point in the world coordinate system; K indicates the camera internal parameters; and T_cw denotes the transformation matrix from the world coordinate system to that of the camera. The Jacobian matrix of Equation (1) with respect to T_cw is Equation (2), while that with respect to P_i is Equation (3), where P′ represents the coordinate of P_i in the camera coordinate system and R denotes the rotation matrix from the world coordinate system to that of the camera. For line features, we formulate the reprojection error function based on the point-to-line distance between l and the two endpoints of the line projected from the matched 3D line in the keyframe. For each endpoint P, the reprojection error can be noted as Equation (4): e_P = l̄^T ũ, where ũ is the homogeneous pixel coordinate of the projected endpoint proj(K T_cw P); K is the internal parameter matrix of the camera; T_cw represents the transformation matrix from the world coordinate system to that of the camera; P denotes the endpoint of the 3D line segment; and l = (a, b, c)^T contains the coefficients of the 2D line equation. The normalized line of a line feature is Equation (5): l̄ = (a, b, c)^T / √(a² + b²). The Jacobian matrix of Equation (4) with respect to T_cw is Equation (6), and the Jacobian with respect to P is Equation (7), where P′ is the coordinate of P in the camera coordinate system. A small code sketch of these error terms is given below.
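To make the tracking-side error terms concrete, the sketch below implements the point reprojection error of Equation (1), the point-to-line error of Equation (4) with the normalization of Equation (5), and the two-threshold plane-matching test described above. It is a minimal illustration, not the authors' implementation: the function names are ours, the projection details are assumed, and the 10°/0.1 m thresholds are taken from the parameter-determination part later in the paper.

```python
import numpy as np

def project(K, T_cw, P_w):
    """Project a 3D world point into pixel coordinates.
    K: 3x3 intrinsics; T_cw: 4x4 world-to-camera transform; P_w: 3-vector."""
    P_c = T_cw[:3, :3] @ P_w + T_cw[:3, 3]  # point in the camera frame
    uvw = K @ P_c
    return uvw[:2] / uvw[2]                 # perspective division

def point_reprojection_error(u_obs, K, T_cw, P_w):
    """Equation (1): observed pixel minus projected 3D point."""
    return u_obs - project(K, T_cw, P_w)

def line_reprojection_error(l, K, T_cw, P_endpoint):
    """Equation (4): signed point-to-line distance of one projected
    endpoint, with l = (a, b, c) normalized per Equation (5) so that
    a^2 + b^2 = 1."""
    a, b, c = l / np.hypot(l[0], l[1])
    u = project(K, T_cw, P_endpoint)
    return a * u[0] + b * u[1] + c

def planes_match(n1, d1, n2, d2, angle_deg=10.0, dist=0.1):
    """Two-threshold plane matching on Hessian-form planes (n, d):
    normal angle below angle_deg (degrees) and |d1 - d2| below dist
    (meters). Assumes n1, n2 are consistently oriented unit normals."""
    cos_ang = np.clip(n1 @ n2, -1.0, 1.0)
    return np.degrees(np.arccos(cos_ang)) < angle_deg and abs(d1 - d2) < dist
```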
A plane in the Hessian form has four parameters, but a plane in 3D space only has three degrees of freedom. Thus, to address this overparameterization, we denote the unit normal vector by its azimuth angle φ and elevation angle ψ. A plane can then be represented in a minimal parametric form with only three parameters, that is, Equation (8) (Zhang et al., 2019): q(π) = (φ, ψ, d)^T (a small code sketch of this parameterization is given after this part). The reprojection error is Equation (9): e_π = q(π_m) − q(T_cw^(−T) π_w), where π_m is the observed value of the corresponding plane in the current frame; π_w is the 3D plane in the world coordinate system; and T_cw is the transformation matrix from the world coordinate system to that of the camera. The Jacobian matrix of the reprojection error (Equation 9) with respect to T_cw is Equation (10) (Zhang et al., 2019), and the Jacobian with respect to π_w is Equation (11). In the objective function, α_i, β_i, and γ_i are the numbers of matched point, line, and plane features, and F_1, F_2, and F_3 represent the objective functions of the point, line, and plane features, respectively. F_1, F_2, and F_3 are expressed as Equation (13), where H_p, H_l, and H_π are the Huber functions of the point, line, and plane terms, respectively, and Γ_p, Γ_l, and Γ_π are the covariance matrices associated with the scale at which the key points, line endpoints, and planes were detected, respectively. Subsequently, a selection process ensues to ascertain the inclusion of specific point, line, and plane features within the map. If a feature can be reliably tracked across no fewer than three keyframes, it is considered stable and thus included in the local map. Conversely, if a feature cannot be tracked consistently, it is removed from the map. Once the keyframes and their corresponding map features are added to the local map, optimization is carried out using the BA algorithm. This optimization serves to refine the camera poses and the positions of the point, line, and plane features in the local map, with the overarching objective of minimizing the reprojection error. By optimizing the local map, the accuracy and consistency of the map representation are improved, leading to heightened reliability of the localization and mapping outcomes. | LOOP CLOSING In the field of VSLAM, relying solely on the pose transformation calculated between two adjacent keyframes leads to an inevitable accumulation of errors. This accumulation, in turn, renders the system unreliable over extended durations of operation. Therefore, it is critical to eliminate the accumulated error by performing pose optimization in loop closing. The loop closing of PLPF-VSLAM is based on the approach presented in Mur-Artal and Tardós (2017). This process mainly includes two components: loop detection and loop correction. Loop detection finds the loop keyframe by using a BOW model (Gálvez-López & Tardos, 2012). To determine whether the current keyframe can be used as a loop keyframe, we need to calculate the similarity transformation from the current keyframe to the loop keyframe; on the basis of the similarity transformation, obtain the translation and rotation between the current and the loop keyframe; and perform projection and matching according to the translation and rotation to verify the reliability of the current loop.
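The minimal plane parameterization of Equation (8) can be sketched as below; the exact azimuth/elevation convention is our assumption (one common choice), since the text only states that the unit normal is re-expressed through the two angles φ and ψ.

```python
import numpy as np

def plane_to_minimal(n, d):
    """Equation (8): Hessian form (unit normal n, distance d) ->
    (phi, psi, d), with phi the azimuth and psi the elevation of n."""
    phi = np.arctan2(n[1], n[0])
    psi = np.arcsin(np.clip(n[2], -1.0, 1.0))
    return np.array([phi, psi, d])

def minimal_to_plane(q):
    """Inverse mapping, used when evaluating the plane reprojection
    error of Equation (9)."""
    phi, psi, d = q
    n = np.array([np.cos(psi) * np.cos(phi),
                  np.cos(psi) * np.sin(phi),
                  np.sin(psi)])
    return n, d
```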
The loop correction starts by adjusting all camera poses based on a known similarity transformation. Then, the adjusted poses are employed to update the map points that correspond to the connected keyframes. Meanwhile, the map points of the loop keyframe are fused with those of the current keyframe. Afterward, these fused map points are further reprojected and rematched to establish new matching relationships, and according to the new relationships, the poses of all cameras are optimized based on the pose graph. Finally, loop correction is finished after running the full BA algorithm. | EXPERIMENTS To evaluate the performance of PLPF-VSLAM, we utilized two commonly used RGB-D benchmark datasets, the Technical University of Munich (TUM) data set (Sturm et al., 2012) and the Imperial College London and National University of Ireland Maynooth (ICL_NUIM) data set (Handa et al., 2014). The former includes a large set of data sequences containing both RGB-D data from the Microsoft Kinect and ground-truth pose estimates from a motion capture system. Notably, the accuracy of the ground-truth measurements attains millimeter-level precision. The latter collects image sequences in synthetic indoor spaces, mainly including a living room and an office. This data set provides RGB images, depth images, and the ground truth of camera poses. The TUM data set shows not only rich-textured scenes but also low-/nontextured scenarios. More importantly, it has real trajectories recorded during data collection. Therefore, the TUM data set is employed to determine the parameters of PLPF-VSLAM, while the ICL_NUIM data set is used for testing. PLPF-VSLAM is compared with five other VSLAM systems: ORB-SLAM2 (Mur-Artal & Tardós, 2017), PL-SLAM (Pumarola et al., 2017), LSD-SLAM (Engel et al., 2014), Planar-SLAM (Li et al., 2021), and L-SLAM (Kim et al., 2018), in which the first is based on point features, the second and third are based on the fusion of point and line features, and the last two are based on the fusion of PLPF. It should be noted that the performances of the five VSLAM systems come from the literature. The computer for this experiment is equipped with an Intel Core i7-7500U (3.5 GHz) and 12 GB memory. The performance of the four modes (P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM) is evaluated by the root mean square error (RMSE) of the absolute trajectory error (ATE), Equation (14): RMSE = √((1/n) Σ_{i=1}^{n} ||x̂_i − x_i||²), where x̂_i represents the keyframe trajectory estimated by a VSLAM and x_i denotes the real trajectory of the camera. Other than the RMSE, we propose another criterion to evaluate the overall performance (O_P) of the four modes. O_P is determined by the mean tracking time of each frame and the RMSE. For a given scenario, the mode that has the minimum O_P is selected. O_P can be calculated by Equation (15), where tm_i denotes the mean tracking time of each frame of the ith mode and RMSE_i represents the RMSE of the ith mode; i = 1, 2, 3, and 4 corresponds to the four modes P, PL, PP, and PLP; and η and λ are the weights of the mean tracking time of each frame and the RMSE, respectively.
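A minimal sketch of the two evaluation criteria follows. Equation (14) is the standard RMSE of the ATE; for Equation (15) we assume a simple weighted sum of the normalized mean tracking time and the normalized RMSE, since the normalization of Equation (16) is not reproduced in the text. The weights η = 0.47 and λ = 0.53 are the ones derived later in the paper.

```python
import numpy as np

def rmse_ate(est, gt):
    """Equation (14): RMSE of the absolute trajectory error.
    est, gt: (N, 3) arrays of estimated / ground-truth keyframe positions."""
    d = est - gt
    return np.sqrt(np.mean(np.sum(d * d, axis=1)))

def overall_performance(tm_norm, rmse_norm, eta=0.47, lam=0.53):
    """Equation (15), assumed linear form: weighted combination of the
    normalized mean per-frame tracking time and the normalized RMSE.
    The mode with the minimum O_P is selected for a given scenario."""
    return eta * tm_norm + lam * rmse_norm
```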
| Determination of parameters In this part, we determine the parameters of PLPF-VSLAM by processing the TUM data set. The thresholds for judging whether two planes are matched are set on the basis of prior research (Li et al., 2021; Zhang et al., 2019). In particular, the threshold of the angle between the normal vectors of the two planes is set as 10°, and the threshold of the distance between the two planes is set as 0.1 m. Furthermore, we leverage the parallel and perpendicular relationships of the map planes as additional constraints during the tracking process. The parameters of the four modes (P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM) on the 10 selected sequences from the TUM data set are shown in Table 1. These sequences were chosen specifically because they contain plane features, allowing for a comprehensive evaluation of the different modes. For each sequence, three parameters were computed for each mode: the mean tracking time per frame, the RMSE, and the O_P value. These parameters provide insights into the performance of each mode in terms of computational efficiency and accuracy. | Weight calculation To achieve a balance between processing speed and accuracy in PLPF-VSLAM, it is crucial to carefully select suitable values for the parameters η and λ. These two parameters play a significant role in determining the overall performance metric O_P. The parameter η represents the weight assigned to the mean tracking time per frame, while the parameter λ represents the weight assigned to the RMSE. Finding the right balance between these two parameters is essential for optimizing the overall performance of the system. First of all, according to the mean tracking times and RMSEs in the last four columns of Table 1, the values v_1 and v_2 for each of the four modes in each sequence are computed using Equation (16). It should be added that the sequence named fr3/nstr_ntex_far is not considered in the calculation process, because it can only work in the PLP-VSLAM mode. Then, we calculated v_1 and v_2 for the four modes P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM in the nine sequences, and the average ratio (ρ) based on v_1 and v_2 is given by Equation (17). T A B L E 1 Parameters of the four modes on the 10 selected sequences from the TUM data set. Note: X_point, X_line, and X_plane are the normal distribution models of the numbers of matched point, line, and plane features. "×" means that tracking loss happened or a significant portion of the sequence was not processed. η and λ are 0.47 and 0.53, respectively, when computing O_P. The bold values mark the minimum of O_P, which indicates the VSLAM mode that has the best performance in this scenario. Having ρ, to balance the impact of the mean tracking time of each frame and the RMSE on O_P, we set the relationship between η and λ as Equation (18). Finally, we obtain η = 0.47 and λ = 0.53. The O_P is then computed (Table 1). | Data processing For each sequence, after counting the numbers of matched point, line, and plane features, we found that they follow Gaussian distributions (Figures 4 and 5). The two figures show the Gaussian fitting results of the numbers of matched features in the fr1/room and str_tex_far sequences, respectively.
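The per-sequence Gaussian models X_point, X_line, and X_plane of Table 1 can be obtained by a straightforward moment fit of the per-frame match counts, as sketched below; this is our illustration, since the paper does not specify the fitting procedure.

```python
import numpy as np

def fit_match_count_gaussian(counts):
    """Fit N(mu, sigma^2) to the per-frame numbers of matched features
    of one type (points, lines, or planes) over one sequence."""
    counts = np.asarray(counts, dtype=float)
    return counts.mean(), counts.std(ddof=1)

# e.g., mu_p, sigma_p = fit_match_count_gaussian(points_per_frame)
```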
| Data fusion According to Table 1, each of the four fusion modes (P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM) performs the best in multiple sequences. Therefore, we need to fuse the distributions of the numbers of matched PLPF corresponding to multiple sequences into one Gaussian distribution to represent the matched-feature threshold interval used by each mode. For example, on the basis of O_P, P-VSLAM performs the best in the four sequences fr1/room, fr1/desk, fr1/xyz, and fr3/str_tex_near. Therefore, we fused the distributions of the numbers of matched point, line, and plane features in each of these four sequences to obtain an optimal distribution. To combine multiple distributions into a single one, we aim to minimize the variance of the resulting distribution. For instance, suppose we have two Gaussian distributions N(μ_1, σ_1²) and N(μ_2, σ_2²), and we want to fuse them into a new distribution N(μ, σ²). To achieve this, we follow the method outlined in Equation (19), combining the two variables as x = k x_1 + (1 − k) x_2. Equation (19) can be further simplified as Equation (20), and the variance of the fused distribution based on it becomes Equation (21). To minimize the variance of the fused distribution, we take the derivative of Equation (21) with respect to k and set it to zero (Equation 22), which yields k = σ_2²/(σ_1² + σ_2²) (Equation 23; see the sketch following this part). On the basis of the k that minimizes the variance, we compute the final fused distribution. After fusing the distributions of the four modes, we finally obtain their feature-matching distributions (Figure 6a,c,e). Note: "×" means that the tracking is lost at some point or a significant portion of the sequence is not processed; "-" indicates that the data are not provided in the literature. The bold values mark the minimum of O_P, which indicates the VSLAM that has the best performance in this scenario. After processing and optimizing the data, we summarized Figure 6b,d,f to obtain the thresholds for each mode (Equation 24). However, it should be noted that overlaps can occur between different modes. In such cases, we consider the μ of the Gaussian distributions corresponding to the numbers of matched point, line, and plane features for the different modes, and choose the mode whose μ is closer to the overlapping region as the best choice. In Equation (24), P, PL, PP, and PLP represent P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM, respectively, and n_p, n_l, and n_π denote the numbers of matched point, line, and plane features, respectively. The two weights are assigned as η = 0.47 and λ = 0.53. Based on the μ of the distributions corresponding to the numbers of matched point, line, and plane features, the applicable conditions of the four modes are shown in Figure 7. In the figure, the four modes P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM are colored green, blue, yellow, and red, respectively. Figure 7a is the overview, while Figures 7b and 7c are side views. Figure 7a is further detailed in Figure 7d.
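The minimum-variance fusion of Equations (19)-(23) reduces to a closed form, shown in the minimal sketch below (function names are ours): fusing x = k x₁ + (1 − k) x₂ and minimizing the variance gives k = σ₂²/(σ₁² + σ₂²).

```python
def fuse_two_gaussians(mu1, var1, mu2, var2):
    """Equations (19)-(23): fuse N(mu1, var1) and N(mu2, var2) via
    x = k*x1 + (1-k)*x2 with the variance-minimizing
    k = var2 / (var1 + var2). The fused variance equals
    var1*var2/(var1+var2), smaller than either input variance."""
    k = var2 / (var1 + var2)
    mu = k * mu1 + (1 - k) * mu2
    var = k * k * var1 + (1 - k) ** 2 * var2
    return mu, var

def fuse_many(params):
    """Fold the per-sequence (mu, var) pairs of one mode into a single
    Gaussian by pairwise fusion."""
    mu, var = params[0]
    for m, v in params[1:]:
        mu, var = fuse_two_gaussians(mu, var, m, v)
    return mu, var
```

A convenient property of this rule is that it is equivalent to inverse-variance (precision) weighting, so the order in which the per-sequence distributions are folded does not change the final fused distribution.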
| TUM data set In this experiment, we conducted the evaluation using the thresholds given in Equation (24). With this PLPF-VSLAM, we evaluated the performance of the six VSLAM systems on the 10 sequences from the TUM data set (Table 2). In sequences with rich texture features (fr1/xyz, fr1/desk, fr3/nstr_tex_near, fr3/str_tex_far, and fr3/str_tex_near), PLPF-VSLAM automatically selects point or PL features for tracking and mapping. In the sequences with sparse features (fr1/room, fr3/nstr_ntex_far, fr3/nstr_tex_far, fr3/str_ntex_far, and fr3/str_ntex_near), it adds plane features to the tracking and mapping to ensure accuracy. In terms of mapping results, after adding plane features, the maps show better structural characteristics of the scene and clearer outlines. In addition, considering that real-time performance is also an important indicator of a VSLAM, we compared our system to ORB-SLAM2 and Planar-SLAM on five sequences in terms of total time and mean tracking time (Table 3 and Figure 11). The processing times show that ORB-SLAM2 has the best performance in scenarios with rich features (fr1/room, fr3/str_tex_far, and fr3/nstr_tex_near), being much faster than the systems based on the fusion of PLP. Compared with the Planar-SLAM system, which also uses PLPF fusion, PLPF-VSLAM has better performance on all sequences. This can be explained by the fact that not all scenarios are non-/low-textured; our system adaptively selects the fusion of point, line, and plane features, which makes the processing time shorter. However, compared to ORB-SLAM2, our method is slower. On the one hand, adaptive fusion is based on the numbers of matched features, and the extraction and matching of line and plane features increase the processing time. On the other hand, within a sequence, our system uses not only point features but also line and plane features in some places for mapping and tracking, which also increases the processing time. To sum up, PLPF-VSLAM demonstrates its versatility in handling both rich-textured and non-/low-textured indoor scenarios. As for the processing time, it is slightly longer than that of ORB-SLAM2 but shorter than that of Planar-SLAM, which is also based on the fusion of point, line, and plane features. The reason our system takes a longer time is that it aims to deal with non-/low-textured indoor scenarios. Thus, the threshold of feature extraction is not as strict as that of ORB-SLAM2, which is time-consuming when more features are involved in the computation. Moreover, adding the additional line and plane features to the map also takes more time. Therefore, overall, the PLPF-VSLAM system is the best when all aspects are taken into account. Note: "×" means that the tracking is lost at some point or a significant portion of the sequence is not processed. The bold values mark the minimum of O_P, which indicates the VSLAM that has the best performance in this scenario. | ICL_NUIM data set The sequences from the living room and office in the ICL_NUIM data set are also used to evaluate the accuracy of PLPF-VSLAM. Based on the RMSE of the ATE, we compare our system with ORB-SLAM2, L-SLAM, and Planar-SLAM. The performances of the different systems are shown in Table 4 and Figure 12. As no non-/low-textured scenarios are involved, all four systems stably finish localization and mapping.
In the ICL_NUIM data set, our system, along with L-SLAM and Planar-SLAM, achieves favorable results. This can be attributed to the abundance of structural features present in the data set, which offers ample line and plane features. These additional features provide valuable constraints for pose estimation, thereby enhancing the overall accuracy of the system. Therefore, in such scenarios, the VSLAM systems based on point, line, and plane features perform better than ORB-SLAM2, which is based on point features alone. F I G U R E 1 Three indoor scenes with different levels of texture richness: (a) rich-textured scene, (b) low-textured scene, and (c) nontextured scene. | PL(P)-VSLAM PL(P)-VSLAM incorporates a fusion of point, line, and/or plane features to enhance tracking and mapping accuracy. During the extraction and matching of line features, several challenges may arise, such as unclear endpoint positions and weak constraints, leading to a high number of mismatches. As a result, researchers shifted their focus toward fusion-based VSLAM, which typically includes three types: point-line (PL), point-plane (PP), and point-line-plane (PLP). 3 | PLPF-VSLAM: VSLAM WITH ADAPTIVE FUSION OF PLPF PLPF-VSLAM can adaptively select fusion strategies according to the different characteristics of the scenarios. As shown in Figure 2, PLPF-VSLAM has the same framework as the conventional VSLAM (Mur-Artal & Tardós, 2017), which includes three threads: tracking, local mapping, and loop closing. Compared with conventional VSLAM, the improvements lie in the tracking thread. In particular, taking the numbers of matched features as a reference, four new modes (Point Tracking Mode, PL Tracking Mode, PP Tracking Mode, and PLP Tracking Mode) are adaptively selected for the tracking process. F I G U R E 2 The framework of the PLPF-VSLAM. F I G U R E 3 Plane extraction of two sequences in the ICL_NUIM data set: plane extraction of (a) the lr-kt0 and (b) the kt0 sequence.
Here, π_c = T_cw^(−T) π_w is the plane in the camera coordinate system. After obtaining the reprojection error of each feature, we construct the optimization objective function based on least squares. The construction of different objective functions for different scenarios is based on their richness of features. For scenarios with rich textures, we choose P-VSLAM. For other scenes with insufficient features, we use the fusion of the four modes P-VSLAM, PL-VSLAM, PP-VSLAM, and PLP-VSLAM according to the number of features in the scenario. The criterion for distinguishing whether a scenario is non-/low-textured or rich in textures is the number of matched PLPF (n_p, n_l, n_π). The objective function is Equation (12). The local mapping thread plays a role in the construction of the local map, leveraging the keyframes generated within the tracking thread to estimate the precise pose of each keyframe, along with the associated map points, lines, and planes. These features are subsequently assimilated into the local map. In the course of processing a keyframe, the bundle adjustment (BA) algorithm is employed with the aim of mitigating the local pose error. BA optimizes the poses and positions of the map features by minimizing the reprojection error, ultimately yielding a local map of heightened accuracy and consistency. A local map in PLPF-VSLAM primarily consists of keyframes and their associated map point, line, and plane features. The construction of the local map involves fusing different types of features based on their richness in the given scenarios. Initially, the keyframe generated by the tracking thread is added to the local map. F I G U R E 4 The Gaussian fitting result of the number of matched features in the fr1/room sequence: the number of matched (a) points, (b) lines, and (c) planes. F I G U R E 5 The Gaussian fitting result of the number of matched features in the str_tex_far sequence: the number of matched (a) points, (b) lines, and (c) planes. To more intuitively show the performance of the different VSLAM systems, we produce Figure 8 based on Table 2. For the cases where tracking is lost or data are missing, the maximum RMSE in the sequence is used. The trajectories and reconstructed maps of PLPF-VSLAM are visually depicted in Figures 9 and 10 (here, only the most representative scenes are displayed). These figures provide a comprehensive visualization of the paths followed by the camera and the resulting map reconstruction. F I G U R E 8 Comparison of the RMSE of different VSLAM systems on the TUM data set. F I G U R E 9 Reconstructed maps of the six most representative sequences from the TUM data set: (a) fr1/xyz, (b) fr1/room, (c) fr3/nstr_ntex_far, (d) fr3/nstr_tex_near, (e) fr3/str_ntex_far, and (f) fr3/str_tex_near. F I G U R E 11 Comparison of the mean tracking time of different VSLAM systems on the TUM data set. T A B L E 4 RMSE of different VSLAM systems on the ICL_NUIM data set (unit: m).
Comparative economic analysis of tomato (Lycopersicon esculenta) under irrigation and rainfed systems in selected local government areas of Kogi and Benue States, Nigeria The study compared the economic performance of tomato (Lycopersicon esculenta) under irrigation and rain-fed systems in Bassa and Makurdi Local Government Areas of Kogi and Benue States of Nigeria, with the aim of assessing the determinants of its profitability. Primary data obtained from a sample of 120 farmers by stratified and multi-staged random sampling from four villages were analyzed using percentages, means, gross margin, net profit, the Shepherd-future coefficient, and an exponential regression model of the combined profit function. Results revealed gender inequality: all respondents under the irrigation system were male, compared to 71.7% female participation under the rain-fed system. Average net profits were N128,750 and N57,050, and economic efficiencies were 1.380 and 0.986, for the irrigated and rain-fed systems respectively. Results also showed that farm size, planting material, and herbicide were significant at the one and five percent levels and positively correlated with farmers' profit, while age and the costs of fertilizer and labor were negative. The study concludes that tomato is more profitable and economically efficient under irrigation, and that increased access to land, herbicides, and improved seeds will promote the profitability of the crop in the study area. INTRODUCTION Tomato (Lycopersicon esculenta) is an important vegetable crop grown in many parts of the world, contributing significantly to income security and the nutritive diets of many households. According to Mofeke et al. (2003), vegetable crops constitute 30 to 50% of the iron and vitamin A in resource-poor diets. Vegetable crops including tomatoes are widely cultivated in most parts of Sub-Saharan Africa, particularly by small-scale farmers in most states of Nigeria (Adeolu and Taiwo, 2009; Giroh et al., 2010). Global production of fruits and vegetables tripled from 396 million MT in 1961 to 1.34 billion MT in 2003 (International Institute of Tropical Agriculture, 2005), and Nigeria ranked 16th on the global tomato production scale, accounting for 10.79% of Africa's and 1.2% of total world production of tomatoes (Weinberger and Lumpkin, 2007). Denton and Swarup (1983) observed that tomato production in the Northern States, as in other parts of the country, is done during the dry season, while its production is scarce during the rainy season because of the high disease incidence associated with growing tomatoes and the preference of tomato producers for grain food crops during the rainy season. Nigeria is unable to meet its growing domestic requirements for vegetables, fruits, floriculture, herbs and spices, dried nuts, and pulses. Between 2009 and 2010, Nigeria imported a total of 105,000 metric tons of tomato paste valued at over 16 billion Naira to bridge the deficit gap between supply and demand in the country (Food and Agriculture Organization, 2006). Kalu (2013) attributed this situation to socio-economic constraints surrounding the key actors in the tomato value chain, institutional weaknesses, and declining agricultural research.
Irrigation farming is relatively limited in Nigeria and Africa as a whole, with irrigated area estimated at only 6% of total cultivated area, compared with 37% for Asia and 14% for Latin America (FAOSTAT, 2009). Svendsen and Sangi (2009) observed that more than two-thirds of the existing irrigated area is concentrated in five countries, namely Egypt, Madagascar, Morocco, South Africa, and Sudan. Given that irrigated crop yields are more than double rain-fed yields in Africa (Liangzhi et al., 2010), it is important to invest in irrigation development, with particular focus on locations and technologies with the greatest potential for irrigation. The efforts of the Federal Government of Nigeria, with the support of the World Bank and the African Development Fund, to develop irrigation systems in the country started with the approval of the implementation of the National Fadama Development Project in 1992 (World Bank, 1992), followed by the second National Fadama Development Project between 2004 and 2010, and the ongoing third National Fadama Development Project (2008-2016). Small-scale irrigation systems have gone a long way to support dry-season farming of crops all over the country. Dry-season production of vegetables is common along the banks of the rivers Niger and Benue, which cut across cities and towns in Kogi and Benue States, respectively. Tomato is cultivated in traditional smallholdings in Nigeria and specifically in the study area. Denton and Swarup (1983) observed that tomato had ceased to be the main crop during the rainy season in Northern Nigeria. In cognizance of the characteristic competition among major food crops for the limited resources of the farmers, one then wonders whether the capital investment in irrigation for dry-season farming of tomato is worthwhile relative to rainy-season production, and what factors could further promote the profitability of tomato under the irrigation or rain-fed system. Within this context, the general objective of the study was to assess the relative performance of the tomato crop under irrigation and rain-fed systems in Bassa and Makurdi Local Government Areas of Kogi and Benue States of Nigeria, respectively. Specifically, the study assessed the relative profitability and economic efficiency of the tomato crop under irrigation and rain-fed systems, and identified the determinants of its profitability in the study area. It was thus hypothesized that production of the tomato crop under irrigation was not more economically efficient and profitable than under the rain-fed system, and that the socio-economic characteristics of the farmers did not affect profitable production in the area. It is expected that the results of the study will contribute to agricultural transformation policies and promote food security in Nigeria.
CONCEPTUAL AND ANALYTICAL FRAMEWORK The conceptual framework is within the context of the relative efficiency and profitability of investments in the production of tomato under irrigation and rain-fed systems. Thus, the concepts of gross margin and net profit were employed to compare profitability, while the Shepherd-future model was used to compare the economic efficiency of the tomato crop under irrigation and rain-fed systems. Gross margin was measured as the amount in Naira that the enterprise contributes after paying for direct variable unit costs, while net profit accounts for the direct fixed costs in addition. The total variable costs comprised the expenses incurred on variable inputs such as fertilizers, seeds, and labor, while fixed costs comprised expenses on rent on land and depreciation of capital assets such as irrigation pumps, hoes, cutlasses, and wheelbarrows. The Shepherd-future coefficient was used as a measure of how well each Naira of return on the enterprise covers the operational and overhead expenses. The Shepherd-future model was expressed as the ratio of the gross margin obtained from the production of the tomato crop to the total cost of production (Shepherd, 1962). A greater positive net profit and Shepherd-future coefficient indicate higher profitability and efficiency of tomato crop production (a small numerical sketch of these measures is given at the end of this section). METHODOLOGY Study area The study was carried out in North Central Nigeria, covering Sharia and Gboloko villages in Bassa LGA of Kogi State, and Ugondo and Mbayong villages in Makurdi LGA of Benue State, where Fadama land is very prominent. The area is located in the Southern Guinea Savanna zone within latitudes 14°N and 16°N and longitudes 12°E and 13°E, respectively, for Kogi and Benue States, with annual rainfall of between 1100 and 1600 mm and an average temperature of 35°C (National Population Commission, NPC, 2006). Both states are bounded on the North and West by the Rivers Niger and Benue. Data sampling and collection methods A total sample of 120 farmers used for the study was selected using stratified and six-stage random techniques. The sample was stratified into two groups: 60 farmers under the irrigation system and 60 farmers under the rain-fed system. The six-stage random sampling firstly comprised the two States (Kogi and Benue), randomly selected from among the six States in the Southern Guinea agro-ecological zone (Kwara, Niger, Benue, Kogi, Nasarawa, and Taraba States). Secondly, one Local Government Area (LGA) was selected from each State (Bassa LGA from Kogi State and Makurdi LGA from Benue State). Thirdly, two villages were selected in each LGA (Sharia and Gboloko in Bassa LGA of Kogi State, and Ugondo and Mbayong in Makurdi LGA of Benue State). Fourthly, 3 Fadama Associations (FAs) were selected in each of the four villages, making 12 FAs; and fifthly, 5 farmers were randomly selected from each of the FAs, making a total of 60 farmers (15 farmers per village) under the irrigation system. Lastly, 15 farmers that cultivated the tomato crop under the rain-fed system were purposively selected from each of the four villages to make a total of 60 farmers under the rain-fed system.
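As a concrete illustration of the three measures defined in the framework above, the sketch below computes the gross margin, net profit, and Shepherd-future coefficient; the numbers in the example are hypothetical, not the survey data.

```python
def gross_margin(total_revenue, total_variable_cost):
    """Gross margin (Naira/ha): revenue left after direct variable costs."""
    return total_revenue - total_variable_cost

def net_profit(gm, total_fixed_cost):
    """Net profit (Naira/ha): gross margin less direct fixed costs
    (rent on land, depreciation of pumps, hoes, cutlasses, ...)."""
    return gm - total_fixed_cost

def shepherd_future(gm, total_cost):
    """Shepherd-future coefficient: gross margin over total production
    cost; a value above 1 indicates an economically efficient enterprise."""
    return gm / total_cost

# Hypothetical per-hectare figures, for illustration only:
gm = gross_margin(total_revenue=250_000, total_variable_cost=100_000)
print(net_profit(gm, total_fixed_cost=25_000))            # 125000
print(round(shepherd_future(gm, total_cost=125_000), 2))  # 1.2
```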
A structured questionnaire was used to obtain primary data about farmers' socio-economic characteristics, such as sex (male or female), family size (number of persons in the household), age (years), and educational level (number of years in school). Also, data were obtained on production variables such as farm size (ha) and farm output (N); variable costs (N) such as the costs of fertilizer, labor, planting materials, pesticide, and herbicide; and fixed costs (N) such as rent on land, as well as depreciation of capital assets such as irrigation pumps, hoes, cutlasses, and wheelbarrows. Analytical methods Descriptive and inferential statistics including frequency, percentage, gross margin, and net profit analyses were used to describe the socio-economic characteristics of the farmers and the level of profitability of tomato crop production. Multiple regression was also used to analyze the socio-economic determinants and coefficients of profitability, while the Shepherd-future coefficient was used to determine the economic efficiency of the tomato crop under irrigation and rain-fed systems. Models specification The functional form of the multiple regression function is Equation (1). Socio-economic characteristics of tomato farmers The results revealed that all the farmers under the irrigation system were male while about 72% of farmers under the rain-fed system were female (Table 1), indicating gender inequality in farmers' access to irrigation facilities, and insensitivity of the irrigation programme to the Millennium Development Goal of gender equality and women's empowerment (United Nations Development Programme, 2002, and International Food Policy Research Institute, 2006). About 62% of farmers under the irrigation system and 22% of farmers under the rain-fed system were above 50 years of age, indicating that the majority of the farmers were getting advanced in age and may lack sufficient vigor for large-scale and efficient production of vegetable crops. About 73% of farmers under the irrigation system and 68% of farmers under the rain-fed system had no formal education, indicating that the low literacy rate among the farmers might hinder the adoption of innovations, since education has been reported to influence the level of technology adoption (Chinaka et al., 1995). Despite the family size being above six for about 94% of the farmers, more than 80% of farmers under the irrigation system employed hired labor for farm operations, indicating that family labor was not a preferred option for reducing the cost of labor. About 98% of farmers under the irrigation system have more than 10 years of farming experience, suggesting the possession of the necessary farming and irrigation skills for increased productivity and efficiency. About 53% of farmers under the irrigation system were land owners while about 47% obtained their farm land on lease from government; the land ownership structure might be a decision factor with respect to investment in irrigation facilities, as farmers that owned land could guarantee continuous access to the use of irrigated land. The majority of the farmers (58% under the irrigation system and 98.4% under the rain-fed system) cultivated less than 2 ha of farm land, indicating that tomato farming is generally small scale and corroborating Kalu (2013) that tomato is produced on smallholdings in Northern Nigeria.
Relative profitability and economic efficiency of tomato crop As shown in Table 2, the gross margins obtained per hectare of the tomato crop under the irrigation and rain-fed systems were N153,500 and N68,000, respectively, and the average net profits per hectare were N128,750 and N57,050. The Shepherd-future coefficients were 1.380 and 0.986, indicating that the tomato crop was economically efficient under both irrigation and rain-fed systems. The results indicated that investment in tomato crop production under the irrigation system was worthwhile, as it yielded greater revenue in excess of operational and overhead expenses in comparison with the rain-fed system. These results corroborate Hussain and Wijerathna (2004), who linked irrigation and poverty alleviation in developing countries, and Adewumi et al. (2005), who found that tomato farming under small-scale irrigation systems was profitable in Sokoto State, where an average gross margin of N87,543.00/ha and an average net income of N77,559.80/ha, with a rate of return on investment greater than 1, were obtained. Denton and Swarup (1983) had earlier observed that the tomato crop had ceased to be the main crop during the rainy season in the Northern States because farmers sustained greater losses during the rainy season due to diseases, nematodes, insect pests, and high flower drop, resulting in lower yields and poor-quality fruits (Sabo and Dia, 2009). Kalu (2013) later observed that farmers in Northern Nigeria engaged in the production of other crops during rainy seasons while they planted tomatoes in the dry season using the irrigation system, as a strategy for reducing losses incurred in tomato farming. Gani and Omonona (2009) also confirmed greater profitability and economic efficiency for maize production under the irrigation system relative to the rain-fed system. These results indicated that investments in irrigation facilities for tomato crop production would promote higher incomes among small-scale farmers and thus contribute to poverty alleviation in the study areas. Similarly, irrigation agriculture has been linked to poverty reduction in six Asian countries (Hussain, 2007).
Determinants of profitability of tomato production The exponential regression model of the combined profit function gave a coefficient of multiple determination (R²) value of 0.91 (Table 3), implying that 91% of the variation in farmers' profit is explained by the independent variables, while the remaining 9% is accounted for by the error term (a sketch of estimating such a log-linear profit function is given below). The parameter estimates of the combined profit function for the tomato crop showed that age and planting material, as well as farm size and herbicide, were significant at the one percent and five percent levels, respectively. Farm size, level of education, cost of herbicide, and planting material have a positive correlation with farmers' profit, while age and the costs of fertilizer and labor were negatively correlated with farmers' profit. These results imply that an increase in farm size, level of education, planting material, and cost of herbicide would lead to an increase in income from tomato. Formal education could aid the managerial ability of farmers and enable them to achieve greater efficiency in tomato crop production. The age of the farmers was significantly negative, implying that older farmers tend to be less efficient. The cost of fertilizer was also negatively correlated with farm income and was not significant, suggesting that achieving greater efficiency in fertilizer utilization would not likely lead to a significant increase in the profit obtained from tomato crop production. Results also showed that the cost of labor was negative while the cost of herbicide was positively correlated with profit, meaning that an increase in the amount spent on labor will lead to reduced profit while an increase in herbicide cost will lead to increased profit from tomato production. This suggests that the substitution of herbicide for labor may reduce the cost of weeding and thus enhance the profitability of tomato production. The cost of planting material was also positively correlated with profit and significant at one percent, indicating that greater use of improved planting materials might enhance the profitability of tomato crop production. These results confirm the findings of Ayanwale and Abiola (2008) that farm size, education level, and capital inputs are critical determinants of the efficiency of vegetable production under tropical conditions. SUMMARY AND CONCLUSIONS The study aimed at assessing the extent to which investment in small-scale irrigation has contributed to the profitability of tomato farming in the North Central Southern Guinea Savanna agro-ecology of Nigeria. Specifically, the study estimated the relative profitability and efficiency of tomato crop production under irrigation and rain-fed systems, and identified the socio-economic determinants of the profitability of the tomato crop. Primary data obtained from a sample of 120 farmers were analyzed using descriptive statistics, gross margin, net profit, as well as Shepherd-future and multiple regression models.
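A log-linear (exponential) profit function of this kind can be estimated by ordinary least squares on the log of profit, as sketched below. This is our reading of "exponential regression model of the combined profit function"; the paper's exact functional form and variable list (Equation (1)) are not reproduced in the text.

```python
import numpy as np

def fit_exponential_profit(X, profit):
    """Fit ln(profit) = b0 + b1*x1 + ... + bk*xk + e by OLS.
    X: (n, k) matrix of regressors (farm size, age, education,
    input costs, ...); profit: length-n vector of positive profits."""
    y = np.log(profit)
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2  # coefficients and coefficient of determination R^2
```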
Results showed that all the respondents under the irrigation system were male farmers, indicating gender inequality in farmers' access to irrigation facilities. The majority of the farmers were above 50 years of age, with low literacy and an average family size of 8 per household. Results also showed that the gross margins per hectare under the irrigation and rain-fed systems were N153,500 and N68,000 respectively, and that the average net profits per hectare were N128,750 and N57,050, indicating the profitability of the tomato crop under both systems. The Shepherd-future coefficients were 137.98% and 98.62% respectively for the irrigation and rain-fed systems, indicating that the tomato crop was more economically efficient under the irrigation system than the rain-fed system. Parameter estimates from the combined profit function also revealed that farm size, level of education, cost of herbicide, and planting material have a positive correlation, while age of farmers and the costs of fertilizer and labor were negatively related to farmers' profit. Age of farmers and planting materials were significant at one percent, while farm size and cost of herbicide were significant at five percent. Thus, it was concluded that the tomato crop was more profitable and economically efficient under irrigation, and that increased access to land, herbicides, and improved seeds would promote the profitability of the crop in Bassa and Makurdi Local Government Areas of Kogi and Benue States, Nigeria. Land policies that increase the access of farmers (with special consideration for women farmers) to adequate land, irrigation facilities, herbicides, and good-quality improved planting material would promote gender equality in irrigation farming and the profitability of tomato production, thus contributing to food security in Nigeria and enhancing the income potential of farmers. Table 1. Socio-economic characteristics of farmers under irrigation and rain-fed systems. Table 2. Costs and returns per hectare on the tomato crop under irrigation and rain-fed systems. Note: Asterisked (*) figures are depreciated costs for the respective fixed capital items. Table 3. Parameter estimates of the exponential regression model of the combined profit function for tomato production.
Wide range refractive index sensor using a birefringent optical fiber In this article, an efficient highly birefringent D-shaped photonic crystal fiber (HB-D-PCF) plasmonic refractive index sensor is reported. It is able to work over a long, low refractive index analyte range from 1.29 to 1.36. This modified, simply structured hexagonal PCF has high birefringence in the near-infrared region. A thin gold film protected by a titanium dioxide (TiO2) layer is deposited on the D-surface of the PCF, which acts as the surface-plasmon-active layer. The sensor consists of an analyte channel on the top of the fiber. The performance of the HB-D-PCF is analyzed based on the finite element method. Both wavelength and amplitude interrogation techniques are applied to study the sensing performance of the optimized sensor. Numerical results show a wavelength sensitivity of 9245 nm/RIU and an amplitude sensitivity of 1312 RIU−1, with high resolution. Owing to its high sensitivity, long-range sensing ability, and spectral stability, the designed HB-D-PCF SPR sensor is a potential candidate for water pollution control, glucose concentration testing, biochemical analyte detection, and portable device fabrication. Introduction In the current century, the demand for portable, lightweight, long-range, highly sensitive sensors is at a peak due to rapid technical development. To satisfy this need, intense research is being performed in the field of photonic crystal fiber (PCF) based optical sensors. PCFs are advantageous over other waveguides due to their great structural flexibility, advantageous optical properties, and light-manipulating capability (De and Singh 2019a; Maji and Chaudhuri 2014; Russell 2003). Based on these charming properties, many PCF sensors have been reported which respond well in the fields of physical and analyte sensing (De et al. 2019; Pinto and Lopez-Amo 2012). On the other hand, surface plasmon resonance (SPR) is a unique optical phenomenon in which free electrons at a metal-dielectric interface are set into oscillation by p-polarized light. Maximum energy transfer takes place from the fiber guided mode to the surface plasmon mode when the frequencies of these two modes are the same. This is the resonance frequency, which shifts significantly with the changing environment (Islam et al. 2019; Rifat et al. 2016c). So, when this delicate SPR phenomenon is combined with the PCF, the response of these sensors improves several times in comparison to non-SPR PCF sensors (Dora Juan 2017). PCF is not, however, the first optical element with which plasmonic metals were combined. The first-invented, bulky, and less responsive prism-based SPR sensors (Raether 1968) were left behind by SPR sensors using optical fibers. With time, the structural and material-selection limitations of the standard optical fiber became the barrier to further sensitivity enhancement. SPR-based PCF sensors then took optical sensing technology to a new height (Dora Juan 2017) in the last decade. For the fabrication of SPR-based PCF sensors, both internal and external sensing approaches have been applied (Akowuah et al. 2012; Dora Juan 2017). In the internal sensing approach, air holes of the PCF are selectively coated with plasmonic metal and filled with analyte (Akowuah et al. 2012). In the external sensing approach, both the plasmonic metal coating and the analyte are situated at the outer surface of the PCF (Rifat et al. 2016a).
If a comparison is drawn between these two techniques, external sensing is more suitable for real-time applications and the mass production of PCF-SPR sensors, as plasmonic metal layer deposition and its thickness control, analyte filling, and probe reuse are more accessible in this case (Dora Juan 2017). Many PCF sensors incorporating external plasmonic layers have been reported to date, such as D-shaped sensors, slotted sensors (Akowuah et al. 2012), flat fiber sensors (Rifat et al. 2016d), and sensors containing microchannels (Liu et al. 2017b). It is prominently noticeable that most of these sensors have high sensitivity and spectral stability in the RI range of 1.40-1.46 due to the small RI discrepancy between the analyte and the background material, which is silica in most cases. But when the analyte RI is far less than that of the background material, the confinement of the propagating mode is less affected by the external analyte change, resulting in an inadequate response to the surrounding changes (Dora Juan 2017). To date, several PCF-SPR low RI sensors have been reported. In 2017, Liu et al. reported a PCF-based SPR sensor containing two open-ring channels (Liu et al. 2017b), which has a sensitivity of 5500 nm/RIU in the RI range 1.23-1.29. Although this sensor responds well to the analyte change, double-side polishing of the PCF is a troublesome job and makes the probe more fragile. Also, this sensor operates in the mid-infrared range, where optical sources are not easily available. The next year, Dash et al. proposed a PCF-based SPR sensor containing a microchannel (Dash et al. 2018) with a sensitivity of 5000 nm/RIU in the RI range 1.32-1.34. Although this structure is less fragile in nature, both the sensitivity and the responding analyte range are limited. After that, Islam et al. proposed a birefringent PCF-based plasmonic sensor containing two plasmonic strips (Islam et al. 2019), which has a sensitivity of 111,000 nm/RIU in the RI range 1.33-1.43. Although the previously mentioned sensor shows very high sensitivity, its spectral stability is inferior. For the real-time applicability of a sensor, not only high sensitivity but also spectral stability matters. Very recently, Wang et al. reported a polarization-independent PCF-based SPR sensor (Wang et al. 2019b) in the 1.20-1.33 RI range with a sensitivity of 7738 nm/RIU. Although it has good sensitivity, its internal structure is very complicated. Also, there are multiple SPR peaks due to higher-order mode coupling, which may create problems during the detection of a suspected analyte. From a vast literature survey, we have concluded that although a large number of D-shaped fiber sensors have been reported to date, a simply structured, long-range, biocompatible sensor is still needed. Considering this fact, based on the external sensing approach, an HB-D-PCF SPR sensor is designed and analyzed for a long, low RI analyte region. The D-surface of this fiber probe is coated with active gold and titanium dioxide (TiO2) layers. Using the commercial COMSOL Multiphysics software (FEM), the coupling conditions and loss spectra for different active layer thicknesses are numerically investigated to achieve the optimized sensor structure. Also, the performance of the optimized structure is studied thoroughly based on the wavelength and amplitude interrogation techniques. Mostly incomplete coupling is observed between the fundamental core mode and the fundamental SPP mode in the analyte RI range 1.29-1.36.
For this sensor, the maximum wavelength and amplitude sensitivities are found to be 9245 nm/RIU and 1312 RIU−1 respectively, with high resolution. The advantage of the proposed HB-D-PCF is that it shows high birefringence in the near-infrared region. This helps in steering the core-guided light toward the D-surface, resulting in more interaction with the analyte. Also, due to its moderate propagation loss, it is suitable for fabricating a practically realizable sensing probe. As the sensor operates in the near-infrared region, the penetration depth of the evanescent wave and its interaction with the analyte are greater in comparison to the visible region. Structure design and numerical modeling The schematic of the designed HB-D-PCF sensor and its 3D view are depicted in Fig. 1a, b (Fig. 1: (a) schematic of the proposed HB-D-PCF; (b) 3D view of the HB-D-PCF sensor). Air holes are distributed in a hexagonal manner in the cladding region of the PCF. Also, there are two large elliptical air holes around the core. These elliptical holes promote the birefringence of the fiber. For this structure, the distance between two successive air holes (i.e., the pitch), the circular air hole diameter, and the eccentricity of the elliptical air holes are 2.00 µm, 1.10 µm, and 0.25, respectively. The area of each elliptical hole is 3.14 µm². This HB-D-PCF is made of silica and all holes are filled with air. The optimization of the previously mentioned structural parameters can be found in detail in another of our publications (De and Singh 2019b). In this work we used the same PCF, with modifications, for further study. This PCF can be considered a strong competitor of the commercially available PCF PM-1550 by Thorlabs. Many sensors and interferometers have been reported based on this PM-1550 PCF. Similarly, our designed PCF can also be used for fabricating versatile PCF sensors; the incorporation of a plasmonic metal layer on the polished surface of this HB-D-PCF and its application as a low RI analyte sensor is one of them. In our designed HB-D-PCF plasmonic sensor, the basic fiber is polished and then the polished surface is coated with thin gold and titanium dioxide (TiO2) layers, respectively. On top of these active layers, the suspected analyte is placed. The polishing depth from the surface of the fiber, the gold layer thickness, and the TiO2 layer thickness are denoted by h, T_g, and T_t respectively. Gold is a well-known chemically stable plasmonic metal. For this structure, the high-RI transparent TiO2 layer protects the gold layer from corrosion as well as enhancing the coupling between the evanescent waves of the core-guided light and the external analyte. To realize this HB-D-PCF sensing probe practically, it is suggested to incorporate a thin (≤ 5 nm) TiO2 layer between the fiber (silica) and the gold layer. This ultra-thin TiO2 layer is incorporated as an adhesive layer, in spite of its weaker adhesion in comparison to the commonly used Cr and Ti coatings (Aouani et al. 2009), because of its advantageous spectral tunability over the previously mentioned materials (Jiao et al. 2009). In our calculation, it is treated as part of the upper TiO2 tuning layer to reduce the computational time. Moreover, the TiO2 layer brings the operating wavelength as well as the SPR frequency down into the near-infrared region. This is advantageous in several aspects: infrared light has a deeper penetration of the SPW evanescent tail, resulting in sensitivity enhancement, and infrared sources and detectors are widely available (Ziblat et al. 2006). For this HB-D-PCF, the analyte RI is varied from 1.29 to 1.36.
Also, the sensor structure is optimized based on the polishing depth and layer thicknesses, which are discussed in Sect. 5.1. The RI of air is 1. The material dispersion of silica is considered throughout the simulation using the Sellmeier equation (De and Singh 2019a); a sketch of this evaluation is given at the end of this section. The complex dielectric constant of gold is taken into account in this simulation as per the Johnson and Christy data (Johnson and Christy 1972). The dispersion relation of TiO2 is taken into account as per Mishra et al. (Mishra and Mishra 2015), Equation (1), where λ is the wavelength of the core-guided mode. FEM-based numerical simulation is used to analyze the sensor under study. Also, an anisotropic perfectly matched layer is placed around the fiber to reduce radiation loss. During the numerical analysis, the cross-section of the sensor is discretized into small triangular elements. Then Maxwell's electromagnetic equations are applied at each element. Combining all these solutions, a global matrix is formed and finally the effective refractive indices (n_eff) of the different modes are computed. During this simulation, 26,503 domain elements, 2289 boundary elements, and 175,387 degrees of freedom are solved. Fabrication prospects The designed HB-D-PCF is practically realizable in dimension. This hexagonal fiber can be fabricated using the developed stack-and-draw technique (Russell 2003) and the laser drilling technique (Becker et al. 2013). The fused preform technique can be applied to realize the elliptical air holes (Falkenstein et al. 2004). Also, the laser drilling technique (Becker et al. 2013) and the 3D printing technique (Rosales et al. 2020) can be applied to build the preform. Then, using this preform, the fiber can be fabricated by the developed stack-and-draw technique. The D-surface of this structure can be achieved by careful polishing of the fiber surface (Dora Juan 2017). For this HB-D-PCF sensor, the gold and TiO2 layers are externally coated, so this structure is free from internal or selective coating complexity. The gold and TiO2 layers can be deposited on the D-surface by applying the sputtering technique (Armelao et al. 2005) and a chemical method (Pathak et al. 2016) respectively. Additionally, only a few microliters of analyte need to be poured on top of the sensing probe. Figure 2 depicts a recommended experimental setup. Free-space coupling (Heng et al. 2016) or the recently developed connector technique (Morishima et al. 2018) can be applied to launch light into the probe, which is then routed to the OSA (optical spectrum analyzer) (Wu et al. 2017). Considering all these aspects and the currently available fiber technology, the authors are hopeful regarding the real-time applicability of this sensor. It should be kept in mind that the polishing of the PCF should be performed very carefully to avoid damage to the probe. Also, after fabrication, the probe should be attached to a rigid platform to avoid deterioration of the sensor performance. Working principle of the proposed SPR sensor and dispersion relation For this probe, light propagates along the z-direction and all modal analyses are performed in the x-y plane (Fig. 1a). The working principle of this HB-D-PCF is governed by coupled mode theory (CMT). As per this theory, the core-guided mode and the SPP mode get coupled at a particular frequency when their effective refractive indices (n_eff) are matched; this is also known as the phase matching point (Dora Juan 2017).
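For reference, the material dispersion of silica invoked above is commonly evaluated with the three-term Sellmeier relation; the sketch below uses the widely quoted Malitson coefficients, which is an assumption on our part, since the paper cites De and Singh (2019a) for its exact form.

```python
import numpy as np

def n_silica(lam_um):
    """Refractive index of fused silica from the three-term Sellmeier
    relation n^2 = 1 + sum_i B_i*lam^2/(lam^2 - C_i), lam in micrometres.
    Coefficients: Malitson (1965)."""
    B = (0.6961663, 0.4079426, 0.8974794)
    C = (0.0684043**2, 0.1162414**2, 9.896161**2)
    lam2 = lam_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

print(n_silica(1.55))  # ~1.444 at 1.55 um
```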
Here, it is worth mentioning that throughout all analyses the y-polarized fundamental core mode is considered, because the plasmonic layer situated along the y-direction breaks the symmetry of the structure for this polarization. As a result, a better interaction takes place between the evanescent wave of the y-polarized core mode and the analyte, in comparison to the x-polarized mode, in the infrared region (Dora Juan 2017). For this sensor, energy coupling takes place from the core mode to the fundamental surface plasmon polariton (SPP) mode, and the maximum loss appears in the core mode at the resonance wavelength. This loss spectrum is highly dependent on the structural parameters, the active layer thicknesses and the surrounding environment. The confinement loss of both the core-guided mode and the SPP mode can be formulated as α(dB/cm) = 8.686 × (2π/λ) × Im(n_eff) × 10⁴ (2), where Im(n_eff) is the imaginary part of the effective RI (Gangwar and Singh 2017). Figure 3 shows the dispersion relation of the core mode and the SPP mode for the optimized structure at n_a = 1.33. In Fig. 3, Re(n_eff) of the core mode, Re(n_eff) of the SPP mode and the loss of the core mode as functions of wavelength are denoted by a solid green line, a solid red line and a blue line with circles, respectively. It can be observed from Fig. 3 that at a propagating wavelength of 1.415 µm the Re(n_eff) of the core mode and the SPP mode are matched and coupling takes place between them. At this particular wavelength the loss of the core mode is maximum; this is known as the SPR wavelength. Figure 4 represents the electric field distributions of the core and SPP modes for n_a = 1.33. Figure 4a, b show the core and SPP modes at 1.380 µm (away from coupling), and Fig. 4c shows the coupled mode at 1.415 µm. It can be seen that at the coupling wavelength the electric field is prominently present in both the core and the SPP mode. The SPR wavelength suffers a significant shift with the changing environment, so the wavelength sensitivity can be calculated by tracking the SPR wavelength shift as S (nm/RIU) = Δλ_peak/Δn_a (3), where Δλ_peak is the SPR wavelength shift for an analyte RI change Δn_a. Not only the strength of the coupling but also the coupling nature between the core and SPP modes can be explained well based on CMT (Fan et al. 2015; Ma et al. 2013). Following this theory, the two coupled modes can be described by dE₁/dz = iβ₁E₁ + iҠE₂ (4) and dE₂/dz = iҠE₁ + iβ₂E₂ (5), where β₁ and β₂ are the propagation constants, E₁ = A exp(iβz) and E₂ = B exp(iβz) are the fields associated with the core and SPP modes, and z and Ҡ are the propagation length and the coupling strength, respectively. By substituting E₁ and E₂ into the solutions of Eqs. (4) and (5), one obtains β± = β_ave ± √(δ² + Ҡ²) (6), where β_ave = (β₁ + β₂)/2 and δ = (β₁ − β₂)/2. β₁, β₂ and δ are all complex quantities, with δ = δ_r + iδ_i. When δ_i > Ҡ, β₊ and β₋ have the same real part and different imaginary parts; in this case, an incomplete coupling takes place. Complete coupling takes place in the converse case. Figure 5 depicts the incomplete coupling at n_a = 1.33 for the designed HB-D-PCF. The coupling nature over the entire analyte range will be discussed in Sect. 5.2.

Influence of structural parameters on loss spectrum and optimization

The PCF shows high birefringence throughout the near-infrared region (as shown in Fig. 6 (De and Singh 2019a)), and the birefringence increases with increasing wavelength, reaching 3.93 × 10⁻³ at 1.55 µm. The broader passage along the y-direction is the reason for the larger modal spreading and hence the high birefringence (De and Singh 2019a).
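As a numerical illustration of Eqs. (2) and (3), a short Python sketch follows. The Im(n_eff) value fed in below is an invented placeholder, not the paper's FEM output; it is chosen only so that the resulting loss lands near the 239 dB/cm peak quoted later in the text.

```python
import numpy as np

def confinement_loss_dB_per_cm(lam_um, im_neff):
    # Eq. (2): alpha(dB/cm) = 8.686 * (2*pi/lambda) * Im(n_eff) * 1e4,
    # with the wavelength expressed in micrometres.
    return 8.686 * (2.0 * np.pi / lam_um) * im_neff * 1e4

def wavelength_sensitivity_nm_per_RIU(d_lam_peak_nm, d_na):
    # Eq. (3): S = delta(lambda_peak) / delta(n_a).
    return d_lam_peak_nm / d_na

print(confinement_loss_dB_per_cm(1.415, 6.2e-4))       # ~239 dB/cm (placeholder Im(n_eff))
print(wavelength_sensitivity_nm_per_RIU(92.45, 0.01))  # 9245 nm/RIU
```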
The effect of the polishing depth and the active layer thicknesses on the loss spectrum is investigated in detail to design an optimally performing HB-D-PCF sensor, and the findings are depicted in Fig. 7. [Fig. 4: electric field distribution of (a) the core-guided mode, (b) the SPP mode and (c) the coupled mode.] During this process, the parameters are optimized one at a time. For each parameter value, the losses of the core-guided modes are calculated using Eq. (2). For an SPR sensor, the thickness of the plasmonic metal layer is a crucial parameter, because the SPP wave as well as the modal coupling depend strongly on this thickness. Figure 7a shows the variation of the loss spectra for different polishing depths when Tg = Tt = 40 nm. For increasing h from 3.94 µm to 4.39 µm in steps of 0.15 µm, the SPR wavelength suffers a blue shift (indicated with a blue arrow) from 1.269 µm to 1.165 µm, with a varying loss peak. In the beginning, the loss increases with increasing polishing depth, because deeper polishing allows more interaction between the core mode and the external analyte. The loss is maximum for h = 4.24 µm, with a sharp loss peak. When the fiber is polished further, however, there is a tendency for coupling between the core mode and the first-order SPP mode (blue dash-dot line). As a result, the loss peak of the fundamental mode decreases and the sensitivity becomes lower. Also, very deep polishing makes the fiber fragile and the coupled modes desultory. So h = 4.24 µm is chosen to obtain a well-responding sensing probe. It can also be observed that the response of the probe is highly sensitive to the polishing depth, so special care should be taken during fabrication. Figure 7b shows the loss spectra of the core mode for varying Tg with h = 4.24 µm and Tt = 40 nm. For increasing Tg from 35 to 55 nm in steps of 5 nm, the SPR wavelength suffers a red shift (indicated with a red arrow) from 1.160 µm to 1.274 µm. It is noticeable in Fig. 7b that the resonance loss is maximum for the 40 nm thick gold layer, with the sharpest peak among all. This can be explained as follows: for a well-responding SPR sensing probe, the plasmonic layer has an optimum thickness. When the plasmonic layer is too thin, it is not able to accommodate a sufficient number of SPP modes due to high mechanical damping; conversely, when the plasmonic layer is too thick, the core mode is unable to interact properly with the external analyte, i.e., the penetration depth is limited by plasmonic damping (De and Singh 2020). So the optimum gold layer thickness is chosen as 40 nm for this HB-D-PCF sensor. Then the effect of Tt on the fundamental core mode loss is studied in detail with h = 4.24 µm and Tg = 40 nm, as shown in Fig. 7c. [Fig. 7: loss spectrum of the designed HB-D-PCF sensor at n_a = 1.33 with changing (a) h, (b) Tg and (c) Tt.] For increasing Tt from 35 to 60 nm in steps of 5 nm, the SPR wavelength suffers a red shift (indicated with a red arrow) from 1.140 to 1.576 µm, with a loss increment at the beginning. For Tt = 50 nm, the loss curve is very sharp with the highest loss, and the higher-order resonance loss peak is much lower than the fundamental core loss peak. It can be noticed that with increasing Tt the resonance loss peak keeps increasing until the other, higher-order loss peak appears prominently. The reason is that the TiO2 layer enhances the mode coupling as well as the light-analyte interaction, but a wider Tt screens the core-guided light from the analyte. So Tt = 50 nm is chosen as the optimized TiO2 layer thickness.
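The one-parameter-at-a-time optimization loop can be summarized in a few lines of Python. The Lorentzian function below is only a stand-in for an FEM-computed SPR loss curve, so that the peak-tracking logic is runnable; the intermediate resonance positions and the peak height are placeholders, not simulation results (only the end-point wavelengths follow the blue shift quoted in the text).

```python
import numpy as np

def toy_loss_spectrum(lam_um, lam_res_um, peak_dB_cm, width_um=0.02):
    # Lorentzian stand-in for one FEM-computed loss spectrum.
    return peak_dB_cm / (1.0 + ((lam_um - lam_res_um) / width_um) ** 2)

def find_spr_peak(lam_grid, loss):
    i = int(np.argmax(loss))
    return lam_grid[i], loss[i]

lam = np.linspace(1.0, 1.8, 1601)
# Sweep of the polishing depth h; end-point resonances from the text,
# intermediate values interpolated as placeholders:
for h, lam_res in [(3.94, 1.269), (4.09, 1.235), (4.24, 1.200), (4.39, 1.165)]:
    peak_lam, peak_loss = find_spr_peak(lam, toy_loss_spectrum(lam, lam_res, 200.0))
    print(f"h = {h:.2f} um -> SPR peak at {peak_lam:.3f} um, {peak_loss:.0f} dB/cm")
```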
Performance analysis

The loss spectrum of the optimized HB-D-PCF SPR sensor, having structural parameters h = 4.24 µm, Tg = 40 nm and Tt = 50 nm, is depicted in Fig. 8a. The proposed sensor performs best in the analyte RI (n_a) range 1.29 to 1.36. For an analyte with an RI below this range, the modes become desultory; for an analyte above this range, the wavelength response is poor. It can be observed from Fig. 8a that the SPR wavelength suffers a red shift from 1.220 to 1.672 µm with increasing n_a (indicated with a red arrow). The reason is that the n_eff of the core mode and of the SPP mode are close to the RI of the silica fiber and to n_a, respectively; an increase in n_a therefore causes a red shift of the phase-matching point and hence of the SPR wavelength (De and Singh 2020). It is also noticeable in Fig. 8a that the loss peak goes through a cycle of rise and fall over this range. In the beginning, the loss increases with increasing n_a and reaches a maximum of 239 dB/cm at 1.415 µm for n_a = 1.33; after that, the loss starts to decrease with increasing n_a. This is because of the strongest coupling between the core mode and the SPP mode at n_a = 1.33, which is also observable in Fig. 10. Throughout the studied RI range, the loss is moderate for this sensor, which is another attractive feature for its practical implementation. The performance of the optimized HB-D-PCF SPR sensor is summarized in Table 1. The SPR wavelength shift with changing n_a is depicted in Fig. 8b. [Fig. 8: a loss spectrum for various analytes; b SPR wavelength variation with changing analyte RI.] By applying the wavelength interrogation technique to this curve, the wavelength sensitivity can be calculated using Eq. (3). The highest sensitivity of this sensor is found to be 9245 nm/RIU. The observed data points are fitted very well by a second-degree polynomial with an R² value of 0.9998; the relation between the SPR peak and n_a can be presented as λ_peak = 81.0511 − 126.6890 n_a + 50.2362 n_a² (7). The figure of merit (FOM) (Fan et al. 2015) of this sensor can be calculated as FOM = S/FWHM (8); the full widths at half maximum (FWHM) for varying n_a are presented in Table 1. For this HB-D-PCF SPR sensor, the FOM can reach up to 342 at n_a = 1.33, which is reasonably high. In this designed fiber, asymmetry is induced around the core by incorporating two elliptical air holes next to the core. Due to this, a broader passage is created along the y-direction in comparison to the x-direction. Here, more interaction takes place between the fiber-guided light and the analyte because the active layers (Au and TiO2) are situated on the polished surface along the y-direction; as a result, the sensitivity is enhanced (Dash and Jha 2014). The amplitude interrogation technique is also applied to investigate the sensitivity of the proposed sensor. This technique is economically beneficial because a single source is used. When the propagating light suffers a loss of α(λ, n_a) while propagating through a probe of length L, the amplitude sensitivity can be defined as S_A(λ) = −(1/α(λ, n_a)) ∂α(λ, n_a)/∂n_a (RIU⁻¹) (9) (Dash and Jha 2014; Rifat et al. 2015). Figure 9 depicts the amplitude sensitivity of the proposed sensor for RI ranging from 1.30 to 1.36. Again, a red shift with increasing n_a is noticeable (denoted by a red arrow); the reason for this shift was mentioned previously. The maximum S_A is found to be 1312 RIU⁻¹ at n_a = 1.33. Also, the coupling characteristics between the core mode and the SPP mode are presented in Fig. 10, which shows incomplete coupling throughout the studied RI range.
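The reported wavelength sensitivity follows directly from the fit of Eq. (7); the sketch below differentiates the fitted polynomial numerically. The FWHM value in the last line is back-calculated from the quoted FOM and sensitivity and is an assumption, not a number taken from Table 1.

```python
import numpy as np

coef = np.array([50.2362, -126.6890, 81.0511])  # Eq. (7), lambda_peak in um

def lam_peak_um(na):
    return np.polyval(coef, na)

def sensitivity_nm_per_RIU(na, dna=0.01):
    # Finite-difference form of Eq. (3) applied to the fit.
    return (lam_peak_um(na + dna) - lam_peak_um(na)) / dna * 1e3

print(lam_peak_um(1.33))            # ~1.417 um, close to the 1.415 um resonance
print(sensitivity_nm_per_RIU(1.35)) # ~9.5e3 nm/RIU near the top of the RI range
print(9245.0 / 27.0)                # Eq. (8): FOM ~ 342 if FWHM ~ 27 nm (assumed)
```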
So, this HB-D-PCF SPR sensor is advantageous for detecting bio-analytes (Merwe 2000). The resolution indicates the minimum RI change detectable by a sensor and can be calculated using the following equation (Rifat et al. 2016d): R = Δn_a × Δλ_min/Δλ_peak (RIU) (10). For the wavelength interrogation method, R of the HB-D-PCF sensor is 1.08 × 10⁻⁵ RIU when Δλ_min = 0.1 nm, and for the amplitude interrogation method it is 7.62 × 10⁻⁶ RIU for a 1% detectable amplitude change.

Conclusion

An optimized SPR sensor based on a highly birefringent PCF is proposed. Its sensing performance as well as its coupling nature are investigated in the RI range 1.29-1.36 using the wavelength and amplitude interrogation techniques. This HB-D-PCF SPR sensor exhibits maximum wavelength and amplitude sensitivities of 9245 nm/RIU and 1312 RIU⁻¹, respectively. It is able to detect RI changes down to the order of 10⁻⁶, and it shows a high FOM of 342. This moderately lossy SPR sensor is practically realizable using the well-developed stack-and-draw method and can be a good competitor of commercially available D-PCF-based sensors. The long-range sensing ability of this complexity-free structure is its key advantage over many D-shaped plasmonic PCF sensors. The promising features of the proposed sensor make it a suitable candidate for integration with lab-on-fiber technology, for developing fiber interferometers, and for fabricating portable biochemical sensing devices.
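A quick numerical check of the quoted resolution figures, using Eq. (10) with the stated detection limits (0.1 nm spectral, 1% amplitude):

```python
S_wavelength = 9245.0   # nm/RIU, maximum wavelength sensitivity
S_amplitude = 1312.0    # RIU^-1, maximum amplitude sensitivity

print(0.1 / S_wavelength)   # Eq. (10): ~1.08e-5 RIU for a 0.1 nm resolvable shift
print(0.01 / S_amplitude)   # ~7.62e-6 RIU for a 1 % resolvable amplitude change
```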
6,106.8
2021-02-04T00:00:00.000
[ "Physics" ]
Spin-Valve-Controlled Triggering of Superconductivity We have studied the proximity effect in an SF1S1F2s superconducting spin valve consisting of a massive superconducting electrode (S) and a multilayer structure formed by thin ferromagnetic (F1,2) and superconducting (S1, s) layers. Within the framework of the Usadel equations, we have shown that changing the mutual orientation of the magnetization vectors of the F1,2 layers from parallel to antiparallel serves to trigger superconductivity in the outer thin s-film. We studied the changes in the pair potential in the outer s-film and found the regions of parameters with a significant spin-valve effect. The strongest effect occurs in the region of parameters where the pair-potential sign is changed in the parallel state. This feature reveals new ways to design devices with highly tunable inductance and critical current.

At present, the last type of the above-mentioned spin valves is the least studied among the variety of possible technical solutions. In contrast to the other types of tunable inductors [17,46,47], it does not require the current suppression of superconductivity and can be considered a tunable linear element. The typical configuration of such a device is shown in Figure 1. It consists of a massive S-electrode and an F1S1F2s multilayer structure formed by thin ferromagnetic (F1,2) and superconducting (S1, s) layers. Superconductivity in the outer s-film of the SF1S1F2s structure is maintained by both intrinsic superconducting correlations and the proximity effect of the massive S-electrode. The intensity of these sources of superconductivity and, consequently, the order parameter in the outer s-film, ∆s, as well as the kinetic inductance of the structure, are determined by the mutual orientation of the magnetization vectors of its F-layers. It is supposed that the presence of the bulk S-electrode leads to an increase (compared to F1S1F2s spin valves) in the difference in the magnitude of ∆s between the parallel (P) and antiparallel (AP) orientations of the magnetization vectors. However, quantitative estimates of the maximum magnitude of the possible spin-valve effect are still to be obtained, and the ranges of the SF1S1F2s structural material parameters where this maximum is reached are also unknown.

The aim of this work is to verify this conjecture by formulating the criteria for the structure to exhibit a potent spin-valve effect and to find the set of material constants that would allow the selection of suitable materials for the design of SF1S1F2s SSVs.

Model

We assume that the conditions of the dirty limit are satisfied for all the films in the SF1S1F2s multilayer. We also restrict ourselves to considering only the parallel and antiparallel orientations of the F-film magnetization vectors M1,2. Under these conditions, we can study the proximity problem in the SF1S1F2s SSV in the framework of the one-dimensional Usadel equations [48] with Kupriyanov-Lukichev boundary conditions [49] at the SF and Fs interfaces.
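The display equations (1)-(3) referred to in the next paragraph were lost in extraction. Below is a hedged reconstruction of the Φ-parametrized Usadel system commonly used with this formalism; exact sign and normalization conventions vary between papers, so this should be read as indicative of the structure of the equations rather than as the authors' exact expressions.

```latex
% Hedged reconstruction of Eqs. (1)-(3): Phi-parametrized Usadel
% equations with Kupriyanov-Lukichev boundary conditions.
\begin{align}
&\xi_p^2\,\frac{\pi T_C}{\widetilde{\omega}\,G_p}\,
 \frac{\partial}{\partial x}\!\left(G_p^2\,\frac{\partial\Phi_p}{\partial x}\right)
 -\Phi_p=-\Delta_p,
 \qquad
 G_p=\frac{\widetilde{\omega}}
 {\sqrt{\widetilde{\omega}^2+\Phi_{p,\omega}\Phi^{*}_{p,-\omega}}},
 \tag{1}\\
&\Delta_p\,\ln\frac{T}{T_C}
 +\pi T\sum_{\omega}\left(\frac{\Delta_p}{|\omega|}
 -\frac{\Phi_p G_p}{\widetilde{\omega}}\right)=0,
 \tag{2}\\
&\pm\gamma_{Bpq}\,\xi_p\,G_p\,\frac{\partial\Phi_p}{\partial x}
 =G_q\left(\Phi_q-\Phi_p\right),
 \qquad
 \widetilde{\omega}=\omega+iH_p,\quad \omega=\pi T(2n+1).
 \tag{3}
\end{align}
```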
In Equations (1)-(3), p and q are the subscripts of the corresponding layers, ω = πT(2n + 1) are the Matsubara frequencies, ∆p is the pair potential, Hp is the exchange energy of the ferromagnetic layer (Hp = 0 in nonferromagnetic materials), TC is the critical temperature of the bulk superconductor, ξp = (Dp/2πTC)^(1/2) is the coherence length, Dp is the diffusion coefficient, Gp and Φp are the normal and anomalous Green's functions, respectively, γBpq = RBpq ABpq/(ρp ξp) is a suppression parameter, RBpq and ABpq are the resistance and area of the corresponding interface, and ρp is the resistivity of the p-th film. The plus sign in (3) means that the p-th material is located on the side x_m − 0 of the interface position x_m, and the minus sign corresponds to the case where the p-th material is at x_m + 0. Hereafter, we use the normalization ħ = 1 and kB = 1. The boundary conditions at free interfaces, ∂Φ/∂n = 0, follow from the requirement that there be no current across them; here, n is the direction of the normal to the corresponding boundary.

Below, we characterize the degree of superconducting correlations in the outer s-film by the magnitude of the order parameter ∆s at its free surface and by the difference δ = ∆↑↓ − ∆↑↑ between the ∆s values calculated for the antiparallel (∆↑↓) and parallel (∆↑↑) orientations of the F-layer magnetization vectors.

The formulated boundary-value problem (1)-(3) has been solved numerically [50]. We set the temperature T = 0.5TC and the thickness of the thick S-layer dS = 5ξS. We also used the exchange energy Hp = 100TC and a suppression parameter γB = 0.3 for both F-films and for all FS boundaries. These parameters are typical of Nb interfaces with ferromagnetic alloys (see the review [51] and references therein), with a liquid-helium working temperature and a TC of about 9 K for Nb. The boundary-value problem (1)-(3) was solved using numerical methods developed for nonlinear differential equations, based on LU factorization for banded matrices with three diagonals in combination with the relaxation method [50]. The numerical algorithm was adapted to solve the Usadel equations with the superconducting order parameter treated as a given coordinate function. The exit from the iterative loop on the nonlinearities occurred when the difference between two successive iterations reached an accuracy of 10⁻⁹. The anomalous Green's functions thus computed were then used to compute a new coordinate dependence of the order parameter. The resulting dependence ∆(x) was again substituted into the Usadel equations. The exit from the iteration cycle on ∆(x) was realized when the maximum difference between two successive iterations was less than 10⁻⁶ TC.

Proximity Effect in the SF1S1F2s Trigger

We begin our analysis by studying the proximity effect in an SF1S1F2s SSV when the resistivities of all materials in the structure are the same (ρF = ρS), the coherence lengths ξF1 = ξF2, and the thicknesses of the S- and F-layers are equal to dF1 = 0.15ξS, dS1 = 0.2ξS, and dF2 = 0.25ξS.
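The nested iteration just described can be organized as follows. This is a structural sketch only: `solve_usadel_for_phi` and `gap_from_phi` are hypothetical placeholders standing in for the banded-LU Usadel solve and the self-consistency equation, respectively.

```python
import numpy as np

def solve_usadel_for_phi(delta_x, params, tol=1e-9):
    """Placeholder for the inner loop: solve the Usadel equations for the
    anomalous functions at fixed Delta(x) via LU factorization of the
    tridiagonal (banded) system, iterating the nonlinearity to `tol`."""
    raise NotImplementedError

def gap_from_phi(phi, params):
    """Placeholder for the self-consistency equation giving Delta(x)."""
    raise NotImplementedError

def self_consistent_delta(delta0_x, params, relax=0.5, tol_delta=1e-6):
    # Outer relaxation loop on the pair potential Delta(x); exit when the
    # maximum change between successive iterations falls below tol_delta * T_C.
    delta = np.asarray(delta0_x, dtype=float).copy()
    while True:
        phi = solve_usadel_for_phi(delta, params)
        new_delta = gap_from_phi(phi, params)
        if np.max(np.abs(new_delta - delta)) < tol_delta * params["Tc"]:
            return new_delta
        delta += relax * (new_delta - delta)
```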
Figure 2a-c show the dependence of the order parameter ∆s on the free surface of the s-layer, and Figure 2d the parameter δ, as functions of the s-layer thickness ds for the P (dotted lines) and AP (solid lines) orientations of the magnetization vectors M1,2, both for the SF1S1F2s structure and for the F1S1F2s structure without a bottom superconducting electrode. The curves are calculated for different ξF/ξS ratios, equal to 1, 2.5, and 2.7. As expected, the transition from the parallel to the antiparallel mutual orientation of the vectors M1,2 is accompanied by an increase in the modulus of the order parameter ∆s on the free surface of the s-film. Note that for ξF/ξS = 1 and ξF/ξS = 2.5, the switching is accompanied by a change in the sign of the order parameter.

First of all, it should be emphasized that an increase in the ratio ξF/ξS actually means a decrease in the thickness of the F-films in units of ξF. This is the reason for the observed growth of ∆s(ds) with increasing ξF/ξS at a fixed value of ds/ξS.

The calculations show that, for the largest effective thickness of the F-layers (ξF/ξS = 1) in the F1S1F2s multilayer, the superconducting correlations are completely suppressed at ds = dcr ≈ 3.4ξS, and dcr practically does not depend on the mutual orientation of the vectors M1,2. This independence is preserved even at smaller thicknesses of the F-layers, which is confirmed by calculations at ξF/ξS = 2.5 and ξF/ξS = 2.7, with dcr ≈ 2.5ξS and dcr ≈ 2.4ξS, respectively. This means that there is no standard spin-valve effect in the structure associated with a change in the effective exchange energy in the ferromagnetic part of the device acting on the s-superconductor. In other words, the superconductivity in the s-layer depends only on the proximity effect of the F2-film.

The situation in SF1S1F2s devices is completely different. The presence of a massive superconducting S-electrode, whose weakest point is moved closer to the center of the structure, gives additional support to the superconductivity in the s-film. This can be seen from the shape of the black curves in Figure 2a: the magnitude of |∆(ds)| at ξF/ξS = 1 and ds = dcr shifts from 0 up to ≈0.5Tc, and the dependence drops to 0 more smoothly beyond this thickness. It is important to note that for a given ds, the reversal of the direction of the magnetization vector of one of the F-layers to the opposite direction is accompanied by a change in the sign of ∆s, keeping the difference δ at a negligibly small level. This means that the thickness of the F2-film appears to be so large that the additional superconducting support provided by the S-layer practically does not reach the s-film and only provides a phase shift between the superconducting correlations in the S- and s-parts of the SF1S1F2s structure. Note that in the F2s proximity system, the magnitude of ∆s does not depend on the phase of the correlations, leading to δ = 0. The small deviation of δ from zero found in the calculations is due to the proximity effect between the SF1S1 and F2s parts of the SF1S1F2s structure.
A decrease in the effective thickness of the ferromagnetic layers is accompanied by an increase in the mutual influence of the SF1S1F2 and F2s blocks. Figure 2b,c show that at ξF/ξS = 2.5 and ξF/ξS = 2.7, there is a significant increase in the absolute values of δ. It is seen in Figure 2d that δ is a nonmonotonic function of the s-layer thickness, achieving a maximum at a certain thickness ds = ds_max. For ds ≤ dcr and the parallel orientation of the vectors M1,2, the superconductivity in the s-layer turns out to be almost completely suppressed and weakly dependent on ds. In this thickness region, the observed growth of δ with increasing ds is due to an increase in the superconductivity induced in the s-layer, which occurs in the AP configuration of the vectors M1,2.

At ds > dcr, there is intrinsic superconductivity in the s-film. This is manifested by the growth of the modulus of ∆s with increasing ds and a monotonic decrease in δ: the larger ds is, the stronger the intrinsic superconductivity in the s-film and the closer δ is to zero.

Figure 3 provides deeper insight into the characteristics of the proximity effect in the SF1S1F2s structure. The graphs demonstrate the spatial distribution of the modulus of the pair amplitude F(x) = Φ_{p,ω}/√(ω̃_p² + Φ_{p,ω}Φ*_{p,−ω}) (panel a) and of its phase Θ = arctan(Im(F)/Re(F)) (panel b), calculated for the first Matsubara frequency and ξF/ξS = 2.5, ds = ds_max = 2.5ξS. The results obtained for the parallel and antiparallel orientations of the F-film magnetization vectors are shown as solid black and dashed red curves, respectively; the blue rectangles indicate the areas occupied by the ferromagnetic layers. It can be seen that the presented curves are qualitatively different from the similar dependencies characterizing the proximity effect in SN and SF multilayer structures with a ferromagnetic film on the free surface. In the SN multilayer case, there should also be jumps in the modulus of F at the interfaces. However, these jumps do not lead to a change in the phase of the F(x) functions; the phase does not depend on the spatial coordinate and must coincide with the phase of the massive S-electrode.

In multilayer SF structures, the decay of superconducting correlations in the F-layers has a damped oscillatory character. This feature causes both modulus and phase jumps of the anomalous functions at the interfaces. Figure 3 shows that the amplitudes of these jumps can differ between adjacent boundaries. The condition that the normal derivative of the anomalous functions F is equal to zero selects, from all their possible spatial configurations, only those that provide an extremum of F on the outer surface of the F-layer. Such phase synchronization leads to the fact that, among all possible spatial configurations F(x), only those in which the phase difference between the massive S-electrode and the outer F-layer is equal to either 0 or π are realized.

The SF1S1F2s structure we are studying ends with an s-film in which the spatial variations are not oscillatory. In this case, the jumps of the modulus and the phase of the functions F at the internal interfaces impose a spatial dependence Θ(x) that ensures the absence of a current in the multilayer. For this reason, the value of Θ(ds) in Figure 3b is slightly different from 0 in the AP state and not equal to π in the parallel orientation of the magnetization vectors for the 1st Matsubara frequency, converging rather quickly to π as the frequency number increases.

We can draw three important conclusions from our analysis of the proximity effect. First, in the SF1S1F2s structure, there is a phase mismatch of the anomalous Green's functions on the free surface of the s-film: they do not coincide with each other and do not match the phase of the order parameter. This should be taken into account when designing any device containing such a structure as an electrode in a multilayer tunnel junction [52] or as a kinetic inductor in detectors or neuromorphic circuits [19].
Second, we have shown that with ξF/ξS = 2.5 and fixed values of the other parameters of the studied structure, a significant spin-switching effect is realized at an s-layer thickness ds ≈ dcr. Namely, the switching of the mutual orientation of the magnetization vectors of the F-layers is accompanied by a change in the order-parameter modulus from values close to zero to values comparable to ∆ in a massive S-electrode.

Third, the significant difference between dcr^P and dcr^AP proves the possibility of using the standard SF1S1F2 spin valve not only for standard S-layer superconductivity control operations but also as a tool to switch superconductivity on or off in the F2s part of a structure weakly coupled to the SF1S1F2 spin valve. Thus, the SF1S1F2 spin valve actually performs the function of a trigger that turns superconductivity on or off in the F2s part of the SF1S1F2s device.

In the following, we analyze how stable the obtained trigger effect (TE) is by examining the dependence of the maximum achievable value δ(ds) = δmax, and of the s-layer thickness ds = ds_max at which this maximum is reached, on the material and geometrical parameters of the SF1S1F2s structure.

Influence of Material Properties and Structural Dimensions on the Trigger Effect

The conclusions formulated in the previous section were based on calculations performed for ρF = ρS and three fixed ratios of ξF/ξS. To understand how stable they are with respect to variations in these ratios, we generated the maps shown in Figure 4a,b. The values of all the other parameters remained the same as in the calculation of the curves shown in Figure 2. In Figure 4a, the color palette shows the values of the parameter δmax as a function of the ρF/ρS and ξF/ξS ratios. The red color corresponds to the maximum values of δmax, the blue color to the minimum values. The dashed curve divides the plane of the parameters ρF/ρS and ξF/ξS into two regions: in the upper-right corner above this curve, the values of ∆s are positive; below this curve, they are negative.

It can be seen that in the vicinity of ξF/ξS ≈ 1 (blue area in the lower part of Figure 4a), the values of δmax are close to zero regardless of the ρF/ρS ratio. At ξF/ξS ≳ 1.8, there is a noticeable trigger effect (δmax/TC ≳ 0.5) at almost any ratio of ρF/ρS. The strongest triggering effect, δmax/TC ≈ 1, occurs in the region shown by the dashed line, where the value of ∆^P changes its sign.
The second important parameter, apparent in Figure 2, is the thickness of the s-film, ds_max, at which the trigger effect is maximal. In Figure 4b, the color palette shows the values of the parameter ds_max as a function of the ρF/ρS and ξF/ξS ratios. The red color corresponds to the maximum values of ds_max, the blue color to the minimum values. The data presented in Figure 4b allow us to determine, for fixed values of ρF/ρS, ξF/ξS, and δmax, which thickness of the s-layer should be chosen to produce the state with the maximum TE. It should be noted that although the maximum δmax amplitude occurs over a wide range of parameters, the corresponding ds_max differs at different points. For example, the maximum TE occurs for ρF = 0.3ρS, ξF = 4ξS at ds_max ≈ 3ξS, while the same TE for ρF = 4ρS, ξF = 1.8ξS is realized at ds_max ≈ 1ξS. In the design of small-scale superconducting devices using a trigger effect in control elements, this feature can be important and useful.

To evaluate the influence of geometric factors on the TE, we set dF2 = dF1 + 0.1ξS and examined the dependence of δmax on dF1/ξS for S1-layer thicknesses equal to 0.2ξS, ξS, and 2ξS. We have chosen such a relation between the thicknesses of the ferromagnetic layers to allow for the independent remagnetization of the ferromagnetic layers F1 and F2 in the pseudo-spin-valve structure, which is consistent with experimental data for SF multilayer structures [8,16]. Such a choice preserves the difference in thickness between the F1 and F2 layers. In this case, the phase addition in the F1S1F2 part of the structure varies in the case of the parallel arrangement of the magnetization vectors and remains constant in the AP case. The calculations for ρF = 2ρS, ξF = 2ξS and for ρF = ρS, ξF = ξS are shown in Figure 5. All other parameters have the same values as those used in Section 3. It can be seen that increasing the thickness of the S1 layer leads to a suppression of the maximum value of δmax and a shift in the position of the maximum to larger dF1/ξS ratios. This behavior of the δmax(dS1) dependence is quite natural.

At small and fixed thicknesses of the F-films, an increase in dS1 should be accompanied by a decrease in the influence of the F2 layer on the amplitude of the anomalous functions at the SF1 interface and a leveling of the difference between their values in the P and AP configurations. This leads to a shift of dcr^AP to larger values, to a convergence of dcr^AP and dcr^P, and to a suppression of δmax with increasing dS1. This suppression is clearly seen in Figure 5a,b. It actually means that the growth of dS1 leads to the splitting of the SF1S1F2s structure into two weakly interacting SF1S1 and S1F2s blocks. With the thicknesses of their superconductors several times larger than ξS, their own superconductivity is sufficient to synchronize the phases of the order parameter and the anomalous functions as the spatial coordinate moves away from the SF boundaries. In this limit, the parameter δmax → 0, and the values of ∆s in the P and AP configurations can only differ in sign.

For large and fixed values of dF1, superconductivity in the vicinity of the SF interfaces is strongly suppressed in the first approximation. Obviously, the thicker the S1 interlayer, the faster the recovery of superconductivity in the AP case compared to the P case. This is why the parameter δmax appears larger as dS1 becomes thicker.
In an intermediate range of dF1, the functions δmax(dF1) reach their maximum. The position of the maximum on the dF1 scale shifts to larger dF1 with increasing dS1. This tendency is quite natural. The ferromagnetic layers F1 and F2 cause the rotation of the pairing phase Θ(x). The changes in Θ(x) are simply additive in the case of zero thickness of the S1-layer. At the same time, the superconducting order in the S1-layer tends to return the phase to 0 and has a negative effect on the overall phase rotation. The larger the regions occupied by the superconductors, the thicker the F-layer that controls the final state of the SF1S1F2s structure should be. For convenience, the values of ds_max at which δmax is reached are not shown in Figure 5, as they carry no additional meaning here.

Finally, we have studied the influence of the exchange energy of the ferromagnets on the triggering parameters. Figure 6 shows maps of δmax versus the material parameters ρF/ρS and ξF/ξS for the values H = 20TC (a) and H = 50TC (b), which are weaker than the H = 100TC used previously to obtain Figure 4a. It can be seen that the general form of the dependencies is preserved. As in Figure 4a, a boundary curve divides the (ρF/ρS, ξF/ξS) plane into two regions that differ in the sign of ∆s. The absolute values of δmax depend rather weakly on H. At the same time, the position of the high-TE region shifts significantly with the change in H. While for strong ferromagnets (H = 100TC) the significant TE appears only at ξF ≥ 2ξS and places significant demands on the choice of materials, at H = 20TC the strongest TE is available in the interval 0.5ξS ≤ ξF ≤ 1.5ξS, which is quite reasonable for experimental realization. For convenience, the values of ds_max at which δmax is reached are not shown in Figure 6, as they are similar to those in Figure 4b.
Discussion and Conclusions

Our studies of the trigger effect in the SF1S1F2s structure have shown that it is very stable with respect to variations in its material and geometrical factors. The effect itself is that the SF1S1F2 spin valve controls the superconducting state not of the whole structure, but only of its F2s part. In this case, the fact that the F2s block is in a pre-critical state is significantly exploited. The critical thickness of the s-film is determined from the condition that the order parameter and the anomalous Green's functions vanish at the F2s boundary while their normal derivative vanishes at the free boundary of the superconductor; it lies in the neighborhood of about 3.5ξS. For such large values of the critical thickness, the order parameter and the anomalous Green's functions have the opportunity to increase from zero to values comparable to TC at the free surface of the superconductor with increasing spatial coordinate. We have shown that in SF1S1F2s devices there is a large difference in the critical thickness of the F2s part between the parallel and antiparallel orientations of the F-film magnetization vectors. This difference is the basis of the trigger effect we have found. It allows the superconductivity in the s-film of the structure to be switched on or off by changing the mutual orientation of the vectors M1,2. Moreover, with such switching, the absolute values of the order parameter at the free surface of the s-layer can differ only slightly from its equilibrium values. Importantly, the sign of the order parameter can be either positive or negative. This opens up new possibilities for the design of devices to control the inductance and critical current of Josephson junctions.

For example, the critical current of tunneling SF1S1F2sIS structures, where "I" denotes a layer with tunnel-type conductivity, is determined by the superconducting material parameters of those regions of the s- and S-films that are adjacent to the I-layer. By exploiting the trigger effect in the SF1S1F2s electrode of the SF1S1F2sIS structure, one can switch it between 0 and π states, while its critical current and characteristic voltage remain close to those of standard junctions in digital circuits.

At the same time, the lack of phase synchronization of the order parameter and the anomalous Green's functions on the free surface of the s-layer is an important feature that should be taken into account in the design of Josephson tunnel structures that use the trigger effect in their operation. As a consequence of this desynchronization, it is impossible to unambiguously define the phase difference of the order parameters between the s- and S-electrodes.

[Figure 1: Sketch of the SF1S1F2s structure in P (a) and AP (b) orientations of magnetization. Note that the upper layer can be transferred from the superconducting state to the normal state and vice versa by changing the mutual orientation of the magnetization vectors of the ferromagnetic layers of the structure.]
[Figure 2: Dependence of the order parameter on the s-layer free surface, ∆s (panels (a)-(c)), as a function of ds for the P and AP orientations of the vectors M1,2 (dotted and solid lines, respectively), for the SF1S1F2s and F1S1F2s structures (black and red colors, respectively). The curves are calculated for three values of the parameter ξF/ξS, equal to 1, 2.5, and 2.7 (panels (a), (b), and (c), respectively). Dependence of the parameter δ (panel (d)) on ds for ξF/ξS equal to 1, 2.5, and 2.7 (black, red, and blue colors, respectively). In panel (d), the values of δ calculated with ξF/ξS = 1 are increased by a factor of 10 for the sake of clarity. The other parameters of the SF1S1F2s (F1S1F2s) structure are dS = 5ξS, dF1 = 0.15ξS, dS1 = 0.2ξS, dF2 = 0.25ξS, H = 100TC, T = 0.5TC, γB = 0.3, ρF = ρS, ξF1 = ξF2.]

[Figure 3: Spatial distributions of the modulus of the pair amplitude F and of its phase Θ at the first Matsubara frequency (panels (a) and (b), respectively), calculated for ξF/ξS = 2.5, ds = 2.5ξS, and the P and AP orientations (black solid and red dotted lines, respectively). The blue rectangles indicate the areas occupied by the ferromagnetic layers. The inset in panel (b) shows the phase value on the free s-surface, Θs, for different Matsubara frequencies. The other parameters of the SF1S1F2s structure are dS = 5ξS, dF1 = 0.15ξS, dS1 = 0.2ξS, dF2 = 0.25ξS, H = 100TC, T = 0.5TC, γB = 0.3, ρF = ρS.]

[Figure 4: Maps over the material parameters of the ferromagnets, ρF and ξF, of the maximum difference upon magnetization reversal, δmax (a), and of the thickness ds_max at which it is achieved (b). Below the dotted line, ∆ in the s-layer is negative in the P orientation; above the line, it is positive. The other parameters of the SF1S1F2s structure are dS = 5ξS, dF1 = 0.15ξS, dS1 = 0.2ξS, dF2 = 0.25ξS, H = 100TC, T = 0.5TC, γB = 0.3.]

[Figure 5: Dependence of the maximum difference upon magnetization reversal, δmax, on the thickness of the ferromagnets dF1 for the material parameters ρF/ρS = 2, ξF/ξS = 2 (a) and ρF/ρS = 1, ξF/ξS = 1 (b), and for the case when the middle layer is a normal metal (c). Panel (d) is the map of δmax as a function of the thickness of the ferromagnets dF1 and the thickness of the superconducting middle layer dS1. In the calculations, it was always assumed that dF2 = dF1 + 0.1ξS. The other parameters of the SF1S1F2s (SF1NF2s) structure were dS = 5ξS, dS1 = 0.2ξS, H = 100TC, T = 0.5TC, γB = 0.3.]
Figure 5c shows that there is no shift in the position of δmax(dF1) as dS1 increases when the S1-film is replaced by a normal metal. Due to this substitution, the regions occupied by superconductors do not change as dS1 increases; as a result, there is no shift of the maximum in the δmax(dF1) dependencies. The color palette in Figure 5d gives the value of δmax as a function of the dS1/ξS and dF1/ξS ratios. The red color corresponds to the maximum values of δmax, the blue color to the minimum values. The data presented in Figure 5d allow us to determine, for fixed values of dS1/ξS, which thickness of the F-layer should be chosen in order to realize states having the maximum value of δmax and a positive or negative value of ∆s(ds). For convenience, the values of ds_max are again not shown.

[Figure 6: Maps over the material parameters of the ferromagnets, ρF and ξF, of the maximum difference upon magnetization reversal, δmax, at the exchange energies H = 20TC (a) and H = 50TC (b). The other parameters of the SF1S1F2s structure are dS = 5ξS, dF1 = 0.15ξS, dS1 = 0.2ξS, dF2 = 0.25ξS, T = 0.5TC, γB = 0.3.]
7,597.8
2024-01-23T00:00:00.000
[ "Physics" ]
Electron radiation–induced degradation of GaAs solar cells with different architectures The effects of electron irradiation on the performance of GaAs solar cells with a range of architectures are studied. Solar cells with shallow- and deep-junction designs, processed on the native wafer as well as into a thin film, were irradiated by 1-MeV electrons with fluences up to 1×10¹⁵ e−/cm². The degradation of the cell performance due to irradiation was studied experimentally and theoretically using model simulations, and a coherent set of minority-carrier lifetime damage constants was derived. The solar cell performance degradation primarily depends on the junction depth and the thickness of the active layers, whereas the material damage proves to be insensitive to the cell architecture and fabrication steps. The modeling study has pointed out that besides the reduction of the carrier lifetime, the electron irradiation strongly affects the quality of the hetero-interfaces, an effect scarcely addressed in the literature. It is demonstrated that a linear increase with the electron fluence of the surface recombination velocity at the front and rear hetero-interfaces of the solar cell accurately describes the degradation of the spectral response and of the dark current characteristic upon irradiation. A shallow-junction solar cell processed into a thin-film device has the lowest sensitivity to electron radiation, showing an efficiency at the end of life equivalent to 82% of the beginning-of-life efficiency.

Thanks to the possibility of applying a back reflector, thin-film devices require smaller active layer thicknesses, further reducing costs related to both the weight and the growth of the epitaxial layers. The reflectivity of the rear mirror in high-quality materials has also been proven important to maximize photon recycling, which in turn increases the open-circuit voltage and therefore the efficiency of the devices. 16-21 In these structures, the bottom subcell consists of thin-film GaAs, which has demonstrated the highest conversion efficiency among all types of single-junction solar cells. 1 In addition to back-contact design strategies, 22-24 the position of the junction in GaAs cells has been identified as an important parameter: a device with the junction closer to the bottom of n-on-p cells allows for operation in the radiative recombination regime. 20,25,26 This type of cell therefore has a higher open-circuit voltage and is preferred over the standard structure with a junction located closer to the front. But even though the deep-junction design allows for better performance at the beginning of life, its resilience in the space environment is expected to be lower than that of the conventional shallow-junction design. 27,28 The most challenging aspects for solar cells in space are the exposure to particle irradiation and the temperature cycling. Because of the copper commonly applied as the flexible carrier for thin-film GaAs cells, degradation related to copper diffusion is a potential problem for devices with this architecture. It has been shown that the effects of copper diffusion are temperature dependent, and for temperatures below 200 °C it does not reduce the cell performance drastically, provided there are no damages induced by thermal stress, such as cracks or bends. 29 The level of irradiation that cells face throughout their entire lifetime in space depends on the type of mission.
Based on the hypothesis that the permanent displacement damage produced by the incidence of charged particles is the main aspect degrading the device performance in space, the mission-equivalent damage from electrons, protons, ions, and neutrons of different energies can be expressed as a certain equivalent electron fluence. 30-33 Geostationary orbit (GEO) missions usually last for 15 years, and the damage created by the irradiation environment is equivalent to that obtained by a fluence of 1 × 10¹⁵ 1-MeV electrons/cm². For low-Earth-orbit (LEO) missions, which last approximately 10 years at a lower altitude, the equivalent fluences are five to ten times lower. The recombination centers formed in GaAs solar cells under irradiation have been studied in depth, 30,31,33-36 and the implications of the junction position for lifetime degradation have been discussed. 27,28 It is generally understood that irradiation reduces the minority carriers' diffusion length; therefore, the average distance that these carriers have to travel before reaching the p−n junction directly affects the cells' resilience to the space environment. A systematic study of different architectures, however, has not yet been reported, and there is a lack of consistency between the previously reported minority-carrier lifetime degradation constants. Furthermore, in view of the current trend of developing thin and ultra-thin radiation-hard solar cells, 14 the role of interface degradation becomes increasingly relevant. In the current study, the possible influence of electron irradiation on the hetero-interfaces of GaAs solar cells is systematically investigated. For this purpose, GaAs cells with different junction depths with respect to the hetero-interfaces, both on their native substrates and processed into thin-film devices, are considered.

Device model

The solar cells subjected to 1-MeV irradiation have been analyzed based on the 1D analytical Hovel model 39 and its extended version for thin-film solar cells with a back-side reflector. 40 The model formulation is rather general and well suited to describe different solar cell designs, provided that the material and geometry parameters are changed accordingly. A schematic depiction of the modeled structure and the corresponding variables used in this study are shown in Figure 3, where XE and XB denote the thickness of the emitter and base layers, respectively, and W denotes the width of the depleted region across the junction. The radiative lifetime is given as τ_rad = 1/(B N_AB(DE)), B being the microscopic radiative recombination rate of the semiconductor, and N_AB and N_DE the acceptor and donor doping densities in the base and emitter. For the sake of thermodynamic consistency, the coefficient B is calculated by integrating the spontaneous emission rate associated with the GaAs absorption profile used in the CPS and is found to be 6.22×10⁻¹⁰ cm³/s. Photon recycling is modeled through the photon recycling factor f_PR, calculated according to the model reported by Steiner et al. 19 For the solar cells in this study, the calculated f_PR ranges from approximately 0.78 for the substrate-based cells to 0.93 for the thin-film devices. Finally, τ_SRH characterizes the non-radiative recombination lifetime. At the microscopic level, τ_SRH results from electron-hole recombination and generation events through defect states whose rates can be modeled according to the classical Shockley-Read-Hall theory. Multiple defects can be taken into account, provided that they can be considered independent, each characterized by its own density, energy and capture time constants.
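As a small worked example of the radiative-lifetime expression above, using the B coefficient quoted in the text; the doping levels are illustrative assumptions, not the actual layer dopings of the studied cells. The last line illustrates how photon recycling lengthens the effective radiative lifetime by roughly 1/(1 − f_PR).

```python
B = 6.22e-10                   # cm^3/s, radiative coefficient from the text

for N in (1e17, 1e18):         # cm^-3, illustrative doping densities
    tau_rad = 1.0 / (B * N)    # tau_rad = 1 / (B * N)
    print(f"N = {N:.0e} cm^-3 -> tau_rad = {tau_rad * 1e9:.1f} ns")

f_PR = 0.93                    # photon-recycling factor of the thin-film cells
print(1.0 / (1.0 - f_PR))      # ~14x enhancement of the effective radiative lifetime
```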
Under the assumption of low-level injection, exploited in this work, this yields a constant effective lifetime τ_SRH, independent of the injection level and dominated by the defect states with the highest rates, i.e., those generally located close to mid-gap. 43 Therefore, in the present work, τ_SRH was used as a fitting parameter and no a priori hypothesis was made on the nature and characteristics of the defect levels. After de-embedding the possible influence of the parasitic series and shunt resistances, the dark J−V characteristic (J_dark) of the solar cell can be modeled by two diodes in parallel 25: J_dark = J01 [exp(qV/kT) − 1] + J02 [exp(qV/2kT) − 1], where J01 and J02 are the reverse saturation current densities of the 1kT and 2kT components, respectively, and q, k, and T are the electron charge, the Boltzmann constant and the temperature. The ratio between the two components of the dark current is voltage dependent, with non-radiative recombination in the perimeter of the cell and in the space-charge region dominating at low voltages (the 2kT region) and recombination in the quasi-neutral regions (QNR) dominating at higher voltages (the 1kT region). According to junction diffusion theory, J01 arises from the bulk and interface recombination of minority carriers in the base and emitter QNRs 39 and is given by the sum of a base and an emitter component, Equations (3a)-(3c), where dB and dE are the thicknesses of the QNRs of the base and emitter, with intrinsic carrier densities n²_i,B and n²_i,E, respectively. The intrinsic carrier density is computed taking into account the bandgap-narrowing effect in the highly doped regions. In particular, the bandgap narrowing significantly affects the QNR recombination current in the highly doped base and emitter layers of the DJ and SJ cells, respectively. We have assumed a bandgap shrinkage ΔEg ≈ 2 × 10⁻¹¹ N_AB^(1/2) eV for p-type GaAs 44 and ΔEg ≈ 2 × 10⁻⁸ N_DE^(1/3) eV for n-type GaAs. 45 The J02 dark-current component involves non-radiative recombination mechanisms in the space-charge region that can usually be modeled according to the Shockley-Read-Hall theory, 40,46 with analytical or semi-analytical formulations available under the assumption of a single mid-gap defect level 46 and for the more realistic case of multiple trap levels. 33

Overview of the performance at BOL and upon irradiation

The average photovoltaic cell parameters measured at BOL for the different device architectures are reported in Table 2 and compared with the simulated values. When corrected to the active area, the solar cells present efficiencies close to 25% under AM1.5G and close to 21.5% under AM0 (1367 W/m² at 28 °C). The produced thin-film solar cells mounted on a metal foil present a specific power above 1200 W/kg, and when combined with lightweight mounting systems and flexible protective coatings for space application, they show the potential to reach a module specific power above 400 W/kg. 2 In order to identify the four model parameters Sp, Sn, τp, and τn before and after irradiation, the EQE spectra and the illuminated and dark J−V parameters at BOL and upon irradiation were considered. The simulated values in Table 2 are very representative of the measured ones and indicate a good quality of the epitaxial layers. The measured Voc values are all within 1% of the simulated values, and a slightly larger variation is seen for the Jsc, probably due to a non-perfect deposition of the ARC layers, which can directly affect the experimentally obtained current.
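Equations (3a)-(3c) referenced above were lost in extraction. A hedged reconstruction is given below: these are the standard Hovel-type saturation-current densities of the two quasi-neutral regions, each bounded by an interface with recombination velocity S, consistent with the thick- and thin-QNR limits discussed in the next section, but not a verbatim copy of the paper's equations.

```latex
% Hedged reconstruction of Eqs. (3a)-(3c): Hovel-type QNR saturation
% currents with front/rear surface recombination velocities S_p, S_n.
\begin{align}
J_{01} &= J_{01,B} + J_{01,E}, \tag{3a}\\
J_{01,B} &= \frac{q\, n_{i,B}^{2}\, D_{n}}{N_{AB}\, L_{n}}\,
\frac{\tfrac{S_{n} L_{n}}{D_{n}}\cosh\!\tfrac{d_{B}}{L_{n}} + \sinh\!\tfrac{d_{B}}{L_{n}}}
     {\tfrac{S_{n} L_{n}}{D_{n}}\sinh\!\tfrac{d_{B}}{L_{n}} + \cosh\!\tfrac{d_{B}}{L_{n}}}, \tag{3b}\\
J_{01,E} &= \frac{q\, n_{i,E}^{2}\, D_{p}}{N_{DE}\, L_{p}}\,
\frac{\tfrac{S_{p} L_{p}}{D_{p}}\cosh\!\tfrac{d_{E}}{L_{p}} + \sinh\!\tfrac{d_{E}}{L_{p}}}
     {\tfrac{S_{p} L_{p}}{D_{p}}\sinh\!\tfrac{d_{E}}{L_{p}} + \cosh\!\tfrac{d_{E}}{L_{p}}}. \tag{3c}
\end{align}
```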
Note that the Jsc values of the TF cells are significantly lower than those of the SB cells because their grid coverage is much higher; when Jsc is corrected for the effective exposed area, the values are comparable. The effect that the electron irradiation has on the illuminated J−V parameters is expressed in terms of the remaining factor with respect to the BOL values, defined as Parameter/Parameter_BOL. The average experimentally determined remaining factors for Jsc and Voc are shown in Figure 4A. The modeled efficiency remaining factors match the measured values within 5% relative, as shown in Table 3. The remaining efficiencies are clearly higher for SJ than for DJ cells, and are higher for both geometries in a thin-film design than when they are substrate based. The model parameters were identified by fitting the measured data, accounting, whenever needed, for an imperfect passivation at the window and BSF interfaces.

Analysis at BOL

[Table note: At BOL, the values of τn(p) and Ln(p) are the nominal ones and only Sn and Sp are used as fitting parameters. At EOL, both the lifetimes and the surface recombination velocities are used as fitting parameters. The values presented in parentheses may be affected by a large error.]

For thin QNRs (Lp(n) ≫ dE(B)), J01 is dominated by surface recombination and Equations (3a)-(3c) reduce to J01,B ≈ q n²_i,B Sn/N_AB and J01,E ≈ q n²_i,E Sp/N_DE. The SJ design presents a strongly asymmetric doping (N_DE/N_AB = 100), and therefore the base dark-current component tends to be highly dominant (Figure 8A). On the other hand, in the DJ design the fitted components (Table 5) indicate a significant contribution to J01 from the base layer. In fact, as can be verified in Figure 8B, the TF structure presents a higher J01, which is attributed to an Sn comparatively higher than that of the SB structure, as also observed for the DJ cells. Overall, it turns out that in both DJ and SJ geometries, τp and Sp are the predominant factors affecting the EQE at BOL, while τn and Sn mainly influence the Voc through the recombination current in the p-type QNR.

Analysis of bulk and interface radiation damage

In order to simulate the cell performance after intermediate electron irradiation doses, the decrease of the SRH lifetime τ_SRH with radiation is modeled as 1/τ_SRH(φ) = 1/τ_SRH^(BOL) + K_τ φ (6), where φ is the 1-MeV electron fluence and K_τ is the lifetime damage rate (Kp and Kn for holes and electrons, respectively). The degradation of Sp and Sn is considered to be linearly dependent on the fluence and expressed by S(φ) = S^(BOL) + K_S φ (7), where S^(BOL) is the value of S at BOL and K_S is the interface damage rate, deduced from the best CPS model fit to the experimental data. In the SJ cells, most of the photogenerated free carriers therefore only have to diffuse over a short distance to the pn-junction to be drifted towards the right electrode and collected. This is particularly true for carriers generated by short-wavelength light. For longer wavelengths, a fraction of the light penetrates deeper into the cell and consequently generates some minority carriers deeper in the base, which have to travel further before reaching the pn-junction. Therefore, the short-wavelength photocurrent is mostly sustained by the thin emitter, whereas most of the long-wavelength photocurrent is supported by the thick base. Electrons generated deeper in the base have to diffuse over longer distances before being collected. Differently from the SJ cells, in DJ cells all the minority carriers photogenerated in the emitter (except for the small fraction generated deeper in the cell) have to diffuse over a long distance before reaching the pn-junction.
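A minimal sketch of the degradation laws of Equations (6) and (7) and of their effect on the diffusion length L = √(Dτ); all numerical values below are invented placeholders, not the fitted damage constants of Table 6.

```python
import numpy as np

def tau_of_fluence(tau_bol, K_tau, phi):
    # Eq. (6): 1/tau(phi) = 1/tau_BOL + K_tau * phi
    return 1.0 / (1.0 / tau_bol + K_tau * phi)

def S_of_fluence(S_bol, K_S, phi):
    # Eq. (7): S(phi) = S_BOL + K_S * phi
    return S_bol + K_S * phi

D_n = 100.0                    # cm^2/s, illustrative electron diffusivity
tau_bol, K_tau = 1e-8, 2e-7    # s and s^-1 per (e-/cm^2): placeholders
S_bol, K_S = 1e3, 1e-11        # cm/s and (cm/s) per (e-/cm^2): placeholders

for phi in (0.0, 1e14, 1e15):  # 1-MeV electron fluence in e-/cm^2
    tau = tau_of_fluence(tau_bol, K_tau, phi)
    L_um = np.sqrt(D_n * tau) * 1e4
    print(f"phi={phi:.0e}: tau={tau:.2e} s, L={L_um:.1f} um, "
          f"S={S_of_fluence(S_bol, K_S, phi):.2e} cm/s")
# Since 1/L^2 = 1/(D*tau), the diffusion-length damage constant is
# K_L = K_tau / D; this is how the K_n/K_p ratio translates into the
# factor-of-ten diffusion-length ratio discussed in the text below.
```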
Therefore, the degradation of τp and Sp in the emitter reduces the collection efficiency over the entire wavelength range in the DJ cell (see Figure 6), resulting in a significant reduction of Jsc, whereas the impact of the base parameters is restricted to the longer-wavelength region and turns out to be completely marginal. In fact, the significant asymmetry of the spectral response between the short- and long-wavelength ranges observed for the DJ cells after irradiation can only be correctly modeled if an increased Sp is assumed, supporting the approach described in Equation (7). In summary, the EQE upon irradiation is influenced mainly by the lifetime in the emitter for DJ cells and by the lifetimes in both emitter and base for SJ cells. Moreover, the observed increase in Sp in both SJ and DJ cells upon irradiation indicates a degradation of the emitter-window hetero-interface quality.

The behavior of the dark J−V characteristics upon irradiation depends on the cell geometry, as can be observed in Figure 8, and is quite insensitive to τn as long as the Ln/dB ratio remains higher than one; therefore, the electron lifetime cannot be reliably extracted. [Table 6: Values for the radiation-induced damage rates deduced from the J−V and EQE measurements of the cells.] In order to determine the damage rates, an approach was taken in which Kp and Kn were initially assumed to be the same for all configurations and then adjusted by closely fitting the model outcomes to the measured EQE, the average illuminated J−V parameters and the dark J−V curves. The fitted EOL values for τ, L and S resulting from this approach are reported in the right portion of Table 4. Because the depletion region in the DJ solar cells is much closer to the base-BSF hetero-interface, the increase in Sn, rather than the decrease in τn, is the mechanism limiting the performance for this geometry, and the values for Kn cannot be deduced; therefore, the τn values shown in Table 4 for the DJ cell are set equal to those extracted from the analysis of the SJ cells. Conversely, in the SJ solar cells the junction distance to the rear interface is so large that the increase in Sn is hardly relevant to the performance, and therefore the values for K_Sn cannot be determined. The extracted values of Kp, Kn, K_Sp and K_Sn for the various cell geometries under study are stated in Table 6. The hypothesis of a linear dependence of the recombination velocities on the irradiation fluence provides very good agreement between the measured and simulated values of Jsc, Voc and η over the whole fluence range, as seen from the detailed comparison of measured and simulated data in Figure 4 and in Table 3. Overall, the observed degradation of the performance of the solar cells upon irradiation is satisfactorily simulated assuming similar lifetime damage rates for electrons and holes for all the architectures, indicating that the material radiation damage is probably not affected by the device geometry or the fabrication steps. The identified values for the lifetime damage constants of minority electrons and holes are in good agreement with previous studies. 27,47 In particular, taking into account the carriers' diffusivities, the ratio Kn/Kp corresponds to a ratio in terms of diffusion-length damage constants of about one tenth, inferring a damage rate for the diffusion length in n-type GaAs about ten times larger than that in p-type GaAs, as theoretically predicted by Yamaguchi et al. 27
The thickness of the active layers and the position of the depletion region are shown to be the determinant parameters with regard to the radiation resistance of the cells. Thin-film devices present the big advantage of having a back reflector that allows the thickness of the active layers to be significantly reduced. Therefore, a shallow junction solar cell processed into a thin-film geometry is found to be the best structure for space applications. The MOCVD growth of hetero-interfaces such as GaAs/AlInP and GaAs/InGaP has been shown to be a challenge in the past. [49][50][51][52] A meticulous control of chemical composition, material inter-diffusion and surface segregation is necessary in order to prevent the formation of mixed compounds that reduce the abruptness of the interfaces. The fact that the interface recombination velocity is affected by irradiation indicates that it is an important aspect to be optimized, with the potential to further increase the resilience of the TF-SJ devices under irradiation.
CONCLUSIONS
The observed degradation is satisfactorily simulated with lifetime damage rates that are not related to the device geometry or fabrication steps. The incidence of electrons introduces lattice defects in the cells that act as recombination centers, directly impacting carrier lifetimes. Because for the DJ cells at EOL the hole diffusion length is smaller than the emitter thickness, the collection of generated carriers is strongly reduced, and this geometry presents a much larger decrease of J sc when compared to SJ devices. Therefore, we find that DJ cells in the present configuration are not suited for space application. Most importantly, however, the modeling study has pointed out that besides the reduction of the lifetime of the carriers, the electron irradiation strongly affects the quality of hetero-interfaces, characterized by a linear increase in the interface recombination velocity. The current study shows that the degradation of the window-emitter and base-BSF hetero-interface quality is responsible for a significant increase of the diffusion component of the dark current, and consequently for the reduction of V oc . Therefore, it is a critical aspect which deserves further investigation, since it can become the bottleneck for the optimization of the cell radiation tolerance. A shallow junction solar cell processed into a thin-film geometry is found to be the best structure for space applications, presenting an EOL average efficiency that is 82% of the BOL value. The presence of a rear reflector in the thin-film geometry allows the design of thinner devices that show the potential to further increase the BOL performance and the resilience under irradiation, provided that the interface radiation hardness can also be improved.
4,597.6
2020-01-08T00:00:00.000
[ "Physics" ]
Providing adaptive properties of the drive of a rotary drilling machine
The article is devoted to the issue of automatic adjustment of the drilling machine to operating modes close to optimal. Information about a method of automatic control of the rotary drilling process is provided. The essence of the method consists in a special design of the drive, in which the operation of the hydraulic motor and the hydraulic cylinder are connected through the working process. As a result of this connection of the hydraulic elements, the torque on the hydraulic motor controls the feed of the hydraulic cylinder. Tuning throttles provide adjustment of the modes to the required range of drilling conditions. Depending on the strength of the rock being drilled and the operating conditions of the drilling machine, the drive automatically changes the feed rate, thus ensuring maximum productivity. Several variations of this adaptive drive of the drilling machine are described. A description of the authors' development, the drive scheme, and the principle of operation is given. The advantages of the adaptive drive and its disadvantages are shown.
Introduction
Drilling machines of various capacities and productivities are widely used for various applications and, as a rule, these are rotary-type drilling machines. Various methods and principles of drilling process management are applied, with varying degrees of automation and different control methods, and these methods have certain advantages and disadvantages. Let us look at the main ones. Indicators of the drilling process, along with the physical and mechanical properties of the material being drilled, depend on two operating parameters: the feed force (or feed speed) and the rotation frequency of the drill rod. The following methods of their regulation are the most characteristic [1,2]: 1. Regulation at a constant value of the specific feed S (mm/revolution); the rotation speed and the feed pressure are regulated. 2. Maintaining a constant value of thrust (in cutting theory, free feed); the speed of rotation is adjustable, as for example in [3]. 3. Maintaining a stable speed value while adjusting the feed force. 4. Ensuring maximum drilling performance through microprocessor control of the feed force and rotation speed. 5. Maintaining a rational ratio of rotation speed and feed force depending on the strength of the drilled rock (adaptive parameter control) [4]. All of the above methods, though the latter to a lesser extent, are characterized by difficulties in drilling management when factors arise that require the participation of the operator of the drilling machine (the driller). It is known that the drilling process is influenced by external and internal factors. External factors include the variability of drilling conditions (changes in the strength of the drilled rock, rock viscosity, drilling depth, the presence of inclusions, and layers with different properties). Internal (technological) factors include wear of the cutting tool, strength limitations of the design, drive power limits, etc. Therefore, the operator-controlled drilling method is more often used, taking into account the manifestation of the influencing factors and sometimes relying on the readings of sensors monitoring the state of the drilling process [5]. For powerful drilling systems, microprocessor control of the drilling process is used, taking into account the influencing factors.
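As a small illustration of regulation method 1 above, the specific feed S (mm per revolution) ties the axial feed rate to the rotation speed; the numbers in the sketch below are hypothetical:

```python
# Relation behind regulation method 1: at a constant specific feed S (mm/rev),
# the feed (penetration) rate is proportional to the rotation speed.
# All values are hypothetical, for illustration only.

def feed_rate_mm_per_min(specific_feed_mm_rev: float, rpm: float) -> float:
    """Feed rate for a given specific feed and rotation speed."""
    return specific_feed_mm_rev * rpm

S = 2.0  # mm per revolution, held constant by the controller
for rpm in (60.0, 120.0, 240.0):
    rate = feed_rate_mm_per_min(S, rpm)
    print(f"n = {rpm:5.0f} rev/min -> feed rate = {rate:6.1f} mm/min")
```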
But these systems are very expensive and are rarely used for relatively low-power drilling machines. Obviously, there is a problem of automating the drilling process for relatively low-power drilling machines using relatively inexpensive and reliable methods and structures. At the same time, automatic control of the drilling process must be carried out taking into account these and other factors. According to the authors, taking into account the well-known publications [5,6,7,8] and the experience of using a similar adaptive drive [9], the most appropriate approach from the point of view of simplicity of solving the problem at minimal cost is the use of an adaptive hydraulic drive. A concept of drilling process management that does not involve automation and computer technologies is also known [10]; it has been developed by a number of researchers and tested in industry [11]. The adaptive drive of the drilling machine automatically adjusts the drilling parameters to operating modes that are close to rational when the working conditions of the drilling machine change. For example, in a well-known adaptive drilling machine, the feed force and rotation speed of the drill rod are regulated within certain limits when the drilling conditions change [12]. The article considers the possibility of automating the drilling process by using a drilling machine drive with two working movements, which has an adaptive structure. This drive provides a connection between thrust and rotational speed, so that the thrust force is controlled by the torque on the drill rod (the motor shaft). The theoretical basis of the adaptive drilling process is the analysis of the interaction of the cutter with the destroyed material [4]; the main points of this analysis are given below.
Elements of the theory of adaptive rock cutting
By the adaptive interaction of the cutter with the rock we mean a cutting process in which the energy supplied to the cutter is self-distributed between the mutually perpendicular components, known in coal cutting as the cutting (Z) and feed (Y) components. This implies that the cutter is moved in the direction of the Z component and is mechanically restricted from other movements, with the exception of micromovements due to backlash, bending of the holder, resilient supports and rails. The working model of such interaction (the elements of adaptive rock cutting) is shown in Fig. 1. The model of adaptive interaction of a cutter with a rock conventionally represents one of the types of adaptation of the cutting tool parameters to the variability of the strength and viscoplastic characteristics of the destroyed material. The essence of the automatic change of cutting parameters is the redistribution of the load between the cutting force (Z) and the feed force (Y). Note that the cutting force (Z) is formed mainly by the pressure of the rock on the front face of the cutter and reflects the separation of destroyed elements in front of the front face of the cutter. The feed force (Y) is formed by the resistance to indentation of the cutter blunt area from its back face. The destruction of the rock is caused by compressive loads and is accompanied by crushing and fine grinding of rock particles.
Using the above model, we consider the physical essence of the formation of the feed (Y) and cutting (Z) components in the adaptive interaction of the cutter with the destroyed material. The process of cutting rock is characterized by the following phenomena: 1. The separation of elements under the action of the cutter on the rock mass, mostly in front of the front face (surface). 2. The size and shape of the separated elements depend on how well the material characteristics match the characteristics of the cutting part and the cutting parameters. 3. The cutting process is characterized by the dissipation of energy, which goes into the deformation of the material, its crumpling and fine grinding, above all under the back face of the cutter. Minimizing the dissipation of the input energy is the way to achieve optimal cutting, with minimum specific energy consumption of the cutting process. 4. The separation of an element is preceded by a stress state in the rock mass; the magnitude and direction of this stress, the physico-mechanical properties of the fractured material and the cutting geometry determine the energy consumption for destruction. Consider the process of forming the load on the cutter in the cutting direction and in the feed direction (Fig. 1). Under the action of the forces applied to the cutter, and when it moves in the cutting direction V p , volumetric stresses are created in the material being destroyed. The shape and intensity of the stresses in the material are determined by the inhomogeneity of the material and its different deformability in the main directions studied, along the Z and Y axes, which coincide with the lines of action of the corresponding components. As is known, the inhomogeneity and fracturing of real coals and rocks have a random orientation and are manifested in the Z and Y directions in different ways. In the feed direction, along the Y axis, the action of the cutter can be described as the indentation of its blunt site into a half-space, and the resistance to this indentation is determined by the elastic-plastic properties and the compression resistance (along the Y axis) in the undercut zone [13]. The physical processes that characterize the interaction of the cutter and the rock are well studied for power cutting and form the basis for indicators of rock fracture strength [14,15]. The hardness index "contact strength" is widely used to evaluate the properties of rocks. The feed force Y can be represented by the functional relationship Y = f 1 (P k y , µ y , E y , F y ), (1) where P k y , µ y , E y and F y are the contact strength of the material being destroyed, its elastic modulus, its modulus of deformability, and the projection of the cutter blunt site on a plane perpendicular to the Y axis, respectively. The index "y" indicates that the given characteristics are taken in the direction of the Y axis. In power cutting, the resistivity and the volume of the deformed material under the blunt pad are largely determined by the size of the pad itself and the contact strength of the material; they do not depend on the front cutting angle or the sharpening angle (if the rear angle is constant) and depend only to a small extent on the rear angle of the cutter. In adaptive cutting, this specific volume pressure and the deformation under the cutting platform depend on the physical phenomena occurring on the front face of the cutter. Indeed, according to the model under study, Z = Y.
The force of resistance to the movement of the cutter along the front face is determined by the front cutting angle (δ), the strength of the rock in tension ([σ р ]) and in shear ([τ]), and the elastic-plastic properties of the destroyed material (µ, E) in the direction perpendicular to the front face [15]. Therefore, this force of resistance can be represented by the functional relationship Z 1 = f 2 (δ z , [σ р ] z , [τ] z , µ z , E z ), (2) or, with the friction resistance on the back face of the cutter taken into account, by a similar relationship with an additional friction term. The decisive factor in the formation of the load on the front face is the proportion of rock destruction by tensile forces (separation). In the works [15,16], special attention is paid to this physical phenomenon in cutting. It is the realization of this type of destruction that determines the cutting efficiency and its specific energy intensity. The listed factors, in the considered phase of interaction between the cutter and the rock in front of the front face, are momentarily in balance. In adaptive cutting, this equilibrium can be described by functional relationships between Z 1 and Y 1 , the instantaneous values of the cutting force component Z and the feed component Y. Thus, the elements of the theory of adaptive cutting reveal a significant difference between the mechanics of the interaction of the adaptive cutter with the destroyed material and the mechanics of typical cutting. In adaptive cutting, rational cutting parameters are not set externally; they are self-determined within certain limits. Therefore, a very important aspect of adaptive cutting is the description of the limits of autoregulation of the cutting process.
Development of an adaptive drive for a drilling machine
One of the main factors determining the need to regulate the parameters and modes of rotary drilling is the strength of the drilled rock. We therefore consider a method of automatic control of the feed force depending on the moment of resistance to rotation of the drill rod as the strength of the drilled rock changes; this moment depends primarily on the specific feed, which decreases with increasing rock strength and increases with decreasing strength. Here is an example of one of the first rotary drilling machines with an adaptive drive implementing adaptive cutting. This is a drilling machine with a hydraulic drive for the rotation of the drill rod and a hydraulic feed drive that feeds the rod in the direction of drilling, in which the feed force automatically changes with the torque on the rotation drive [17,18]. The self-regulation of such a machine is provided by a two-differential drive, which reduces the feed speed as the torque increases. The disadvantage of such machines is a relatively complex and expensive drive that includes a mechanical differential. Another example of a rotary drilling machine with an adaptive drive is a machine in which the feed force also changes depending on the moment of resistance to rotation of the rod. This machine uses an asynchronous motor to rotate the rod and hydraulic cylinders to feed the rod into the face. The hydraulic cylinders are fed with hydraulic fluid through an adjustable throttle controlled by the torque on the drill rod.
The technical solution under consideration provides an automatic reduction of the feed force when the moment of resistance to rotation rises above the set value, using a relatively simple design. However, the depth of regulation in this technical solution is limited: when the rod is jammed, the feed is not automatically reversed but only reduced to a minimum value. In addition, a disadvantage of the automated drive of such a drilling machine is the combined use of electric and hydraulic motors. Drilling machines with a hydraulic rotation motor and a hydraulic feed cylinder are also known. Such an adaptive drilling machine has a relatively simple design: a telescopic hydraulic cylinder and a hydraulic motor fixed to the retractable rod of the hydraulic cylinder. To give this drilling machine adaptive properties, we have proposed a modernization of its drive. The essence of the modernization is that the drive is made according to a special scheme of connecting the hydraulic elements, which forms two main hydraulic differentials. Fig. 2 shows the hydraulic drive scheme of the drilling machine with adaptive properties. As can be seen in the diagram of Fig. 2, the hydraulic cylinder 1 and the hydraulic motor 2 are connected in series, with the spool 5 in the working position (III). In this case, the hydraulic line from the output of the hydraulic motor 2 is connected to the piston cavity of the hydraulic cylinder 1 and to the inlet of the third tuning throttle 9, which has an increased hydraulic resistance and whose outlet is connected to the drain line. The rod cavity of the hydraulic cylinder 1 is also connected to the drain line through the first tuning throttle 7 and the spool 5. The second tuning throttle 8 is connected to the pressure line after the spool 6, and its outlet is connected to the rod cavity of the hydraulic cylinder 1. To visualize the principle of operation of the developed adaptive drive, we present it in the form of the bridge scheme shown in Fig. 3. As seen in Fig. 3, the first hydraulic differential is formed by the pressure line, the hydraulic motor 2 and the tuning throttle 8. The hydraulic line through the throttle 8 creates a counter-pressure in the rod cavity of the hydraulic cylinder 1. When the torque on the hydraulic motor increases, its hydraulic resistance increases, the back pressure increases, and the feed force is reduced. The second hydraulic differential is the hydraulic cylinder 1, whose output link is the hydraulic cylinder rod. The movement of the rod and the feed force developed by it are determined by the pressure difference between the piston and rod cavities. It is obvious that with such a drive scheme the feed force is regulated by the torque on the hydraulic motor, and the tuning throttles allow the degree of this influence to be adjusted. The drive thus provides adaptive adjustment to rational drilling modes when the strength of the drilled rock changes.
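A minimal lumped-parameter sketch of this torque-to-feed feedback may help to visualize the behaviour. The element numbering follows Fig. 2 and Fig. 3, but the areas, pressures and the simple linear pressure relations are hypothetical assumptions, not the actual machine parameters:

```python
# Lumped sketch of the adaptive feedback: the pressure drop across hydraulic
# motor 2 (proportional to the torque on the drill rod) reaches the rod cavity
# of cylinder 1 through tuning throttle 8 as a counter-pressure, so the feed
# force falls as the resistance torque rises. All numbers are hypothetical.

A_PISTON = 0.004    # piston-side area of cylinder 1, m^2
A_ROD    = 0.002    # rod-side (annular) area, m^2
P_SUPPLY = 10.0e6   # pump pressure, Pa
K_MOTOR  = 2.0e5    # motor pressure drop per N*m of torque, Pa/(N*m)
K_THR8   = 0.8      # fraction of motor pressure passed by throttle 8 (tuning)

def feed_force(torque_nm: float) -> float:
    """Net feed force of cylinder 1 for a given resistance torque, N."""
    dp_motor = K_MOTOR * torque_nm             # pressure drop across motor 2
    p_piston = max(P_SUPPLY - dp_motor, 0.0)   # series connection to cylinder 1
    p_rod = K_THR8 * dp_motor                  # counter-pressure via throttle 8
    return p_piston * A_PISTON - p_rod * A_ROD

for t in (5.0, 15.0, 30.0):  # harder rock -> higher resistance torque
    print(f"torque = {t:4.1f} N*m -> feed force = {feed_force(t)/1e3:6.1f} kN")
```

Raising `K_THR8` (closing throttle 8 less) strengthens the feedback, which mirrors how the tuning throttles adjust the depth of regulation in the described drive.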
3,805.2
2020-01-01T00:00:00.000
[ "Materials Science" ]
Non-Poissonian Distribution of Point Mutations in DNA
In general, for chemical reactions occurring in systems where fluctuations are not negligibly small, it is necessary to introduce a master equation for the distribution of the probability of fluctuations. It has been established that monomolecular reactions of the type A ↔ X are described by a master equation which leads to a Poisson distribution with the variance equal to the average value N0. However, the consideration of the Löwdin mechanism as autocatalytic non-linear chemical reactions such as A + X ↔ 2X and the corresponding master equation lead to a non-Poissonian probability distribution of fluctuations. In the presented work, first-order autocatalysis has been applied to Löwdin's mechanism of spontaneous mutations in DNA. When double proton transfers between complementary nucleotide bases along the chain are described by first-order autocatalytic reactions, the corresponding master equation for protons in tautomeric states becomes non-linear, and under non-equilibrium conditions this leads to the non-Poissonian distribution of spontaneous mutations in DNA. It is also suggested that the accumulation of large fluctuations of successive cooperative concerted protons along the chain may produce higher non-linearities which could have a significant impact on some biochemical processes occurring in DNA. INTRODUCTION The role of mutations in DNA is crucial for human aging, metabolic and degenerative disorders and cancer, as well as for the biological evolution of living systems (Löwdin, 1966;Friedberg et al., 2006). The point mutations caused by the substitution of one nucleotide base for another may occur during DNA replication by DNA polymerases, the performance of which is very important for genome integrity and the transmission of genetic information in all living organisms. Although DNA replicates with high fidelity, DNA polymerase can make mistakes with an average frequency in the range of 10^-7 to 10^-9 per base pair per cell division (Drake et al., 1998). Spontaneous mutations are point mutations caused by the substitution of one nucleotide base of DNA for another occurring due to endogenous factors during normal cell metabolism. The rare tautomeric hypothesis (Watson and Crick, 1953a,b;Topal and Fresco, 1976;Bebenek et al., 2011;Wang et al., 2011), originally proposed by J. Watson and F. Crick (Watson and Crick, 1953a,b), is considered as a possible mechanism of formation of spontaneous mutations, suggesting the existence of different chemical forms of nucleotide bases, so-called tautomers, which differ in the positions occupied by the protons. The origination of mutagenic tautomers has not been completely established, although three possible mechanisms are discussed: (1) intramolecular single proton transfer in DNA bases (Basu et al., 2005;Gorb et al., 2005;Zhao et al., 2006;Hovorun, 2010, 2011;Brovarets et al., 2012); (2) proton transfer in a single base assisted by water molecules (Gorb and Leszczynski, 1998a,b;Kim et al., 2007;Fogarasi, 2008;Michalkova et al., 2008;Furmanchuk et al., 2011;Markova et al., 2017); (3) Löwdin's mechanism of double proton tunneling (DPT) between complementary bases (Löwdin, 1963, 1966) (Figure 1). Based on the rare tautomeric hypothesis of Watson and Crick, Löwdin (1963, 1966) suggested that spontaneous mutations in DNA occur due to double proton transfer between two complementary bases along intermolecular H-bonds by quantum tunneling.
Thus, each proton in the connecting hydrogen bonds can be in one of two quantum states, in a deep or a shallow potential well. Following the pioneering works of Löwdin, the tautomeric base pairs (A*-T*, G*-C*) have been extensively studied in terms of their lifetime, probability of occurrence and energy by using different theoretical approaches (Florian et al., 1995;Florian and Leszczynski, 1996;Gorb et al., 2004;Villani, 2005, 2010;Ceron-Carrasco et al., 2009a,b;Brovarets et al., 2012), including DFT calculations, ab initio MP2 and quantum mechanics. All these computational findings substantially support Watson and Crick's tautomeric hypothesis of the origin of spontaneous point mutations, within the thermodynamic and kinetic criteria (Florian et al., 1994;Dabkowska et al., 2005;Brovarets et al., 2012) required for relevance to spontaneous mutations. Based on Löwdin's mechanism, the probability of formation of spontaneous mutations was calculated from the kinetics of double proton transfer during DNA replication by taking into account the 2D Marcus theory of double proton transfer (DPT) (Turaeva and Brown-Kennerly, 2015). The model makes it possible to establish the spontaneous mutation probability as a function of temperature, replication rate and solvent. It has also been established that different factors affect DNA mutations, including external electric fields (Ceron-Carrasco et al., 2014), metallic cations (Ceron-Carrasco et al., 2012) and even the genetic sequence (Ceron-Carrasco and Jacquemin, 2015), through the hydrogen transfers between the complementary nucleotide bases in DNA. In the framework of Löwdin's mechanism of spontaneous mutation formation, we can suppose that those factors directly or indirectly change the potential barrier relief for proton transfers, leading to a change in the mutation rate and in the distribution of mutations along the chain. In the present work, we will show that Löwdin's mechanism of spontaneous mutations results in a non-Poissonian distribution function of mutagenic tautomeric forms of DNA by using first-order autocatalysis. It is well known that first- and higher-order autocatalysis has been applied to different biological processes, including reproduction (Eigen, 1971;Biebricher et al., 1983;Schuster, 2018), cooperation (Higgs and Lehman, 2015;Schuster, 2018) and the chirality of biological molecules (Frank, 1953). The autocatalytic reaction of first order was applied to the reproduction of RNA and DNA molecules (Biebricher et al., 1983), giving rise to their exponential growth, with different reproducing variants leading to natural selection. The correct and error-prone RNA replication leading to point mutations was also described by the autocatalytic reaction of first order by introducing the mutation matrix with the assumption of a uniform error rate (Eigen, 1971). Catalyzed replication leading to cooperation among replicators was described by the autocatalytic reaction of second order (Higgs and Lehman, 2015). In general, according to the evolution model (Schuster, 2018), the characteristic features of first-order autocatalysis include selection and optimization, while second-order autocatalysis covers oscillations, deterministic chaos, spontaneous pattern formation and high sensitivity to stochastic phenomena caused by small particle numbers. So, competitive reproduction gives rise to selection, but catalyzed reproduction is needed for cooperation of competitors.
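The exponential growth produced by first-order autocatalysis, mentioned above for RNA and DNA reproduction, can be made concrete with a minimal deterministic sketch (all rate constants and concentrations are hypothetical):

```python
import numpy as np

# Deterministic rate equation for first-order autocatalysis, A + X -> 2X:
#   dX/dt = k * A * X  (with A held constant, X grows exponentially).
# Simple Euler integration with hypothetical values, for illustration only.

k = 0.5            # rate constant, 1/(concentration*time)
A = 1.0            # concentration of A, held constant
x = 1e-3           # initial concentration of the autocatalyst X
dt, t_end = 0.01, 10.0

for _ in range(int(t_end / dt)):
    x += k * A * x * dt

print(f"X(t={t_end}) = {x:.4f}  vs  analytic {1e-3 * np.exp(k * A * t_end):.4f}")
```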
In our model, the autocatalytic reaction of first order is applied to the process of double proton transfer in DNA, which gives a non-Poissonian distribution of the tautomeric states of hydrogens along the chain and, as a result of replication, leads to spontaneous mutations. The search for experiments on the distribution function of spontaneous mutations to verify our model leads to the Luria-Delbrück experiments on the distribution of mutants (Luria and Delbrück, 1943), which give a non-Poissonian relationship for the distribution of mutant bacterial colonies, consistent with the experimentally obtained values, in which the variance was considerably greater than the mean. We suppose that the non-Poissonian character of the distribution function of tautomeric forms of nucleotide bases can be counted toward Löwdin's mechanism of the origin of spontaneous mutation formation, since the intramolecular single proton transfer in DNA bases, described by monomolecular reactions of the type A ⇐⇒ X, is governed by a master equation leading to the Poissonian distribution, in which the variance is equal to the average value N 0 .
STOCHASTIC MODEL OF SPONTANEOUS MUTATION FORMATION
In the general case, for chemical reactions occurring in systems where fluctuations are not negligibly small, it is necessary to introduce a master equation for the distribution of probability (Gardiner, 1983;Haken, 1983). Analyzing a monomolecular reaction of the type A ⇐⇒ X on the basis of the master equation, the Poisson distribution function with average value N 0 can be derived.
[FIGURE 2 | The double-well potential for a single proton tunneling.]
It is well known that for reactions of the type A + X ⇐⇒ 2X the corresponding kinetic equations for Markovian processes become non-linear, and this peculiarity leads to non-Poissonian distribution functions. This result was proved by Nicolis and Prigogine (Nicolis and Prigogine, 1977) and attracted great scientific interest. The process of generation of spontaneous mutations in DNA through double proton transfer during replication can be treated as first-order autocatalytic chemical reactions, described by the following reaction scheme:
A ⇐⇒ X, with forward and reverse rate constants k 1 and k −1 , (1)
A + X ⇐⇒ 2X, with forward and reverse rate constants k 2 and k −2 . (2)
Here A denotes the protons along the DNA strand which are in their regular stable position, and X denotes the protons in the tautomeric state (Figure 2). The first reaction corresponds to the transfer of a proton from the regular position into the tautomeric state and vice versa, while the second reaction corresponds to the generation of tautomeric forms of nucleotide bases due to the interaction of a single proton in the tautomeric state with a regular proton and its relaxation into the single-proton tautomeric state. Denoting the number of protons in the tautomeric state by N, we can establish a master equation for the distribution of fluctuations P(N, t) in the following general form:
dP(N, t)/dt = Σ N′ [w(N, N′) P(N′, t) − w(N′, N) P(N, t)]. (3)
Here w is the transition probability per unit time. The crux of the master equation is to determine the transition rates explicitly for each chemical reaction. We investigate all transitions leading to N or going away from it. For spontaneous mutations in DNA we consider two types of transition, the tautomeric proton generation (N → N + 1, N − 1 → N) and the tautomeric proton annihilation (N + 1 → N, N → N − 1), for both chemical reactions. We will show in detail the derivation of the total transition rate for the first reaction (1).
In the direction k 1 ("birth" of a tautomeric proton X). The number of the transitions per second is equal to the occupational probability, P(N, t), multiplied by the transition probability per second, ω (N + 1, N), which is the product of the linear density of regular protons N A along the DNA strand and the reaction rate k 1 . So, for transition → N + 1, we have w (N + 1, N) = k 1 N A L. Here L is the length of DNA. In the same way the transition rate is calculated for transition N − 1 → N: For this reaction the total transition rate is received in the following form: 2. In a similar way, we can find the reverse process of the first reaction. Here the number of tautomeric protons is decreased by 1 ("death" of a metastable proton X). Considering the "death" process for transitions N +1 → N and N → N −1, in the reverse k −1 direction ("death" of X) the total transition rate is equal to: 3. For the reaction (2), in the k 2 direction ("birth" of a tautomeric proton X), the total transition rate can be written: 4. For the reaction (2) in the k −2 direction ("death" of a tautomeric proton X), the total transition rate can be written: Thus, by taking into account (Equations 4-7) the transition rates for the reactions (1) and (2) are given: The stationary solution of the master equation (3) is determined as Eigen (1971): Let us denote the probabilities of two transitions by the following expressions: The condition of extremes of stationary P (N) is Haken (1983): We can find the extreme solutions N 0 of P (N) from the following expression which is received by rewriting (Equation 14) by taking into account Equations (13), (8), and (9): The solution of Equation (15) for N 0 can be received: The positive value of N 0 can be obtained if k −2 + k −1 L < k 2 N A L. The plot of the probability P (N) represents the curve with two maxima. By assuming that the transfer of the second proton takes place almost instantaneously compared to its reverse process, which corresponds to the large ratio of k 2 /k −2 , we can receive the probability distribution function with one maximum (Figure 3): It is seen from Equation (17) that N o is positive when the transfer of proton from the regular position into the tautomeric state proceeds slower than its reverse reaction (1), while the second proton is transferred almost instantaneously (2) compared to the reverse process of the first reaction (1). If the principle of detailed balance is satisfied for both chemical reactions (1) and (2), then the Poisson distribution function for fluctuations can be deduced (Haken, 1983). We must write the detailed balance equations then for both reactions (1) and (2): Dividing Equation (18) by Equation (19) and using the explicit expressions (Equations 8 and 11), we find the relation: Here µ is a constant. By using Equation (20) we can rewrite the detailed balance equation: By inserting Equation (21) into Equation (12) at normalization condition of P (0) we find the Poisson distribution: In general, however, for non-equilibrium processes, where the detailed balance principle is not valid, the character of distribution function P (N) (Equation 12) is non-Poissonian (Gardiner, 1983;Haken, 1983). Since double proton transfers during the DNA replication are far from equilibrium, a non-Poissonian distribution for tautomeric forms should be well founded. 
These results can be used to support the mechanism of the origin of spontaneous mutations in DNA based on concerted double proton transfers between complementary nucleotide bases during DNA replication, rather than on single proton transfers in a single base.
CONCLUSION
In this work, we applied first-order autocatalysis to Löwdin's mechanism of spontaneous mutation formation, in which concerted double proton transfers in DNA lead to the formation of tautomeric forms of nucleotide bases during DNA replication. The stochastic model results in a master equation for the distribution function of tautomeric nucleotide bases, the stationary solution of which is a non-Poissonian function under the assumption that the processes of double proton transfer during DNA replication are far from equilibrium conditions. We suppose that these peculiarities of Löwdin's mechanism of spontaneous mutation formation should be taken into account in the discussion of the origin of spontaneous mutations. It is interesting to note the possibility of a local accumulation of point mutations on DNA due to a cooperation effect between tautomeric hydrogens. The cooperation effect should increase the order of the autocatalytic reactions. Such fluctuations can be possible due to the negative-U effect (Anderson, 1975), which can lead to the formation of a solitary wave, or soliton, on the chain at a certain distance between metastable proton pairs (Golo et al., 2001). The soliton can move along the chain and affect different biochemical processes occurring in DNA, including its replication, proofreading and so on. If such an effect is realized, the rate constants of the reactions (1) and (2) become functions of the distance between the point mutations along the chain. This perspective on fluctuations will be considered in our future research.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary material.
AUTHOR CONTRIBUTIONS
NT was the author of the idea of applying stochastic theory to spontaneous mutations in DNA and the main contributor to the manuscript. BO was the author of the idea of the possibility of soliton generation in DNA based on the accumulation of metastable proton fluctuations.
3,483.2
2020-01-31T00:00:00.000
[ "Physics" ]
Regulation of glycosphingolipid metabolism in liver during the acute phase response.
The host response to infection is associated with multiple alterations in lipid and lipoprotein metabolism. We have shown recently that endotoxin (lipopolysaccharide (LPS)) and cytokines enhance hepatic sphingolipid synthesis, increase the activity and mRNA levels of serine palmitoyltransferase, the first committed step in sphingolipid synthesis, and increase the content of sphingomyelin, ceramide, and glucosylceramide (GlcCer) in circulating lipoproteins in Syrian hamsters. Since the LPS-induced increase in GlcCer content of lipoproteins was far greater than that of ceramide or sphingomyelin, we have now examined the effect of LPS and cytokines on glycosphingolipid metabolism. LPS markedly increased the mRNA level of hepatic GlcCer synthase, the enzyme that catalyzes the first glycosylation step of glycosphingolipid synthesis. The LPS-induced increase in GlcCer synthase mRNA levels was seen within 2 h, sustained for 8 h, and declined to base line by 24 h. The LPS-induced increase in GlcCer synthase mRNA was partly accounted for by an increase in its transcription rate. LPS produced a 3-4-fold increase in hepatic GlcCer synthase activity and significantly increased the content of GlcCer (the immediate product of the GlcCer synthase reaction) as well as ceramide trihexoside and ganglioside GM3 (products distal to the GlcCer synthase step) in the liver.
Moreover, both tumor necrosis factor-alpha and interleukin-1beta, cytokines that mediate many of the metabolic effects of LPS, increased hepatic GlcCer synthase mRNA levels in vivo as well as in HepG2 cells in vitro, suggesting that these cytokines can directly stimulate glycosphingolipid metabolism. These results indicate that LPS and cytokines up-regulate glycosphingolipid metabolism in vivo and in vitro. An increase in GlcCer synthase mRNA levels and activity leads to the increase in hepatic GlcCer content and may account for the increased GlcCer content in circulating lipoproteins during the acute phase response.
Glycosphingolipids (GSLs) are a diverse group of complex lipids that contain the hydrophobic ceramide moiety and a hydrophilic oligosaccharide residue (1). GSLs are synthesized by the sequential addition of sugar residues to ceramide by glycosyltransferases that are specific to each glycosidic linkage (1). GSLs are present in the plasma membrane of all eukaryotic cells and are involved in a variety of important biological processes, including cell recognition, proliferation and differentiation, regulation of cell growth, signal transduction, interaction with bacterial toxins, and modulation of immune responses (reviewed in Refs. 2-4). The acute phase response represents an early and highly complex reaction of the host to infection, inflammation, or trauma and is accompanied by changes in hepatic synthesis of several acute phase proteins, such as increases in C-reactive protein and serum amyloid A (5). The acute phase response is also accompanied by several changes in lipid and lipoprotein metabolism that include stimulation of fatty acid and cholesterol synthesis and a marked increase in serum triglyceride and cholesterol levels (6). These metabolic alterations can be induced by endotoxin (lipopolysaccharide (LPS)) treatment, which mimics Gram-negative infections (7). The effects of LPS are in turn mediated by cytokines, including tumor necrosis factor-alpha (TNF-alpha) and interleukin-1beta (IL-1beta), and it has been shown that many of the metabolic effects of infection, inflammation, and trauma can be induced by these cytokines (reviewed in Ref. 7). Sphingolipids are important constituents of lipoproteins (8,9). We have shown recently that LPS and cytokines up-regulate hepatic sphingolipid synthesis in Syrian hamsters (10). LPS induced a 75% and 2.5-fold increase in hepatic sphingomyelin and ceramide synthesis, respectively, as well as a 2-fold increase in the activity of hepatic serine palmitoyltransferase (SPT), the first and rate-limiting enzyme in sphingolipid synthesis (10). LPS also increased SPT mRNA levels, suggesting that the increase in SPT activity was due to an increase in its mRNA. Finally, lipoproteins isolated from Syrian hamsters treated with LPS contained significantly higher levels of ceramide, sphingomyelin, and glucosylceramide (GlcCer) (10). It is of note that the increases in GlcCer levels (19-fold in VLDL and 7.3-fold in LDL) were greater than the increases in ceramide (3.7- and 2.2-fold in VLDL and LDL, respectively) and sphingomyelin (no change in VLDL and 84% increase in LDL), suggesting that GlcCer synthesis may be regulated at a step distal to SPT. GlcCer is the precursor of all neutral GSLs as well as the sialic acid-containing acidic GSLs or gangliosides (11) and is synthesized by the enzyme GlcCer synthase (UDP-glucose:N-acylsphingosine D-glucosyltransferase or GlcT-1; EC 2.4.1.80), which catalyzes the transfer of glucose from UDP-glucose to ceramide (11).
The cDNA for human GlcCer synthase was recently cloned and was shown to be expressed ubiquitously (12). As the first enzyme in the GSL synthetic pathway, it is likely that the regulation of GlcCer synthase will play an important role in determining the rate of formation of GSLs. The metabolism of GlcCer and the regulation of GlcCer synthase have been extensively studied in mammalian skin (13-18). However, very little is known about the factors that regulate GlcCer synthase activity and expression in tissues other than the epidermis, despite its ubiquitous expression (19,20). Because of the striking increase in GlcCer content of circulating lipoproteins following LPS treatment (10), we postulated that LPS and cytokines might increase GlcCer synthase in the liver. The present study was designed to determine whether LPS and cytokines regulate GlcCer synthase mRNA and activity both in the liver of intact animals and in HepG2 cells (a human hepatoma cell line) in vitro. We have also examined the effect of LPS on the content of several GSLs in the liver. EXPERIMENTAL PROCEDURES Materials-[14C]UDP-glucose (263 mCi/mmol) and [alpha-32P]dCTP (3,000 Ci/mmol) were obtained from NEN Life Science Products. The Multiprime DNA labeling system was purchased from Amersham Pharmacia Biotech (Amersham, United Kingdom); minispin G-50 columns were from Worthington; oligo(dT)-cellulose type 77F was from Amersham Pharmacia Biotech (Uppsala, Sweden); and Nytran membranes were from Schleicher & Schuell. Kodak XAR5 film was used for autoradiography. High performance TLC plates (silica gel 60) were obtained from Merck. Chromatography standards, including ceramide, sphingomyelin, and GlcCer, were purchased from Sigma. Ceramide trihexoside and gangliosides were obtained from Matreya (Pleasant Gap, PA). LPS (Escherichia coli 55:B5) was purchased from Difco and was freshly diluted to desired concentrations in pyrogen-free 0.9% saline (Kendall McGraw Laboratories, Inc.). Human TNF-alpha with a specific activity of 5 × 10^7 units/mg was provided by Genentech, Inc. Recombinant human IL-1beta with a specific activity of 1 × 10^9 units/mg was provided by Immunex. Human IL-6 was provided by Walter Fiers (University of Ghent, Ghent, Belgium). The cytokines were freshly diluted to desired concentrations in pyrogen-free 0.9% saline containing 0.1% human serum albumin. Animal Procedures-Male Syrian hamsters (140-160 g) were purchased from Charles River Laboratories (Wilmington, MA). The animals were maintained in a reverse-light-cycle room (3 a.m. to 3 p.m. dark, 3 p.m. to 3 a.m. light) and were provided with rodent chow and water ad libitum. Anesthesia was induced with halothane, and the animals were injected intraperitoneally with LPS, TNF-alpha, or IL-1beta at the indicated doses in 0.5 ml of 0.9% saline or with saline alone. Food was subsequently withdrawn from both control and treated animals, because LPS and cytokines induce anorexia (21). Animals were studied 2-24 h after LPS administration or 8 h after cytokine administration as indicated in the text. The doses of LPS used (0.1 to 100 µg/100 g body weight (BW)) have significant effects on triglyceride, cholesterol, and sphingolipid metabolism in Syrian hamsters (10,22,23) but are far below doses that cause death in rodents (LD 50 of approximately 5 mg/100 g BW).
Similarly, the doses of TNF-alpha and IL-1beta (17 and 1 µg/100 g BW, respectively) were chosen because previous studies have demonstrated that these doses have marked effects on serum lipid and lipoprotein levels, reproducing many of the effects of LPS on lipid metabolism in Syrian hamsters (10,24). GlcCer Synthase Activity Assay-The synthesis of GlcCer from exogenous ceramide was assayed as described (15,18). Briefly, the total assay volume of 110 µl contained 50 µM UDP-[U-14C]glucose (70 mCi/mmol), 50 mM MOPS (pH 6.5), 5 mM MnCl2, 2.5 mM MgCl2, 1 mM NADPH, 5 mM dimercaptopropanol, and 1% w/v CHAPS. The solid substrate was prepared by adsorbing 20 µg of ceramide (Type IV; Sigma) onto 1 mg of silica gel, and the reaction was initiated with the addition of 0.1-0.2 mg of microsomal protein. The incubation was carried out at 37°C for 30 min, and the reaction was terminated by the addition of ice-cold PBS. Pellets were washed four times by resuspension in PBS (4°C) and centrifugation. The final pellets were resuspended in PBS, and the radioactivity was counted by liquid scintillation spectrometry. Isolation of RNA and Northern Blotting-Total RNA was isolated by a variation of the guanidinium thiocyanate method (25) as described earlier (22). Poly(A)+ RNA was isolated using oligo(dT)-cellulose and was quantified by measuring absorption at 260 nm. Gel electrophoresis, transfer, and Northern blotting were performed as described previously (22). The uniformity of sample application was checked by UV visualization of the acridine orange-stained gels before transfer to Nytran membranes. The cDNA probe hybridization was performed as described earlier (22). The blots were exposed to x-ray films for various time periods to ensure that measurements were made on the linear portion of the curve, and the bands were quantified by densitometry. We and others (22,26) have found that LPS increases actin mRNA levels in liver 2-5-fold in rodents. TNF-alpha and IL-1beta produce a 2-fold increase in actin mRNA levels. LPS also produced a 2.6-fold increase in cyclophilin mRNA in liver (27). Thus, the mRNA levels of actin and cyclophilin, which are widely used for normalizing data, cannot be used to study LPS-induced or cytokine-induced regulation of proteins in liver. However, the differing direction of the changes in mRNA levels for specific proteins after LPS or cytokine administration, the magnitude of the alterations, and the relatively small standard error of the mean make it unlikely that the changes observed were due to unequal loading of mRNA. Measurement of Transcription-Nuclei were isolated from hamster liver using the homogenization procedure described by Clarke et al. (28). The rate of transcription in hamster liver nuclei was measured using the nuclear run-on assay as described earlier (23). Radioactive RNA bound to nylon filters was quantified by liquid scintillation counting. HepG2 Cell Culture and Cytokine Treatment-HepG2 cells (a human hepatoma cell line) were obtained from the American Type Culture Collection (Manassas, VA) and maintained in minimum essential medium (Mediatech, Inc.) supplemented with 10% fetal bovine serum under standard culture conditions (5% CO2, 37°C). Cells were seeded into 100-mm culture dishes and allowed to grow to 80% confluence.
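The normalization caveat above can be illustrated with a small numeric sketch (all densitometry values are hypothetical): when the housekeeping mRNA itself responds to LPS, dividing the target signal by it systematically understates the induction.

```python
# Hypothetical densitometry illustration of the normalization caveat:
# if LPS raises actin mRNA about 3-fold, normalizing a 20-fold target
# induction to actin would misreport it as roughly 6.7-fold.

target_control, target_lps = 1.0, 20.0   # arbitrary densitometry units
actin_control, actin_lps = 1.0, 3.0      # actin also responds to LPS

raw_fold = target_lps / target_control
normalized_fold = (target_lps / actin_lps) / (target_control / actin_control)

print(f"raw fold change:              {raw_fold:.1f}")
print(f"actin-normalized fold change: {normalized_fold:.1f}  (misleading)")
```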
Immediately before the experiment, cells were washed with calcium- and magnesium-free PBS, and the experimental medium (Dulbecco's minimum essential medium plus 0.1% bovine serum albumin) containing TNF-alpha, IL-1beta, or IL-6 at the indicated concentrations was added. Cells were incubated at 37°C for the indicated times. RNA purification and Northern blotting were performed according to previously described methods (22). Statistics-Results are expressed as mean ± S.E. Statistical significance between two groups was determined by using the Student's t test. Comparison among several groups was performed by analysis of variance, and significance was calculated by using Bonferroni's post hoc test.
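As a concrete sketch of the comparisons described under "Statistics" (using hypothetical measurements, since the raw data are not reproduced here), the calls below perform a two-group Student's t test and a one-way ANOVA followed by Bonferroni-corrected pairwise t tests:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for three groups (arbitrary units).
control = np.array([100.0, 95.0, 110.0, 105.0, 98.0])
lps     = np.array([310.0, 290.0, 345.0, 330.0, 300.0])
il1b    = np.array([220.0, 210.0, 250.0, 235.0, 215.0])

# Two groups: Student's t test.
t, p = stats.ttest_ind(control, lps)
print(f"control vs LPS: t = {t:.2f}, p = {p:.4f}")

# Several groups: one-way ANOVA, then pairwise t tests with a
# Bonferroni correction (multiply each raw p by the number of comparisons).
f, p_anova = stats.f_oneway(control, lps, il1b)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

pairs = [("control", control, "LPS", lps),
         ("control", control, "IL-1beta", il1b),
         ("LPS", lps, "IL-1beta", il1b)]
for name_a, a, name_b, b in pairs:
    _, p_raw = stats.ttest_ind(a, b)
    p_bonf = min(p_raw * len(pairs), 1.0)
    print(f"{name_a} vs {name_b}: Bonferroni-corrected p = {p_bonf:.4f}")
```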
RESULTS
We first examined the effect of LPS treatment (100 µg/100 g BW) on GlcCer synthase mRNA levels in the liver of Syrian hamsters. As shown in Fig. 1A, hepatic GlcCer synthase mRNA levels increased nearly 20-fold within 2 h following LPS administration. The LPS-induced increase in GlcCer synthase mRNA levels is sustained for 8 h, returning to base line by 24 h. The dose-response study for the LPS effect on GlcCer synthase mRNA levels was performed at 8 h after administration. The data presented demonstrate that the LPS-induced increase in hepatic GlcCer synthase mRNA levels is a very sensitive response, with the half-maximal increase seen with approximately 0.3 µg/100 g BW LPS and a maximal response at 1 µg/100 g BW (Fig. 1B). Thus, very low doses of LPS stimulate a rapid and marked increase in GlcCer synthase mRNA levels in liver.
[FIG. 1. Northern blots were probed with a GlcCer synthase cDNA as described under "Experimental Procedures." Data are presented as the percentage of control values as quantified by densitometry (mean ± S.E.); n = 5 for each group in A and 4 for each group in B. Where error bars are not visualized, they are within the marker that denotes the mean. CON indicates control (saline injected). *, p < 0.001 versus control.]
In order to examine the mechanism for the induction of GlcCer synthase mRNA, we next measured the rate of transcription in liver nuclei obtained from control and LPS-treated (100 µg/100 g BW, 4-h treatment) hamsters. The data presented in Fig. 2 demonstrate that the rate of GlcCer synthase transcription was 2.4-fold higher in liver nuclei from LPS-treated hamsters, suggesting that an increased rate of transcription partly accounts for the increase in GlcCer synthase mRNA levels after LPS treatment.
[FIG. 2. Effect of LPS on GlcCer synthase transcription rate in liver. Syrian hamsters were injected with LPS (100 µg/100 g BW), and 4 h later, livers were obtained and nuclei isolated. The rate of transcription was measured by nuclear run-on assay as described under "Experimental Procedures." Data are presented as mean ± S.E.; n = 5 for each group. *, p < 0.01 versus control.]
We then determined if the increase in GlcCer synthase mRNA levels results in a change in hepatic GlcCer synthase activity. LPS treatment produced a 3.3- and 4.2-fold increase in GlcCer synthase activity in the liver after 8 and 16 h of treatment, respectively (Fig. 3). To determine whether the increase in hepatic GlcCer synthase activity is reflected in changes in hepatic GSLs, we next measured their content in the liver after LPS treatment. The major GSLs detected in Syrian hamster liver were GlcCer, ceramide trihexoside, and ganglioside GM3. The data presented in Fig. 4A demonstrate that the content of ceramide (the immediate precursor of GlcCer) is decreased, whereas the content of GlcCer (the immediate product of the GlcCer synthase reaction) is significantly increased in the livers of LPS-treated animals. Moreover, the levels of ceramide trihexoside (Fig. 4A) and ganglioside GM3 (Fig. 4B), both distal products of GlcCer synthase, are also increased in the liver. Finally, the content of sphingomyelin, the most abundant sphingolipid in the liver, was not altered by LPS treatment (control, 25.6 ± 2.33 versus LPS 26.3 ± 1.6 µg/mg neutral lipid, p = not significant). Thus, the LPS-induced increase in GlcCer synthase activity regulates the levels of specific precursors and downstream glycosphingolipid metabolites in the liver.
Since pro-inflammatory cytokines mediate many of the metabolic effects seen during infection and inflammation (6,7), we next examined the effect of TNF-alpha and IL-1beta on hepatic GlcCer synthase mRNA levels in vivo. IL-1beta produced a 9-fold increase in GlcCer synthase mRNA levels, while TNF-alpha induced a 2-fold increase (Fig. 5). To determine whether cytokines that mediate the acute phase response could directly regulate hepatic GlcCer synthase, we determined the effect of TNF-alpha, IL-1beta, and IL-6 in HepG2 cells, a human hepatoma cell line. Both TNF-alpha and IL-1beta increased GlcCer synthase mRNA in HepG2 cells, whereas IL-6 had no effect (Fig. 6). Because IL-1beta was most effective in inducing GlcCer synthase mRNA in HepG2 cells as well as in the livers of intact animals, we performed additional studies on the effect of IL-1beta in HepG2 cells. The data presented in Fig. 7A show that IL-1beta induced a rapid increase in GlcCer synthase mRNA levels, with the earliest increase observed after 2 h and a maximal effect at 8 h. This effect was sustained for at least 24 h (Fig. 7A). Furthermore, the dose-response studies showed that the maximal increase in GlcCer synthase mRNA levels was observed at 1 ng/ml IL-1beta, and the half-maximal response occurred at approximately 0.03 ng/ml (Fig. 7B), suggesting that the increase in GlcCer synthase mRNA levels in HepG2 cells is a very sensitive response to IL-1beta. Thus, similar to the in vivo results presented above, very low doses of IL-1beta rapidly increase GlcCer synthase mRNA levels in HepG2 cells in vitro.
DISCUSSION
In the present study we demonstrate that hepatic GlcCer synthase is markedly up-regulated during the acute phase response. In Syrian hamsters, LPS administration results in a 12-20-fold increase in GlcCer synthase mRNA levels in the liver. This stimulation occurs rapidly (within 2 h) and is a very sensitive response to LPS (half-maximal response at 0.3 µg/100 g BW compared with an LD 50 of approximately 5 mg/100 g BW). Moreover, this increase in GlcCer synthase mRNA levels is accompanied by a 3-4-fold increase in hepatic GlcCer synthase activity. Preliminary studies from our laboratory also indicate that LPS acutely produces a marked increase in GlcCer synthase mRNA levels and a modest increase in GlcCer synthase activity in spleen and kidney,2 but has no effect on brain or small intestine GlcCer synthase mRNA, suggesting that the effects of LPS on GSL metabolism are tissue-specific. The modest increase in LPS-induced GlcCer synthase transcription as compared with the profound increase in mRNA levels suggests that increased transcription only partly accounts for the increase in mRNA levels, and there could be additional regulation at the post-transcriptional level. Whether GlcCer synthase mRNA degradation is altered during the acute phase response is unknown and is very difficult to evaluate in an in vivo animal model.
The marked difference in the magnitude of the increase in mRNA level versus enzyme activity also indicates additional regulatory mechanisms. It is also possible that GlcCer synthase protein may have a long half-life, and therefore an acute increase in mRNA may not be reflected in a comparable increase in protein mass or activity. The half-life of GlcCer synthase protein in vivo is not known. Additional studies are required to address these issues. LPS administration also produced significant increases in the content of several GSLs in the liver. Specifically, the levels of GlcCer, ceramide trihexoside, and ganglioside GM3 were increased by 1.8-, 2.1-, and 3.3-fold, respectively. In contrast, ceramide, the substrate of GlcCer synthase, decreased in the liver following LPS treatment. This LPS-induced decrease in ceramide content in the liver is likely due to the depletion of the ceramide pool secondary to the enhanced synthesis of GlcCer and more distal GSLs during the acute phase response.
[FIG. 3. Effect of LPS on GlcCer synthase activity in liver. Animals were injected intraperitoneally with either saline or LPS (100 µg/100 g BW). Eight and sixteen hours later the animals were killed, liver microsomes were isolated, and GlcCer synthase activity was determined as described under "Experimental Procedures." Data are presented as mean ± S.E.; n = 5 for each group. *, p < 0.001 versus control.]
A likely consequence of this increase in GlcCer synthesis in the liver is the increase in lipoprotein GlcCer content that we have reported previously (10). The effects of LPS are mediated by its ability to stimulate a variety of immune cells that increase the synthesis and secretion of a multitude of cytokines, peptides, and lipid mediators of inflammation (31). The interaction of immune cells with LPS is facilitated by a specific LPS-binding protein (32). LPS-binding protein binds with high affinity to the lipid portion of LPS and then interacts with the monocyte differentiation antigen CD14 to up-regulate the synthesis of several cytokines, including TNF-alpha and IL-1beta (33). The stimulation of acute phase protein synthesis and the changes in lipid and lipoprotein metabolism during infection and inflammation are not direct actions of LPS on the liver; rather, the hepatic effects are now known to be mediated by cytokines (34). We have shown previously that TNF-alpha and IL-1beta increase serum triglyceride and cholesterol levels, stimulate hepatic lipogenesis, and enhance VLDL production (35,36). Moreover, both TNF-alpha and IL-1beta decrease fatty acid oxidation and ketone body production in the liver (37). We have also shown that anti-TNF antibodies or IL-1 receptor antagonist block the effects of LPS on triglyceride and cholesterol metabolism (38), indicating that these cytokines mediate the metabolic effects of LPS. In the present study, we demonstrate that like LPS, both TNF-alpha and IL-1beta increase GlcCer synthase mRNA levels in vivo. Moreover, both TNF-alpha and IL-1beta increased GlcCer synthase mRNA levels in HepG2 cells, suggesting that cytokine mediators of the acute phase response can directly affect hepatocyte GlcCer synthase. It is believed that changes in the production of specific proteins during the acute phase play an important homeostatic role in the host response to infection, inflammation, and trauma (39,40). For example, increases in both C-reactive protein and complement 3 during the acute phase response may help in the opsonization of bacteria, immune complexes, and foreign particles (39,40).
Similarly, an increase in serum amyloid A during the acute phase response has been shown to redirect the metabolism of HDL from hepatocytes toward macrophages at the site of inflammation (41). We have postulated that the changes in lipid and lipoprotein metabolism that occur during the host response to infection and inflammation may also be beneficial (7,42). For example, elevations in serum lipoprotein levels may enhance the neutralization of LPS, lipoteichoic acid (a component of the cell wall of Gram-positive bacteria that is analogous to LPS), and viruses (7,42,43). Additionally, alterations in lipid metabolism in the liver and other tissues may allow for the redistribution of nutrients to support the increased energy needs of cells that are involved in host defense and tissue repair, such as macrophages and lymphocytes (7,42). The precise role that the increases in intrahepatic or lipoprotein GSLs (Ref. 10 and the present study) might have during the acute phase response is not clear at this time. However, GSLs have been implicated in the growth of lymphocytes and other cells. For example, Platt et al. (44), using an inhibitor of GlcCer synthase, reduced GSL levels in mice by 50-70% in liver and lymphoid tissue. The GSL-depleted mice grew more slowly but otherwise did not appear grossly abnormal. Examination of lymphoid tissue, spleen, and thymus revealed that these organs were reduced in size by 50% due to a decrease in cell numbers (44). More recent studies have shown that natural killer T lymphocytes express a T cell antigen receptor that recognizes GSLs as its ligand, and GSLs stimulate the proliferation of natural killer T cells (45). Thus, it is possible that the increase in GlcCer synthase activity and the production of GSLs during the acute phase response play a role in the immune response. In addition to stimulating the proliferation of T lymphocytes (45), GlcCer also stimulates the proliferation of other cell types and tissues (46-49). Conversely, a mutant B16 melanoma cell line (GM-95) that lacks GlcCer synthase activity has a slower growth rate and altered cell morphology compared with the parental cells (50), suggesting that GlcCer may be required for normal cell growth. Inhibition of GlcCer synthase activity has also been shown to decrease the renal hypertrophy that occurs in diabetic animals (51) and to decrease renal cell proliferation in vitro (52). Finally, inhibition of GlcCer synthase activity decreases keratinocyte proliferation (53). Thus, the increase in tissue GSLs during the acute phase response reported here may play a role in regulating cellular proliferation. Finally, the increase in hepatic GlcCer synthase activity and the enhanced synthesis of GSLs in the liver could result in the secretion of lipoproteins that are enriched in GSLs. We have shown recently that the content of GlcCer is increased in circulating lipoproteins during the acute phase response (10). The effect of increased glycosphingolipid content on lipoprotein function is not known. Since cell membrane GSLs are exploited as receptors by a number of microorganisms, including bacteria and viruses (54), it is possible that lipoproteins enriched in GSLs might play a protective role by either binding to microorganisms or interfering with their binding to cells. In summary, the present study demonstrates that hepatic GlcCer synthase activity and mRNA levels are acutely increased during the LPS- and cytokine-induced acute phase response.
Coupled with our previously described increase in the hepatic SPT activity and mRNA levels, these changes would allow for the increased hepatic synthesis of GSLs and higher glycosphingolipid content in lipoproteins during the acute phase response.
Experimental Assessment of IEEE 802.15.4e LLDN Mode using COTS Wireless Sensor Network Nodes

The use of Wireless Sensor Networks in industrial environments imposes critical requirements such as low latency, high reliability, and robustness. To address these constraints, the IEEE 802.15 Task Group 4e developed the amendment IEEE 802.15.4e with three new MAC operation modes: TSCH, DSME, and LLDN. This paper aims to assess the feasibility of implementing the LLDN operation mode in low-cost commercial nodes and their capacity to meet industrial applications' timing and reliability requirements. LLDN services were implemented in COTS nodes using the C programming language with Atmel-provided stacks. In order to validate this implementation, a set of experimental scenarios was conducted and the measurement results were compared to simulation results available in the state of the art.

I. INTRODUCTION

IoT technologies enable the transformation of data into information and knowledge, which can be used in different production planning and control sectors, becoming an essential pillar for Industry 4.0. In this regard, wireless sensor networks (WSN) are considered adequate infrastructures for implementing last-link communication for smart IoT devices. IEEE 802.15.4 [1] is the de facto communication standard for WSN nodes, specifying medium access control (MAC) and physical (PHY) layers that provide low-rate and low-power wireless communication. However, this standard has not adequately addressed critical requirements of industrial IoT applications such as low latency, high reliability, and robustness. The IEEE 802.15.4e amendment [2] was proposed to address these issues, including three MAC operation modes: Time Slotted Channel Hopping (TSCH), Deterministic and Synchronous Multi-channel Extension (DSME), and Low Latency Deterministic Network (LLDN). TSCH mode allows time-slotted and channel-hopping medium access, mitigating the effects of collisions, multipath fading, and interference, aiming at deterministic latency with high network throughput. DSME mode supports frequency multiplexing and adopts a beacon-enabled multi-superframe structure comprising a flexible Collision Free Period (CFP), allowing pairs of nodes to allocate collision-free Guaranteed Time Slots (GTS) for their point-to-point communication. In order to meet stringent low-latency communication requirements, the LLDN MAC service is timeslot-based and assumes a star topology. The timeslots can be either reserved for a single node or shared-group timeslots, in which nodes contend for the medium using a simplified Carrier Sense Multiple Access (CSMA) algorithm to send messages within the timeslot. Furthermore, the superframe may be divided into four distinct groups of timeslots whose sizes can be adjusted to best suit each application. Because of this customization and a newly defined 1-octet MAC frame, LLDN can achieve the short and deterministic latencies required by industry [2]. Many studies address the feasibility of using these IEEE 802.15.4e MAC services in industrial applications [3]-[9], but few include implementations in COTS-based prototypes, and among these, only the DSME and TSCH modes are covered [4]-[6], [10]. The MAC behavior of the LLDN mode was designed according to the strict time constraints imposed by industrial environments, such as 10 milliseconds latency with up to 20 nodes [2], [11].
Thus, performing real experiments allows us to assess these temporal characteristics, in addition to analyzing the feasibility of using commercially available WSN nodes to ensure the proper functioning of the LLDN mode in an industrial WSN. This paper aims to carry out an analysis of the feasibility of implementing the LLDN mode of the IEEE 802.15.4e protocol in low-cost commercial nodes, assessing compliance with the requirements imposed by the industrial environment as well as identifying possible improvements in the mode of operation and hardware limitations. This paper is structured as follows: Section II presents state-of-the-art approaches for the LLDN MAC operation mode; Section III briefly describes the LLDN MAC operation mode; Section IV describes an implementation of LLDN in low-cost nodes; and Section V presents the experimental evaluation. Finally, conclusions are drawn in Section VI.

In [12], a delay-bound model based on network calculus is presented to predict the worst-case timing performance of LLDN. The authors concluded that LLDN is best suited to low-latency and dense applications. In [13], the same authors used this model to further analyze the network throughput, varying the arrival rate and delay as functions of the number of timeslots and active nodes. They concluded that the delay of LLDN is proportional to the number of timeslots and nodes. There is no definition of the number of retransmission timeslots that should be used in an LLDN network. Accordingly, [15] and [16] proposed different methods to optimize retransmission timeslot usage. In [15], the authors proposed a novel retransmission mode, where LLDN devices that have a poor communication link with the coordinator have their messages retransmitted by a static, mains-powered retransmission node. The approach is validated through a probabilistic analysis and showed an improvement in network reliability and power consumption. To maximize the retransmission timeslots available in the LLDN superframe, Willig et al. [16] proposed four schemes. They presented simulation assessments comparing their proposed schemes with the LLDN standard. To address the problem of collisions due to simultaneous channel assessment or hidden terminals, the authors in [17] proposed a new channel access mechanism. The mechanism is similar to CSMA-CA; however, a new signal is defined to act as a preamble to sense the medium and identify whether any other node is performing CCA. Thus, collisions are avoided in the case of two nodes starting CSMA-CA at the same time. This method requires all nodes to have a second antenna, as one of them is dedicated to listening for the intended signal. The authors also presented a Markov model to analyze and optimize the number of retransmission timeslots. Both analytical and simulation assessments were presented, showing better performance than the standard in terms of throughput, energy consumption, delay, and reliability. In [18], the authors analyzed the LLDN mode and identified a set of limitations that influence network survivability. The main problem found was the single-channel operation, which increases the collision rate, complicating the association process. The authors concluded that it is necessary to allocate channels dynamically for the uplink timeslots, keeping the management and bidirectional timeslots on a single channel. In [19], in order to decrease the Configuration state time and increase its reliability, the authors proposed the use of Reserved Management Timeslots.
They also proposed Downlink Reserved Timeslots, to be used when the bidirectional slots are enabled, and an approach to defining the size of each type of timeslot. Through mathematical analysis, the results demonstrated that the modifications increase the determinism of the Configuration state and lower the worst-case latency of the Online state. In [22], a MAC-level scheme is proposed to incorporate emergency communications into LLDN. A mechanism to enable emergency data requests is presented, where emergency nodes can request immediate access to transmit. When a request is accepted by the coordinator, a message is broadcast, and the scheduled communication is stopped until another message is broadcast advertising the end of the emergency period. A mathematical model is presented for the network with the proposed enhancement, and the paper concludes that the approach enables a reduction in message delivery delay of up to 90% compared to the standard LLDN for emergency-enabled nodes. A Markov chain model of the LLDN mode during the association process is presented in [20]. From this model, the authors proposed the mobility-aware LLDN (MA-LLDN), based on two principles: defining the notion of a passive beacon used by the proxy coordinator and modifying the LLDN superframe. They presented a detailed analysis of the network and concluded that the proposed MA-LLDN model was able to reduce dissociation by 75% and increase the coverage area through channel-hopping mechanisms. In [15], a modification of the standard topology, named Extended Topology Mode (ETM), is presented with the goal of enabling nodes outside the range of the coordinator node to send their data. In the proposal, a relay node is allowed to send two data packets simultaneously through opportunistic coding: in a retransmission timeslot, an ETM relay node can send, at the same time, the beacon for the outside device and the previously received data packet to the coordinator. The authors presented a mathematical analysis of the energy consumption and message-loss probability of both methods and compared them with the LLDN standard. In [21], the authors proposed a priority-aware multichannel adaptive framework to solve problems such as the low scalability of the network. A novel message is proposed, consisting of a priority field and a payload. This communication is based on a hierarchical topology, so that multiple sub-networks can work in parallel. After receiving the beacon from the LLDN coordinator, each sub-coordinator switches to its respective channel, sends a beacon to all nodes on its channel, and receives their messages. In its respective timeslot, the sub-coordinator changes back to the LLDN coordinator channel and forwards the received messages in order of priority. The authors presented an analysis of the response time, scalability, and reliability of their proposed framework. In [23], an optimized LLDN extension is proposed for nodes with Ultra-Wideband (UWB) radios. Despite the proposal not being compatible with the LLDN mode, the paper presents an interesting implementation in COTS nodes with UWB transceivers. Analyzing the state of the art of the LLDN operation mode, there are several works that present mathematical analyses of the protocol [12]-[15], [17], [19]-[22] or simulation assessments [15]-[18], [21].
To the best of our knowledge, the study in this paper is the first that provides an experimental assessment of the complete LLDN mode (including the overheads in its internal state changes) using COTS WSN nodes. None of the analyzed works showed a real WSN experiment that addresses the feasibility of implementation in low-cost devices under the requirements imposed by industrial environments. Therefore, this paper contributes to the state of the art by filling this gap.

III. LLDN OVERVIEW

The industrial automation domain usually consists of a large number of devices to monitor and control factory production [11]. Many of these devices are mounted on robots, machinery, and transport equipment, where it may be necessary to use low-cost wireless sensor nodes. In these application scenarios, saving energy is not an essential issue, but low latency is invariably required. The LLDN mode was specifically designed to meet strict time requirements. Features like 10-millisecond latency capacity, TDMA-based access, and retransmission timeslots provide a deterministic and robust network that meets the critical time requirements of industrial applications [15], [21]. This section describes the LLDN MAC operation mode, including the network topology, the different configurations of the superframe, and the transmission states of LLDN.

A. NETWORK TOPOLOGY

The LLDN mode assumes a star topology, in which a PAN coordinator node is responsible for the configuration and maintenance of the network. Two types of messages are defined: (i) uplink messages forwarded from nodes to the coordinator; and (ii) downlink messages sent by the coordinator. In typical scenarios, sensor-only nodes send their data to the coordinator through uplink timeslots, whereas actuator nodes, in addition to sending their data, can also receive actuation signals through downlink timeslots.

B. SUPERFRAME

The LLDN superframe was designed with a minimal and static structure [24]. Its period is determined according to the number of nodes already associated after the Discovery and Configuration transmission states. In the Discovery transmission state, new devices scan different channels until they detect an LLDN coordinator advertising the Discovery state. After that, a series of messages is exchanged to establish communication between the node and the coordinator. The Configuration transmission state is responsible for allocating timeslots to nodes and adjusting the superframe size. The superframe structure is based on a TDMA method. Each timeslot may have an assigned node, which is the only device allowed to transmit in that timeslot. It is also possible to share a timeslot among multiple devices (shared group timeslot); in these shared group timeslots, devices transmit their messages using a simplified CSMA-CA algorithm. Figure 1 shows two example superframes, with the Group Acknowledgment (GACK) timeslot disabled and enabled (Figure 1a and Figure 1b, respectively), highlighting the four types of timeslots:

• Beacon timeslot: It is reserved for the LLDN coordinator and is always situated at the beginning of a superframe. The beacon frame is responsible for synchronizing devices, announcing to nodes the start of a superframe. It also contains important information such as an auxiliary security header, the presence of a GACK timeslot, and the direction of the bidirectional timeslots.
• Management timeslots: There are two management timeslots per superframe, one for uplink and one for downlink, allowing nodes to transmit and receive management commands. These timeslots are optional; they are implemented as shared group timeslots, and their size is specified in the beacon frame.

• Uplink timeslots: They are reserved for unidirectional communication from nodes to the LLDN coordinator. Each timeslot can be either a dedicated timeslot or a shared group timeslot. These timeslots are also used for retransmitting previously lost messages.

• Bidirectional timeslots: Located at the end of a superframe, they can be used either as uplink or downlink timeslots. They are optional in the Online state and are not present in the Discovery and Configuration states.

LLDN defines a bitmap for the coordinator's acknowledgment messages. Each bit represents a slot in the uplink timeslots, indicating the correct reception of the previous message. A superframe can be configured in two ways: Group ACK timeslot enabled or disabled. With the Group ACK timeslot disabled (Figure 1a), the acknowledgment bitmap is sent in the beacon frame payload. A predefined number of timeslots (S_r) is reserved at the beginning of the uplink timeslots for retransmitting messages not received by the coordinator in the previous superframe. After all S_r retransmission slots, i.e., in slots S_r + 1 to S_n, where S_n is the total number of uplink timeslots, nodes are able to transmit their messages. Figure 1b illustrates the case when the Group ACK timeslot is enabled; here, a separate frame containing the acknowledgment bitmap is sent in a dedicated Group ACK timeslot, before the retransmission timeslots. The main advantage of the GACK timeslot is the shorter time required to retransmit a data packet. Considering the GACK timeslot, the total number of transmission timeslots is S_{n-r-1} = S_n - S_r - 1. In the case of uplink transmission in bidirectional timeslots, the acknowledgment mechanism is the same as described above. However, for downlink transmission, it is necessary that the next superframe use bidirectional timeslots in uplink mode, so that each node can send an acknowledgment frame to the coordinator.

C. TRANSMISSION STATES

An LLDN network operates through three transmission states: the Discovery, Configuration, and Online states. The Discovery state is responsible for the discovery and association of new devices. The next stage is the Configuration state, in which newly associated devices are properly configured. Finally, the Online state allows data packets to be transmitted. The coordinator is responsible for changing the transmission states, which are advertised to all devices operating on the same channel through the transmission state bits of the Low Latency (LL) Beacon frame. As shown in Figure 2, Discovery and Configuration can be restarted as needed to ensure the proper number of newly associated nodes and the right network configuration. In the Online state, the coordinator can change to the Discovery state, reconfigure the network by going through the Configuration state, or continue in the current state. In the Discovery transmission state, the superframe consists of one beacon timeslot and two management timeslots, one downlink and one uplink (Figure 3). New devices search different channels until they find an LL Beacon frame indicating the Discovery state.
After receiving the beacon and synchronizing with the superframe, the device sends a Discovery Response frame with three parameters: MAC address, timeslot duration, and uplink/bidirectional type indicator. In the next superframe, the coordinator shall send an acknowledgment frame to each device. Devices that receive the acknowledgment must wait until the beginning of the Configuration state; if an acknowledgment was not received, they continue sending Discovery Response frames until the coordinator changes the transmission state to Configuration. The state is changed to Configuration if the coordinator does not receive a Discovery Response frame within 256 seconds [2]. From the Discovery to the Configuration state, the superframe structure does not change (Figure 4). Upon receiving a beacon indicating the Configuration state, devices send a Configuration Status frame in the management uplink timeslot until they receive a Configuration Request frame. The Configuration Status contains information about the current configuration of the device, such as the complete and short MAC addresses of the node, the required timeslot duration, the uplink/bidirectional type indicator, and the assigned timeslot, if the node was previously associated. The Configuration Request frame contains the new configuration of the device; the information received includes the full and short MAC addresses, transmission channel, the existence of management frames, timeslot duration, and assigned timeslot. Finally, all nodes that received the Configuration Request frame are ready for the Online state and send an Acknowledgment frame to the LLDN coordinator.

IV. EXPERIMENTAL SETUP

This section presents the hardware, software, necessary configuration, and performance metrics used to assess the implementation of LLDN in low-cost nodes. The set of experiments used fifteen nodes built with WM100-Duino boards [25]. This type of node features an ATMEGA256RFR2 chip that combines an AVR microcontroller and an IEEE 802.15.4-compliant 2.4 GHz RF transceiver in a single integrated circuit. It offers an RF link budget of 103.5 dB, a 32-bit MAC symbol counter, a true random number generator, and antenna diversity support. The radio was configured in promiscuous mode with O-QPSK PHY modulation, as specified by the IEEE 802.15.4 standard [1]. Other parameters used in the boards are presented in Table 1. The LLDN MAC operation mode was implemented using the C programming language with the Atmel Studio 7.0 IDE. The implementation was divided into three components: Application, Services, and Physical (PHY). Figure 5 shows the relationships between the components. The Physical component is composed of drivers and code that control physical aspects of the microcontroller, such as the radio module and timers. The code for this component was made available by Atmel through its IDE. The Services component is an abstraction between the Physical and Application components. In this part, the LwMesh stack was used as a base and was modified to properly process the frames defined by the LLDN MAC operation mode. Finally, the Application component is where the network's operation was programmed, according to the LLDN protocol. Two applications were developed, one for the coordinator and the other for the nodes. The project with the implementations carried out can be found in the repository at [26]. In all experiments, the nodes were deployed five meters from the coordinator in a direct line of sight. This study was divided into two stages.
First, the different parameter settings for the association process (the Discovery and Configuration states) were tested to evaluate the average association time of a node. In the second stage, the Online state operation was evaluated.

A. PERFORMANCE METRICS

To assess the performance of the LLDN's operation, the following metrics were used: the Packet Error Rate (PER), Packet Loss Rate (PLR), and Success Rate (SR). The PER metric (Equation 1) is the percentage of expected packets that are not received in the transmission attempt, PER = (S_{n-r-1} - Rx_t) / S_{n-r-1} × 100%, where Rx_t is the number of packets received in the transmission attempt and S_{n-r-1} is the total number of packets expected to be received in each superframe. The PLR metric (Equation 2) is the percentage of packets that are not received in either the transmission or the retransmission attempts, PLR = (S_{n-r-1} - (Rx_t + Rx_r)) / S_{n-r-1} × 100%. From these equations, it is possible to determine the success rate (SR) in each case, which represents the percentage of packets successfully received in the transmission attempt or considering both transmission and retransmission attempts.

B. COORDINATOR APPLICATION

Figure 6 illustrates a simplified form of the state machine in the coordinator's application. Each state of the machine represents a beacon configuration to be sent; for example, 1st Discovery represents the first beacon in the Discovery state. In a simplified way, ignoring the Group ACK mechanism, the machine is always updated after sending the beacon frame. This is done in such a way that at the end of every superframe, the next beacon is ready to be sent. To control the superframe period, a hardware timer was used, without the abstraction layer provided by the LwMesh stack, to increase accuracy and facilitate synchronization among all nodes. The coordinator is responsible for configuring the superframe, and two different configurations were defined, one for the Online operation state and the other for the association states.

C. APPLICATION IN NODES

Node operation is entirely driven by the reception of messages from the coordinator and by timer interrupts. The application operation in each node can be seen in the flowchart of Figure 7. Upon receiving any beacon, the node assesses whether the state of the beacon is consistent with its current state. If the node is associated and receives a beacon indicating the start of a superframe in the Online state, it calculates the periods for the superframe, for the GACK frame timeslot, and for its reserved timeslot. The calculations for the superframe period and management uplink timeslot are also performed when the node is disassociated and receives a beacon from the association states. Upon receiving a Group ACK, the node verifies whether its message was received by the coordinator. If it was not, an algorithm determines which retransmission timeslot is appropriate for its use. As the experiments assign exactly one retransmission timeslot to each associated node, the number of uplink timeslots can be calculated as S_n = 2 × S_r + 1 (see Figure 1).

D. TIMESLOT SIZE

Frames are processed using the LwMesh stack, and this software inserts delays into the reception and transmission of messages. These delays were measured using a hardware counter with a resolution of 16 µs. The transmission time (T_tx), that is, the time needed for a transmission request from the application layer to reach the radio buffer for sending, was measured as T_tx = 0.816 ms.
The reception time (T_rx) was also measured, evaluating the period required for a message to travel from the radio reception buffer to the application; T_rx = 0.352 ms was obtained. For the Online state, it is also necessary to consider the time for the application to process the message, approximately T_app = 2 ms. These three parameters are directly connected with the timeslot size (T_ts), which must respect the limitations of the LwMesh stack and application, as represented by Equation 3, where p is the number of octets of a PHY header, sp is the number of symbols per octet in the PHY header, m is the number of octets of MAC overhead, n is the maximum expected number of octets in a data payload, sm is the number of symbols per octet in the PSDU, v is the symbol rate, and the Interframe Space (IFS) is a constant dependent on the frame payload. T_ts applies to all timeslots except the beacon timeslot. The uplink and bidirectional timeslots have the same T_ts size, and the management timeslots are multiples of T_ts, varying from 1 to 7 times T_ts. Based on the IEEE 802.15.4e amendment [2], and according to Equations 3 and 4, the timeslot size T_ts is set to 3.168 ms. This value is sufficient for frames with payloads up to n = 70 octets and is used throughout the experimental evaluation presented in the next section.

V. EXPERIMENTAL EVALUATION

This section is divided into two subsections, addressing the association process and the Online state. The first presents the impact of parameter values and comparisons between the experimental results and the mathematical analysis results of [19]. The Online state subsection discusses the configuration used in the superframe, as well as comparisons between the experimental results and the simulation results of [15]. Finally, the implementation feasibility of the Online state is discussed.

A. ASSESSMENT OF ASSOCIATION PROCESS

An association cycle is assumed to comprise one Discovery state followed by one Configuration state. In both states, the superframe is composed of one beacon timeslot and two management timeslots, one used for downlink and the other for uplink. The number of nodes and the size of the management timeslots were varied to assess their impact on the association time. To vary the size of the management timeslots, the parameter Number of Base Timeslots per Management Timeslot (NBTMT), present in the Flag field of the LL Beacon frame, was used [2]. The number of nodes asking for association was varied from 4 to 12 and the NBTMT parameter from 2 to 5. More than 100 repetitions were performed for each experiment to obtain the average association time. When the NBTMT value was equal to or greater than 3, an excessive association time per node was observed. The shortest time for a node association was measured when NBTMT was equal to 2. In this configuration, it was confirmed that the coordinator continued to receive one message per superframe, and it was concluded that, by reducing the superframe period, it is possible to increase the number of cycles without excessively impacting the total association time, ensuring a better association rate. In Dariz et al. [19], the authors analyzed the behavior of LLDN in the Configuration state by Monte Carlo simulation. We, in contrast, performed the association time analysis considering the nodes' total time to associate with the network, measuring the Discovery and Configuration states together.
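Before turning to those association times, a hypothetical reconstruction of the timeslot sizing above may make the numbers concrete. Assuming the usual IEEE 802.15.4 form for Equation 3 (frame airtime plus interframe space) and O-QPSK default values for the parameters listed above, the sketch below reproduces T_ts = 3.168 ms for n = 70 octets; the parameter values and the exact combination are our assumptions, not taken verbatim from the paper.

```c
#include <stdio.h>

/* Hypothetical reconstruction of the timeslot-size bound (Equations 3-4):
 * T_ts = (p*sp + (m+n)*sm)/v + IFS. The values below are assumptions
 * drawn from IEEE 802.15.4 O-QPSK defaults; they happen to reproduce the
 * paper's T_ts = 3.168 ms for a payload of n = 70 octets. */
int main(void) {
    double p   = 6.0;      /* PHY header octets (preamble + SFD + PHR, assumed) */
    double sp  = 2.0;      /* symbols per octet in the PHY header */
    double m   = 3.0;      /* MAC overhead octets (1-octet LLDN header + 2-octet FCS, assumed) */
    double n   = 70.0;     /* maximum expected payload octets */
    double sm  = 2.0;      /* symbols per octet in the PSDU */
    double v   = 62500.0;  /* O-QPSK symbol rate, symbols/s */
    double ifs = 40.0 / v; /* long interframe space, 40 symbols (assumed) */

    double t_ts = (p * sp + (m + n) * sm) / v + ifs;
    printf("T_ts = %.3f ms\n", t_ts * 1e3); /* prints T_ts = 3.168 ms */
    return 0;
}
```

Any timeslot sized this way must still leave room for the measured stack delays T_tx and T_rx at the endpoints, which is one reason the implementation is sensitive to software overheads. With that aside, we return to the association times.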
It is, however, possible to calculate these times separately. Equations 5 to 7 relate them, where tCF is the time for the Configuration state, tD is the time for the Discovery state, and tC is the time of a cycle: tC = tD + tCF, with tD = (2/5) × tC and tCF = (3/5) × tC. This follows from the principle that a cycle comprises 3 Configuration-state superframes and 2 Discovery-state superframes. In the experimental case with NBTMT = 2, it was measured that approximately 10 nodes managed to associate in tC = 598 ms. From Equation 7, tCF = (3/5) × 598 ms = 358.8 ms. The result found by Dariz et al. [19] is that 10 nodes managed to pass through the Configuration state in 328 ms. The results of the experimental evaluation using COTS nodes, with all associated hardware and software overheads, were thus close to the Monte Carlo simulation results, with a difference of 30.8 ms.

B. ASSESSMENT OF ONLINE STATE

In order to compare the experimental scenarios with the simulations performed by Berger et al. [15] on the standard LLDN protocol, the network was configured according to Table 2. This makes it possible to assess the performance of the network in terms of success rate and the impact of the retransmission mechanisms on the LLDN operation mode. All measurements were performed by the coordinator node. At the end of each superframe, the number of messages received by the PAN coordinator was evaluated, and the PER and PLR were calculated. Figures 8 and 9 show the results. In both, the x-axis shows the metric value, and the y-axis shows the percentage of superframes that achieve the indicated packet loss rate or packet error rate, considering 472 consecutive superframes. Considering only the transmission attempt, it was observed that in 75.21% of superframes the PER value was equal to zero, i.e., all transmitted messages were successfully received in 355 superframes. In 19.7% of superframes, it was observed that 0 < PER ≤ 12.5%. The average value of the PER was 5.4%, with an error of ±1.36%. In the case of the PLR, 88.35% of superframes presented a PLR equal to zero, indicating that in 417 of the superframes, all nodes successfully transmitted their messages in the transmission or retransmission attempts. The average value of the PLR is 2.96%, with an error of ±1.11%. Figure 10 shows a comparison between the SR in the transmission step and in the transmission plus retransmission step. In both cases, the overall SR of the network was centered on a value of 100%. The impact of the retransmission mechanism of LLDN is most visible in cases where only one message was lost in the transmission step (SR = 87.5%). Almost 20% of the measured superframes had one message lost in the transmission stage; however, in close to half of these superframes, the message was later received in the retransmission timeslots. This is clear from the decrease of 11.23 percentage points in superframes with one message lost and the increase of 13.14 percentage points in superframes with no message lost when both transmission and retransmission attempts are considered successful. Figure 11 shows the relation between the PER and PLR in the experiments we carried out and in the simulations carried out by Berger et al. [15]. To find this relation in the experiments, the PLR values were averaged for each given PER value. One difficulty at this stage of the experiment was acquiring a relevant number of samples for the different PER values. Figures 8 and 9 show the distribution of the PER and PLR values.
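To make the per-superframe bookkeeping explicit, the sketch below computes PER, PLR, and the two success rates from the coordinator's receive counters, following Equations 1 and 2 and the slot relation S_n = 2 × S_r + 1; the node count and the counter values are hypothetical, chosen to mirror the one-lost-message case above, not measured data.

```c
#include <stdio.h>

/* Per-superframe metrics following Equations 1 and 2. With the GACK
 * timeslot enabled, S_n - S_r - 1 packets are expected per superframe.
 * The configuration (8 nodes) and the counters rx_t, rx_r are
 * hypothetical values, not measurements from the paper. */
int main(void) {
    int s_r = 8;                  /* one retransmission slot per node */
    int s_n = 2 * s_r + 1;        /* total uplink timeslots = 17 */
    int expected = s_n - s_r - 1; /* packets expected per superframe = 8 */
    int rx_t = 7;                 /* received on the first attempt */
    int rx_r = 1;                 /* recovered in retransmission slots */

    double per = 100.0 * (expected - rx_t) / expected;
    double plr = 100.0 * (expected - rx_t - rx_r) / expected;
    printf("PER = %.2f%%, PLR = %.2f%%, SR(tx) = %.2f%%, SR(tx+rtx) = %.2f%%\n",
           per, plr, 100.0 - per, 100.0 - plr);
    return 0;
}
```

For these values the sketch prints PER = 12.50% and PLR = 0.00%, i.e., the SR = 87.5% superframe discussed above, whose single lost message was recovered by retransmission.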
This stage of the experiments validated the current implementation of the protocol, since the success rate of the network is similar to that of the simulations. It is possible to observe in Figure 11 that both the theoretical and the experimental curves approach a line when the PER value is high. The slightly higher PLR in the experiments may be due to the optimistic approach of the simulations in [15], which does not account for software and hardware issues such as oscillator clock drift or software overheads.

VI. CONCLUSION

This paper carried out an implementation of the LLDN MAC operation mode in low-cost commercial nodes, showing the feasibility of using this protocol. The LLDN operation mode was assessed both in the association states and in the operation state. For the association states, comparing the experimental results with the mathematical analyses performed by Dariz et al. [19], we reached very similar times, and the difference is fully explained by the difficulties introduced by the hardware. For the operation state, we encountered great difficulty in synchronizing the devices. It was necessary to control all timers in the application layer, increasing the period of the final superframe to incorporate the resulting delays. However, the behavior of the network's retransmission mechanisms was similar to the behavior in the simulations made by [15], which can be seen as a good indication that, once the synchronization problems are solved, LLDN is a viable protocol to be implemented in low-cost nodes. As future work, we can highlight improvements that can be made to the LLDN MAC operation mode. The association process can be improved by reducing the number of superframes required in the Discovery and Configuration states. In the Discovery state, the Acknowledgment frame can be transmitted before the Configuration Status messages of the Configuration state. Similarly, the Acknowledgment frame of the LLDN devices can be transmitted one superframe earlier, just after the Configuration Request messages. By decreasing the number of superframes, the total time of each cycle also decreases, ensuring a faster association.
A functional variant of IC53 correlates with the late onset of colorectal cancer.

The IC53 gene was reported to be upregulated in the colon adenocarcinoma cell line SW480. Here, we show that the expression level of IC53 is positively correlated with the grade and depth of invasion in adenocarcinoma of the colon. Injection of IC53 stably transfected HCT-116 cells into athymic nude mice promoted tumor growth. Furthermore, overexpression of IC53 increased cell invasive growth, which could be dramatically prevented by knocking down IC53 with siRNA. The effects of IC53 on cell invasive growth were mediated by upregulation of integrins, activation of phosphatidylinositol 3-kinase and phosphorylation of Akt. A single-nucleotide polymorphism, rs2737, in the IC53 gene creates a potential microRNA-379 target site, and microRNA-379 expression inhibited IC53 translation. Among 222 patients with colorectal cancer, the C/C rs2737 genotype was associated with late onset of colorectal cancer (median age 63.0 versus 55.3 years, P = 0.003). The frequency of the C/C rs2737 genotype was much lower in patients who developed colorectal cancer below the age of 45 years than in individuals over age 45 years (10.8% versus 26.6%, P = 0.039). These data indicate that IC53 is a positive mediator of colon cancer progression and that IC53-rs2737 may confer protection against early onset of colorectal cancer.

INTRODUCTION

Colorectal cancer (CRC) is one of the most common cancers in industrialized countries. In the United States, approximately 145,000 people are diagnosed with CRC annually, and the global figure is >875,000 (1). The genesis of CRC involves a series of steps, starting with environmental and/or endogenous carcinogens inducing or promoting cancer development via the activation of oncogenes, such as ras, and the inactivation of tumor suppressor genes, such as APC, Tp53 and DCC, and genes involved in DNA mismatch repair (2)(3)(4)(5). Genetic factors have been reported to play a key role in the predisposition to CRC as well as in the initiation and progression of the disease. High-penetrance mutations in several genes, such as APC, the DNA mismatch repair genes, LKB1 and SMAD4, confer predisposition to familial cases of CRC (that is, familial adenomatous polyposis, Lynch syndrome and hamartomatous polyposis), which account for <5% of all CRC cases (6). However, low-penetrance variants of these and other genes, such as common alleles at single-nucleotide polymorphisms (6,7), account for much of the predisposition resulting in sporadic cases of CRC and are likely responsible for much of the uncharacterized influence of inherited genetic changes on the development of CRC. In the field of medical genetics, it has become increasingly apparent that few, if any, human diseases are homogeneous and solely the result of one mutation in a single gene. Even when a Mendelian disorder has one obvious predisposing genetic cause, the phenotype may still be subject to wide variation. The lack of a clear observed genotype/phenotype association, even in such single-gene disorders, suggests that additional modifier factors, including both environmental and genetic components, influence clinical phenotypes (8). For example, the phenotypes of FAP (familial adenomatous polyposis) and HNPCC (hereditary nonpolyposis colorectal cancer), with regard to colonic disease, vary considerably, not only between families, but also within families (9,10).
The considerable variation in disease expression (age of onset and tumor site) in these disorders cannot be entirely explained by the type and position of the mutation in the respective genes (11). Several reports have shown that genetic polymorphisms may be contributing factors to disease in HNPCC and in sporadic cases of CRC (12)(13)(14)(15)(16)(17). MicroRNAs (miRNAs) are endogenously expressed RNAs 18-24 nucleotides in length that regulate gene expression through translational repression by binding to a target mRNA. Accumulating evidence suggests that miRNAs play a role in the pathogenesis of various human cancers (18)(19)(20). Recently, Chen and Rajewsky (21) reported that the variant rs2737 created a potential miR-379 target site in the IC53 gene; IC53 is highly expressed in eight tumor cell lines, including the colon adenocarcinoma cell line SW480, compared with negligible expression in normal colon tissue (22). IC53 was also overexpressed in tumor tissues of lung adenocarcinoma (23). In contrast, IC53 was reported to act as a tumor suppressor in HeLa, H1299, HT1080 and U2OS cell lines (24)(25)(26). To our knowledge, the association between IC53 and the development of CRC has not been established. These data led us to hypothesize that IC53 could regulate colon cancer progression and that rs2737 in the IC53 gene could modify the incidence of colon cancer as well as the timing of colon cancer onset.

Materials

Protein kinase inhibitors (LY294002) and antibodies against Akt and phospho-Akt Ser473 were obtained from Cell Signaling Technology (Beverly, MA, USA). Antibodies against integrin α2, α3 and β4 and laminin β1 and β2 were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Wortmannin was obtained from Sigma Chemical (St. Louis, MO, USA). Nu/nu mice (BALB/c, 4- to 6-wk-old females) were purchased from the Laboratory Animal Center, Chinese Academy of Medical Sciences (Beijing, China). Colon cancer tissues and their corresponding normal mucosa were obtained, with informed consent, from patients who underwent surgical resection of their tumors. The human tissue collection protocol was approved by the Fuwai Hospital Ethics Committee. Informed written consent was obtained from the patients themselves or their legal representatives. Animal experiments conformed to the guiding principles of the China National Law for Animal Use in Medical Research and were approved by the Fuwai Hospital Committee for Animal Care and Use.

Cell Lines

The colon adenocarcinoma cell lines HCT-116 and HT-29 and the mouse embryonic fibroblast cell line NIH3T3 were obtained from the Institute of Cell Biology, Academia Sinica, and propagated in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, UT, USA) and 1% penicillin/streptomycin. Cells were cultured at 37°C in a humidified atmosphere of 5% CO2.

Production and Purification of the IC53 Monoclonal Antibody

The monoclonal antibody to IC53 was produced in BALB/c mice against the keyhole limpet hemocyanin (KLH)-conjugated synthetic peptide CQKQQEALEEQAALEPKLD, corresponding to amino acid residues 369-386 of human IC53. The N-terminal cysteine residue was added to facilitate covalent KLH conjugation. BALB/c mice were immunized by intraperitoneal injection with the KLH-conjugated synthetic peptide.
Murine antibodies were prepared by conventional hybridoma technology as previously described (27), and the resulting hybridoma cells were screened for antibody production in an ELISA using a bovine serum albumin (BSA)-conjugated synthetic peptide. Hybridoma cells producing the anti-IC53 monoclonal antibody were grown and subsequently injected into pristane-primed BALB/c mice. After 10 d, ascites fluid was collected. The IgG was extracted from the ascites fluid by using protein A-Sepharose CL-4B (Amersham Pharmacia Biotech, Amersham, UK).

Tissue Microarray Analysis

The human tissue microarray of colon cancer tissue was obtained from Cybrdi (Xi'an, Shanxi, China). The array contained 182 dots in total, and each dot represented one diseased tissue spot from one individual specimen that was selected and pathologically confirmed. The arrays were fixed with formalin, embedded in paraffin and immunostained with the mouse monoclonal anti-IC53 antibody (1:900 dilution) by using the avidin-biotin peroxidase complex method. The intensity of IC53 staining was scored as weak (1+), moderate (2+) or strong (3+). To test the expression of miR-379, locked nucleic acid (LNA)-modified probes against U6, a scramble control and miR-379 were purchased from Exiqon (Vedbaek, Denmark). In situ hybridization was performed according to the manufacturer's protocol, and the intensity of miR-379 staining was scored as negative (0), weak (1+) or moderate (2+).

Expression Plasmid Construction

The open reading frame of IC53 was amplified by polymerase chain reaction (PCR) using the EST clone (accession number AF110322), and the mammalian expression plasmid [pcDNA3.1/Myc-His (-) A-IC53] was constructed as previously reported (22).

Tumorigenicity

Tumorigenicity studies were performed as described previously (28). Briefly, cells from exponential cultures of HCT-116 transfectants and nontransfectants were resuspended in PBS and inoculated subcutaneously into 5-wk-old athymic nude mice (7 × 10^6 cells/mouse). Mice were maintained in a pathogen-free environment. Growth curves for xenografts were determined by externally measuring the tumors in two dimensions. Volumes were determined by using the equation V = (L × W^2) × 0.5, where V = volume, L = length and W = width.

Stable Transfection

Cells were placed in a six-well plate at a density of 2 × 10^4 cells/well and grown for 16 h. The cells were then transfected with the empty plasmid or plasmids carrying the open reading frame of IC53 by using Lipofectamine 2000 reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. After 24 h of transfection, fresh media were added containing G418 (200 μg/mL; Invitrogen). After 2 wks, stably transfected clones were pooled and propagated in DMEM containing G418 (200 μg/mL). The level of IC53 expression was determined by Western blot analysis.

Stealth siRNA Treatment

IC53 stealth siRNA (number 111322F11) and the negative control were purchased from Invitrogen. HCT-116 cells were transfected with 20 pmol IC53 stealth siRNA or the negative control using Lipofectamine 2000 transfection reagent (Invitrogen) in 1 mL Opti-MEM (Invitrogen) according to the manufacturer's instructions. The level of IC53 expression was determined by Western blot analysis.

MTT Assay

The MTT assay (Sigma Chemical) was performed according to the manufacturer's instructions, with some modifications. Briefly, the cells (5,000 cells/well) were cultured in 96-well plates with 100 μL media/well.
MTT solution (20 μL, 5 mg/mL) was added to each well at 72, 120 and 144 h after plating and incubated at 37°C for an additional 4 h in a CO2 incubator. The absorbance at 570 nm was recorded with a microtiter plate reader (Bio-Rad, Hercules, CA, USA).

Cell Migration Assay

Cell migration assays were performed using 24-well transwell migration chambers (Corning Costar, New York, NY, USA) with an 8-μm pore size polycarbonate filter. Cells were starved in media containing 0.5% FBS for 12 h and then transferred to the transwell chambers from the culture flasks by trypsin/EDTA digestion. Briefly, the transwell units were precoated overnight at 4°C with type I collagen (20 μg/mL) isolated from rat tails, washed with PBS and blocked with 0.1% BSA in PBS for 1 h at 37°C. The lower wells of the chamber were filled with 600 μL of 0.5% FBS in DMEM. Cells were placed in the top chamber at 1 × 10^5 cells/mL in 0.1 mL DMEM containing 0.1% BSA and allowed to migrate for 4 h at 37°C in a humidified CO2 incubator. For antibody blocking experiments, the cells were preincubated for 30 min at room temperature with media containing antibodies against integrin α2 (10 μg/mL), α3 (10 μg/mL) or β4 (20 μg/mL), or with the phosphatidylinositol 3-kinase (PI-3K) inhibitor wortmannin (100 nmol/L) or LY294002 (25 μmol/L). After removing the cells from the upper surface of the membrane with a swab, cell numbers on the underside were determined by using the colorimetric crystal violet assay. Six independent fields per filter were counted, and the mean of the six counts was used as the migrated cell number.

Adhesion Assay

Cell adhesion assays were performed in 24-well plates (Corning Costar, New York, NY, USA) precoated with Matrigel (5 μg/mL in cold DMEM; BD Biosciences, Bedford, MA, USA) overnight at 4°C. Cells were serum-starved in media containing 0.5% FBS for 12 h, washed with PBS and blocked with PBS containing 2% BSA for 30 min at 37°C. The cells were then plated on the coated plates at a density of 2 × 10^5 cells/mL in 0.1 mL DMEM containing 0.1% BSA and incubated for 1 h. For antibody blocking experiments, cells were pretreated with or without antibodies against integrin α2, α3 or β4 for 30 min as described previously. After removing the media, along with the nonattached cells, 0.2% crystal violet was added, and the cells were incubated for 10 min. The plate was gently washed with tap water and then air-dried for 24 h. SDS (5%, 0.1 mL) containing 50% ethanol was added for 20 min, and the plate was read at 570 nm.

Microarray Analysis

RNA was isolated from cultured HCT-116 cells stably transfected with either plasmids carrying IC53 or empty plasmids, and mRNA was isolated from the total RNA (200 μg) by using the Oligotex mRNA Midi Kit (Qiagen, Hilden, Germany). The mRNA was then used for microarray analysis. Briefly, cDNA was synthesized by in vitro transcription and labeled as a probe according to the manufacturer's manual (Clontech, Palo Alto, CA, USA). Hybridization of the cDNA probes to the Atlas human cancer array (Clontech; catalog number 7742-1) was performed by using a Hybridization Oven Robbin 1000 (Robbin's Scientific, Sunnyvale, CA, USA), and the resultant spots were scanned with a phosphorimager (BAS-MS 2340; Fujifilm, Nakanuma, Japan). Data were analyzed by using ArrayGauge, version 1.0 (Fuji Photo Film, Tokyo, Japan). The data were then sorted to obtain genes differentially expressed ≥2-fold.
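As an illustration of this ≥2-fold screening step, the sketch below filters array spots by fold change between the two hybridizations; the gene names and spot intensities are invented placeholders, not the study's data.

```c
#include <stdio.h>

/* Screen spots for >= 2-fold differential expression between the
 * IC53-transfected and vector-control hybridizations. All values
 * below are invented placeholders, not the study's data. */
struct spot { const char *gene; double ic53; double vector; };

int main(void) {
    struct spot spots[] = {
        { "gene_A", 8.4, 1.9 }, /* up in IC53-transfected cells */
        { "gene_B", 3.1, 2.8 }, /* below the 2-fold cutoff */
        { "gene_C", 0.9, 4.0 }, /* down in IC53-transfected cells */
    };
    for (unsigned i = 0; i < sizeof spots / sizeof spots[0]; i++) {
        double ratio = spots[i].ic53 / spots[i].vector;
        double fold = ratio >= 1.0 ? ratio : 1.0 / ratio;
        if (fold >= 2.0)
            printf("%s: %.1f-fold %s\n", spots[i].gene, fold,
                   ratio >= 1.0 ? "up" : "down");
    }
    return 0;
}
```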
Immunocytochemistry

HCT-116 cells transfected with IC53 and control untransfected cells were plated onto glass coverslips in six-well plates and grown to 75% confluence. The cells were then serum-starved in media containing 0.5% FBS for 12 h, washed twice with PBS, fixed with 3.7% formalin for 20 min, and rinsed twice with PBS. The cells were immunostained with polyclonal antibodies and detected by using the horseradish peroxidase staining method. Endogenous peroxidase was inhibited by incubating the cells in 3% H2O2 solution for 10 min at room temperature, followed by two washes with PBS. The cells were incubated in media containing the primary antibodies (anti-integrin α2, α3 and β4 and anti-laminin β1 and β2) at dilutions of 1:100-1:400 for 30 min at 37°C, washed three times with PBS and incubated with the peroxidase-conjugated secondary antibody (Santa Cruz Biotechnology) for 10 min at room temperature. Positive staining was visualized by applying the diaminobenzidine substrate (DAKO, Carpinteria, CA, USA) and then counterstaining with hematoxylin.

Western Blot Assay

HCT-116 cells, and HCT-116 cells transfected with the empty plasmid, with plasmids carrying the IC53 open reading frame, with IC53 stealth siRNA or with the negative control, were grown to confluence in 75-mm dishes. For Akt and phospho-Akt Western blot analysis, HCT-116 cells transfected with the empty plasmid or with plasmids carrying the IC53 open reading frame were serum-deprived for 12 h and treated for 2 h with platelet-derived growth factor BB (PDGF-BB) at 30 ng/mL or LY294002 at 10 μmol/L, respectively. For analysis of the effects of miR-379 on IC53 translation, HT-29 cells (with the T/C genotype) or LOVO cells (with the T/T genotype) were transfected with 20 pmol precursor miR-379 or the pre-miR negative control. Cells were harvested with trypsin, washed twice with PBS and directly lysed in ice-cold lysis buffer containing 50 mmol/L Tris-HCl (pH 8.0), 150 mmol/L NaCl, 0.1% SDS, 0.02% sodium azide, 100 μg/mL phenylmethylsulfonyl fluoride, 1 μg/mL aprotinin, 1 mmol/L EDTA, 1 mmol/L EGTA and 1% NP-40. The cell lysate was incubated on ice for 10 min and centrifuged at 10,000g at 4°C for 5 min. Protein concentrations were quantified by using the Bradford colorimetric method (Bio-Rad). A total of 25 μg of lysate protein was boiled for 5 min in Laemmli sample buffer with 100 μmol/L dithiothreitol and then separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The separated proteins were transferred to nitrocellulose membranes (Amersham Pharmacia Biotech) and blocked with 5% nonfat milk. The membranes were then incubated with antibodies against Akt; phospho-Akt; integrins α2, α3 and β4; IC53 (1:1,000); or GAPDH (1:5,000) at 4°C for 16 h and then incubated with a horseradish peroxidase-conjugated anti-mouse IgG antibody at 25°C for 1 h. Protein bands were visualized by enhanced chemiluminescence (ECL; Santa Cruz Biotechnology).

Human IC53 3′UTR Luciferase Constructs

To construct the IC53 3′UTR-luciferase reporter plasmids, a 75-bp sequence (Supplementary Table S1) carrying either the wild-type or the variant genotype of rs2737 was synthesized and cloned into the pMIR-REPORT vector (Ambion, Austin, TX, USA) by using the restriction enzymes HindIII and SpeI. The reporter plasmid containing rs2737T was defined as pMIR-TT, and the reporter plasmid containing rs2737C was defined as pMIR-CC. The resulting constructs (pMIR-TT and pMIR-CC) were verified by sequencing.
Luciferase Target Assay

HCT-116 cells (1 × 10^5 cells per well) were cotransfected with 0.8 μg pMIR-CC or pMIR-TT plasmid, 50 ng Renilla plasmid and 20 pmol pre-miR miRNA precursor of miR-379 (Ambion) or pre-miR negative control (Ambion), all combined with Lipofectamine 2000 (Invitrogen). After 48 h, cells were washed and lysed with Passive Lysis Buffer (Promega, Madison, WI, USA), and their luciferase activity was measured by using a luminometer (SIRIUS, Pforzheim, Germany). Firefly luciferase expression levels were adjusted on the basis of Renilla luciferase activity. Three independent experiments were performed for each reporter.

Study Subjects

We consecutively selected 222 Chinese CRC patients with histologically confirmed colon or rectal adenocarcinoma between 2004 and 2006 in the Beijing Cancer Hospital. A total of 260 controls were selected from patients admitted to the orthopedics, general surgery or otorhinolaryngology wards; patients with a prior history of malignant neoplasms were excluded. The age and sex distributions were similar between the case group (mean ± SD age, 59.0 ± 13.9 years; 57.2% male) and the control group (mean ± SD age, 59.6 ± 9.3 years; 55.4% male). All subjects were of Han nationality, and all patients and controls provided written informed consent for the genetic studies, which were approved by the ethics committee of Beijing Cancer Hospital, China.

Genotyping of rs2737

For IC53-rs2737 genotyping to test the association between rs2737 and the risk of CRC, DNA was isolated from blood samples by using the RelaxGene Blood DNA System (TianGen, Beijing, China). To test the correlation between the expression of miR-379 and that of IC53 in subjects carrying different rs2737 genotypes, DNA was isolated from formalin-fixed, paraffin-embedded tissue (Cybrdi) by using the MagneSil Genomic, Fixed Tissue Purification Module (Promega), according to the manufacturer's protocol. Variant rs2737 was genotyped by PCR-restriction fragment length polymorphism analysis of an amplified 244-bp sequence, using the following primers: forward 5′-CAAGAACCCCACGAAAACAG-3′, reverse 5′-AAGATGGAAAGCCACAGGAA-3′. The PCR assay was performed by using 50 ng genomic DNA, 5 pmol of each primer and EasyTaq PCR SuperMix (TransGen, Beijing, China). The amplification protocol consisted of 35 cycles of 94°C for 30 s, 59°C for 30 s and 72°C for 30 s. The resultant PCR products were digested with AccI (New England Biolabs, Beverly, MA, USA), separated on a 4% agarose gel and stained with ethidium bromide. Two DNA fragments of 147 and 97 bp were expected for the C allele and a single 244-bp band for the T allele (Supplementary Figure S1). In each 96-reaction plate, three positive and three blank controls were included; the positive controls were from samples confirmed by sequencing. The reproducibility of the genotyping was confirmed by sequencing 50 randomly selected samples, with 100% concordance.

Statistical Analysis

All experiments were repeated three or four times, and each treatment was carried out in triplicate, unless otherwise stated. Similar results were obtained in all cases. Each figure shows one of three representative experiments. Results are expressed as mean ± SD. The Student t test (two-sided) was used to compare the values of the test and control samples. Spearman correlation analysis was used to calculate the correlation coefficients between IC53 expression levels and the grade of adenocarcinoma of the colon, the degree of invasion or the expression level of miR-379.
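As a minimal sketch of the 2×2 χ2 genotype-frequency comparison described next, the code below applies the standard Pearson formula; the cell counts are invented placeholders, laid out as a genotype-by-onset-group table, not the study's data.

```c
#include <stdio.h>

/* Pearson chi-square for a 2x2 table:
 * chi2 = N * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), 1 degree of freedom.
 * The counts are invented placeholders (rows: C/C carriers vs. T carriers;
 * columns: onset < 45 y vs. >= 45 y), not the study's data. */
static double chi2_2x2(double a, double b, double c, double d) {
    double n = a + b + c + d;
    double diff = a * d - b * c;
    return n * diff * diff / ((a + b) * (c + d) * (a + c) * (b + d));
}

int main(void) {
    double chi2 = chi2_2x2(4.0, 33.0, 49.0, 136.0);
    printf("chi2 = %.3f (compare with 3.841, the 1-df cutoff for p = 0.05)\n",
           chi2);
    return 0;
}
```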
Statistical Analysis
All experiments were repeated three or four times, and each treatment was carried out in triplicate, unless otherwise stated. Similar results were obtained in all cases. Each figure shows one of three representative experiments. Results are expressed as mean ± SD. The Student t test (two-sided) was used to compare the values of the test and control samples. Spearman correlation analysis was used to calculate the correlation coefficients between IC53 expression levels and the grade of adenocarcinoma of the colon, the degree of invasion or the expression level of miR-379. A χ² test was used to test the genotype frequencies of the single-nucleotide polymorphism rs2737; associations between the variant and CRC were detected by using unconditional logistic regression models. A one-way analysis of variance (ANOVA) test was used to determine the statistical significance of the difference in age at diagnosis of CRC between the groups with the T/T and C/C rs2737 genotypes. P < 0.05 was taken as significant. All supplementary materials are available online at www.molmed.org.

Expression Level of IC53 Was Positively Correlated with the Grade and Depth of Invasion in Adenocarcinoma of the Colon
To determine whether IC53 is expressed in normal colon epithelium, we performed immunohistochemistry, which showed that IC53 expression was low or weak in the normal colon epithelium (Supplementary Figure S2). To evaluate the expression of IC53 in colon cancer and investigate the association of IC53 expression levels with various clinicopathological parameters, we performed tissue microarray analysis. The intensity of IC53 staining was scored as weak (1+), moderate (2+) or strong (3+) (Figure 1). We found a strong correlation between the IC53 expression level and the grade of adenocarcinoma of the colon (correlation coefficient 0.47, P = 1 × 10⁻⁷, Table 1), a weak association between the IC53 expression level and the degree of invasion (correlation coefficient 0.21, P = 0.018, see Table 1) and no association between the IC53 expression level and the grade or degree of invasion of mucinous carcinoma of the colon (P = 0.13 and P = 0.27, see Table 1). These results indicated that IC53 may contribute to the development of colon cancer.

Overexpression of IC53 Promoted the Tumorigenicity of HCT-116 Cells
To determine whether IC53 has the ability to transform NIH3T3 cells, IC53 stable transfectants were generated and injected into athymic nude mice. After 37 d, no tumor formation was detected in the animals injected with NIH3T3 cells carrying exogenous IC53 or in those carrying control vectors (data not shown). In contrast, in the animals injected with HCT-116 cells transfected with IC53 expression plasmids, the tumors that formed were twice as large as those in the animals injected with HCT-116 cells carrying the control vector (Figure 2). These results indicated that IC53 has the potential to promote cancer cell growth but is unable to transform cells.

Overexpression of IC53 Promoted Proliferation, Migration and Adhesion of the Human Colon Cancer Cell Line
The process of cancer development is closely related to the unrestricted invasive growth of cancer cells. To test whether the IC53 gene encodes a colon cancer progression regulator, we first studied the effect of IC53 overexpression in HCT-116 cells transfected with IC53 expression plasmids or empty plasmids (Figure 3A). Overexpression of IC53 markedly promoted HCT-116 cell proliferation: 2.1-fold on day 3, 2.6-fold on day 5 and 1.97-fold on day 7 in IC53 plasmid-transfected cells compared with their untransfected or empty vector-transfected counterparts (P = 9 × 10⁻⁹). These results indicated that overexpression of IC53 promoted proliferation of a human cancer cell line (Figure 3B). Next, we tested whether IC53 could stimulate cancer cell adhesion to an extracellular matrix. We found that overexpression of IC53 dramatically promoted HCT-116 cell adhesion, by 183% after 60 min, compared with untransfected controls or with cells carrying the empty vector (P = 9 × 10⁻⁶, Figure 3C).
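The P values quoted for these pairwise comparisons come from the two-sided Student t test named in the statistical analysis section. A minimal Python sketch with scipy, using invented adhesion counts rather than the study's data:

from scipy.stats import ttest_ind

# Hypothetical adhered-cell counts per field: IC53-transfected vs. empty vector.
ic53 = [183, 175, 190, 181, 178, 186]
vector = [100, 96, 104, 99, 102, 97]

t_stat, p_value = ttest_ind(ic53, vector)  # two-sided by default
print(f"t = {t_stat:.2f}, P = {p_value:.2e}")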
We then examined the effects of IC53 on HCT-116 cell motility by using transwell migration chambers. Cells stably transfected with the IC53 expression construct showed increased motility of 300% or 180% compared with the parental or empty vector-transfected cells, respectively (P = 2 × 10⁻⁷, Figure 3D).

Knockdown of IC53 Blocked Cell Proliferation, Migration and Adhesion
To further examine the effect of endogenous expression of IC53 on cell proliferation, migration and adhesion, the expression of IC53 in HCT-116 cells was suppressed by its siRNA. The expression level of IC53 in the cells transfected with IC53 siRNA was <10% of that in the control cells (Figure 4A). Additionally, cell proliferation was 36% (P = 5 × 10⁻⁶, Figure 4B), migration was 15% (P = 2 × 10⁻⁷, Figure 4C) and adhesion was 26% (P = 5 × 10⁻⁶, Figure 4D) of that in the control cells.

IC53 Upregulated Expression of Integrins
Because IC53 is upregulated in colon cancer and promotes colon cancer cell proliferation, migration and adhesion, we next investigated the target genes that are important in mediating IC53-regulated cell invasive growth. An Atlas human cancer array (Clontech) was used to identify the gene expression profile in response to overexpression of IC53 in HCT-116 cells. Total RNA was isolated from HCT-116 cells that were stably transfected with IC53 or control vector; mRNA was isolated from 200 μg total RNA and used to generate ³³P-labeled cDNA probes for microarray analysis. We found that overexpression of IC53 upregulated the expression of genes encoding various integrins, macrophage stimulating 1 and laminins, which have long been linked to cancer progression (Supplementary Table S2). This observation was confirmed by investigating the integrin and laminin expression profiles in HCT-116 cells stably transfected with IC53 or control vector by immunocytochemistry or Western blot (Figure 5 and Supplementary Figure S3).

Integrin-Mediated IC53-Induced Cell Invasive Growth
To confirm whether upregulation of integrins mediates IC53-induced tumor cell invasive growth, we treated the cells stably expressing IC53 with antibodies against integrins α2, α3 and β4, which are the most upregulated integrins (3.3- to 5.5-fold) in HCT-116 cells. As shown in Figure 6A, antibodies against integrins α2, α3 and β4 partially but significantly blocked IC53-mediated HCT-116 cell migration, by 60%, 37% and 36% (P = 7 × 10⁻⁷), respectively, compared with the controls (not treated with specific antibodies). Similar results were observed in the cell adhesion assay, except that treatment with the antibody against integrin α2 did not result in a significant blockade of cell adhesion (Figure 6B, P = 0.23).

IC53-Induced Cell Invasive Growth via the Phosphatidylinositol 3-Kinase (PI3-K)-Akt Pathway
Next, we investigated which signaling pathway is involved in mediating this process. It is well established that integrin-mediated activation of the phosphatidylinositol 3-kinase (PI3-K) pathway plays an important role in colon cancer invasive growth (29). To determine whether the effect of IC53 on cell invasive growth is also mediated by this pathway, we examined cell invasive growth after blocking the PI3-K pathway in HCT-116 cells stably transfected with IC53.

Figure 5. IC53 modulated the expression of integrins α2, α3 and β4. Total protein was isolated from serum-starved HCT-116 cells treated with IC53 plasmids (IC53) or stealth siRNA against IC53 (siRNA) for 48 h, and the expression of integrins α2, α3 and β4 was analyzed by Western blotting. As shown in the lowest panel, an anti-GAPDH antibody was included in the analysis as a loading control.
Figure 6. Cell migration and adhesion were tested in the presence or absence of antibodies against integrin α2 (10 μg/mL), α3 (10 μg/mL) or β4 (20 μg/mL). (A) Antibodies against integrins α2, α3 and β4 partially, but significantly, inhibited IC53-mediated HCT-116 cell migration by 60%, 37% and 36%, respectively, compared with the controls (not treated with these specific antibodies) (**P < 0.01). (B) Similar results were observed in the cell adhesion assay, except that treatment with the antibody against integrin α2 did not result in significant inhibition of cell adhesion (P = 0.23). Three independent experiments were carried out and each sample was tested in triplicate; similar results were obtained in all cases. This graph shows representative results, expressed as the mean ± SD (**P < 0.01). (C) PI3-K inhibitors almost completely blocked IC53-induced HCT-116 cell migration. The migration assay was performed by using serum-starved HCT-116 cell transfectants and control cells. These cells were treated with 30 ng/mL PDGF-BB, the PI3-K inhibitor wortmannin (100 nmol/L), LY294002 (25 μmol/L) or its vehicle (Me₂SO). The number of migrated cells was counted in the transwell chambers. The values are presented as the mean ± SD (n = 6) (**P < 0.01). (D) Western blotting analysis of Akt phosphorylation. Serum-starved HCT-116 cells stably transfected with plasmid alone or with plasmid carrying human IC53 were incubated with PDGF-BB (30 ng/mL) or LY294002 (10 μmol/L) for 2 h. Akt phosphorylation and Akt expression were analyzed by Western blotting with a phospho-Akt antibody (1:1,000) and an Akt antibody (1:1,000), respectively. OD, optical density.

The two PI3-K-specific inhibitors wortmannin and LY294002, used to block activation of the PI3-K pathway, almost completely abolished IC53-induced HCT-116 cell migration (Figure 6C, P = 2 × 10⁻⁷), indicating the involvement of the PI3-K pathway. Because Akt is an important downstream effector of the PI3-K pathway, we next investigated Akt expression and phosphorylation. As indicated in Figure 6D, Western blot results showed that transfection of IC53 did not alter the expression of Akt protein in HCT-116 cells but dramatically increased the phosphorylation of Akt. Immunoblot analysis with antibodies against Akt phosphorylated at serine-473 showed that the level of Akt phosphorylation in the HCT-116 cells overexpressing the IC53 gene was comparable to that in cells treated with PDGF-BB, a known activator of Akt. To confirm whether Akt phosphorylation induced by IC53 is indeed due to PI3-K activity, we used the PI3-K inhibitor LY294002 to treat HCT-116 cells overexpressing IC53 and found that Akt phosphorylation could be blocked by LY294002 at a concentration of 25 μmol/L (see Figure 6D, last lane). This finding supported the possibility that IC53 upregulates the expression of integrins, which activate PI3-K, and PI3-K then enhances the phosphorylation of Akt, a cell growth signal.
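Figure 6D reports Akt activation as optical density (OD) from Western blots; the usual quantification is the phospho-Akt band intensity divided by the total Akt intensity, normalized to the vector control. A short Python sketch of that arithmetic with invented OD values (the paper does not publish the raw densitometry):

def akt_activation(p_akt_od, total_akt_od):
    """Phospho-Akt signal normalized to total Akt from the same lane."""
    return p_akt_od / total_akt_od

# Hypothetical lane ODs: (phospho-Akt, total Akt).
lanes = {"vector": (0.20, 1.00), "IC53": (0.85, 1.02), "IC53+LY294002": (0.18, 0.98)}
control = akt_activation(*lanes["vector"])
for name, ods in lanes.items():
    print(f"{name}: {akt_activation(*ods) / control:.2f}-fold vs. vector")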
rs2737 C/C Genotype Was Associated with the Late Onset of CRC
Taken together, our results indicate that the IC53 gene is a new mediator of colon cancer progression. Using a bioinformatics approach, we found that rs2737 (a T-C substitution) in the IC53 gene created a potential miR-379 target site (Figure 7A), leading us to hypothesize that rs2737 could modify colon cancer incidence as well as the timing of colon cancer onset. To test our hypothesis, we first examined miR-379 expression in colon cancer tissue. The expression of miR-379 was confirmed by in situ hybridization in 84% (152/182) of independent clinical colon cancer samples in the tissue microarray described above (Figure 7B). The expression pattern of miR-379 was negatively correlated with that of IC53 (correlation coefficient -0.192, odds ratio 0.37, 95% confidence interval 0.15-0.86, P = 0.018; Table 2), and the correlation was stronger in the subjects possessing allele C of rs2737, which created a potential target site of miR-379 (correlation coefficient -0.284, odds ratio 0.25, 95% confidence interval 0.09-0.66, P = 0.003; see Table 2), whereas the correlation was absent in the subjects possessing the T/T genotype (correlation coefficient -0.067, odds ratio 0.70, 95% confidence interval 0.16-3.12, P = 0.66; see Table 2). Next, we tested the interaction between IC53 transcripts and miR-379 directly. We cloned a 75-bp fragment containing the predicted binding site of miR-379 around rs2737 into the 3′UTR of the luciferase reporter vector pMIR-REPORT to generate the wild-type (pMIR-TT) or mutant (pMIR-CC) rs2737 constructs. Using HCT-116 cells cotransfected with pMIR-CC reporter constructs and the pre-miR miRNA precursor of miR-379, we found that the pMIR-CC luminescent signal was about 55% of that of the control (P = 0.015, Figure 7C). For the pMIR-TT reporter, there was no significant difference in signal between cells cotransfected with the pre-miR miRNA precursor of miR-379 and controls (P = 0.86, see Figure 7C). To further explore the effects of miR-379 on the translation of IC53, we used two colon cancer cell lines: LOVO, homozygous for the T allele, and HT-29, heterozygous for the C allele. Immunoblot analysis showed that miR-379 (20 pmol) repressed IC53 translation in HT-29 cells but not in LOVO cells (Figure 7D). This strongly suggested that miR-379 represses IC53 translation in carriers of allele C in vitro and in vivo. To test the hypothesis that IC53-rs2737 can modify colon cancer incidence as well as the timing of cancer onset, we consecutively selected 222 Chinese CRC patients with histologically confirmed colon or rectal adenocarcinoma and 260 age-, sex- and ethnicity-matched controls. The results of IC53-rs2737 genotyping are summarized in Supplementary Table S3. No association between IC53-rs2737 and CRC incidence was observed. We next compared the age distribution at CRC diagnosis for patients with the homozygous wild-type (T/T) genotype with that of patients harboring the homozygous variant (C/C) genotype at rs2737. Among the 222 patients with CRC, the median age at CRC diagnosis was 55.3 years for patients with the T/T genotype and 63.0 years for patients with the C/C rs2737 genotype (95% confidence interval 2.6-12.8 years; P = 0.003 [one-way ANOVA test, two-sided]) (Figure 8A). As seen in Figure 8B, the frequency of the C/C genotype was greatly decreased in the individuals who developed CRC at a young age. Individuals who developed CRC before the age of 45 years showed a homozygous C/C frequency for rs2737 of 10.8%, whereas the homozygous C/C frequency was 26.6% for the whole group (P = 0.039, see Figure 8B).
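The comparison behind P = 0.039 is a test of C/C genotype frequency in early-onset patients (10.8%) against the overall patient group (26.6%). A minimal Python sketch of such a frequency comparison using scipy's χ² test on a 2 × 2 table; the early-onset group size below is invented for illustration, since the text reports only percentages, and the sketch contrasts the subgroup with the remaining patients rather than with the whole group:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: early-onset (<45 y, hypothetical n = 37) vs. all other patients.
# Columns: C/C carriers vs. non-C/C carriers (counts approximated from the
# reported 10.8% and 26.6% frequencies; not the study's raw data).
early_cc = round(0.108 * 37)                  # ~4
other_cc = round(0.266 * 222) - early_cc
table = np.array([
    [early_cc, 37 - early_cc],
    [other_cc, (222 - 37) - other_cc],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")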
Our data indicated that rs2737 may correlate with age at CRC onset.

DISCUSSION
In this study, we found that the expression level of IC53 correlated with the grade and depth of invasion of adenocarcinoma of the colon, and that the involvement of IC53 in the regulation of colon cancer cell invasiveness occurred via modulation of the integrin-PI3-K-Akt pathway. Importantly, we found that the C allele of rs2737 creates a miR-379 target site in the IC53 gene and correlates with the late onset of CRC. These results provide direct evidence for the biological function of IC53 as a positive regulator of tumor invasive growth and a new target for suppressing colon cancer progression. It is well known that integrins increase tumor cell adhesion and migration and promote invasive growth of cancer cells. We demonstrated in vitro that IC53 stimulates invasive growth of the colon cancer cell line HCT-116 via its effects on integrin production. Blockade of integrins with antibodies against integrins α2, α3 and β4 suppressed invasive growth. These results suggested that IC53 has the potential to promote the migration of HCT-116 cells and of colon cancer cells in vivo, and that the invasive growth effects of IC53 are mediated via integrins. Our results are consistent with several previous studies. For example, in intestinal epithelial cells, the α2β1 integrins mediate Erk activation, which prevents apoptosis induced by serum deprivation. Integrins can also enhance the survival effect of growth factors by facilitating downstream signaling events, such as the effect of α5β1 integrins on intestinal epithelial cells (30). The PI3-K signaling pathway has been shown to play a pivotal role in the intracellular signal transduction pathways involved in cell growth, cellular transformation and tumorigenesis. Analysis of colon adenocarcinoma cell lines indicates that the PI3-K signaling pathway is upregulated in colon cancers (31,32), along with the phosphorylation of Akt. Inhibition of the PI3-K pathway with wortmannin resulted in suppression of the anchorage-independent growth of colon cells in a soft agar assay (33). Integrins have been linked to PI3-K/Akt signaling in promoting tumor cell invasiveness. In the presence of growth factors, integrins can prevent apoptosis of fibroblasts through focal adhesion kinase (FAK) and mediate activation of PI3-K and Akt (34). Phosphorylation of Akt has been shown to affect Wnt (Wnt-1, Wnt-3a) signaling, a pathway central to the initiation of colorectal carcinogenesis. Akt activation leads to inhibition of the proapoptotic glycogen synthase kinase 3β, with a resultant increase in the level of the antiapoptotic β-catenin protein (35), and acts synergistically with the Ras and Raf cascades, which are critical in colorectal carcinogenesis (36,37). We found that IC53 upregulates integrins, stimulates PI3-K activation and increases Akt phosphorylation. Antibodies against integrins could abolish IC53-induced PI3-K activation, and PI3-K-specific inhibitors (wortmannin and LY294002) could block Akt phosphorylation, suggesting that integrins induce PI3-K activation and that PI3-K mediates IC53-induced Akt phosphorylation. In our studies, wortmannin and LY294002, two inhibitors of PI3-K, could also inhibit the migration of HCT-116 cells, as they did Akt phosphorylation, indicating that IC53-induced HCT-116 cell migration in vitro occurs via activation of the integrin-PI3-K-Akt pathway.
Our results are consistent with the data reported by Stav et al. (23), which showed that IC53 is overexpressed in tumor tissues of lung adenocarcinoma. In contrast, IC53 was reported to be a tumor suppressor in the HeLa, H1299, HT1080 and U2OS cell lines (24)(25)(26); however, none of these cell lines originated from colon cancer, and these data indicate that IC53 probably has diverse effects in different cancers. We speculate that IC53, as a cytoplasmic protein (data not shown), may act as a regulator in signaling pathways and exert diverse effects by binding to different partners. IC53 was first identified as a protein binding to p35 (38), the regulatory protein of cdk5; overexpression of the cdk5/p35 complex reversed the inhibitory effects of ciglitazone and promoted cell growth in the colon cancer cell line HT29 (39). Thus, IC53 may promote colon cancer progression through regulating the activity or expression of cdk5. It is well known that modifier factors, including both environmental and genetic components, influence clinical phenotypes (8). Several reports have shown that genetic polymorphisms may constitute genetic modifiers of age at diagnosis of CRC (12,14,15). In this study, we found that rs2737 in the IC53 gene, a positive mediator of colon cancer progression, created a miR-379 target site, and that the rs2737 C/C genotype was associated with late onset of CRC. Furthermore, the frequency of the C/C genotype was much lower in patients aged <45 years at diagnosis than in the whole group, indicating that the C allele of rs2737 may be protective against CRC. The function of miR-379 remains to be fully elucidated. However, our results showed that the expression pattern of miR-379 was negatively correlated with that of IC53 in colon cancer tissues and that miR-379 inhibited IC53 translation. We also found that the rs2737 C/C genotype created a miR-379 target site and resulted in late onset of CRC. To the best of our knowledge, this is the first report to show that miR-379 functions as a novel modulator of CRC progression. In summary, our results indicate that IC53 is a positive regulator of CRC progression via the upregulation of integrin expression, activation of PI3-K and an increase in Akt phosphorylation. Importantly, we demonstrated that the C allele of rs2737 creates a miR-379 target site in the IC53 gene and correlates with the late onset of CRC. The findings from this study may significantly contribute toward the development of improved preventative or therapeutic strategies for CRC. Future prospective studies could extend these findings by using a greater sample size.

ACKNOWLEDGMENTS
This study was supported by the National High Technology Research and Development Program of China (863 Program, number 2006AA02Z477, to R Hui), the National Science and Technology Major Projects (number 2009ZX09501-026, to R Hui) and the Foundation of Beijing Municipal Committee of Science and Technology (number D0905001040631, to J Ji).

DISCLOSURE
The authors declare that they have no competing interests as defined by Molecular Medicine, or other interests that might be perceived to influence the results and discussion reported in this paper.
9,030
2011-01-01T00:00:00.000
[ "Biology" ]
Recent Advances in Antibacterial and Antiendotoxic Peptides or Proteins from Marine Resources
Infectious diseases caused by Gram-negative bacteria and sepsis induced by lipopolysaccharide (LPS) pose a major threat to humans and animals and cause millions of deaths each year. Marine organisms are a valuable resource library of bioactive products with huge medicinal potential. Among them, antibacterial and antiendotoxic peptides or proteins, which are composed of metabolically tolerable residues, are present in many marine species, including marine vertebrates, invertebrates and microorganisms. Many studies have reported that these marine peptides and proteins or their derivatives exhibit potent antibacterial and antiendotoxic activity in vitro and in vivo. However, their categories, their heterologous expression in microorganisms, the physicochemical factors affecting peptide or protein interactions with bacterial LPS and their LPS-neutralizing mechanisms are not well known. In this review, we highlight the characteristics and anti-infective activity of bifunctional peptides or proteins from marine resources, as well as the challenges and strategies for further study.

Introduction
Gram-negative bacterial infectious diseases pose a major threat to humans and animals, despite the tremendous progress that has been made in medical drugs. Lipopolysaccharide (LPS), well known as an endotoxin, is a major component of the cell wall of Gram-negative bacteria. It normally occupies up to 90% of the outer leaflet of the outer membrane and is connected by Mg²⁺ ions to form an oriented and highly ordered structure [1]. LPS is also a key signaling molecule in the pathogenesis of infection, inflammation, sepsis and multiple organ failure [2]. Septicemia is a serious disease in animal husbandry (including cattle, buffalo, pigs, fallow deer, horses, etc.) [3][4][5] that causes serious economic losses. It is also a serious human disease: approximately 18 million sepsis cases and 8 million deaths occur in people each year [6]. Clinically, many therapeutic strategies have been applied to try to neutralize the pathogenic activity of endotoxin and prevent further deterioration of sepsis, but no satisfying therapeutic drugs exist to date [7]. Although many antibiotics have a good bactericidal effect, they cannot effectively prevent septic shock caused by LPS because they lack the ability to neutralize LPS [8]. On the other hand, antibiotic treatment of bacterial infection often causes accelerated release of bacterial LPS [9,10]. It has been reported that a 3- to 20-fold increase in the total concentration of LPS occurs as a consequence of antibiotic action on Gram-negative bacteria [11]. Therefore, there is an urgent need for new drugs that kill bacteria and neutralize LPS at the same time. A series of arenicin-3 analogs, such as NZ17074, N2 and N6, were designed and synthesized to improve antibacterial activity [35][36][37]. NZ17074, N2 and N6 displayed more potent antibacterial activity than their parent peptide against Escherichia, Salmonella, Pseudomonas, Staphylococcus and Listeria, though not against Bacillus and Candida, and showed a high capacity to bind LPS in molecular docking and BODIPY-TR-cadaverine (BC) displacement assays. These peptides could bind to LPS by forming hydrogen bonds between positively charged amino acids (such as His and Arg) and the side fatty acid chains of LPS.
In addition, in vivo experiments showed that N2 and N6 significantly enhanced the survival rate of peritonitis- and endotoxemia-induced mice, indicating antibacterial and detoxifying activity in vivo [34]. The results suggest that arenicins and their derivatives with low cytotoxicity have potential as therapeutic agents and adjuvants for the treatment of bacterial infections. Furthermore, the antimicrobial activity of the chimeric peptide of N6 and Tat11 (CNC) and of the C-terminally amidated N6 (N6NH2) against Salmonella typhimurium in RAW264.7 cells was increased by 67.2-76.2% and 96.3-97.6%, respectively, compared with N6 (our unpublished data). The efficacy of CNC and N6NH2 against S. typhimurium in mice was much higher than that of N6, suggesting that they may be excellent candidates for novel antimicrobial agents to treat infectious diseases caused by intracellular pathogens.

Cyclic Peptides from Marine Bacteria
Novel cyclic peptides, ogipeptins A, B, C and D, were isolated from the secondary metabolites of the marine bacterium Pseudoalteromonas sp. SANK 71903 (Table 1) [38,39]. These peptides are acylated cyclic peptides with basic and hydrophobic motifs. Moreover, the four ogipeptins showed strong antibacterial activity against E. coli, with MICs of 0.25-1 µg/mL, and slightly weaker activity against S. aureus, with MICs of 8-128 µg/mL (Table 2). Meanwhile, flow cytometry analysis showed that the ogipeptins could block the binding of LPS to the cell surface receptor CD14 in vitro, with IC₅₀ values of 4.8, 6.0, 4.1 and 5.6 nM, respectively. It was also found that ogipeptins were able to inhibit the release of TNF-α caused by LPS [38,39].

Anti-LPS Factors (ALFs) from Crustaceans
Anti-lipopolysaccharide factors (ALFs), potential AMPs that bind and neutralize LPS, are common effectors of innate immunity in crustaceans [40]. The first ALF, named LALF, was isolated from amoebocytes of the horseshoe crab Limulus polyphemus; it has antibacterial activity against Gram-negative bacteria and can neutralize LPS [40,41]. The first shrimp ALF (named ALFFc) was reported in Fenneropenaeus chinensis [42]. Since then, other ALFs active against different bacteria and white spot syndrome virus (WSSV), such as ALFPm3, SpALF5 and FcALF5, have been found to be widely distributed in different crustacean species, including the mud crab Scylla paramamosain, the black tiger shrimp Penaeus monodon and the Chinese shrimp Fenneropenaeus chinensis (Table 2) [43][44][45]. The recombinant ALFPm3 could bind to the lipid A of LPS [46]. Recently, a novel ALF named SpALF6, isolated from the mud crab Scylla paramamosain, and its variant SpALF6-V (H46R and A110P) possessed strong binding activity to LPS and exhibited broad antimicrobial activity against Gram-negative bacteria (such as Vibrio and E. coli), Gram-positive bacteria (such as S. aureus) and fungi (such as C. albicans) [47]. The phosvitin (Pv) protein from zebrafish had potent antibacterial activity against E. coli, S. aureus and Aeromonas hydrophila and could bind to the LPS, lipoteichoic acid or peptidoglycan of bacteria (Table 2) [58]. Pv significantly reduced LPS-induced tumor necrosis factor (TNF)-α production in RAW264.7 cells, suggesting that Pv has LPS-neutralizing capacity in vitro. Moreover, the TNF-α level in mice was considerably decreased, and the survival of endotoxemia mice was promoted, by Pv [58].
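IC₅₀ values like those reported in this section (e.g., for the ogipeptins' blockade of LPS-CD14 binding) are typically obtained by fitting a dose-response (Hill) curve to inhibition measurements. A self-contained Python sketch with scipy, fitted to invented dose-response data rather than any values from the studies cited above:

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fractional inhibition as a function of concentration (nM)."""
    return conc**slope / (ic50**slope + conc**slope)

# Hypothetical inhibition of LPS-CD14 binding at increasing peptide doses.
conc = np.array([0.5, 1, 2, 4, 8, 16, 32])          # nM
inhib = np.array([0.08, 0.15, 0.30, 0.48, 0.66, 0.82, 0.91])

(ic50, slope), _ = curve_fit(hill, conc, inhib, p0=[5.0, 1.0])
print(f"IC50 = {ic50:.1f} nM, Hill slope = {slope:.2f}")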
The phosvitin-derived peptide Pt5, which consists of the C-terminal 55 residues of zebrafish phosvitin, has antibacterial activity and immunomodulatory function. Pt5 also protected zebrafish from A. hydrophila infection [59]. Pt5e, one of the Pt5 mutants, showed strong antibacterial activity against E. coli and S. aureus, with MICs of 1.2-1.8 µM. Pt5e significantly inhibited the release of LPS-induced TNF-α and interleukin (IL)-1β from RAW264.7 cells and from mice, respectively [60]. Additionally, Pt5e remarkably promoted the survival of endotoxemia mice. It was also shown that Pv and its derivatives are non-cytotoxic and non-hemolytic, making them better candidates than polymyxin B and LL-37 for development as novel sepsis therapeutic agents and antibacterials. The zinc finger protein ZRANB2 from zebrafish can function as a pattern recognition receptor, which interacts with LPS and thereby recognizes Gram-negative bacteria. ZRANB2 and its truncations (Z1/37, Z11/37) exhibited activity against, and bound to the LPS of, E. coli, Vibrio anguillarum and A. hydrophila in vitro, with IC₅₀ values of 8.5-9.7 µg/mL (Table 2) [61]. They are also involved in the antimicrobial activity of developing embryos against A. hydrophila in vivo. The truncation Z38/198, in which the N-terminal 37 residues of ZRANB2 are deleted, lost both LPS-binding activity via lipid A and antibacterial activity against Gram-negative bacteria in vitro and in vivo, indicating that the N-terminal 37 residues play a key role in the activity, owing to the loss of the Zn²⁺-binding site and the formation of incorrect molecular structures [61]. This provides a new viewpoint for the study of the immunological functions of zinc finger proteins. Ls-Stylicin1, which is characterized by a Pro-rich N-terminal region and a Cys-rich C-terminal region, was isolated from the penaeid shrimp Litopenaeus stylirostris. The recombinant Ls-Stylicin1 exhibited strong antifungal activity against Fusarium oxysporum. Additionally, it displayed potent LPS-binding activity, but lower antimicrobial activity, against Vibrio sp., with MICs of 40~80 µM (Table 2) [62].

Heterologous Expression of Antibacterial and Antiendotoxic Marine Peptides or Proteins
Arenicin-2 was fused with different partner proteins (including ketosteroid isomerase (KSI), cellulose-binding domain (CBD) and thioredoxin A) and overexpressed in E. coli; the yield was up to 5 mg/L (Table 3) [63]. Both recombinant and chemically synthesized arenicin-2 could inhibit Gram-positive bacteria (such as Listeria monocytogenes, M. luteus, B. megaterium and S. aureus), Gram-negative bacteria (E. coli and Agrobacterium tumefaciens) and fungi, including C. albicans and the spore germination of Fusarium solani [63]. A series of arenicin-1 variants were designed and recombinantly expressed in E. coli, with a yield of 1-4 mg per liter of culture. These variants exhibited low hemolysis and potent antibacterial activity against S. aureus (MICs of 3.13-50 µM), E. coli (MICs of 0.8-50 µM) and P. aeruginosa (MICs of 3.13-50 µM), respectively [29]. The shortened arenicin-1 analogs ALP1 and ALP2 (17 residues) were obtained by recombinant expression in E. coli, with a yield of 7.5-9 mg per liter of fermented supernatant. Both ALP1 and ALP2 showed improved antibacterial activity and cell selectivity in contrast to arenicin-1 [33]. Piscidin 1 and piscidin 3 were successfully expressed in E. coli, with a yield of 1 mg per liter of culture and a purity of over 90% (Table 3) [64].
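The MIC values cited throughout this section come from two-fold broth microdilution series: the MIC is read as the lowest concentration at which no visible growth occurs. A small Python sketch of that readout logic, with invented growth calls for one hypothetical peptide:

def mic_from_series(concentrations, growth):
    """Return the lowest concentration with no growth at it and above.

    concentrations: two-fold dilution series (e.g., ug/mL).
    growth: parallel list of booleans (True = visible growth).
    """
    pairs = sorted(zip(concentrations, growth))   # ascending by concentration
    mic = None
    for conc, grew in reversed(pairs):            # scan high -> low
        if grew:
            break
        mic = conc
    return mic

# Hypothetical series for a peptide against E. coli.
concs = [128, 64, 32, 16, 8, 4, 2, 1, 0.5, 0.25]
grew = [False] * 8 + [True, True]                 # growth only at 0.5 and 0.25
print(f"MIC = {mic_from_series(concs, grew)} ug/mL")  # MIC = 1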
The recombinant Pv could significantly inhibit the growth of E. coli, A. hydrophila and S. aureus, with half-inhibitory concentrations (IC₅₀) of 3.1, 3 and 3 µM, respectively [65]. The recombinant zebrafish phosvitin-derived peptide Pt5 could enhance the survival of zebrafish infected with A. hydrophila [59]. The Pt5 derivative Pt5e was also successfully expressed in E. coli and displayed dual antibacterial and LPS-neutralizing function in vitro and in vivo [60]. SpALF6 expressed in E. coli had potent antibacterial and antifungal activity and also recognized LPS [47]. The recombinant zebrafish ZRANB2 produced in E. coli BL21 had antibacterial activity against A. hydrophila, with an IC₅₀ value of 9.7 µg/mL, but the recombinant Z38/198 retained neither antibacterial activity against E. coli, V. anguillarum and A. hydrophila in vitro nor affinity for LPS (Table 3) [61]. ALFPm3 was also successfully expressed in P. pastoris; the recombinant peptide was active against bacteria (MICs of 5.6~25 µM) and WSSV [43,66]. The arenicin-3 variant NZ17074 was also successfully expressed in P. pastoris by fusion with SUMO3, and the expressed peptide had significant antibacterial activity against E. coli, S. enteritidis and P. aeruginosa, with MICs of 2~4, 2 and 8~16 µg/mL, respectively [67]. The low yields of these marine peptides or proteins expressed in E. coli and P. pastoris may be attributed to the antibacterial nature of the peptides, which makes them potentially fatal to the hosts; additionally, their small molecular size and high cationic character make them highly susceptible to proteolytic degradation during expression [68].

Hydrophobicity and Charge
AMPs are usually a class of cationic amphiphilic peptides. The core oligosaccharide and phosphate groups of LPS, a major component of the outer membrane of Gram-negative bacteria, confer a negative charge, giving LPS a strong affinity for cationic AMPs [69,70]. How do AMPs bind to LPS, and what influences their interaction? A large amount of LPS is released from Gram-negative bacteria and forms micelles. Positively charged AMPs and negatively charged LPS move close to each other through electrostatic interactions [71]. AMPs may then insert deeply into the interior of the LPS micelles by hydrophobic interactions and finally interact with the acyl chains of lipid A through specific amino acid side chains [72]. Hydrophobic amino acids are essential for the interaction between peptides and LPS [73,74]. CLP-19, derived from Limulus ALF, and its analogs (CRP-1 and CRP-2) were chemically synthesized to assess the effects of the hydrophobic residues of peptides on LPS-binding ability [75]. The results showed a strong positive correlation between hydrophobicity and LPS-binding ability, in agreement with previous studies [73,74].
reported that a Glu-to-Lys substitution in temporin L promoted its ability to bind to LPS [79]. Thus, the charge balance of a peptide is a vital parameter in the design of improved LPS-neutralizing peptides [77]. Additionally, the distance between the positively charged residues is very important for LPS binding. It was found that a typical distance of 12~15 Å between the charged amino groups of Lys residues in Pa4 from sole fish and in MSI-594 (a magainin derivative) agrees with the inter-phosphate distance of the phosphate groups of the lipid A domain of LPS, making the AMP and LPS conformations geometrically compatible, as shown by NMR analysis [49,80]. Replacement of Phe with Ala in MSI-594 led to a loose peptide structure, which markedly (4-fold) reduced its affinity for LPS [81].

Basic Amino Acid Content
In addition to hydrophobic amino acids, the binding of LPS to peptides or proteins also requires basic amino acids and their precise structural arrangement [73]. Kushibiki et al. used molecular docking to study the interaction between tachyplesin I and LPS and found that the amino group of Lys1 and the guanidyl group of Arg17 in tachyplesin I were in close proximity to the phosphate groups of LPS, indicating close packing between the basic residues of the peptide and the phosphate groups or saccharides of LPS [82]. Similarly, the indole ring of Trp2 and the aromatic ring of Phe4 of the peptide were close to the acyl chains of lipid A, indicating that close packing also exists between the aromatic residues of tachyplesin I and the hydrophobic region of LPS [82]. NZ17074 and its derived peptides N2 and N6 could bind to LPS molecules [36]. The results showed that hydrogen bonds were formed between the positively charged side chains of basic residues (His9, Arg14 and Arg19) of NZ17074 and the fatty chains MYR-1014, GLC-1007 and KDO-1003 of LPS. Salt bridges were formed between Arg10, Arg14 and Asn21 of N2 and the fatty chains FTT-1010, GMH-1005, MYR-1014 and PA1-1000 of lipid A. Arg10, Arg19 and Asn21 of N6 interacted with the aliphatic chains FTT-1010, GLC-1006 and PA1-1000 of LPS by forming hydrogen bonds or salt bridges [36]. Monisha G. Scott et al. found that, compared with CP29, CP208 interacted with LPS poorly and had little antibacterial activity, owing to the deletion of a Trp residue at the N-terminus of CP29 [14]. These results indicate that basic amino acids may be very important for AMPs to neutralize LPS, possibly because the positively charged basic amino acids easily form hydrogen bonds with the phosphate groups on the lipid head of LPS or with the lipid glycerol groups of LPS [83,84].
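The two determinants discussed above, net positive charge and hydrophobicity, are easy to estimate from a peptide sequence. A minimal Python sketch using a simple pH-7 charge count (Arg/Lys positive, Asp/Glu negative, His weighted at +0.1, an assumption of this sketch) and the Kyte-Doolittle GRAVY score; the example sequence is an arbitrary placeholder, not one of the peptides discussed:

# Kyte-Doolittle hydropathy values for the 20 standard residues.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def net_charge(seq):
    """Approximate side-chain net charge at pH 7 (His counted as +0.1)."""
    return sum({"R": 1, "K": 1, "H": 0.1, "D": -1, "E": -1}.get(aa, 0)
               for aa in seq)

def gravy(seq):
    """Grand average of hydropathy (mean Kyte-Doolittle value)."""
    return sum(KD[aa] for aa in seq) / len(seq)

peptide = "GWRRLLKKAGRVFKG"  # arbitrary placeholder sequence
print(f"net charge ~ {net_charge(peptide):+.1f}, GRAVY = {gravy(peptide):+.2f}")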
Secondary Structure
In a study of the Sushi 1 and Sushi 3 peptides, Ding et al. (2008) found that the two peptides exhibit a random structure in the aqueous phase. When interacting with LPS, Sushi 1 formed an α-helical structure, and Sushi 3 tended to form a mixture of secondary structures containing α-helix and β-sheet via an intermolecular disulfide bond, which may contribute to its high affinity for LPS [85]. Similarly, Mozsolits et al. showed that high α-helicity of peptides is related to high binding affinity for LPS. The results demonstrated that α-helix formation plays a key role in the process of peptides binding to LPS, owing to the amphipathic helical structures of the peptides [86]. Moreover, it has been reported that Pa4, which was isolated from the pacific peacock sole fish, adopts a characteristic "α-helix-loop-α-helix" structure in LPS micelles, which is essential for its interaction with LPS [49].

Disulfide Bond
Apart from their contribution to structure-activity relationships, the disulfide bonds in peptides are very important for LPS binding [87]. The disulfide bonds in Sushi 1 and Sushi 3 are involved in the disruption of LPS micelles, and their reduction led to significant decreases in LPS-peptide binding ability, by 100- and 10,000-fold, respectively, indicating that disulfide bonds play a vital role in the LPS-peptide interaction [87]. A series of 12-residue β-boomerang lipopeptides, some of which contain disulfide bonds, were designed by Mohanram et al. (2014). MIC assays showed that the incorporation of an intermolecular disulfide bond conferred strong bactericidal effects against Gram-negative bacteria. The lipopeptide analogs with disulfide-bridged Cys residues (YI13C, C4YI13C and C8YI13C) showed 80% neutralization at different concentrations of LPS. The high-affinity interaction of peptides with negatively charged LPS is very important for the permeabilization of the cell membrane of Gram-negative bacteria and the neutralization of endotoxin [88].

Biophysical and Chemical Interaction with LPS
Some cationic peptides, such as SMAP-29, CAP-18 and LL-37, which contain both N- and C-terminal LPS-binding sites in the same peptide molecule, can bind to LPS first via electrostatic interactions and then via the displacement of the Mg²⁺ ions, which causes increased mobility and disordered packing of the LPS molecules and their acyl chains [89]. It has also been demonstrated that the K₅L₇ peptide displays a higher LPS-binding capacity than its diastereomer 4D-K₅L₇ [1]. K₅L₇ binds first to LPS, predominantly by electrostatic interactions, and self-associates but does not traverse into the hydrophobic lipid core of LPS, which confers a weaker LPS-permeating ability than that of 4D-K₅L₇. By comparison, after electrostatic interactions with LPS, 4D-K₅L₇ stays as a monomer, accumulates on the surface of the lipid core until it reaches a threshold concentration, and then micellizes the LPS vesicles, indicating a more potent membrane-permeating ability than that of K₅L₇ [1]. This result is consistent with a previous study showing that polymyxin B interacts with LPS through a rapid initial association followed by a slower insertion into LPS [90]. The hydrophobic residues of tachyplesin I (TP I), which was isolated from the horseshoe crab, interacted with the acyl chains of LPS. 1D ¹H NMR and Trp fluorescence analyses showed that the Trp2 residue of TP I is involved in binding to LPS. The docking model showed that cationic residues at the N- and C-termini of TP I interacted with the phosphate groups or saccharides of LPS, while aromatic residues at the N-terminus of TP I, such as Trp2 and Phe4, interacted with the acyl chains of LPS [82]. Moreover, the multiple Arg or Lys residues in peptides or proteins such as the ALFs (ALFH1, ALFH2), bactericidal/permeability-increasing factor (B/PI), lactoferrin and lysozyme may form a structural motif that most likely binds the anionic groups of lipid A or the inner-core polysaccharide regions of LPS [89,91]. Both the Lys and Arg residues in Sushi 3 interacted with the diphosphoryl head groups of LPS, and its hydrophobic residues contributed to interactions with the acyl chains of LPS [77].
The Sushi 1 peptide could bind parallel to the acyl chains of the lipid layer [92]. The Trp residue of Sushi 1 is located among the hydrophobic acyl chains of lipid A in LPS, which decreases the interaction of LPS with other binding sensors in the host, such as LBP [92]. Additionally, the NMR structure of Pa4 in LPS micelles showed that the Lys residues at positions 8 and 16 could bind to the lipid A moiety of LPS via a salt bridge or hydrogen bond [49]. These results indicate that lipid A plays a vital role in the interaction between peptides or proteins and LPS, and that it is the pharmacophore of the LPS molecule [92].

Inhibition of LPS-Induced Inflammatory Response
How does LPS cause an inflammatory response? LPS released in vivo is transported to the cell-surface receptor CD14 with the help of LBP (an LPS-binding protein in the blood) and forms an LPS-CD14 complex. LPS is then recognized by toll-like receptor 4 (TLR4) on the surface of phagocytic cells, which in turn activates the MAPK and NF-κB pathways and induces the production of pro-inflammatory cytokines. The mechanisms by which peptides or proteins neutralize LPS and inhibit the inflammatory response can be divided into (i) a direct effect, whereby many bifunctional peptides (such as Hc-CATH) or proteins (such as SpALF6) bind LPS directly, thus inhibiting the binding of LPS to the TLR4/MD2 complex and the activation of inflammatory response pathways [18,47]; and (ii) an indirect effect, whereby peptides (such as LL-37 and defensins) or proteins bind competitively to CD14 or TLR4, thereby indirectly inhibiting the downstream LPS-induced reactions (Figure 1) [82,93].
Challenges and Strategies for Antibacterial and Antiendotoxic Marine Peptides or Proteins
Antibacterial/antiendotoxic peptides or proteins have several limitations for application to clinical cases, including toxicity, stability and cost. Different strategies, including fusion expression, residue substitution, reduction of hydrophobicity, cyclization or amidation and cost-effective purification methods, have been developed to overcome these limitations.

Toxicity
Mason et al. studied the role of the C-terminal sequence of Chrysophsin-1 from the red sea bream and found that the toxicity of truncated Chrysophsin-1 toward human lung fibroblast MRC-5 cells was reduced by 2.9 times after the removal of the RRRH sequence at its C-terminus [21]. D-amino acid substitution at the Leu9 position of Pro(3)TL, an analog of the frog skin peptide temporin L (TL, 13 residues long), reduced the hemolytic activity while preserving the strong anti-Candida activity (Table 4) [94]. It has also been reported that shortening the peptide chain length of arenicin-1 (17 residues) is an effective way to diminish its hemolytic activity [33]. Tripathi et al. found that Pro-substituted Chrysophsin-1 analogs with unchanged physicochemical properties displayed significantly decreased cytotoxicity against human red blood cells and mammalian NIH-3T3 cells [20]. The first disulfide bond in the arenicin-3 derivative NZ17074 was deleted, and the peptide was fused with a SUMO partner and expressed in P. pastoris. The production level of the resulting peptide, N6, was increased 1.4-fold compared with NZ17074, indicating reduced toxicity to the host cells after removal of the first disulfide bond and fusion with SUMO [37]. The hydrophobicity of LPS-neutralizing antibacterial peptides or proteins, such as arenicin-1, is closely related to cytotoxicity; decreases in hydrophobicity can reduce the toxicity of peptides or proteins [32,34]. Additionally, topical application of marine peptides is preferred as a strategy to overcome host toxicity, immunogenicity and instability on exposure to host proteases (Table 4) [95].

Stability
C-terminal amidation of ShK from the sea anemone enhanced its resistance to exoproteases (Table 4) [96]. N-terminal acylation of msHemerycin, which was isolated from the lugworm Marphysa sanguinea, conferred stronger antibacterial activity with increasing fatty acid length and improved stability against proteolytic degradation by some peptidases [97]. In our study, C-terminal amidation of N6 enhanced the antibacterial activity against S. typhimurium in mice and improved stability 3-fold at low pH (data not published), which may be associated with the increased positive charge of the peptide and with stabilization of the amphipathic helix [99]. Additionally, cyclization is commonly used as a strategy in the pharmaceutical industry to constrain the conformation of AMPs, such as lactoferricin, which can increase antibacterial activity [98]. Likewise, backbone cyclization of the conotoxin MII from marine snails of the Conus genus, by using a range of linkers, greatly improved its resistance to proteases such as EndoGluC [100], which may be due to a more constrained structure that is less susceptible to protease degradation [98].
An all-D-amino acid analog of the 25-residue pleurocidin, which was derived from the winter flounder Pleuronectes americanus, showed improved activity against fungi and breast cancer cells, increased proteolytic resistance against proteases such as trypsin, plasmin and carboxypeptidase B, and dramatically decreased hemolytic activity [101-103].

Cost
To provide large quantities of peptides for clinical trials, it is necessary to find efficient methods of peptide production. Short peptides with simple structures, sufficient stability and biological activity can be efficiently produced by chemical synthesis, but this may be limited by high costs. By comparison, heterologous expression may be a more effective and practical way to obtain bioactive peptides at large scale. In particular, fungal fusion expression systems are very favorable and helpful for obtaining high yields of target peptides by enhancing peptide solubility and production efficiency [104]. As seen in the examples of NZ17074 and N6, SUMO fusion has been successfully applied to increase the solubility and yield of peptides [37,67]. Additionally, improvements in solvent extraction techniques (such as supercritical fluid extraction, pressurized solvent extraction and microwave-, ultrasound-, pulsed electric field- or enzyme-assisted extraction) and cost-effective purification methods, including the use of monolithic columns and the intein system in AMP production, can also be introduced to reduce the production cost of antibacterial/antiendotoxic peptides or proteins (Table 4) [17,[105][106][107]. Of note, chemical synthesis, although relatively complex and costly, enables the incorporation of non-natural functionality, such as D-amino acids and acylation modifications, into peptides to improve their activity [38,39,66]. With the development of new technologies, chemical synthesis may receive more and more attention in peptide production in the future.

Conclusions
A large number of bioactive natural products, including peptides and proteins, have been found in the large marine natural resource library. The slow pace of antibacterial/antiendotoxic peptides or proteins in clinical trials may be due to issues of stability, high production cost and unknown toxicity. However, topical application of marine peptides or proteins may be the most promising avenue for clinical practice, given a deeper understanding of their mechanisms of action and progress in cost-effective production in the near future.
6,381.2
2018-02-01T00:00:00.000
[ "Biology", "Chemistry", "Environmental Science", "Medicine" ]
Genomic Sequence Diversity and Population Structure of Saccharomyces cerevisiae Assessed by RAD-seq
The budding yeast Saccharomyces cerevisiae is important for human food production and as a model organism for biological research. The genetic diversity contained in the global population of yeast strains represents a valuable resource for a number of fields, including genetics, bioengineering, and studies of evolution and population structure. Here, we apply a multiplexed, reduced genome sequencing strategy (restriction site-associated sequencing, or RAD-seq) to genotype a large collection of S. cerevisiae strains isolated from a wide range of geographical locations and environmental niches. The method permits the sequencing of the same 1% of all genomes, producing a multiple sequence alignment of 116,880 bases across 262 strains. We find diversity among these strains is principally organized by geography, with European, North American, Asian, and African/S. E. Asian populations defining the major axes of genetic variation. At a finer scale, small groups of strains from cacao, olives, and sake are defined by unique variants not present in other strains. One population, containing strains from a variety of fermentations, exhibits high levels of heterozygosity and a mixture of alleles from European and Asian populations, indicating an admixed origin for this group. We propose a model of geographic differentiation followed by human-associated admixture, primarily between European and Asian populations and more recently between European and North American populations. The large collection of genotyped yeast strains characterized here will provide a useful resource for the broad community of yeast researchers.

Interest in natural isolates has increased as it has become clear that many nonlaboratory strains (including those adapted to various food/industrial processes) have properties absent from the laboratory strains, such as the ability of several wine strains to ferment xylose (Wenger et al. 2010). The wider population of yeast strains represents a deep pool of naturally occurring sequence variation that has been leveraged to investigate the genetic architecture of polygenic traits (Swinnen et al. 2012). In addition, the polymorphisms that are observed in the global yeast population have been acted upon by evolution, making this set of sequences a powerful tool for investigating protein and regulatory sequence function as well as evolution (Nieduszynski and Liti 2011). Understanding the genetic diversity of yeast is therefore relevant to both the food/industrial roles of yeast and its role as a model organism in scientific research. The question of the global population structure of S. cerevisiae is itself an ongoing topic of research. In several publications in the past few years, investigators have explored the genetic diversity and population structure of yeast by using techniques such as multigene sequencing (Fay and Benavides 2005; Aa et al. 2006; Ramazzotti et al. 2012; Stefanini et al. 2012; Wang et al. 2012), whole-genome sequencing (WGS; Liti et al. 2009), tiling array hybridization (Schacherer et al. 2009), and microsatellite comparisons (Legras et al. 2005, 2007; Ezov et al. 2006; Goddard et al. 2010; Schuller et al. 2012). These studies demonstrated that S. cerevisiae is not a purely domesticated organism but can be isolated from a variety of natural environments around the globe.
Although there appears to be some clustering of yeast genotypes by geography (Liti et al. 2009), it also appears that yeast involved in particular human food-industrial processes are often genetically similar to one another. For example, wine strains isolated from around the world display a very high degree of sequence similarity (Fay and Benavides 2005; Legras et al. 2007; Liti et al. 2009; Schacherer et al. 2009). Unfortunately, several of the most diverged groups identified in these studies were represented by relatively small numbers of strains, suggesting that analysis of additional strains might help clarify the structure of global yeast diversity. WGS of a large, diverse set of individuals is the most comprehensive approach to exploring the population structure and genetic diversity of an organism. However, despite the decreasing costs of DNA sequencing, complete genome sequencing of several hundred yeast strains is still a significant expense. In contrast, methods that compare strains by genotyping relatively small numbers of loci, such as microsatellites or a small number of genes (Fay and Benavides 2005), are less expensive, but the results may not reflect the relationships between strains genome-wide. A genome reduction strategy referred to as restriction site-associated sequencing (RAD-seq; Miller et al. 2007; Baird et al. 2008) directs sequence reads to genomic locations adjacent to particular restriction sites. Because most restriction sites are common across strains of the same species, nearly the same subset of every genome is sequenced. Thus, RAD-seq permits the genotyping of a set of strains across a large number of positions scattered across the genome at modest cost. In this work, we apply a multiplexed RAD-seq reduced genome sequencing strategy to explore genetic diversity and population structure in S. cerevisiae. Using this approach, we sequenced more than 200 strains over ~1% of the yeast genome. The strains include multiple representatives from six continents and 38 different countries and were isolated from disparate sources, including fruits, insects, plants, soil, and a variety of human fermentations, such as ragi, togwa, cacao, and olives. From analysis of the resulting multiple alignment, we observed a clear geographical stratification of strains along with evidence of admixture between populations and human-associated strain dispersal.
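The "1% of the genome" figure follows from the density of restriction sites: each site contributes a short sequenced tag on either side. Under the simplifying assumption that every MfeI site yields two 40-bp tags (the enzyme and read length used below in Materials and Methods), a rough in-silico estimate can be computed in a few lines of Python; the genome string here is a random placeholder rather than the S288c reference:

import random

def rad_capture_fraction(genome, site="CAATTG", tag_len=40):
    """Fraction of the genome covered by tags flanking each restriction site."""
    n_sites, start = 0, 0
    while (hit := genome.find(site, start)) != -1:
        n_sites += 1
        start = hit + 1
    covered = n_sites * 2 * tag_len           # ignores tag overlap, a fair
    return covered / len(genome), n_sites     # approximation for sparse sites

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
frac, n = rad_capture_fraction(genome)
print(f"{n} MfeI sites, ~{100 * frac:.2f}% of genome captured")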
Cultures were examined by microscopy at 3 and 10 d, and those harboring budding yeast were plated onto CHROMagar Candida (DRG International, Inc.) and incubated at 30° for 3-5 d. CHROMagar Candida is a culture medium containing proprietary chromogenic substrates that can aid the identification of clinically important yeast (Odds and Bernaerts 1994). On CHROMagar Candida, S. cerevisiae colonies are known to range in hue from white to lavender to deep purple, with most exhibiting the "purple" phenotype (C. L. Ludlow and A. M. Dudley, unpublished data; Boekhout and Robert 2003). Colonies exhibiting these color phenotypes were picked and saved for further study. RAD-sequencing and alignment A subset of strains were RAD-sequenced previously (Hyma and Fay 2013). For the rest, RAD-sequencing was performed as previously described (Lorenz and Cohen 2012; Hyma and Fay 2013). In summary, yeast genomic DNA was extracted in 96-well format and fragmented by restriction enzyme digestion with MfeI and MboI. P1 and P2 adaptors were then ligated onto the fragments. The P1 adaptor contains the Illumina PCR Forward sequencing primer sequence followed by one of 48 unique 4-nucleotide barcodes and finally the MfeI overhang sequence. The P2 adaptor contains the Illumina PCR Reverse primer sequence followed by the MboI overhang sequence. After ligation, the barcoded ligation products were pooled, concentrated, and size selected on agarose gels, with fragments from 150 to 500 bp extracted from the gel. Gel-extracted DNA was further pooled to multiplex 48 uniquely barcoded samples in one sequencing library. The multiplexed DNA library was then enriched with a polymerase chain reaction using Illumina PCR Forward and Reverse primers. Sequencing runs were performed on the Genome Analyzer IIx (Illumina) for 40 bp single-end reads, with one library of 48 multiplexed samples per flow cell lane, yielding 20-40 million reads. The read sequences generated for this study are available at the Sequence Read Archive under accession ERP003504, and for the subset of strains that were RAD-sequenced previously, DRYAD entry doi:10.5061/dryad.g5jj6. Multiple sequence alignments were generated by mapping reads to the S288c reference genome (chromosome accessions: NC_001133.8, NC_001134.7, NC_001135.4, NC_001136.8, NC_001137.2, NC_001138.4, NC_001139.8, NC_001140.5, NC_001141.1, NC_001142.7, NC_001143.7, NC_001144.4, NC_001145.2, NC_001146.6, NC_001147.5, NC_001148.3) and generating consensus reduced-genome sequences for each strain. The tagged reads were split into strain pools by their 4-base prefix barcodes. Reads with Ns or with Phred quality scores less than 20 in the barcode sequence were removed. Any reads with more than 2 Ns outside the barcode also were removed. Reads were aligned to the S288c reference using BWA (version 0.5.8; Li and Durbin 2009), with six or fewer mismatches tolerated. Samtools (version 0.1.8; Li et al. 2009) was then used to generate a pileup from the aligned reads using the "pileup" command and the "-c" parameter. Base calls were retained if they had a consensus quality greater than 20. Positions with root mean squared mapping qualities less than 15 and insertion/deletion polymorphisms were ignored. After filtering there was an average of 209,765 bp for each strain. Sequences from each strain were combined into a multiple sequence alignment via their common alignment to the S288c genome. Sites with more than 10% missing data were removed, resulting in a multiple sequence alignment of 116,880 bp.
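To make the read-processing rules above concrete, the following is a minimal Python sketch of the barcode demultiplexing and quality filters (4-nt prefix barcodes, no Ns or Phred < 20 in the barcode, at most 2 Ns outside the barcode). The barcode table and record format are hypothetical placeholders, not the actual pipeline code used in the study.

```python
from collections import defaultdict

# Hypothetical barcode table: 48 unique 4-nt barcodes -> strain IDs.
BARCODES = {"ACGT": "strain_001", "TGCA": "strain_002"}  # ... 46 more

def phred(qual_string, offset=33):
    """Convert a FASTQ quality string to a list of Phred scores."""
    return [ord(c) - offset for c in qual_string]

def demultiplex(records):
    """Split reads into strain pools by their 4-base prefix barcode,
    applying the quality filters described in the text."""
    pools = defaultdict(list)
    for name, seq, qual in records:  # records: (name, sequence, quality) tuples
        bc, insert = seq[:4], seq[4:]
        # Drop reads with Ns or Phred < 20 anywhere in the barcode.
        if "N" in bc or min(phred(qual[:4])) < 20:
            continue
        # Drop reads with more than 2 Ns outside the barcode.
        if insert.count("N") > 2:
            continue
        if bc in BARCODES:
            pools[BARCODES[bc]].append((name, insert, qual[4:]))
    return pools
```

The pooled reads per strain would then be passed to the aligner (BWA in the study); that step is external to this sketch.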
WGS alignment Previously generated WGS were incorporated into the RAD-seq dataset for population genetic analysis. For genomes with an S288c NCBI coordinate system, sequences were extracted directly based on S288c reference coordinates. For genomes using an alternative coordinate system (Saccharomyces Genome Resequencing Project, SGRP), blat was used to convert from the S288c NCBI reference coordinates to the alternative coordinate system prior to extracting sequences. For assembled genomes without S288c alignments, coordinates were obtained by blast. A fasta file of the S288c reference sequence was generated for each contiguous segment in the multiple sequence alignment. The resulting files were used to query each genome assembly using blast. When quality scores were available, sites with sequence quality less than 20 were converted to "N" before blasting or after sequence retrieval. Duplicated strains Some strains were sequenced by both WGS and RAD-seq. For duplicate strains with pairwise divergence less than 0.0005 substitutions per site, excluding singleton alleles (i.e., alleles found in only 1 strain), only the RAD-seq data were retained for analysis. For duplicate strains that exceeded the threshold, both RAD-seq and WGS data were retained, and strain names were labeled with an "r" and "g," respectively. Differences between the WGS and RAD-seq data could be a result of: (1) sequencing/alignment errors, (2) different monosporic clones from an originally heterozygous isolate, or (3) mislabeled strains. However, we were not able to confidently distinguish between these possibilities. Population analysis Neighbor-joining phylogenetic tree construction was carried out using MEGA (version 5.0; Tamura et al. 2011), based on P-distance with pairwise deletion. Population structure was inferred using InStruct (Gao et al. 2007). Because InStruct failed to converge using all sites, it was instead run on 759 sites with allele frequency greater than or equal to 10%. Polymorphic sites were made biallelic by treating third alleles as missing data. InStruct was run with the parameters "-u 40000 -b 20000 -t 10 -c 10 -sl 0.95 -a 0 -g 1 -r 1000 -p 2 -v 2" with K (number of populations) ranging from 3 to 15. Although the lowest deviance information criterion (DIC) was obtained from a chain with K = 15, there was substantial variation among independent chains. We chose K = 9 as the optimal model to work with, based on the average DIC for K = 10 being nearly identical to that of K = 9 and subsequent decreases in DIC for larger values of K being small compared with the standard deviation in DIC among chains (Table S3). Consensus population assignments for K = 8, 9, and 10 were obtained for the five chains with the highest likelihood using CLUMPP (version 1.1.2; Jakobsson and Rosenberg 2007) with parameters "-m 3 -w 0 -s 2" and with greedy option = 2 and repeats = 10,000. The similarity among the five chains (H') was 0.995 for K = 9, very close to the maximum similarity of 1.0. Compared with K = 9, populations 6 (African, S. E. Asia/Palm, Cacao, Fruit) and 7 (Israel/Soil) were merged for K = 8, and a new population was inferred within populations 3 (Asian/Food, Drink) and 6 (African, S. E. Asia/Palm, Cacao, Fruit) for K = 10 (Figure S1). A second InStruct analysis was performed by the use of a pruned dataset to better conform to InStruct's assumption of independence among markers. We eliminated SNPs within 5 kb of another SNP based on the decline in r² as a function of distance between SNPs (Figure S2).
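As an illustration of the 5-kb pruning rule just described, here is one plausible greedy implementation in Python; the paper does not specify the exact algorithm, so treat the left-to-right retention rule and per-chromosome coordinate format as assumptions.

```python
def prune_snps(positions, min_gap=5000):
    """Greedily keep SNPs so that no retained SNP lies within `min_gap` bp
    of the previously retained one. The greedy rule is an assumption; the
    paper states only that SNPs within 5 kb of another SNP were eliminated.
    `positions` maps chromosome -> list of SNP coordinates."""
    kept = {}
    for chrom, sites in positions.items():
        retained = []
        for pos in sorted(sites):
            if not retained or pos - retained[-1] >= min_gap:
                retained.append(pos)
        kept[chrom] = retained
    return kept

# Example: two SNPs closer than 5 kb collapse to one retained site.
print(prune_snps({"chrI": [1000, 3500, 12000]}))  # {'chrI': [1000, 12000]}
```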
The pruned dataset contains 495 SNPs and an average distance between SNPs of 22.4 kb, compared with 14.9 kb in the full set of 759 SNPs. In comparison with the full dataset, the pruned data also had an optimum of 9 populations but with more variance among runs, as indicated by H' (Table S3). The similarity (H') between the CLUMPP consensus of the full and pruned datasets was 0.90. Seven strains showed population admixture proportions that changed by more than 0.25 for any population. The seven strains are all Israeli strains in the Israeli population (#7) and showed an increase in admixture with the European (#8) and Human (#4) populations in the pruned analysis. Most of the strains (211) showed no changes in admixture proportions greater than 0.125. Multidimensional scaling was performed on all 5868 sites and 262 strains using the identity-by-state distance between each pair of strains and the "cmdscale" function in R with three dimensions. Hierarchical clustering of either sites or strains was performed using the "hclust" function in R with complete linkage and the Euclidean distance of identity by state. RESULTS In an effort to expand the number and diversity of characterized S. cerevisiae strains available to the yeast community, we assembled and characterized a collection of >200 strains (Materials and Methods and Table S1). This strain set covers a diverse range of ecological niches and geographical locations, including strains used in previous studies of yeast global and local population structure (Fay and Benavides 2005; Ezov et al. 2006; Liti et al. 2009; Schacherer et al. 2009; Goddard et al. 2010) and strains with published WGS data. We sequenced each of these strains using a RAD-seq strategy to produce an initial multiple alignment (Materials and Methods). Strains with published WGS data were then added to the alignment to facilitate comparison between the results generated using WGS and RAD-seq data (Materials and Methods). The final dataset contained 262 strains genotyped across 116,880 base positions, of which 5868 sites were polymorphic (File S1). Genetic relationship among strains To visualize the phylogenetic relationships between the strains, we generated a neighbor-joining tree from the reduced genome multiple alignment (Figure 1 and File S2). The tree agrees well with the geographic origins of the strains and, for the subset of strains in common, is also consistent with a previous study that used WGS (Liti et al. 2009). To more directly compare our results to those obtained using WGS, we constructed a phylogenetic tree for only the subset of strains (38) analyzed in both our study and the previous whole-genome analysis (Figure S3). The structure of the resulting tree is very similar to that produced in the previous study (Figure 1C in Liti et al. 2009) and shows the same clustering of "Wine," "West African," "Malaysian," "Sake," and "North American" strains. Similarly, using our full dataset, these groups are found in clear and well-separated clusters on our tree (Figure 1). We also identified a small isolated cluster of strains from Ghana involved in cacao fermentation and another discrete cluster of strains from the Philippines. A clear exception to this geographical stratification is the dispersal of European/wine strains around the globe, a result that is also consistent with the previous study (Liti et al. 2009).
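The tree above rests on P-distances with pairwise deletion, computed in MEGA as described in Materials and Methods. A hedged Python sketch of that distance measure, which a generic neighbor-joining implementation could consume, is given below; it is a sketch of the distance definition, not MEGA's code.

```python
import numpy as np

def p_distance(a, b, missing="N"):
    """P-distance with pairwise deletion: the proportion of differing
    sites among positions where both sequences have a base call."""
    pairs = [(x, y) for x, y in zip(a, b) if x != missing and y != missing]
    if not pairs:
        return float("nan")
    return sum(x != y for x, y in pairs) / len(pairs)

def distance_matrix(seqs):
    """Symmetric pairwise P-distance matrix over aligned sequences."""
    n = len(seqs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = p_distance(seqs[i], seqs[j])
    return d
```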
We identified two clusters of strains that appear closely related to the European/wine cluster, one isolated from European olives and another consisting primarily of a collection of environmental isolates from New Zealand (Goddard et al. 2010). Results for this second group are consistent with the hypothesis that the strains largely reflect a population brought to New Zealand as a consequence of European settlement. Together with the main "European/Wine" cluster, these two groups of strains appear to identify a "greater-European" region of the tree. Strains isolated from North America fell into two highly diverged regions of the tree. One set of strains (Figure 1, "North America Wild") defines a cluster of strains almost universally isolated from North America (largely environmental samples from soil and vegetation). The second set is genetically similar to the European/wine strains, with strains scattered within the main European/wine cluster and related groups (Figure 1 and Table S1). There are also a small number of strains isolated from North American environments in the "New Zealand" cluster. As previously observed (Hyma and Fay 2013), North American strains isolated from even the same locale (e.g., a single vineyard) split into subsets from both the North America Wild cluster and greater-European regions of the tree. These results are consistent with the assertion that in many locations across North America (particularly vineyards), a native population of yeast strains coexists sympatrically with a population introduced by European settlement (Hyma and Fay 2013). Another instance in which highly diverged strains were isolated from a single small geographical location is provided by the set of strains isolated from "Evolution Canyon," a well-studied location in Mount Carmel National Park of Israel (Ezov et al. 2006). These strains fell into one large and two smaller clusters on the tree (Figure 1; Israel 1, Israel 2, and a third cluster within a diverse set of strains labeled "Mixed"). The genomic diversity of these strains is remarkable, given that they were collected within a few hundred meters of each other. Strains widely used in the laboratory Included in the multiple alignment and phylogenetic tree is a group of seven strains widely used in the laboratory (S288c, W303, RM11-1a, FL100, Sigma 1278b, SK1, Y55), several of which are known to be closely related (Winzeler et al. 2003). The strains SK1 and Y55 are closely related to the West African cluster, while S288c, FL100, and W303 are related to one another and close to the European/Wine cluster. The position of these strains on the tree agrees with two previous studies (Liti et al. 2009; Schacherer et al. 2009), both of which described the limited sequence diversity of the lab strains. For example, none of the commonly used lab strains are derived from certain major populations, including the Asian group and the North America Wild group (Figure 1). Together, these results suggest that the total sequence diversity of the yeast global population is poorly sampled by this set of strains in common laboratory use. To compare the total sequence diversity captured by the full set of 262 strains relative to that present in the subset of laboratory strains, we analyzed all alleles (defined as single base pair polymorphisms) that occurred in more than one strain.
Figure 1 Neighbor-joining tree of the 262 S. cerevisiae strains based on multiple alignments of 116,880 bases. Branch lengths are proportional to sequence divergence measured as P-distance. Scale bar indicates 5 polymorphisms/10 kb of sequence. Geographical and environmental clusters of strains are named and are indicated by black-outlined/gray-filled ovals. Colored ovals with numbering refer to strain populations identified in Figure 2. Seven strains widely used in the laboratory are labeled.
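The allele tally just described (ignoring singletons and heterozygous calls) can be sketched in Python as follows. The use of IUPAC ambiguity codes to flag heterozygous calls is an assumption for illustration; the text does not specify how heterozygous calls were encoded.

```python
from collections import Counter

def nonsingleton_alleles(column, missing="N", het_codes="RYSWKM"):
    """For one alignment column, return the alleles seen in more than one
    strain, ignoring missing data and heterozygous calls.
    het_codes: IUPAC ambiguity codes (an encoding assumption)."""
    counts = Counter(c for c in column if c != missing and c not in het_codes)
    return {allele for allele, n in counts.items() if n >= 2}

def classify_loci(columns):
    """Count polymorphic loci by number of non-singleton alleles
    (2 = biallelic, 3 = triallelic, 4 = tetraallelic)."""
    classes = Counter()
    for col in columns:
        k = len(nonsingleton_alleles(col))
        if k >= 2:
            classes[k] += 1
    return classes
```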
Alleles found in only one strain (singletons) were ignored to reduce the effect of sequencing errors, as were heterozygous calls. The results show a total of 3321 polymorphic loci with 6680 total alleles (3283 biallelic, 38 triallelic, and 0 tetraallelic positions). Only 1703 of these 6680 alleles were observed in the set of lab strains, and thus the set of strains assembled in our panel represents a significant increase (~4-fold) in sequence diversity over the set of laboratory strains. Population structure The infrequent sexual cycle of S. cerevisiae, combined with its high rate of self-mating, promotes the establishment of strong population structure and enables clonal expansion of admixed populations. To infer population structure and admixture between populations while accounting for selfing, we applied a Markov chain Monte Carlo algorithm, InStruct (Gao et al. 2007), to the 759 sites with an allele frequency of 10% or more. On the basis of the deviance information criterion, we inferred the most likely number of populations to be nine (Materials and Methods) and labeled each population by the most common geographic location and/or substrate from which the strains were originally isolated (Table S2). The relevant genotypes of each strain along with their inferred population ancestry are shown in Figure 2 and Table S1. The nine populations consist of two North American oak populations, an Asian food and drink population, a European wine and olive population, an African/S. E. Asian population, a New Zealand population, an Israeli population, and two populations associated with industrial/food processes. These populations match well with the major groupings seen on the phylogenetic tree, with the two North American populations identified by InStruct corresponding to the "North America Wild" grouping (Figure 1 and Figure S1). It is notable that these two subdivisions do not reflect a clear geographic pattern within North America (Figure 2 and Table S1). The New Zealand population clearly shares many alleles with the European strains but harbors a small number of sites that make it unique. One of the two human-associated groups contains the majority of laboratory strains, emphasizing the uneven sampling of yeast populations represented by the set of laboratory strains. Admixture For each population, strains were observed with high levels of ancestry to that population. However, 38% of strains showed appreciable levels of admixture, defined as less than 80% ancestry from a single population. To assess the overall coincidence of mixture between pairs of populations, we tabulated the number of strains with at least 20% ancestry from each pair of populations (Figure 3). Most admixed strains involved the European, Asian, or African populations.
Figure 2 Clustered genotypes with inferred population structure and membership. Sites were clustered by complete hierarchical clustering by use of the Euclidean distance of allele sharing (identity by state). Strains were grouped by population structure and memberships inferred using InStruct. Minor alleles are shown in red, heterozygous sites in yellow, common alleles in black, and missing data are gray. Populations are labeled by the most common source and/or geographic location from which they were originally isolated.
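The pairwise tabulation underlying Figure 3 (strains with at least 20% ancestry from each of two populations) can be illustrated with a short Python sketch; the ancestry-table format here is a hypothetical stand-in for the InStruct output.

```python
from itertools import combinations

def admixture_pairs(ancestry, threshold=0.20):
    """Count strains with >= `threshold` ancestry from each population in
    a pair. `ancestry` maps strain -> list of population fractions
    (hypothetical format for the InStruct membership table)."""
    counts = {}
    for strain, fracs in ancestry.items():
        pops = [i for i, f in enumerate(fracs) if f >= threshold]
        for pair in combinations(sorted(pops), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

# Toy example with three populations:
print(admixture_pairs({"s1": [0.6, 0.3, 0.1], "s2": [0.5, 0.1, 0.4]}))
# {(0, 1): 1, (0, 2): 1}
```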
However, not all pairs of populations were equally likely to admix. Admixture was detected between the European population and the first North American population (InStruct #1), but not the second North American population (InStruct #2). More generally, admixture with the two North American populations was largely restricted to the African and European populations or to admixture between the two North American populations themselves. Like the European population, the Asian population showed admixture with most other groups. The two human-associated populations were largely admixed with either the Asian or European populations. Finally, the New Zealand population only admixed with the European population, and the Israeli population was largely admixed with the Asian and one of the human-associated populations. Heterozygosity Matings within or between populations can result in strains with a large proportion of heterozygous sites. Most strains in this study had zero or a relatively small number of such sites. These strains could be naturally occurring homozygotes, haploids, or strains converted to homozygous diploids, a standard practice in some laboratories. However, we did identify 65 strains with more than 20 heterozygous sites (Table S1). The two strains with the greatest number of heterozygous sites, DCM6 (n = 305) and DCM21 (n = 288), were isolated from cherry trees in North America and appear to be hybrids between the European and North American populations (Hyma and Fay 2013). Other strains with a large number of heterozygous sites (Table S1) also were isolated from fruit-related sources, including three from cacao fermentations, one from banana fruit, one from fruit juice, and one from a spontaneous grape juice fermentation. Across these 65 strains, 42 also exhibit notable admixture, defined by less than 80% ancestry from a single population. The proportion of heterozygous strains exhibiting appreciable admixture (65%) is significantly greater (Fisher's exact test, P = 1.5 × 10⁻⁴) than that of strains with little or no heterozygosity (38%), suggesting that heterozygosity was derived in part by admixture between populations. The difference in the proportion of admixture between heterozygous strains (71%) and strains with little or no heterozygosity (27%) is even more significant if strains with WGS are removed. Among the heterozygous strains, the greatest proportion of ancestry comes from one of the human-associated populations (#4, 31%), followed by the European (20%), Asian (17%), and African (14%) populations. To examine rates of heterozygosity across populations, we compared expected to observed heterozygosity within each population (Table S1). Although most populations exhibit a deficit of observed compared with expected heterozygosity, the two human-associated populations show noticeably more heterozygosity than the other populations. Relatedness between populations Whereas heterozygosity and admixture can provide information about strain ancestry, relatedness between populations can provide information about the history of entire populations, some of which may themselves be derived from historical admixture events. To examine relatedness between populations, we applied multidimensional scaling (Materials and Methods) to the entire dataset (Figure 4).
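The multidimensional scaling was performed with R's cmdscale on the identity-by-state distance (Materials and Methods). A classical-MDS sketch in Python, equivalent in spirit but not the authors' code, is:

```python
import numpy as np

def cmdscale(D, k=3):
    """Classical multidimensional scaling: embed a symmetric distance
    matrix D into k dimensions (a numpy sketch of R's cmdscale)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    scale = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * scale             # n x k principal coordinates
```

The fraction of variance explained by each coordinate (29%, 9.3%, and 3.9% in Figure 4) corresponds to the retained eigenvalues divided by the sum of all positive eigenvalues.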
The first principal coordinate differentiates the European population from the other populations; the second principal coordinate distinguishes the two North American populations from the Asian population; and the third principal coordinate differentiates the African/S. E. Asian population from the others. The remaining populations and most of the admixed strains lie between these four major groups (Figure 4). Consistent with their positions on the neighbor-joining tree (Figure 1) and their genotypes (Figure 2), the New Zealand and Israeli populations are most closely related to the European population, and the two human-associated populations lie between the European and Asian populations. These results, combined with its high rates of heterozygosity, also suggest that the first human-associated population (population #4) is a recently derived population originating from hybrids between the European and Asian populations. Subpopulations Low-frequency alleles (<10%) can sometimes define subpopulations not captured by inference of population structure based on common alleles. Two-dimensional hierarchical clustering of the low-frequency alleles identified a number of such subgroups (Figure 5 and Table S1). Whereas the average number of derived low-frequency alleles shared between any two strains is 3.5, there are 86 strains that share at least 100 derived low-frequency alleles with another strain. These subgroups include a previously described Malaysian/Bertram Palm population (Liti et al. 2009), but also groups of strains from Philippines/Nipa palm, togwa, olives, sake, and cacao. Although the number of strains in each group is small, the number of sites defining the groups is not. In support of these groups representing populations that have been isolated from other populations for some time, many of the rare variants that define these groups are not present in other strains but, in at least some cases, are variable within the subgroup. Interestingly, the subpopulations defined by the largest numbers of alleles are strains with primary membership in the African/S. E. Asian population, suggesting that there may be undiscovered subpopulations and diversity among strains of African or S. E. Asian origin.
Figure 3 Coincidence of admixture between pairs of populations. Each bar shows the number of strains with at least 20% ancestry from a reference population (bar labels) and 20% ancestry from another population (indicated by color in the legend). For comparison, gray filled circles show the number of strains with more than 80% ancestry from each population.
DISCUSSION Genetic variation within S. cerevisiae has been shaped by a complex history, influenced by human-associated dispersal and admixture. Understanding this history and the resulting patterns of diversity is important for capturing and harnessing its fermentative capabilities as well as for quantitative and population genetics research. In this study, we used a reduced genome sequencing strategy to characterize the genetic diversity among a global sample of 262 strains isolated from a wide range of ecological habitats and environmental substrates. Our findings indicate that the major axes of differentiation correspond to broad geographic regions. In addition to previously described populations and patterns of differentiation (Fay and Benavides 2005; Legras et al. 2007; Liti et al. 2009; Schacherer et al. 2009; Goddard et al. 2010; Warringer et al. 2011), two new patterns indicative of human influence also emerge.
First, we find a population represented by multiple human-associated strains that contains a mixture of European and Asian alleles. Second, we find human-associated subpopulations from togwa, olive, cacao, and sake fermentations that are defined by a unique set of variants not present elsewhere in the global population. While inferences of population structure can depend on sampling, and indeed our analysis points to areas of uncertainty, the structure of S. cerevisiae described here is based on the largest collection of strains typed across the genome. This work also provides a foundation for studying the genetic underpinnings of complex traits, the origin and evolution of strains used by humans, and the relationships between such traits and population history. Geographic differentiation A major unanswered question in the study of yeast population structure has been the relative importance of geography vs. ecological niche. Although the strains in this study were isolated from many different ecological habitats, a number of lines of evidence suggest that the groups they form are defined better by geographic differentiation than by ecological niche. The two North American populations contain predominantly oak-associated strains, but they also contain strains from plants and insects. Similarly, the European population contains primarily vineyard-associated strains, but also contains a number of European soil and clinical isolates (Liti et al. 2009). The Asian population also includes strains isolated from multiple countries and several different habitats, including strains used in sake fermentation and several strains isolated from food. The Asian population shares many alleles with the North American populations, but is genetically distinct and includes only a handful of strains from outside of Asia. What is less clear is how this Asian population is related to a number of diverged lineages represented by strains from primeval and secondary forests in China (Wang et al. 2012). In comparison with the European, North American, and Asian populations, the African/S. E. Asian population is not as well defined. Most of the strains are inferred to have mixed ancestry, and the strains that are most representative of the population (>80% ancestry) combine previously separated populations (Liti et al. 2009) of West Africa and Malaysia, two populations that are also separated on our tree (Figure 1). Because the trees are consistent, the different results of the two population analyses could be a result of differences in the methods of analysis (e.g., Structure vs. InStruct), the larger number of strains used in this study, or the larger number of sites used by Liti et al. (2009).
Figure 4 Relatedness among strains and the inferred populations to which they belong. The first and second principal coordinates (A) and the first and third principal coordinates (B) obtained from multidimensional scaling. Each circle shows a strain with color indicating the population contributing the largest proportion of ancestry and size indicating the proportion of ancestry from that population (see key). Circles ringed in black show strains with more than 20 heterozygous sites. The first, second, and third coordinates explain 29%, 9.3%, and 3.9% of variation among strains, respectively.
Admixture Evidence of admixture was seen in a large fraction of strains and in every population.
Although admixture is most common among the European, African, and Asian populations (Figure 3), the smaller number of admixed strains from the North American and New Zealand populations may represent the more recent establishment of European strains in these locations or may be related to the frequency of mating in the oak tree or soil environment. Some of the admixed strains also exhibit high rates of heterozygosity, indicating a relatively recent mating between strains with different ancestries. Interestingly, many of the heterozygous strains were isolated from fruits or orchards, an observation that is consistent with the isolation of admixed (mosaic) strains from fruits and orchards in China (Wang et al. 2012). Because yeast can grow asexually, entire populations can arise as a consequence of even rare admixture events. The two human-associated populations bear a strong signature of an admixed origin, as they carry alleles from both European and Asian populations and lie between these two groups in the principal coordinate analysis (Figure 2 and Figure 4). Human-associated population #4 bears the additional signature of high rates of heterozygosity, implying relatively recent mating events in the origin of this group. In contrast, human-associated population #5 harbors fewer heterozygous strains, but also contains multiple laboratory strains (Sigma 1278b, FL100, W303, S288C, and FY4), some of which show mosaic patterns across their genomes indicative of an admixed origin (Winzeler et al. 2003; Doniger et al. 2008; Liti et al. 2009). The New Zealand and Israeli populations may also have an admixed origin. These two populations carry a large subset of the European alleles, similar to many of the admixed European strains, but also carry a small number of alleles present at high frequency in the North American or Asian populations. This pattern is consistent with the New Zealand and Israeli populations being derived from an admixture event between the European and these other populations, followed by clonal (or nearly clonal) expansion. However, the New Zealand and Israeli populations also carry a small number of alleles that are not present in either the North American or Asian populations (Figure 2). This raises the possibility that the New Zealand and Israeli populations were derived from admixture between the European and as yet undiscovered populations, or instead, rather than being derived from an admixture event, that they represent lineages with roots in an ancestral European population (similar to the "Olive" grouping). The diversity of strains sampled from Evolution Canyon in Israel is particularly notable. Of the 15 Israeli strains, seven define the nearly clonal Israeli population, three are assigned with 100% ancestry to the human-associated population #4, and four show comparable percentages of ancestry from the Asian, Israeli, and human-associated (#5) populations. Derived subpopulations The use of common sites to infer population structure prevents the detection of small populations defined by rare variants. With clustering based solely on rare variants, we identified a number of such subpopulations (Figure 5). Although many of these groups were isolated from human-associated fermentations, the number of strains is too small to clearly indicate whether they are related by geographic or environmental origin. For example, the olive strain group contains isolates from Spanish olives imported to Seattle and one from olives in Spain.
Yet, this group does not contain two strains isolated from the brine of olives from Mexico and one from an olive tree in California. The two North American groups contain strains from different states, and the togwa and cacao strains were each sampled from the same country. Although some of these subpopulations may be the result of recently expanded clones, several of them are defined by sites that are variable within the subpopulation. This latter observation points to the establishment of small groups that have remained isolated due to either geographic or ecological barriers to gene flow. Prospects for future studies As our understanding of S. cerevisiae population history increases, so does the need to incorporate such information into quantitative and population genetic studies. Our results highlight the complex relationships between strains and populations, but also characterize a set of strains and sequences that can be used by the community. Using WGS or a reduced genome sequencing strategy, such as the RAD-seq method used here, new strains can be readily placed in the context of global population structure. We anticipate that new genetic diversity will be discovered, particularly in Africa, for which we found less certain relationships and a number of derived subpopulations. Our results may also prove useful to studies of existing strains, either by controlling for population history in genome-wide association studies or by aiding the selection of strains for linkage analysis. In both cases, strain choice is an important consideration, as the results can depend on what variation is captured and the structure of this variation across strains. Although many quantitative genetic studies have been based on crosses with laboratory strains, our results underscore the presence of additional variation that is available beyond those strains. Finally, the global diversity and increased variation uncovered by our study highlight the potential for identifying novel properties which could prove valuable to the improvement of existing strains or the engineering of new strains for use in industrial fermentations.
Figure 5 Subpopulations defined by clustering of low-frequency alleles. Two-dimensional hierarchical clustering of low-frequency sites and strains. InStruct assignments, from Figure 2, are shown on the left; clustered genotypes are shown in the middle, with minor alleles in red, heterozygous sites in yellow, common alleles in black, and missing data in gray. Selected subpopulations are labeled on the right.
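As a closing illustration of how a new strain could be placed in the context of this global population structure, the following Python sketch assigns a newly RAD-sequenced genotype to the population with the smallest mean identity-by-state distance. The genotype encoding and the nearest-population rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def assign_population(new_geno, ref_genos, ref_pops, missing=-1):
    """Assign a new strain to the population with the smallest mean
    identity-by-state distance (an illustrative rule, not the authors'
    method). Genotypes are integer allele codes per site; `missing`
    entries are excluded pairwise."""
    dists = {}
    for geno, pop in zip(ref_genos, ref_pops):
        mask = (new_geno != missing) & (geno != missing)
        if mask.sum() == 0:
            continue
        d = float(np.mean(new_geno[mask] != geno[mask]))
        dists.setdefault(pop, []).append(d)
    return min(dists, key=lambda p: np.mean(dists[p]))
```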
8,685.6
2013-03-20T00:00:00.000
[ "Biology" ]
Effective theory for neutral resonances and a statistical dissection of the ATLAS diboson excess We classify the complete set of dimension-5 operators relevant for the resonant production of a singlet of spin 0 or 2 linearly coupled to the Standard Model (SM). We compute the decay width of such states as a function of the effective couplings, and provide the matching to various well-motivated New Physics scenarios. We then investigate the possibility that one of these neutral resonances be at the origin of the excess in diboson production recently reported by the ATLAS collaboration. We perform a shape analysis of the excess under full consideration of the systematic uncertainties to extract the width $\Gamma_{\rm tot}$ of the hypothetical resonance, finding it to be in the range 26 GeV $<\Gamma_{\rm tot}<$ 144 GeV at 95\% C.L. We then point out that the three overlapping selections $WW$, $WZ$, $ZZ$ reported by ATLAS follow a joint trivariate Poisson distribution, which opens the possibility of a thorough likelihood analysis of the event rates. The background systematic uncertainties are also included in our analysis. We show that the data do not require $W\!Z$ production and could thus in principle be explained by neutral resonances. We then use both the information on the width and the cross section, which prove to be highly complementary, to test the effective Lagrangians of singlet resonances. Regarding specific models, we find that neither scalars coupled via the Higgs-portal nor the Randall-Sundrum (RS) radion can explain the ATLAS anomaly. The RS graviton with all matter on the infrared (IR) brane can in principle fit the observed excess, while the RS model with matter propagating in the bulk requires the presence of IR brane kinetic terms for the gauge fields. Introduction New particles with TeV masses, neutral under the Standard Model (SM), are a common prediction of various New Physics (NP) scenarios. Examples include the Kaluza-Klein (KK) graviton and the radion in warped extra dimensions [1], the dilaton in theories of strongly coupled electroweak breaking [2], Goldstone bosons of extended composite Higgs models [3], mesons and glueballs of strongly-coupled theories [4], extra scalars breaking the global symmetry of composite Higgs models [5], Higgs portal models [6], and many more. Among the various SM-singlet resonances, those of spin 2 and spin 0 have strikingly similar couplings to the SM fields, and it is tempting to treat them in a common framework. Recently, the ATLAS collaboration has presented a search for narrow resonances decaying to electroweak bosons with hadronic final states using the 8 TeV LHC dataset [7]. The weak bosons are highly boosted and are thus reconstructed as a single jet each. A moderate but intriguing excess has been observed near the dijet mass $m_{jj} = 2$ TeV. It is thus an interesting question whether the diboson excess could be explained by neutral resonances such as those predicted in the above scenarios. The goal of this work is thus to present a unified approach for spin-0 and spin-2 resonances coupled to the SM, and to apply it to the search performed in Ref. [7]. In a first part, we develop a complete effective field theory (EFT) for neutral resonances of spin 0 and 2. This general analysis is contained in Sec. 2. As it turns out, this EFT consists of only a few operators, which can further be restricted by theoretically well-motivated assumptions, such as approximate flavor and CP conservation.
All the different neutral resonances listed above then have a simple common description in terms of this effective theory. Explicit examples of some of these new physics scenarios are then presented and matched to the EFT Lagrangian in Sec. 3. Given the concise description of a large class of models in terms of a few parameters, our EFT can serve as a model-independent framework that can be applied to any search for resonances at the LHC. In a second part we then perform a detailed statistical analysis of the ATLAS excess. A basic characterisation of the diboson excess is performed in Sec. 4. Local discovery significances are computed in both frequentist and Bayesian frameworks, showing moderate evidence for the existence of a signal. The shape of the excess is then analysed, taking into account all systematic uncertainties. The total width of the hypothetical resonance is found to be $26\ {\rm GeV} < \Gamma_{\rm tot} < 144\ {\rm GeV}$ at 95% C.L. Section 5 contains a comprehensive analysis of the total production rates of the excess. The conditional probabilities for tagging a true $W$, $Z$, or QCD jet as either $W$ or $Z$ are obtained from the ATLAS simulations, and provide the tagging probabilities for the $WW$, $WZ$, $ZZ$ selections reported by ATLAS. We further observe that these three overlapping selections follow a joint trivariate Poisson distribution, which opens the possibility of a thorough likelihood analysis of the event rates. The tagging probabilities are checked against the full dataset. The estimation of the dijet background is treated in a way such that the correlations among the three selections are taken into account. The uncertainty on this background estimation is then included as a systematic in the total likelihood. Using an actual hypothesis test, we show that the data do not require $WZ$ production and could thus in principle be explained by neutral resonances. Finally, in a third part, Sec. 6, we test the effective Lagrangians of neutral resonances using both the information from the width and from the cross sections. It turns out that these pieces of information imply stringent constraints on the EFT parameter space once put together, even after including the uncertainty from the background. These exclusion bounds further imply that various popular scenarios appear to be totally incompatible with the ATLAS diboson excess. Effective Field Theory for neutral resonances In this section we introduce the EFT of SM-singlet resonances of spin 0 (CP even and odd) and spin 2 coupled linearly to the SM. We denote the mass of the resonance by $m$ and assume that it is much heavier than the electroweak (EW) scale, $m^2 \gg v^2, m_Z^2, m_h^2, m_t^2$, etc., which is an excellent approximation for a hypothetical 2 TeV resonance. We will use field redefinitions (or, equivalently, equations of motion) to reduce the number of independent operators. The leading interactions will be dimension-5 operators, and we denote them generically by $\mathcal{O}_X$ with coefficients $f_X^{-1}$, where the $f_X$ have dimension of mass. The region of validity of the EFT is set by the condition that one can neglect higher-dimensional operators. The most severe restrictions come from operators with additional derivatives on $\phi$, such as $\partial^2\phi\, G_{\mu\nu}^2$, which require us to impose the condition (2.1), where $M$ denotes the cutoff of the theory, at which the nonrenormalizable dimension-5 operators become resolved by new states of mass $M$. In order to estimate the maximal size of the couplings $f_X^{-1}$, we can use Naive Dimensional Analysis (NDA), which gives the bound (2.2). Using Eq.
(2.1), the maximal allowed size of $f_X^{-1}$ is at most of order the inverse EW scale for $m \sim 2$ TeV. In many UV completions, the coupling is expected to be weaker than the bound (2.2). For instance, if the nonrenormalizable coupling is resolved in the UV by a heavy fermion of mass $M$, then one expects an estimate involving $\alpha_s$, the strong coupling at the scale $M$; the estimate is obtained by taking the coupling of $\phi$ to the fermions to be $4\pi$. We now list the complete EFTs for the cases of spin 0 (CP odd and even) and spin 2. Spin-0, CP-even The effective Lagrangian for a neutral, CP-even, spin-0 resonance is given in Eq. (2.5), where $\tilde{H} = i\sigma_2 H^*$. In order to avoid issues with flavor violation, the operators including fermions are expected to be roughly proportional to the SM Yukawa couplings; hence here we show only the one involving the top quark, denoted by $q_L$ and $t_R$. A priori, one could have written two more operators (that are also relevant for diboson production at the LHC). The operator $\mathcal{O}_H$ generates a mass mixing after EWSB as well as a tadpole for $\phi$ that induces a vacuum expectation value (VEV) for this field (or shifts an existing one), cf. Eq. (2.7). The partial widths can easily be extracted from $\Gamma$; see App. B. A brief comment about the operator $\mathcal{O}_T$ is in order. The latter can generate couplings of $\phi$ to gluons and photons at one loop. Even though these cannot be written as local operators, for our purposes (i.e., on-shell production) we can represent this diagram by a complex contribution to, e.g., the $\phi GG$ coupling, Eq. (2.8), where we have taken $m = 2$ TeV. Eq. (2.8) can simply be obtained from the corresponding expressions for the Higgs couplings to gluons, see e.g. [32,33]; note the presence of the imaginary part due to the $t\bar t$-mass threshold. Using NDA, $4\pi f_T \gtrsim m$, we obtain the estimate $|\Delta f_G^{-1}| \lesssim (430\ {\rm TeV})^{-1}$, and we can safely neglect this contribution to the production of $\phi$. (Here we use that $m_W^2, m_Z^2, m_h^2, m_t^2 \ll m^2$; the partial decay width to top quarks is suppressed by a relative factor of $m_t^2/m^2$, see below. A similar expression can be given for the $\phi\gamma\gamma$ and $\phi\gamma Z$ couplings, which also receive such contributions.) Moreover, $\mathcal{O}_T$ can induce decays to top quarks with partial width $\Gamma_{tt} = 3 m_t^2 m/(32\pi f_T^2)$, which is suppressed by a power of $m_t^2/m^2$ compared to the other decay channels. Only if $f_T^{-1}$ is much larger than all the other couplings will this contribution matter. An upper bound can again be derived using NDA; one finds that $\Gamma_{tt} \lesssim 70$ GeV for $m = 2$ TeV. Spin-0, CP-odd The Lagrangian for a CP-odd, spin-0 resonance contains fermionic operators, where $\psi$ runs over the chiral SM fermions ($\psi = u_R^i, d_R^i, e_R^i, \ell_L^i$, and $q_L^i$); these can all be eliminated by appropriate field redefinitions³ in favor of $\mathcal{O}_T$. Models giving rise to this effective theory have recently been considered in Ref. [29] in the context of the ATLAS results. Notice that for $\mathcal{O}_T$, the same comments as in the CP-even case apply. The decay width is identical to the CP-even case, except for the absence of the operator $\mathcal{O}_H$. Notice that our results agree with those of Ref. [31], whereas w.r.t. Ref. [29] we find a discrepancy of a factor of 4. The partial width to top quarks is again given by $\Gamma_{tt} = 3 m_t^2 m/(32\pi f_T^2)$ and can be neglected. Spin-2 We now give the effective Lagrangian for CP-even spin-2 fields. A massive spin-2 resonance is described by a symmetric-traceless (ST) field $\phi_{\mu\nu}$.
As is well known [34,35], a consistent description requires in addition a scalar field (denoted here by $\chi$), which enforces transversality and removes the unphysical longitudinal degrees of freedom, i.e. sets $\partial^\mu \phi_{\mu\nu} = 0$, such that only the five physical polarizations remain. Its equations of motion are algebraic, i.e. $\chi$ is a non-propagating auxiliary field (in the absence of sources it simply vanishes, $\chi = 0$). We do not write the free Lagrangian here (see however Sec. 3.2) but rather directly give the propagator, which in the basis $(\phi_{\mu\nu}, \chi)$ reads as in Eq. (2.12).⁴ (³ Without loss of generality we have assumed the fermion operators to be flavour-diagonal, though not necessarily flavor-universal (e.g. $f_{t_R} \neq f_{u_R}$). It should be kept in mind that the degree of flavor-nonuniversality is highly constrained by data. In any case we take only the top-Yukawa coupling to be nonzero. In making field redefinitions of the chiral fermions, one should keep track of anomalies, which will generate contributions also to the coefficients of $\mathcal{O}_{G,W,B}$. ⁴ Typically the propagator of the massive spin-2 case is given for the reducible representation $\phi_{\mu\nu} + \eta_{\mu\nu}\chi$, see e.g. [36]. Here we prefer to display explicitly the decomposition into the irreducible components.) Here the curly brackets denote the ST part, i.e. $X_{\{\mu\nu\}} \equiv \tfrac{1}{2}X_{\mu\nu} + \tfrac{1}{2}X_{\nu\mu} - \tfrac{1}{4}\eta_{\mu\nu}X^{\rho}{}_{\rho}$. In particular, $P$ mixes the scalar and tensor degrees of freedom, but only the tensor degrees of freedom have physical poles. Notice that on-shell $\Pi$ is simply the projector on transverse, symmetric, traceless fields; in particular, at $k^2 = m^2$ one has $\Pi_{\mu\nu}{}^{\rho\sigma} k_\rho = 0$, etc. As usual, the projector can be written in terms of polarization tensors, which for completeness we collect in App. A. The following three observations will further simplify the analysis. 1. As we are only interested in amplitudes for processes near $k^2 = m^2$, only the tensor-tensor part of the propagator will matter. In particular, the source of the field $\chi$ (which is non-zero in general) will not contribute. 2. As the tensor-tensor propagator above is transverse on-shell, any source that is just a total derivative of the kind $\partial_\mu J_\nu$ (e.g., $\partial_\mu\partial_\nu|H|^2$) will not contribute near the pole. 3. Any source that is conserved (such as $F^{\rho}{}_{\{\mu}F_{\nu\}\rho}$) will only receive contributions from the term proportional to the identity. The bosonic source Lagrangian is given in Eq. (2.14), while the most general fermionic source Lagrangian reads as in Eq. (2.15), where the sum over chiral fermions ($\psi = u_R^i, d_R^i, e_R^i, \ell_L^i$, and $q_L^i$) is understood. We remark that, unlike in the scalar cases, even the light SM fermions need to be kept, as one cannot eliminate them via their equations of motion. Without loss of generality we have diagonalized the operators $\mathcal{O}_\psi$, but in principle allow non-universal couplings $f_\psi \neq f_{\psi'}$. One should keep in mind though that flavor-nonuniversality (e.g. $f_{d_R} \neq f_{s_R}$) is highly constrained by data. It is crucial that one use the above Fierz-Pauli propagator for the computation of the scattering amplitudes arising from Eqns. (2.14) and (2.15).⁵ For the decay width resulting from the above Lagrangian we find Eq. (2.16) (we review in App. A the relevant polarization tensors). (⁵ Another consistent possibility would be to introduce Goldstone fields $\phi$ and $\phi_\mu$ to render the Lagrangian gauge-invariant under linearized general coordinate transformations, and then adopt a gauge in which the propagators simplify. However, we stress that this procedure also fixes the sources for the auxiliary and Goldstone fields, which cannot be ignored in this case.
We will make some more comments on this in Sec. 3.2 in the context of a specific example.) In Eq. (2.16), $N_\psi$ denotes the gauge multiplicity of $\psi$ (QCD plus EW), e.g. $N_\psi = 6$ for a LH quark doublet. Our results agree with Refs. [31,37-39]. We refer again to App. B for the decomposition of $\Gamma$ into partial widths. For completeness, we mention that all symmetric-traceless CP-odd sources up to dimension four made from the SM fields are total derivatives of the kind mentioned in point 2 above, and as such do not contribute to resonant production from dimension-5 operators.⁶ On the other hand, dimension-7 operators are always suppressed by additional powers of $m_{Z,W}^2/M^2$, giving only very small cross sections and widths. We therefore do not include a CP-odd spin-2 particle in our analysis. Scenarios The purpose of this section is to give a few well-motivated scenarios for the effective theories described in Sec. 2. Higgs portal Consider a neutral scalar field of mass $m$, interacting with the Higgs via the trilinear interaction (3.1). The parameter $\mu$ has dimension of mass and might itself be an effective interaction resulting from some renormalizable coupling $g\,\Phi^2|H|^2$ after $\Phi$ obtains a vacuum expectation value, $\Phi = u + \phi$. We now make the shift $\phi \to \phi - \frac{\mu}{m^2}|H|^2$. After this field redefinition the Lagrangian changes form, with the unobservable modifications of the Higgs potential left implicit. The first term is the effective interaction discussed below Eq. (2.5), with the identification $f_H^{-1} = \mu/m^2$. The second term leads to modifications of the Higgs couplings. In order to avoid too-large deviations inconsistent with experiment, we will impose that $\mu v \ll m_\phi^2$, which implies that $v \ll f_H$. To arrive at our standard basis, we use the Higgs equations of motion (or, equivalently, make the field redefinition $H \to H + \frac{\mu}{m^2}\phi H$). The last two terms (which result from the Higgs potential $V = -m_H^2|H|^2 + \lambda|H|^4$) can be neglected, as they are suppressed w.r.t. the original interaction (3.1). We are then left with $f_H$ alone, with the remaining $f_X^{-1}$ vanishing. Spin-2 Lagrangians from warped extra dimensions In this section we derive the massive interacting spin-2 Lagrangian from a warped extra dimension [1]. According to the general discussion in Sec. 2, we expect the presence of an auxiliary field. Moreover, in the extra-dimensional construction we arrive naturally at a theory containing Goldstone modes as extra-dimensional components of the metric, which one can simply set to zero in a "unitary" gauge. We then consider a 5d compactification in the metric background (3.6), where $z$ denotes the 5th coordinate, $z_0 < z < z_1$, and $k = z_0^{-1}$ is the Anti-de-Sitter curvature. After decomposition in Kaluza-Klein (KK) modes, the kinetic Lagrangian of the fluctuations of the 5d metric becomes Eq. (3.7).⁷ Notice that the Lagrangian is completely diagonal in the KK modes. Here, $\phi^n_\mu$ and $\phi^n$ denote the Goldstone modes originating from the extra-dimensional components of the 5d metric,⁸ and $\chi^n$ is the above-mentioned auxiliary field. Setting to zero the Goldstone fields one arrives at the unitary gauge, which is precisely the Fierz-Pauli Lagrangian [34,35] leading to the propagator (2.12). Instead, one could adopt the Feynman gauge, in which the terms in the second line of Eq. (3.7) are cancelled by an appropriate Fadeev-Popov procedure. In Feynman gauge, the propagators are especially simple; in particular, all fields have the same mass and do not mix; observe that the field $\chi$ becomes propagating and has a "wrong sign" kinetic term.
We stress, however, that in this case the sources for all fields, $\phi^n_{\mu\nu}$, $\chi^n$, $\phi^n_\mu$, and $\phi^n$, have to be taken into account. In the following we will employ the unitary gauge (3.8), in which case we only need to consider the source for the ST field $\phi^n_{\mu\nu}$. (⁷ We refer the reader to Ref. [40] for details, in particular the precise relation of the various fields to the 5d metric. Eq. (3.7) is obtained from Eq. (3.5) of [40] by use of the 5d wave functions $f_s = \sqrt{2}\, z^s J_s(m_n z)/[z_1 J_2(m_n z_1)]$, where the $J_\nu$ denote Bessel functions. The masses are solutions to $J_1(m_n z_1) = 0$; $\phi_{\mu\nu}$ and $\chi$ have wave function $f_2$, $\phi_\mu$ has wave function $f_1$, and $\phi$ has wave function $f_0$. ⁸ The field $\phi_\mu$ does not have a zero mode, while $\phi$ has a zero mode that is not eaten and corresponds to the radion.) As for the interactions, typically two scenarios are considered. In the brane model, all SM fields are localized on the IR brane [1], while in the bulk model they are allowed to propagate in the bulk [38,41,42]. In the latter case, the gauge fields have flat 5d profiles, the RH top and the Higgs fields have profiles peaked towards the IR brane, and the remaining matter fields have profiles that are flat or peaked towards the UV brane.⁹ For our purposes it is good enough to approximate the bulk model by IR-brane-localized RH top and Higgs fields and to completely ignore the other quarks and leptons. The interaction terms for IR-brane-localized fields are suppressed by $M_P$, the reduced Planck mass. In the scenario with all SM fields localized on the IR brane, there are identical contributions for the remaining SM fermions. Gauge fields couple (for any $n \neq 0$) through terms involving $x_n = z_1 m_n$ and the quantities $r_0$ and $r_1$, which denote possible brane kinetic terms (BKT) [43]; $V = \log(k z_1) \approx 36$ is the volume of the extra dimension. An IR-brane-localized gauge field is described by the limit $r_1 \to \infty$, or $\zeta_n = 1$.¹⁰ For the bulk model, the effective Lagrangian for the first KK mode of mass $m$ is then given by $\mathcal{L}_{2^+}$, defined in Eqns. (2.14) and (2.15), with couplings in which $x_1 = 3.83$ and $\kappa = k/M_P$ is the RS coupling parameter. For the brane model, one has instead the corresponding brane-localized couplings. According to our general formula, Eq. (2.16), the total widths of the bulk and brane models follow directly. In the RS bulk model, the terms proportional to $\zeta_1^2$ contribute 0.2%, 26%, and 56% to the total width for $r_1/V = 0$, 0.2, and 0.5, respectively. (⁹ We remark that such a scenario features other states, typically lighter than the KK graviton, whose phenomenology will severely constrain the model. We will not further consider these model-dependent constraints in this work. ¹⁰ We remark that in this limit the KK modes of the gauge fields become strongly coupled, $g_{\rm KK}^2 \sim g^2(V + r_1)$, hence to avoid the non-perturbative regime one would demand $\sqrt{V + r_1} < 4\pi/g$. Note also that the gauge KK modes decouple from the IR brane in this limit.) Radion/Dilaton In the warped extra-dimensional scenario considered in the previous section, the field $\phi^0$ corresponds to the radion, which describes the fluctuations of the size of the extra dimension. It is massless in the background (3.6), but by a suitable stabilization mechanism it acquires a mass [44-46]. Although its five-dimensional wave function is deformed by the stabilization mechanism,¹¹ we will assume that these effects are small, so that its couplings are approximated by those of the massless case. One finds the couplings listed in, e.g., Ref. [49]. There is an additional coupling proportional to the Higgs potential $V(H) = -m_H^2|H|^2 + \lambda|H|^4$.
Eliminating the operator $\phi|H|^2$ will result in negligible corrections to the other operators. It is customary to treat the radion interaction scale as a free parameter. In the bulk model one then finds the couplings under the assumption $r_i \ll V$. The brane model is again obtained by sending $r_1 \to \infty$. The couplings to the gauge boson field strengths vanish in this case, and one is left with only the Higgs operator. Interestingly, the brane model effective Lagrangian precisely coincides with the Higgs portal scenario, with the identification $f_{\rm rad} = m^2/\mu$. In either case, the field $\phi$ just inherits the Higgs couplings suppressed by a factor $v/f_{\rm rad}$. As the couplings to gauge bosons are always small, the decay width comes entirely from $f_H$ in both models, Eq. (3.18). As explained in Sec. 2.5, the decays to tops are suppressed by $m_t^2/m^2$ and do not contribute to the total width. Finally, we recall that the radion is closely related to the dilaton of nearly conformal extensions of the SM, so that very similar results hold in this case. Data, background and local significances The ATLAS collaboration has recently presented a search for narrow resonances decaying to electroweak bosons with hadronic final states using the 8 TeV LHC dataset [7]. This dataset has an integrated luminosity of 20.3 fb⁻¹. The weak bosons from massive resonances are highly boosted and are thus reconstructed as a single jet with large radius using advanced reclustering, grooming, and filtering algorithms. The expected background is dominated by dijet events from QCD, which is huge but does not feature potential resonance structures. Boson-tagging cuts are applied to the selected dijet events, requiring subjet momentum balance and a low number of associated charged-particle tracks. Each jet is then tagged using a narrow window on the jet mass $m_j$, asking for $m_j$ to be close to the $W$ or $Z$ mass. In the analysis, a jet is identified as a $W$ if $m_j \in [69.4, 95.4]$ GeV, and is identified as a $Z$ if $m_j \in [79.8, 105.8]$ GeV. The $W$ and $Z$ masses being close, these two ranges overlap. There are thus three disjoint tagging regions, which we label as $W$-only, $W$ or $Z$ (noted $W/Z$), and $Z$-only. A local excess of observed events appears in the dijet spectrum near 2 TeV. The numbers reported in Ref. [7] (and its extra material [50]) in the three bins $m_{jj} \in [1850, 1950]$, $[1950, 2050]$, $[2050, 2150]$ GeV, which we refer to as the excess region, are shown in Tab. 1. The expected dijet background in each bin is also shown. The background is partly determined from a fit to the whole dijet spectrum and is thus subject to some uncertainty. As a first step, one should check the statistical significance of this excess. Assuming Poisson statistics for the observed events in each bin, we first compute the p-value of a discovery test in every bin. This computation is done with and without taking into account the background uncertainties, which we model using a nuisance parameter $\theta \in [\theta_a, \theta_b]$ with a flat "prior" distribution. The likelihood $L(s_r, \theta)$ for one of the bins $r$ is then a simple Poisson likelihood. The nuisance parameter is eliminated by maximising this likelihood with respect to $\theta$ for a given $s_r$, $\hat L(s_r) = \max_\theta L(s_r, \theta)$. The statistical significance $Z_0$ for the existence of an excess is obtained by computing the probability density $f_q$ for $q = -2\log[\hat L(s_r)/\max_{s_r}\hat L(s_r)]$ and evaluating the observed p-value $p = \int_{q_{\rm obs}}^{\infty} dq\, f_q$. The p-value is further translated into a standard significance by $Z_0 = \Phi^{-1}(1-p)$, where $\Phi$ is the standard cumulative Gaussian distribution.
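To illustrate the profile-likelihood discovery test just described, the following Python sketch profiles a flat background nuisance out of a single-bin Poisson likelihood and converts the likelihood ratio into an asymptotic significance. The bin counts, the nuisance range, and the asymptotic chi-square shortcut (in place of sampling the full $f_q$ distribution as done in the paper) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson, norm

def profile_loglike(n_obs, b, s, theta_range=(-0.1, 0.1), grid=201):
    """Profile the flat nuisance theta out of the single-bin Poisson
    likelihood L(s, theta) = Pois(n_obs; s + b*(1+theta)).
    The +-10% nuisance range is an assumption for illustration."""
    thetas = np.linspace(*theta_range, grid)
    lam = np.clip(s + b * (1.0 + thetas), 1e-12, None)
    return poisson.logpmf(n_obs, lam).max()

def z0(n_obs, b, s_max=50.0, grid=501):
    """Asymptotic discovery significance from the profile likelihood ratio."""
    s_grid = np.linspace(0.0, s_max, grid)
    ll = np.array([profile_loglike(n_obs, b, s) for s in s_grid])
    q0 = 2.0 * (ll.max() - profile_loglike(n_obs, b, 0.0))
    p = norm.sf(np.sqrt(max(q0, 0.0)))  # one-sided asymptotic p-value
    return norm.isf(p)                  # equals sqrt(q0); kept for clarity

print(round(z0(n_obs=13, b=5.3), 2))    # hypothetical bin counts, not Tab. 1
```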
One allows for both upward and downward fluctuations. The significance of this discovery test is computed for each bin. The values, shown in Tab. 2, typically go beyond two sigmas in the central bin. We also introduce a Bayesian discovery test, defined as

…

This expression takes the simple form

…

It turns out that the prior for the signal $\pi(s_r)$ is entirely fixed from general considerations. Indeed, the measurement being a counting experiment, we already know a priori that $b_r + s_r$ follows a Poisson distribution. The parameter of this Poisson distribution has to be chosen to be $b_r$, which is known a priori, in order not to bias the discovery test. This then fixes $\pi(s_r)$ to be $\pi(s_r) = P_{\rm Poisson}(b_r + s_r \mid b_r)$ (see footnote 12). The values of the discovery Bayes factor are shown in Tab. 2. One can see that the values of $B_0$ are beyond the threshold of moderate evidence for the central bin. It follows that both frequentist and Bayesian discovery tests provide moderate evidence for the existence of a local excess over the QCD dijet background. We conclude that this excess is significant enough to deserve attention, so we proceed with the analysis.

Footnote 12: In [51] it will be shown that this particular prior provides a good connection between the discovery Bayes factor and the frequentist statistical significance. Also, notice that we do not implement the background systematic error in the Bayes factor. This is because this type of systematic uncertainty approximately cancels out in the Bayes factor, as will be shown in [51].

Mass and width reconstruction

As the data are provided in several bins, it is possible to analyse the shape of the hypothetical signal. Even though the statistics of the excess is fairly low, we emphasize that there is no reason preventing us from applying a rigorous shape analysis. Whether or not the data are informative enough should be decided by the outcome of the analysis. Notice that, as the excess is observed in more than one bin, one can expect both an upper and a lower limit on the width of the resonance. In what follows, the bins of the $m_{jj}$ distribution are labelled by the index $r$. Contrary to the analysis of the total event numbers, here we do not combine the events of the three selections $WW$, $WZ$, $ZZ$, and rather perform the shape analysis for each selection separately. It will be clear from the next section that a more evolved analysis combining the three selections would bring only little extra information. The likelihood containing the shape information appears naturally from the full likelihood $L = \prod_r L_r$, by factoring out the likelihood for the total event number, $L = L_{\rm tot} L_{\rm shape}$. Explicitly, the shape likelihood reads

$$L_{\rm shape} = \prod_{r\,({\rm bins})} \left( \frac{n_r}{n_{\rm tot}} \right)^{\hat n_r}. \quad (4.5)$$

Note that the factorisation $L = L_{\rm tot} L_{\rm shape}$ makes clear that a shape analysis of the diboson excess is truly complementary to the total event number analysis, because each analysis relies on mutually exclusive pieces of information. We denote the shape of the expected signal by a distribution $f_{m_{jj}}$ normalised to one (i.e. a density). The shape of the signal is modelled assuming a resonant amplitude, and the background is assumed to be flat near the peak of the resonance. The narrow-width approximation is assumed, i.e. one takes $\Gamma/m \ll 1$, which will be well verified a posteriori. Given these standard assumptions, the $m_{jj}$ distribution follows a Breit-Wigner shape,

$$f_{m_{jj}} \propto \frac{1}{(m_{jj}^2 - m^2)^2 + m^2 \Gamma^2}\,. \quad (4.6)$$

The expected content of the bins is obtained by integrating over this distribution, and one denotes by $f_r$ the shape density integrated over a bin, $f_r \equiv \int_{{\rm bin}\ r} dm_{jj}\, f_{m_{jj}}$.
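A minimal numerical sketch of this shape machinery, assuming the reconstructed Breit-Wigner form of Eq. (4.6) and hypothetical bin counts (the function and variable names are ours, not from [7]):

```python
import numpy as np
from scipy.integrate import quad

def bw_density(mjj, m, gamma):
    """Relativistic Breit-Wigner shape (unnormalised), cf. Eq. (4.6)."""
    return 1.0 / ((mjj**2 - m**2) ** 2 + m**2 * gamma**2)

def bin_fractions(edges, m, gamma):
    """Shape density integrated over each bin, f_r = int_bin f dm_jj,
    normalised so the fractions over the signal window sum to one."""
    raw = np.array([quad(bw_density, lo, hi, args=(m, gamma))[0]
                    for lo, hi in zip(edges[:-1], edges[1:])])
    return raw / raw.sum()

def log_L_shape(n_hat, edges, m, gamma):
    """Multinomial shape log-likelihood, cf. Eq. (4.5)."""
    f = bin_fractions(edges, m, gamma)
    return float(np.sum(n_hat * np.log(f)))

# The three bins of the excess region, in GeV:
edges = [1850.0, 1950.0, 2050.0, 2150.0]
n_hat = np.array([3, 5, 2])   # illustrative observed counts, not Tab. 1
print(log_L_shape(n_hat, edges, m=2000.0, gamma=100.0))
```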
We consider the three bins centered around 2 TeV, and assume no signal event elsewhere. We also take into account the systematic uncertainties relevant for the shape of the signal. These are the uncertainties on the jet reconstruction (see [7]), which tend to smear the resonance shape. The sources of error are the jet $p_T$ resolution, the jet $p_T$ scale and the jet mass determination, associated respectively with the nuisance parameters $\delta_{\rm res}$, $\delta_{\rm scale}$, $\delta_m$, affecting the $m_{jj}$ mass. The magnitude of these errors is small with respect to one, so that they can be written in the linear form

$$m_{jj}\,(1 + \delta_{\rm res} + \delta_{\rm scale} + \delta_m)\,. \quad (4.8)$$

All these uncertainties are modelled using Gaussian nuisance parameters $\delta_{\rm res}$, $\delta_{\rm scale}$, $\delta_m$ with zero mean and respective standard deviations $\sigma_{\rm res} = 0.033$, $\sigma_{\rm scale} = 0.02$, $\sigma_m = 0.03$ (see [7], Tab. 4). These three nuisance parameters being independent, they can be rigorously combined into a single Gaussian nuisance parameter $\delta$ with zero mean and variance given by $\sigma^2 = \sigma_{\rm res}^2 + \sigma_{\rm scale}^2 + \sigma_m^2$. The likelihood with the nuisance parameter marginalised then reads

$$L(m, \Gamma) = \int d\delta\, \prod_r f_r(m, \Gamma, \delta)^{\hat n_r}\, \pi(\delta)\,. \quad (4.10)$$

For the mass and width of the hypothesized resonance, one assumes log priors $\pi(m) \propto m^{-1}$, $\pi(\Gamma) \propto \Gamma^{-1}$, which are the most objective priors for dimensionful quantities. The confidence regions are drawn from the posterior density, which is given by $p(m, \Gamma) = L(m, \Gamma)\,\pi(m)\,\pi(\Gamma)$. The one-dimensional confidence intervals for mass and width are given in Tab. 3 (one-dimensional confidence intervals at 68% and 95% confidence level for the mass $m$ and the width $\Gamma$ of the hypothesized resonance, computed independently for each subchannel). The systematic errors increase the mass CL bounds by roughly $\sim 5\%$ and the width CL bounds by up to $\sim 20\%$. Using a flat prior instead of a log prior changes the bounds by roughly $\sim 10\%$. The two-dimensional confidence regions in the $m$-$\Gamma$ plane are shown in Fig. 1. In the following, we shall quote the results from the $WZ$ selection, which contains the largest event number.

Statistical analysis of the diboson rates

Having studied the shape of the diboson excess, we now turn to the analysis of the overall event numbers, i.e. the total rates over the excess region. The likelihood analysis for a set of overlapping selections is a somewhat unusual exercise to carry out, so we shall provide a detailed explanation of the statistics involved. For clarity, in the following we use rigorous probability notation. The hypothetical event number in a given selection is taken as a random variable, denoted by $N$. Specific values of event numbers are denoted by $n$, and $P(N = n)$ is the probability of $N$ taking the value $n$. The expected event numbers are denoted by $\lambda$, and the observed event numbers are denoted by $\hat n$.

The statistics of hadronic weak-boson tagging

The mass distribution of a fat jet coming from a $W$ or $Z$ is peaked at the boson mass, $m_W$ or $m_Z$. The jets can therefore be tagged as $W$ and $Z$ by requiring $m_j$ to be close to $m_{W,Z}$, as done in the analysis of [7]. These tagging regions will be labelled by $I$. The expected $m_j$ distributions have been provided in Fig. 1c of [7]. These distributions, as well as the tagging regions, are shown in Fig. 2. Note that the distributions for true $W$ and $Z$ have been generated assuming a bulk RS KK graviton signal. From Sec. 3.2, it is clear that the couplings to transverse gauge bosons are suppressed by the volume $V$, so that the bulk RS KK graviton decays mostly to longitudinally polarized $W$ and $Z$. However, the weak boson widths being narrow, and the final shape being strongly widened by the detector effects, we expect the $W$, $Z$ distributions of Fig. 2 to hold for any polarisation of the weak bosons to a very good approximation.
Using the distributions of Fig. 2, it is possible to estimate the tagging probabilities, given one of the two hypotheses for the underlying true boosted particle, {True $W$, True $Z$}, which we label by $X$. What we compute is thus the conditional probability $p(I|X)$. The conditional probabilities for tagging a true $W$ and a true $Z$ are computed from Fig. 2 and shown in Tab. 4. These numbers are consistent with the ones found in [21]. Moreover, fat jets can also arise from the QCD interactions. The distribution for a jet coming from the QCD dijet background has been simulated in [7] (see Fig. 1c there), and appears to be nearly constant over the tagging regions. Using the simulated distributions, we can deduce the probabilities for mis-tagging a jet from the QCD background as a weak-boson jet. Finally, the total probability for tagging a $W$, $Z$ or $j$ as a weak boson $V$ is just obtained by summing the probabilities over the three regions. One gets $P(V|W) = 65\%$, $P(V|Z) = 72\%$, $P(V|j) = 8\%$.

Before closing this subsection, it is instructive to focus on the counting statistics for the tagging of a single jet. This part can serve as a statistical toy model for the upcoming analysis of the diboson excess. Indeed, most of the ingredients of the diboson analysis are already there, though applied to a simpler problem. Let us denote the tagging regions $W$-only, $W/Z$, $Z$-only as 10, 11, 01, labelled by $I \in \{10, 11, 01\}$. The first number of the region name means that the region potentially contains a $W$ if equal to one, and does not contain a $W$ if equal to zero. The second number of the name works similarly for the $Z$. These notations will be convenient later. The event numbers in each of these regions are denoted $N_{10}$, $N_{11}$, $N_{01}$. These events follow independent Poisson statistics with parameters $\lambda_{10}$, $\lambda_{11}$, $\lambda_{01}$,

$$P(N_I = n_I) = \frac{\lambda_I^{n_I}}{n_I!}\, e^{-\lambda_I}\,. \quad (5.1)$$

Assuming expected event numbers $\lambda_X = (\lambda_W, \lambda_Z)$ for the true $W$ and $Z$, the $\lambda_I$ are expressed as

$$\lambda_I = P(I|W)\,\lambda_W + P(I|Z)\,\lambda_Z\,. \quad (5.2)$$

Equations (5.1), (5.2) put together provide $P(N_I = n_I | \lambda_X)$, the probability of observing $n_I$ events in region $I$ for given expected event numbers $\lambda_X$. Taking this probability as a function of $\lambda_X$ provides the likelihood function for $\lambda_X$, for an observed event number $n_I$. Let us now assume that only the numbers of events that contain all possible $W$-tags and all possible $Z$-tags are reported. These numbers are defined as

$$N_W = N_{10} + N_{11}\,, \qquad N_Z = N_{01} + N_{11}\,.$$

This configuration is pictured in Fig. 3. Clearly, the statistics of $N_W$ and $N_Z$ are not independent, because of the common region 11, where the jet is either $W$ or $Z$. Rather, $N_W$, $N_Z$ follow a bivariate Poisson statistics, given by

$$P(N_W = n_W, N_Z = n_Z \mid \lambda_I) = \sum_{\substack{n_{10}+n_{11}=n_W \\ n_{01}+n_{11}=n_Z}} \frac{\lambda_{01}^{n_{01}}\, \lambda_{10}^{n_{10}}\, \lambda_{11}^{n_{11}}}{n_{01}!\, n_{10}!\, n_{11}!}\, e^{-\lambda_{10}-\lambda_{01}-\lambda_{11}}\,. \quad (5.4)$$

The mean of $(N_W, N_Z)$ is given by $(\lambda_{10} + \lambda_{11},\, \lambda_{01} + \lambda_{11})$, and the covariance matrix is

$$\begin{pmatrix} \lambda_{10} + \lambda_{11} & \lambda_{11} \\ \lambda_{11} & \lambda_{01} + \lambda_{11} \end{pmatrix}.$$

Plugging Eq. (5.2) into Eq. (5.4), one gets the probability of getting $(n_W, n_Z)$ events for given expected event numbers $\lambda_X$. Taking this probability as a function of $\lambda_X$ provides the likelihood function for $\lambda_X$, for observed event numbers $n_W$ and $n_Z$.

Statistics for the ATLAS diboson excess

The probabilities for the tagging of two fat jets are obtained by combining the probabilities of tagging a single jet (see Tab. 5). The true events can be either a pair of weak bosons, a QCD jet mis-identified as a weak boson, or two QCD jets mis-identified as weak bosons.
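The bivariate Poisson probability of Eq. (5.4) can be evaluated directly by summing over the occupation of the shared region. A short sketch, with made-up $P(I|X)$ values and true-event rates for illustration only:

```python
from scipy.stats import poisson

def bivariate_poisson_pmf(n_w, n_z, lam10, lam11, lam01):
    """P(N_W = n_w, N_Z = n_z) for N_W = N10 + N11, N_Z = N01 + N11,
    with N10, N11, N01 independent Poisson variables, cf. Eq. (5.4)."""
    total = 0.0
    for n11 in range(0, min(n_w, n_z) + 1):
        total += (poisson.pmf(n_w - n11, lam10)
                  * poisson.pmf(n_z - n11, lam01)
                  * poisson.pmf(n11, lam11))
    return total

# Expected region rates from single-jet tag probabilities via Eq. (5.2);
# all numbers below are hypothetical, not the values of Tab. 4:
lam_w, lam_z = 4.0, 3.0
p = {"10|W": 0.30, "11|W": 0.35, "01|W": 0.05,
     "10|Z": 0.04, "11|Z": 0.38, "01|Z": 0.30}
lam10 = p["10|W"] * lam_w + p["10|Z"] * lam_z
lam11 = p["11|W"] * lam_w + p["11|Z"] * lam_z
lam01 = p["01|W"] * lam_w + p["01|Z"] * lam_z
print(bivariate_poisson_pmf(3, 2, lam10, lam11, lam01))
```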
The list of hypotheses of true events is then $X \in \{WW, WZ, ZZ, jj\}$, where $j$ stands for a background jet. The conditional probabilities $P(I|X)$ are given in Tab. 5. The numbers for true $WW$, $WZ$, $ZZ$ are consistent with the ones reported in [21]. The dijet background corresponds to the true event $jj$. Pileup effects are assumed to be small, so that we do not consider the possibility of having true events $(W, j)$, $(Z, j)$. On the other hand, one may consider a new physics signal giving rise to a $W$ and a jet or a $Z$ and a jet. We therefore include the probabilities $P(I|Wj_s)$, $P(I|Zj_s)$ in our table, assuming that the distribution of this signal jet $j_s$ is roughly the same as for a QCD jet. This case will not be considered in the rest of this work, as the decay of singlet resonances does not give rise to such a signal. The corresponding entries, over the six disjoint tagging regions, are: true $Wj_s$: $6.33 \cdot 10^{-3}$, $6.7 \cdot 10^{-3}$, $0.78 \cdot 10^{-3}$, $18.0 \cdot 10^{-3}$, $9.61 \cdot 10^{-3}$, $12.8 \cdot 10^{-3}$; true $Zj_s$: $2.80 \cdot 10^{-3}$, $7.85 \cdot 10^{-3}$, $4.85 \cdot 10^{-3}$, $13.9 \cdot 10^{-3}$, $16.5 \cdot 10^{-3}$, $13.9 \cdot 10^{-3}$.

The number of events $N_I$ in each of the disjoint tagging regions $I$ follows a Poisson distribution with parameter $\lambda_I$, which is related to the expected numbers of true events (i.e. events before tagging) as

$$\lambda_I = \sum_X P(I|X)\, \lambda_X\,. \quad (5.8)$$

The background expected event number $\lambda_{jj}$ will be obtained later on from the ATLAS analysis, once we know the statistics of the events. The $\lambda_{WW}$, $\lambda_{WZ}$, $\lambda_{ZZ}$ are assumed to come only from the signal, i.e. the SM diboson background is neglected, following the ATLAS analysis. The $WW$, $WZ$, $ZZ$ expected event numbers are related to the total cross-sections by

$$\lambda_X = \epsilon_X\, \sigma_X\, \mathcal{L}\,, \quad (5.9)$$

where $\mathcal{L}$ is the integrated luminosity. The efficiencies $\epsilon_X$ for selecting and tagging the signal are reported in [7], Fig. 2b. One gets roughly $\epsilon_X \sim 0.10,\ 0.13,\ 0.09$ with about 20% relative uncertainty. Note that these efficiencies are obtained assuming particular models (see footnote 13). Slightly different efficiencies can be expected for different spins and couplings. This model dependence should be taken as an extra systematic uncertainty on the efficiencies. As the weak-boson tagging probability based on the jet mass is already taken into account through the $P(I|X)$, it has to be removed from the $\epsilon_X$ by dividing by the single-jet tag probabilities, $\tilde\epsilon_{WW} = \epsilon_{WW}/P(V|W)^2$, $\tilde\epsilon_{WZ} = \epsilon_{WZ}/[P(V|W)P(V|Z)]$, $\tilde\epsilon_{ZZ} = \epsilon_{ZZ}/P(V|Z)^2$, so that $\tilde\epsilon_X \approx \{23\%, 28\%, 17\%\}$.

Footnote 13: For example, the bulk RS graviton used for the spin-2 simulation and treated in Sec. 3.

In the ATLAS note [7], the expected event numbers $\lambda_I$ in the disjoint tagging regions are not reported. Rather, only the numbers of events that contain all possible $WW$-tags, $WZ$-tags and $ZZ$-tags are quoted. We denote them by $N_{WW}$, $N_{WZ}$, $N_{ZZ}$. It is convenient to label the tagging regions with respect to their contribution to one or several of these reported rates. The labels are shown in Fig. 3. Using this parameterisation for the events, the observed events read

$$\hat N_{WW} = N_{100} + N_{110} + N_{111}\,, \qquad \hat N_{WZ} = N_{010} + N_{110} + N_{011} + N_{111}\,, \qquad \hat N_{ZZ} = N_{001} + N_{011} + N_{111}\,. \quad (5.13)$$

Clearly, these events are not independent. They rather follow a trivariate Poisson statistics [52],

… (5.12)

The mean of this distribution is given by

$$\bar N_{WW} = \lambda_{100} + \lambda_{110} + \lambda_{111}\,, \qquad \bar N_{WZ} = \lambda_{010} + \lambda_{110} + \lambda_{011} + \lambda_{111}\,, \qquad \bar N_{ZZ} = \lambda_{001} + \lambda_{011} + \lambda_{111}\,. \quad (5.14)$$

The covariance matrix is given by

… (5.15)

The likelihood associated with the measured values of $\hat n_{WW}$, $\hat n_{WZ}$, $\hat n_{ZZ}$ is obtained by taking Eq. (5.12) as a function of the hypothesis (i.e. the $\lambda_I$) and using Eqs. (5.8). Dropping an irrelevant constant factor, the likelihood is a function of the various event numbers before tagging, $\lambda_X$ (recall that $X = \{WW, WZ, ZZ, jj\}$), where the observed event numbers appear through the domain $\hat D \equiv D(\hat n_{WW}, \hat n_{WZ}, \hat n_{ZZ})$ and nowhere else. The $\lambda_{WW, WZ, ZZ}$ from the new physics signal are further related to the total production cross-sections by Eq. (5.9).
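Since the likelihood is supported on the domain $D(\hat n_{WW}, \hat n_{WZ}, \hat n_{ZZ})$, a practical step is to enumerate the occupations of the six disjoint regions compatible with the overlapping selections. A sketch, assuming the region composition of Eq. (5.13) as reconstructed above; the counts are illustrative:

```python
from itertools import product

def domain(n_ww, n_wz, n_zz, extra=None):
    """Enumerate occupation numbers of the six disjoint tagging regions
    consistent with the overlapping selections; a sketch of the domain
    D(n_ww, n_wz, n_zz) entering the trivariate likelihood."""
    sols = []
    for n100, n110, n111 in product(range(n_ww + 1), repeat=3):
        if n100 + n110 + n111 != n_ww:
            continue
        for n010, n011 in product(range(n_wz + 1), repeat=2):
            if n010 + n110 + n011 + n111 != n_wz:
                continue
            n001 = n_zz - n011 - n111
            if n001 < 0:
                continue
            cfg = dict(n100=n100, n010=n010, n001=n001,
                       n110=n110, n011=n011, n111=n111)
            if extra is None or extra(cfg):
                sols.append(cfg)
    return sols

# Illustrative counts; the extra WW+ZZ and WW+WZ+ZZ constraints of
# Eqs. (5.25), (5.26) can be passed via the `extra` predicate.
print(len(domain(5, 8, 4)))
```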
The evaluation of the expected event number from the dijet background, $\lambda_{jj}$, is discussed in the next subsection.

Consistency checks and the background likelihood

As a consistency check of our analysis, we can verify whether the event numbers tagged over the full range $m_{jj} \in [1, 3.5]$ TeV in the observed sample are consistent with the tagging rates determined in Tab. 5 and with our statistical model leading to Eq. (5.14). The observed numbers of events for the $WW$, $WZ$, $ZZ$ selections are given in Tab. 8 of [7]. These are $\hat n^{\rm full}_{WW} = 425$, $\hat n^{\rm full}_{WZ} = 604$, $\hat n^{\rm full}_{ZZ} = 333$. Regarding the expected rates, the complete region being overwhelmed by the dijet background, we can neglect the signal to a good approximation, so that the contributions to all tagging regions are simply proportional to the dijet expected event number over the full range, $\lambda^{\rm full}_{jj}$. The ratios of the expected event numbers in the $WW$, $WZ$, $ZZ$ selections are then obtained using the tagging rates of Tab. 5, Eq. (5.8) and Eq. (5.14). One finds $\bar n^{\rm full}_{WW}/\bar n^{\rm full}_{WZ} = 0.63$, $\bar n^{\rm full}_{ZZ}/\bar n^{\rm full}_{WZ} = 0.57$, which agree with the observed ratios within $\sim 10\%$. The statistical error on the ratios of the $\hat n^{\rm full}$ being roughly about 10%, this consistency check seems to be fulfilled within one standard deviation. However, this naive observation is too optimistic, because the event numbers of the $WW$, $WZ$, $ZZ$ selections are actually strongly correlated. The joint statistics of the three selections is a trivariate Poisson, already described above, that now describes the whole dataset (i.e. $m_{jj} \in [1, 3.5]$ TeV). The mean and covariance matrix are thus given as in Eqs. (5.14), (5.15), where one uses the values of $P(I|jj)$ obtained in Tab. 5. The event numbers $\hat n^{\rm full}$ being large, one can adopt the Gaussian approximation, so that the likelihood reads

…

At the maximum-likelihood point one gets $-2 \log L(\lambda^{\rm full}_{jj}) = 10.6$. This value can readily be interpreted as a compatibility test, whose statistic follows a chi-square distribution with $3 - 1$ degrees of freedom. The equivalent statistical significance obtained is $Z = 2.6$. The compatibility is thus lower than the $1\sigma$ deviation naively found when neglecting correlations. This level of compatibility can nevertheless be considered as acceptable by high-energy physics standards, so we pursue our analysis.

After these preliminary sanity checks, we now aim at building a consistent likelihood for the dijet background event number $\lambda_{jj}$ over the excess region $[1.85, 2.15]$ TeV. The shape of the dijet background has been estimated in [7] using a smoothly falling distribution fitted to the observed dataset over the $m_{jj} \in [1, 3.5]$ TeV range. A different fit is done for each of the three selections $WW$, $WZ$, $ZZ$. To the best of our understanding, each of these fits should give close results, because the only difference between the selections lies in the $m_j$ ranges selected. Comparing the $m_j$ intervals with the slope of the $m_{jj}$ shape, it appears that only a slight decrease with $m_j$ of the efficiency of the boson-tagging cuts might be expected when going from the $W$-only to the $Z$-only region. The outcome of the three fits can be seen in Tab. 6 and in Fig. 5 of [7]. Comparing the central values obtained from the various fits using the quoted error bars, it appears that these fits are compatible with each other only within roughly three standard deviations. Again, this naive comparison does not take into account the correlations, i.e. it assumes that the fits are independent of each other.
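The quoted conversion from the compatibility statistic to a significance can be checked in a few lines; this assumes, as stated above, a chi-square distribution with two degrees of freedom:

```python
from scipy.stats import chi2, norm

# Compatibility test: -2 log L at its minimum behaves as a chi-square
# with 3 - 1 = 2 degrees of freedom (three selections, one fitted rate).
q = 10.6
p_value = chi2.sf(q, df=2)
Z = norm.isf(p_value)          # same as Phi^{-1}(1 - p)
print(f"p = {p_value:.4f}, Z = {Z:.2f}")   # gives Z ~ 2.6, as quoted
```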
These fits being partly based on the same dataset, their outcomes are actually correlated, which implies that the actual uncertainty is smaller than the one naively expected. This in turn implies that the compatibility between the fits is worse than naively expected (see footnote 14). The shape systematic uncertainties evaluated in [7] are found to be small, so they cannot help resolving this discrepancy. In order to establish the dijet background likelihood using both a consistent and a conservative approach, we shall (i) take the correlations among the fits into account and (ii) assume somewhat larger uncertainties than the ones quoted in [7]. The likelihood for the expected dijet event number in the excess region $[1.85, 2.15]$ TeV before tagging is approximately given by (see footnote 15)

… (5.19)

where the $P(I|jj)$ are given in Tab. 5 and the expected values of the $b_I$ obtained from the fits are given in Tab. 1. The covariance matrix $V_b$ is proportional to the one of Eq. (5.15), with a proportionality parameter $\alpha$ that we tune to obtain a reasonable level of compatibility between the fits. As a criterion for the compatibility, we ask that $-2 \log L_b(\lambda_{jj})$ be equal to the level of compatibility obtained between the selections over $[1, 3.5]$ TeV (see above). The criterion is thus

$$-2 \log L_b(\lambda_{jj}) \approx 10.6\,.$$

Note that the 68% range translates into error bars $^{+1.20}_{-0.98}$, $^{+1.92}_{-1.55}$, $^{+1.13}_{-0.91}$ on the expected background events $b_{WW}$, $b_{WZ}$ and $b_{ZZ}$ respectively. As expected, these errors are larger than the ones quoted in [7], which are shown in Tab. 1. In order to model the systematic uncertainty on the background, the likelihood $L_b$ will be included in the full likelihood, Eq. (5.28).

Analysis of total rates

In the previous subsections, we have gradually derived the total likelihood that should be used to analyse the ATLAS diboson excess. It is given by the product of the likelihood derived from the counting statistics, Eq. (5.16), times the likelihood constraining the background, given in Eq. (5.19). In addition, as noted in [21], information on the counting of $WW + ZZ$ and $WW + WZ + ZZ$ is available in the additional material of [7]. The values $\hat n_{WW+ZZ} = 17$, $\hat n_{WW+WZ+ZZ} = 17$ are reported. This introduces two new constraints on the event numbers $n_I$ of the disjoint tagging regions,

$$n_{100} + n_{001} + n_{110} + n_{011} + n_{111} = \hat n_{WW+ZZ}\,, \quad (5.25)$$

$$n_{100} + n_{001} + n_{010} + n_{110} + n_{011} + n_{111} = \hat n_{WW+WZ+ZZ}\,, \quad (5.26)$$

which have to be added to the previous constraints already contained in $\hat D$, see Eq. (5.13). It turns out that only three combinations are allowed, so that the domain $\hat D$ is given by

…

These numbers agree with the ones reported in version 3 of [21]. The final likelihood that we shall use to constrain the cross-sections for a hypothetical signal $\sigma_Y$, with $Y = \{WW, WZ, ZZ\}$, is thus

… (5.28)

The expected event number for the dijet background $\lambda_{jj}$ is constrained by $L_b(\lambda_{jj})$ and is treated as a nuisance parameter. The term in the second row agrees exactly with the likelihood used in version 3 of [21]. Our interest being in neutral resonances, one should first compare the hypothesis $H_{\lambda_{WZ}=0} = \{\lambda_{WW} \neq 0,\ \lambda_{ZZ} \neq 0,\ \lambda_{WZ} = 0\}$ with the general hypothesis $H = \{\lambda_{WW} \neq 0,\ \lambda_{ZZ} \neq 0,\ \lambda_{WZ} \neq 0\}$. A consistent way to carry out such a hypothesis test is to compute the Bayes factor

… (5.29)

For the priors of the $\lambda_Y$, as described around Eq. (4.4), we use Poisson distributions with the Poisson parameter identified as the expected number of background events $b_Y$, i.e. $\pi(\lambda_Y) = P(b_Y + \lambda_Y \mid b_Y)$.
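A sketch of the discovery Bayes factor with this Poisson prior, discretising the signal on an integer grid (a simplifying assumption of ours; the counts are illustrative, not those of Tab. 1):

```python
import numpy as np
from scipy.stats import poisson

def bayes_factor_null(n_obs, b, s_max=60):
    """Discovery Bayes factor B0 = P(n|s=0) / P(n|s>0, marginalised),
    with the signal prior pi(s) fixed so that b + s is Poisson(b),
    as argued around Eq. (4.4). Signal summed on an integer grid."""
    s = np.arange(0, s_max)
    # Prior weight of signal s: Poisson pmf of b + s with mean b,
    # renormalised over the truncated grid (b kept integer-valued
    # so that b + s is a valid Poisson count).
    prior = poisson.pmf(b + s, b)
    prior /= prior.sum()
    evidence_sig = np.sum(prior * poisson.pmf(n_obs, b + s))
    evidence_null = poisson.pmf(n_obs, b)
    return evidence_null / evidence_sig

# Illustrative background and observed count:
print(bayes_factor_null(n_obs=9, b=3.0))
```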
These priors arise from physical considerations and are conservative, as they favour the background-only hypothesis. We find

$$B(\lambda_{WZ} = 0) = 0.96\,, \quad (5.30)$$

which implies that the $\lambda_{WZ} = 0$ hypothesis is essentially as credible as the $\lambda_{WZ} \neq 0$ hypothesis. This conclusion remains true for the $\lambda_{WW} = 0$ and $\lambda_{ZZ} = 0$ hypotheses as well. On the other hand, the hypothesis with only $\lambda_{WZ}$ non-zero is highly disfavoured, with a Bayes factor of $2 \cdot 10^{-5}$. We then proceed by drawing the best-fit regions for $\sigma_{WW}$, $\sigma_{ZZ}$ from the posterior $L(\sigma_{WW,ZZ})\,\pi(\sigma_{WW,ZZ})$. The priors for the cross sections are taken flat. If one does not take into account the uncertainty on the background, the regions obtained are shown in the left panel of Fig. 5. Finally, these regions of $\lambda_{WW,ZZ}$ are readily translated into regions for the total cross sections $\sigma_{WW}$, $\sigma_{ZZ}$, which are shown in Fig. 5.

Interpreting the diboson excess with the neutral resonances EFT

In the new physics scenarios considered in Sec. 3, the neutral resonance couplings to field strengths are universal. We will therefore make a simplifying assumption and use a single parameter $f_V$ both in the spin-0 and spin-2 cases. We then focus on the tree-level production induced by the $O_H$, $O_V$ operators via gluon fusion (GGF) and weak boson fusion (VBF). We find that VBF is subleading with respect to GGF in most of the parameter space. For the spin-0 case, these two operators also completely fix the width $\Gamma_{\rm SM}$ into SM particles (up to suppressed contributions from $O_T$), while for the spin-2 case one can have contributions from the operators $O_\psi$ if present (see footnote 16). In addition one can allow for an invisible width into non-SM particles, i.e. $\Gamma_{\rm tot} = \Gamma_{\rm SM} + \Gamma_{\rm inv}$. The total width estimation from the shape analysis of Sec. 4.2 then provides us with another constraint in the $f_V$-$f_H$ plane,

… (6.2)

where the $\Gamma_{H,V}$ are the partial widths induced by the $O_H$, $O_V$ operators.

Footnote 16: In our analysis we do not take into account production of the spin-2 resonance via quark fusion (which can be induced by the operators $O_\psi$ for $\psi = q_L, u_R, d_R$), nor NLO-QCD effects. Both effects have recently been considered in Ref. [53] and were shown to lead to $O(1)$ modifications of the production rates.

We write FeynRules [54] models for the EFT of neutral resonances described in Sec. 2. The signals expected from the spin-0 and spin-2 resonances, $pp \to \varphi \to WW, ZZ$ and $pp \to \varphi_{\mu\nu} \to WW, ZZ$, at the 8 TeV LHC are then computed using MadGraph 5 [55]. The main cuts are $p_T > 540$ GeV and $|\eta| < 2$ for each of the outgoing vector bosons, which we implement using MadAnalysis 5 [56]. The limits on the spin-0 CP-even resonance are shown in Fig. 6. We choose $\Gamma_{\rm tot}$ within the 95% confidence interval provided in Tab. 3, i.e. we fix it to $\Gamma_{\rm tot} = 150$ GeV (left panel of Fig. 6) and $\Gamma_{\rm tot} = 20$ GeV (right panel of Fig. 6). The orange shaded regions are those allowed by the condition (6.2). We also display how various scenarios fit into the shown parameter space. The Higgs portal scenario, which is indistinguishable from the radion in the RS brane model, is shown as the blue point. The line emerging from that point would correspond to a hypothetical model where the scalar boson can decay into invisible states. Similarly, the radion of the RS bulk model is depicted as the red point. Neither scenario can fit the observed excess, mainly because the operator $O_G$ is not available for GGF production. Generating a sufficient contribution from the VBF process would require too small values of $f_H$, in conflict with the measured width.
Finally, the green line shows a hypothetical scalar with universal $f_H = f_V \equiv f$. This scenario could explain the required width and production rate for $f = 7$ TeV (19 TeV) for $\Gamma_{\rm tot} = 150$ GeV (20 GeV). We present the analogous limits on spin-2 resonances in Fig. 7. The RS brane model is shown as the green point corresponding to the value of $\kappa$ obtained from the chosen width of the resonance, again 150 GeV (left) and 20 GeV (right). The implicit values of the coupling are $\kappa = 0.23$ (for $\Gamma_{\rm tot} = 150$ GeV) and $\kappa = 0.09$ (for $\Gamma_{\rm tot} = 20$ GeV). Note that $\Gamma_H + \Gamma_V \approx 0.52\,\Gamma_{\rm tot}$, the rest being contributed by the fermions. The RS bulk model with universal brane kinetic terms for the gauge fields has two parameters ($r_1$ and $\kappa$), and hence for fixed width it corresponds to one-dimensional curves in the parameter space, shown as the red curves. We vary $0 < r_1 < 90$ (see footnote 10), corresponding to implicit values of $\kappa = 1.09$ to $0.42$ (for $\Gamma_{\rm tot} = 150$ GeV) and $\kappa = 0.40$ to $0.16$ (for $\Gamma_{\rm tot} = 20$ GeV). One can see that the brane model can in fact explain the observed excess, while the canonical RS bulk model (with $r_1 = 0$, the blue point) cannot, because the values of $\kappa$ needed to fit the correct width and production cross section are in conflict with each other. On the other hand, allowing for IR BKTs, one can fit both the width and the total production rate of the excess. The required size of the BKTs is only $r_1 \sim 1$ to $4$, depending on $\Gamma_{\rm tot}$.

Conclusion

New particles, singlets under the SM interactions and with masses near the TeV scale, can arise in many well-motivated extensions of the SM, including extra dimensions, strongly-coupled scenarios, as well as the Higgs portal. Such particles can be linearly coupled to SM operators, and can thus appear as resonances in s-channel processes. In this paper we first lay down the complete effective Lagrangians for neutral resonances of spin 0 and 2. It turns out that this EFT consists of only a few operators, which can further be restricted by theoretically well-motivated assumptions, such as approximate flavor and CP conservation. Given the concise description of a large class of models in terms of a few parameters, our EFT can serve as a model-independent framework to study the implications of any resonance search at the LHC. We compute the generic widths of the resonances and present explicitly the matching to the new physics scenarios quoted above. We then investigate the possibility that a new heavy resonance is at the origin of the excess in diboson production recently reported by the ATLAS collaboration. We compute the local significances in both frequentist and Bayesian frameworks, finding moderate evidence for the existence of a signal. We perform a shape analysis of the excess under full consideration of the systematic uncertainties to extract the width $\Gamma_{\rm tot}$ of the hypothetical resonance, finding it to be in the range 26 GeV $< \Gamma_{\rm tot} <$ 144 GeV at 95% C.L. Turning to the study of total event numbers, we first evaluate the conditional probabilities for tagging a true $W$, $Z$ or QCD jet as either $W$ or $Z$ from the ATLAS simulations. From these one deduces the tagging probabilities for the $WW$, $WZ$, $ZZ$ selections reported by ATLAS. We further observe that these three overlapping selections follow a joint trivariate Poisson distribution, which opens the possibility of a thorough likelihood analysis of the event rates. The tagging probabilities are checked against the full observed sample.
A conservative treatment of the dijet background is adopted, which includes the correlations among the three selections $WW$, $WZ$, $ZZ$. The uncertainty on this background estimation is then taken into account as a systematic error in the total likelihood. Finally, using a proper hypothesis test, we show that the data do not require $WZ$ production and can thus in principle be explained by neutral resonances. Finally, we test the effective Lagrangians of neutral resonances using both the information on the width and on the cross section from the analysis of the ATLAS data. It turns out that these pieces of information imply stringent constraints on the EFT parameter space once put together, even after including the background uncertainty. These exclusion bounds further imply that various popular scenarios appear to be totally incompatible with the ATLAS diboson excess. We find that neither scalars coupled via the Higgs portal nor the RS radion can explain the ATLAS anomaly. The RS graviton with all matter on the IR brane can in principle fit the observed excess, while the RS model with matter propagating in the bulk requires the presence of IR brane kinetic terms for the gauge fields. As an outlook, we emphasize that it would be interesting to constrain the EFT for neutral resonances using other LHC searches. As the effective Lagrangians are rather predictive, powerful conclusions can be expected by combining the information from various channels.

B Partial widths

In Sec. 2 we gave the total widths of the various resonances as functions of the effective couplings. The partial widths, if required, can easily be obtained from these formulae. For the field-strength couplings, one can use the decomposition

…
A Bivariate Chance Constraint of Wind Sources for Multi-Objective Dispatching

The economic emission dispatch (EED) problem minimizes two competing objective functions, fuel cost and emission, while satisfying several equality and inequality constraints. Since the availability of wind power (WP) is highly dependent on the weather conditions, the inclusion of a significant amount of WP into EED will result in additional constraints to accommodate the intermittent nature of the output. In this paper, a new correlated bivariate Weibull probability distribution model is proposed to analytically remove the assumption that the total WP is characterized by a single random variable. This probability distribution is used as a chance constraint. The inclusion of the probability distribution of stochastic WP in the EED problem is defined as the here-and-now strategy. A non-dominated sorting genetic algorithm built in MATLAB is used to handle the EED problem as a multi-objective optimization problem. A 69-bus ten-unit test system with non-smooth cost function is used to test the effectiveness of the proposed model.

Introduction

Wind energy is the most attractive clean and fuel-free solution to the world's energy challenges. It is well established in more than 50 countries all over the world, supplying more than 250 GW of total installed capacity, and is forecast to provide 30% of the world's electricity by 2030 [1]. One of the challenges is how to appropriately characterize wind power (WP) in the load dispatch model. A conventional economic dispatch problem uses deterministic models, which cannot reflect situations involving WP injection. Since wind farms connected to power systems have dynamic and stochastic performance characteristics, stochastic models are more suitable. Several studies have investigated the injection of WP into conventional power networks and its impact on generation resource management, due to its stochastic and non-dispatchable characteristics.

A conventional way was to use the average WP. The probabilistic conventional approaches tried to find probabilistic characteristics of solutions of the problem under investigation [2][3][4][5][6][7][8]. This approach is called the wait-and-see (WS) strategy in the context of stochastic programming. Although these approaches can be easily implemented, they have a less-known pitfall, called probabilistic infeasibility. The probabilistic feasibility of the average WP is 0.25, or equivalently, the probabilistic infeasibility is as large as 0.75 [8,9].

For this reason, a more appropriate strategy, the here-and-now (HN) strategy, introduces the probabilistic characteristics into the model of the optimization problem itself. A here-and-now model of a power system with wind energy generators was developed [10][11][12][13]. The authors introduced the stochastic distribution of wind speed into the economic dispatch issue, considering both the reserve cost of overestimation and the penalty cost of underestimation of available wind power. The scheduled wind power output was an estimate of the available wind power output, and it was treated as an optimization variable, dependent on several factors such as the reserve cost and the penalty cost. But these costs are very difficult to determine exactly [10].
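The probabilistic-infeasibility pitfall of the wait-and-see strategy is easy to illustrate by Monte Carlo: for Weibull-distributed wind power, scheduling the average WP leaves a large fraction of realizations short. The Weibull parameters below are hypothetical; the 0.75 figure quoted above corresponds to the specific systems of [8,9].

```python
import numpy as np

rng = np.random.default_rng(0)
k, c = 2.0, 8.0                       # hypothetical Weibull shape / scale
w = c * rng.weibull(k, size=100_000)  # wind-power realizations
w_avg = w.mean()
# Fraction of realizations in which the actual wind power falls short of
# the average value that a wait-and-see dispatch relies on:
infeasibility = np.mean(w < w_avg)
print(f"mean WP = {w_avg:.2f}, P(W < mean) = {infeasibility:.2f}")
```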
The probability of stochastic WP is included in the model as a constraint [9,14-17]. This strategy, the here-and-now approach, avoids the probabilistic infeasibility appearing in conventional models and avoids the dependency of the solutions on the reserve cost and the penalty cost. In particular, a threshold parameter $p_a$ was introduced into the WP constraint to characterize the tolerance that the total load demand cannot be satisfied. Choosing a small $p_a$ will mitigate the risk of insufficient WP, while increasing the demand for thermal power.

To analytically remove the assumption that the total WP is characterized by a single random variable, the correlated Weibull distribution (multivariate distributions, according to probability theorems) of the sum of WP is derived from the Weibull distribution model of each WP cluster. This correlated Weibull distribution is used as a chance constraint in the proposed model. With increasing concern over global climate change, policy makers are promoting renewable energy sources, predominantly wind generation, as a means of meeting emissions reduction targets. Although wind generation does not itself produce any harmful emissions, its effect on power system operation can actually cause an increase in the emissions of conventional plants [18]. Thus, the economic dispatch problem can be handled as a multi-objective optimization problem with non-commensurable and contradictory objectives.

In this paper, an EED model is developed for a system consisting of both thermal generators and wind turbines, with more realistic and practical considerations. A non-dominated-sorting-genetic-algorithm-based approach was used for solving the proposed EED model. The problem was formulated as a nonlinear constrained multi-objective optimization problem where fuel cost and environmental impact are treated as competing objectives. Two runs were carried out on a standard test system with non-smooth cost function, and the results are analyzed and compared to those of previous works. The effectiveness and potential of the proposed multi-objective EED model are demonstrated.

Economic Emission Dispatch Model

The EED problem is to minimize two competing objective functions, fuel cost and emission, while satisfying several equality and inequality constraints. Generally the problem is formulated as follows.
Minimization of fuel cost: In the past, to solve the economic dispatch problem effectively, most algorithms required the incremental cost curves to be monotonically increasing, smooth and continuous. Generating units with multi-valve steam turbines exhibit a greater variation in the fuel-cost functions: the valve points produce a ripple in the heat-rate curve, and the cost function contains higher-order nonlinearity due to the valve-point effects, as shown in Figure 1. The more general fuel cost function of each thermal generator considering the valve-point effect, in terms of real power output, is expressed as the sum of a quadratic and a sinusoidal function as follows [16]:

$$C = \sum_{i=1}^{N} \left[ a_i + b_i P_i + c_i P_i^2 + \left| d_i \sin\!\left( e_i \left( P_i^{\min} - P_i \right) \right) \right| \right]$$

Minimization of emission: The atmospheric pollutants such as sulphur oxides (SO$_x$) and nitrogen oxides (NO$_x$) caused by conventional thermal units can be modeled separately. However, the total emission of these pollutants, which is the sum of a quadratic and an exponential function, can be expressed as [19]:

$$E = \sum_{i=1}^{N} \left[ \alpha_i + \beta_i P_i + \gamma_i P_i^2 + \zeta_i \exp(\lambda_i P_i) \right]$$

Stochastic Chance Constraint

The power balance constraint is expressed as follows:

$$\sum_{i=1}^{N} P_i + W = P_D + P_L$$

In chance-constrained programming, in the context of stochastic programming, the probability distribution functions of the random variables are used as constraints in the optimization problem. The main goal with chance constraints is, therefore, to determine deterministic equivalents [20]. If the total WP is characterized by a single random variable, the stochastic WP constraint and power balance constraint can be expressed as follows [9,14-16]:

$$\Pr\left\{ \sum_{i=1}^{N} P_i + W \ge P_D + P_L \right\} \ge 1 - p_a \qquad (5)$$

In particular, a threshold parameter $p_a$ is introduced into the constraint to characterize the tolerance that the total load demand cannot be satisfied [9]. Choosing a small $p_a$ will mitigate the risk of insufficient WP, while increasing the demand for thermal power. Using the Weibull PDF of wind power, (5) becomes the deterministic condition

$$F_W\!\left( P_D + P_L - \sum_{i=1}^{N} P_i \right) \le p_a\,,$$

where $F_W$ is the CDF of the total wind power. This assumes that all wind turbines are located in a coherent geographic area. To analytically remove this assumption, the correlated Weibull distribution is needed. Practically, a large wind farm can be divided into multiple clusters. Hence, from the Weibull distribution model of each cluster, the correlated Weibull distribution of the sum of WP will be derived, to be used in the proposed models in the next section. For simplicity, we start with only two random variables; that is, we assume that the total WP is characterized by two random variables, or that the wind turbines are located in two different geographic areas.

The Correlated Distribution Function

Often an experiment involves measuring two or more random numbers, say X and Y. The fact that we know the distribution of X and the distribution of Y separately does not determine the probabilities of events that involve both X and Y simultaneously [21]. The distribution functions $F_X(x)$ and $F_Y(y)$ of the given random variables determine their separate (marginal) statistics but not their joint statistics [22].

Joint Cumulative Distribution and Probability Distribution Functions

The joint (bivariate) cumulative distribution function (CDF) $F_{XY}(x, y)$, or simply $F(x, y)$, of two random variables X and Y is the probability of the event $\{X \le x,\, Y \le y\}$ [22]:

$$F_{XY}(x, y) = P(X \le x,\, Y \le y)$$

The joint probability distribution function (PDF) of X and Y is by definition [22]:

$$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x\, \partial y}$$

One Function of Two Random Variables

Given two random variables X and Y, with Z their sum, we want to find the probability distribution of Z [21].
The joint CDF of Z is obtained as follows:

$$F_Z(z) = P(X + Y \le z) = \int_{-\infty}^{\infty} \int_{-\infty}^{z-y} f_{XY}(x, y)\, dx\, dy$$

and the PDF of Z is obtained as follows:

$$f_Z(z) = \int_{-\infty}^{\infty} f_{XY}(z - y,\, y)\, dy$$

Statistically independent events and convolution: if X and Y are statistically independent, then

$$f_{XY}(x, y) = f_X(x)\, f_Y(y)\,, \qquad f_Z(z) = \int_{-\infty}^{\infty} f_X(z - y)\, f_Y(y)\, dy = (f_X * f_Y)(z)\,,$$

where $*$ denotes convolution. If two random variables are independent, then the PDF of their sum equals the convolution of their PDFs [21].

Bivariate PDF of WP with Two Weibull Random Variables

The probability distribution of the WP random variable $W_i$ is taken to be Weibull,

$$f_{W_i}(w) = \frac{k_i}{c_i} \left( \frac{w}{c_i} \right)^{k_i - 1} e^{-(w/c_i)^{k_i}}, \qquad w \ge 0\,.$$

Hence the joint statistical probability distribution of the random variable W, where W is the sum of the two random variables $W_1$ and $W_2$, is

$$f_W(w) = (f_{W_1} * f_{W_2})(w)\,.$$

Then the stochastic WP constraint becomes

$$F_W\!\left( P_D + P_L - \sum_{i=1}^{N} P_i \right) \le p_a\,, \qquad \text{where } F_W(w) = \int_0^w f_W(u)\, du\,.$$

All integrations, differentiations, and convolution operations required in the previous derivation, or in the solution of the optimization problem, are executed using the symbolic MuPAD engine built into MATLAB. Hence the economic emission dispatch model can be mathematically formulated as follows: minimize the fuel cost $C(P)$ and the emission $E(P)$ defined above, subject to the chance-constrained power balance and the generation limits $P_i^{\min} \le P_i \le P_i^{\max}$. This model, referred to as the here-and-now approach, avoids the probabilistic infeasibility appearing in conventional models, and uses the derived correlated Weibull distribution of the sum of WP as a constraint, avoiding the assumption that the total WP is characterized by a single random variable. The transmission losses, in terms of B-coefficients in the power balance constraint, and more practical cost functions for the thermal units were considered in the proposed model.
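As a numerical stand-in for the symbolic MuPAD convolution used here, the PDF and CDF of the summed wind power, and the resulting deterministic chance-constraint check, can be sketched as follows (cluster parameters, load, and dispatch point are all hypothetical):

```python
import numpy as np

def weibull_pdf(w, k, c):
    """Weibull PDF assumed for each wind-power cluster."""
    w = np.asarray(w)
    return np.where(w >= 0,
                    (k / c) * (w / c) ** (k - 1) * np.exp(-(w / c) ** k),
                    0.0)

# PDF of W = W1 + W2 for two independent Weibull clusters, by discrete
# convolution on a regular grid (numerical analogue of f_W1 * f_W2):
dw = 0.01
grid = np.arange(0.0, 60.0, dw)
f1 = weibull_pdf(grid, k=2.0, c=8.0)    # hypothetical cluster 1
f2 = weibull_pdf(grid, k=2.2, c=6.0)    # hypothetical cluster 2
f_sum = np.convolve(f1, f2)[: grid.size] * dw
F_sum = np.cumsum(f_sum) * dw            # CDF of the summed wind power

def chance_constraint_ok(P_thermal, demand_plus_losses, p_a):
    """Check F_W(P_D + P_L - sum(P_i)) <= p_a at the dispatch point."""
    shortfall = demand_plus_losses - np.sum(P_thermal)
    if shortfall <= 0:
        return True   # thermal units alone already cover the load
    idx = min(int(shortfall / dw), grid.size - 1)
    return F_sum[idx] <= p_a

print(chance_constraint_ok(P_thermal=[300.0] * 5,
                           demand_plus_losses=1508.0, p_a=0.4))
```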
Results and Discussion

Practical EED problems have non-smooth cost functions with equality and inequality constraints, in addition to the wind power chance constraint, which makes finding the global optimum difficult using any mathematical approach, so a numerical optimization procedure is needed. In this paper, therefore, we implemented the non-dominated sorting multi-objective genetic algorithm in MATLAB to deal with the proposed model; the flow chart can be found in Appendix B [23]. A 69-bus ten-unit test system with non-smooth fuel cost function is used to demonstrate the performance and effectiveness of the proposed model. The thermal units' data was taken from [24] and can be found in Appendix C. Kalyanmoy Deb [25] gives full details of the multi-objective genetic algorithm, which are beyond the scope of the discussion here. The proposed model was tested with the 69-bus 10-unit system at a forecasted load of 1800 MW, with threshold parameter $p_a = 0.4$. The Pareto front population fraction was considered in two different cases: Case (1) with Pareto front population fraction 0.7, and Case (2) with Pareto front population fraction 0.35.

Figures 2 and 3 show the sets of non-dominated optimal Pareto solutions of the proposed model with Pareto front population fractions 0.7 and 0.35, respectively. As shown in Figures 2 and 3, there is no single solution that is optimal with respect to all objectives of the multi-objective optimization problem. Instead, there is a set of solutions that are superior to the rest of the solutions in the search space when all objectives are considered, and no solution in this set is absolutely better than the others. This set is called the Pareto optimal set. The leftmost Pareto solutions (area A) emphasize economy, while area B gives the Pareto solutions that emphasize environmental protection. Table 1 gives the values of the objective functions and the generator outputs for solutions S, T and U in Figure 2, with Pareto front population fraction 0.7. Table 2 gives the values of the objective functions and the generator outputs for minimum fuel cost and minimum emission with Pareto front population fraction 0.35, at the leftmost and rightmost ends of Figure 3, respectively. In Figure 2, it can be seen that solution U reduces the emission by 1.42% while degrading the fuel cost by 1.95% in comparison with solution S. In addition, solution T reduces the emission by 0.93% while degrading the fuel cost by 0.71% in comparison with solution S. Table 1 also shows the generator outputs in the Pareto solutions. It can be seen that the 1st and 2nd generators have low emission output and high fuel cost, because their powers increase from solution S to solution T to solution U. Case 1, with Pareto front population fraction 0.7, preserves the diversity of the non-dominated solutions over the trade-off front and solves the problem effectively.

The results of the proposed EED model were compared to previous work [15], which considered the effect of the emission constraint and the representation of losses. The minimum fuel cost in the proposed model is higher than that of Elshahed et al. [15] without the emission constraint by about 1.9%, and the savings of the proposed model in fuel cost are about 1.4% when the emission constraint is considered by Elshahed et al. [15]. In addition, the proposed model gives more efficient, non-inferior solutions of the multi-objective optimization problem.

In contrast with the single-objective optimization problem solved by Elshahed et al. with an emission constraint [17], a single solution that is optimal with respect to all objectives of the multi-objective optimization problem does not exist. Instead, there is a set of solutions that is superior to the rest of the solutions in the search space considering the cost and emission objectives, and no solution in this set is absolutely better than the others. The final decision will be taken by the system dispatchers according to the dispatcher's attitude. The model considered in this paper achieved a saving in minimum fuel cost of about 5.8% when compared with the single-objective formulation with an emission constraint [17]. It can be seen that the wind power results in all solutions are almost constant, because the wind units' parameters are not changed and the two units are identical.
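The non-domination test underlying the employed genetic algorithm can be illustrated compactly; the sketch below extracts the Pareto set from hypothetical (fuel cost, emission) candidates and is not the MATLAB implementation used in the paper:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of (cost, emission) pairs, i.e. the
    Pareto-optimal set for a two-objective minimisation problem."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some candidate is no worse in both objectives
        # and strictly better in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (fuel cost, emission) candidates from one GA generation:
cands = [(1.00e5, 4.2e3), (1.02e5, 4.0e3), (1.01e5, 4.5e3), (0.99e5, 4.6e3)]
print(pareto_front(cands))
```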
Conclusions

In this paper, an accurate multi-objective EED model is presented, including: the transmission losses in terms of B-coefficients; non-smooth cost functions due to the valve-point effect; and the correlated Weibull probability distribution of the WP constraint, for a system consisting of both thermal generators and wind turbines. The use of the correlated Weibull probability distribution of the WP analytically removes the assumption that the total WP is characterized by a single random variable in the proposed model. The proposed model minimizes the risk due to uncertainty and can result in minimizing the required spinning reserve. Hence this model is a more realistic, practical, and accurate economic emission dispatch model.

Figure 1. Non-smooth cost function with five valves.
Figure 2. The set of Pareto solutions of the proposed system, Case 1.
Figure 3. The set of Pareto solutions of the proposed system, Case 2.
(In Figures 2 and 3, the leftmost Pareto solution gives the minimum fuel cost, the rightmost gives the minimum emission, and there is also a Pareto solution corresponding to the turning point of the set of optimal Pareto solutions.)
Maxwell group and HS field theory

We consider the master fields for HS multiplets defined on the 10-dimensional tensorial extension $\tilde{\cal M}$ of $D=4$ space-time, described as a coset $\tilde{\cal M} = {\cal M}/Sl(2;C)$ of the 16-parameter Maxwell group ${\cal M}$. The tensorial coordinates provide a geometrization of the coupling to constant uniform EM fields. We describe the spinorial model in the extended space-time $\tilde{\cal M}$, and by its first quantization we obtain new infinite HS-Maxwell multiplets, with their massless components coupled to each other through a constant EM background. We conclude our report by observing that a three-dimensional spinorial model with a pair of spinors should provide, after quantization, $D=3$ massive HS-Maxwell multiplets.

Introduction

In order to introduce the field-theoretic description of an infinite number of relativistic quantum fields with all spins, it is convenient to consider an enlargement of space-time. Various extensions of Minkowski space-time with vectorial [1,2] and tensorial [3,4,5,6,7] auxiliary coordinates have been proposed, with master fields describing the infinite-dimensional spin multiplets by Taylor expansion in the additional variables. In such a way one connects the master HS fields with an enlarged $D = 4$ Poincare algebra, with new generators describing the shifts in the auxiliary variables. In this talk we shall follow the old derivation of HS field multiplets by quantization of a spinorial superparticle model, invariant under SUSY with six tensorial central charges (footnote 1) [4,5,7]. Analogous tensorial charges occur e.g. in $D = 11$ in the M-algebra [8], which, as is postulated, describes the algebraic structure of eleven-dimensional M-theory. The master fields in $D = 4$ are described in this approach by fields on the extended space-time $(x^\mu, z^{\mu\nu})$, where $z^{\mu\nu} = z^{[\mu\nu]}$ are six auxiliary translations generated by the tensorial central charges. Further, in order to describe integer and half-integer spins, it is convenient to express the tensorial central charges as bilinears in $D = 4$ Weyl spinorial variables $\lambda_\alpha$, $\bar\lambda_{\dot\alpha} = (\lambda_\alpha)^*$, $\alpha = 1, 2$:

$$Z_{\mu\nu} = \lambda^\alpha (\sigma_{\mu\nu})_\alpha{}^\beta \lambda_\beta + \bar\lambda_{\dot\alpha} (\bar\sigma_{\mu\nu})^{\dot\alpha}{}_{\dot\beta} \bar\lambda^{\dot\beta}\,, \quad (1.1)$$

in analogy to the Penrose formula for massless four-momenta,

$$P_{\alpha\dot\beta} = \lambda_\alpha \bar\lambda_{\dot\beta}\,. \quad (1.2)$$

Finally we will arrive at master fields depending on the spinorially extended space-time $(x^\mu, \lambda_\alpha, \bar\lambda_{\dot\alpha})$, with their Taylor expansions describing HS fields with arbitrary spin. The description of HS with tensorial coordinates has been further generalized by M. Vasiliev [6,9,10], who followed Fronsdal's observation [3] that the infinite free HS multiplets can be described by a single irrep of the $D = 4$ generalized conformal algebra Sp(8), containing as its subalgebra the generalized Poincare algebra with tensorial central charges. Subsequently the extended space-time method was applied to the description in AdS space-time, and the multiplets of free HS fields on AdS space were derived [11,12]. At present only two free HS field multiplets, in Minkowski and AdS space-time, are explicitly described, and both were derived by the method of quantization of a spinorial model of Shirafuji type (footnote 3).

Footnote 3: The formula (1.2) was first inserted in the massless spinorial Shirafuji model [13], which was the first to describe the link between the model of the standard relativistic massless superparticle and the one describing free twistorial dynamics in $D = 4$ supertwistor space. The formula (1.1) was first employed in the superparticle model in [4].
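The masslessness built into the Penrose formula (1.2) can be verified symbolically: the bilinear $P_{\alpha\dot\beta} = \lambda_\alpha \bar\lambda_{\dot\beta}$ is a rank-one Hermitian $2\times 2$ matrix, so $\det P \propto P_\mu P^\mu$ vanishes identically. A minimal sympy check (the variable names are ours):

```python
import sympy as sp

# Symbolic check that the Penrose formula (1.2) yields a null momentum:
# p_{alpha alphadot} = lambda_alpha * conj(lambda)_alphadot
# => det p = 0  <=>  p^mu p_mu = 0 (massless four-momentum).
l1, l2 = sp.symbols("l1 l2", complex=True)
lam = sp.Matrix([l1, l2])
lam_bar = lam.applyfunc(sp.conjugate)
p = lam * lam_bar.T            # 2x2 Hermitian matrix p_{alpha alphadot}
print(sp.simplify(p.det()))    # prints 0: rank-one => null momentum
```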
Using Maurer-Cartan (MC) forms for the corresponding Maxwell group [15,16,17], one can introduce [7] a Maxwell-invariant spinorial particle model and perform its first quantization. By the analysis of the constraints in the extended phase space we shall obtain, in the Schrödinger realization, the master fields for Maxwell-HS free fields which are coupled to each other by a constant uniform EM field.

The plan of our talk is the following. In Sect. 2 we recall the notion of the Maxwell group and Maxwell algebra and present the corresponding MC one-forms. Using this geometric framework, we present in Sect. 3 the spinorial model [7] and its first quantization, with a complete discussion of the occurring first and second class constraints. We use the conversion method [18], which interprets a canonical pair of second class constraints as describing a system with gauge-fixed local gauge transformations (one constraint from the canonical pair is considered as a first class constraint generating the gauge transformations, the second as introducing the gauge-fixing condition). In such a way we get a gauge-equivalent formulation of our model with nine first class constraints. In Sect. 4 we consider in detail the wave function of such a model, satisfying nine wave equations on the enlarged space-time. Finally, we express the solution of the quantum-mechanical model in terms of local $D = 4$ HS fields, which will be named free Maxwell-HS fields. In such a way we obtain three infinite-dimensional coupled sets of Maxwell-HS fields: one describing all HS bosonic fields ($s = 0, 1, 2, \ldots$) and two infinite sets of chiral and antichiral fermionic HS fields with half-integer spins ($s = 1/2, 3/2, 5/2, \ldots$). In Sect. 5 we discuss the considered model in the dual representation, as one describing more explicitly Minkowski HS fields interacting with a constant EM field. Sect. 6 is devoted to the outlook. In particular we conjecture that a suitable reduction $D = 4 \to D = 3$ of the $D = 4$ spinorial model considered in [7] can provide the infinite-dimensional $D = 3$ Maxwell HS massive multiplets.

Maxwell algebra and covariant Maurer-Cartan one-forms

The Maxwell algebra [15,16] is the semi-direct sum of the Lorentz algebra, with generators $M_{\alpha\beta}$, $\bar M_{\dot\alpha\dot\beta}$, and the ten-dimensional sector with the generators of the Poincare translations $P_{\alpha\dot\beta} = (P_{\beta\dot\alpha})^\dagger$ and six tensorial generators $Z_{\alpha\beta} = Z_{\beta\alpha}$, $\bar Z_{\dot\alpha\dot\beta} = (Z_{\alpha\beta})^\dagger$. The last ten generators transform under the Lorentz algebra as indicated in the algebraic relations

…

Differently than in the Poincare algebra, with its commuting translation operators, the commutators of the quantities $P_{\alpha\dot\alpha}$ yield the new tensorial generators (see also (1.3)),

$$[P_{\alpha\dot\alpha}, P_{\beta\dot\beta}] = 2ie\left( \epsilon_{\dot\alpha\dot\beta}\, Z_{\alpha\beta} + \epsilon_{\alpha\beta}\, \bar Z_{\dot\alpha\dot\beta} \right),$$

where $e$ is a constant interpreted further as describing the EM coupling. The six tensorial charges $Z_{\alpha\beta}$, $\bar Z_{\dot\alpha\dot\beta}$ commute with each other and with the Poincare translation generators, i.e. the following commutators (see also (1.4)) are valid:

$$[Z_{\alpha\beta}, Z_{\gamma\delta}] = [Z_{\alpha\beta}, \bar Z_{\dot\gamma\dot\delta}] = [Z_{\alpha\beta}, P_{\gamma\dot\delta}] = 0\,.$$

The $D = 4$ Maxwell algebra defines the Maxwell group ${\cal M}$ in the standard way, by exponential representation. We define the $D = 4$ proper Maxwell group ${\cal M}_0$, determining the Maxwell tensorial space, as the ten-dimensional coset with generators $P_{\alpha\dot\beta}$, $Z_{\alpha\beta}$, $\bar Z_{\dot\alpha\dot\beta}$. The coset coordinates have the following transformations under the space-time translations (parameters $a^{\alpha\dot\beta}$) and the tensorial Maxwell shifts (parameters $b^{\alpha\beta}$, $\bar b^{\dot\alpha\dot\beta}$):

$$\delta x^{\alpha\dot\beta} = a^{\alpha\dot\beta}\,, \quad \ldots \quad (2.6)$$

The Lorentz transformations, with parameters $\ell^{\alpha\beta}$, $\bar\ell^{\dot\alpha\dot\beta}$, look as follows:

$$\delta x^{\alpha\dot\beta} = \ell^{\alpha}{}_{\gamma}\, x^{\gamma\dot\beta} + \bar\ell^{\dot\beta}{}_{\dot\gamma}\, x^{\alpha\dot\gamma}\,, \qquad \delta z^{\alpha\beta} = 2\,\ell^{(\alpha}{}_{\gamma}\, z^{\beta)\gamma}\,, \qquad \delta \bar z^{\dot\alpha\dot\beta} = 2\,\bar\ell^{(\dot\alpha}{}_{\dot\gamma}\, \bar z^{\dot\beta)\dot\gamma}\,. \quad (2.7)$$
Using the parametrization (2.5) and the algebraic relations (2.3), (2.4), we can define the Maurer-Cartan (MC) one-forms

… (2.8)

The explicit formulae for the MC one-forms defined by (2.8) are

… (2.9)

The corresponding covariant derivatives have the form

… (2.10)

Since the space-time and tensorial translations are shifts of the group parameters in the Maxwell tensorial space, they do not change the MC one-forms (2.9) and the form of the operators (2.10). We add that the Lorentz symmetry acts on the MC one-forms and covariant derivatives in the standard way, by linear transformations.

Particle action, constraints and the Casimirs

We consider the model of a massless HS particle which propagates in the Maxwell tensorial space $X = (x^{\alpha\dot\alpha}, z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta})$, enlarged by a pair of spinorial variables. Our model is described by the following Maxwell-invariant particle action:

… (3.1)

The first term in (3.1) is the generalization of the $D = 4$ spinorial particle model defined on flat tensorial space-time in [4,5,6] (it is a tensorial generalization of the Shirafuji model [13]). The components of the commuting Weyl spinor $\lambda_\alpha$, $\bar\lambda_{\dot\alpha} = (\lambda_\alpha)^*$ can further be considered as parts of a $D = 4$ twistor, and the model can also be rewritten as a free $D = 4$ twistor particle model [4,5]. In the action (3.2) the parameter $a$ is complex. Because $e$ in (3.2) is dimensionless, we should choose the tensorial coordinates $(z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta})$ to have mass dimensionality equal to $-2$, and one can deduce that the complex parameter $a$ is mass-like, $[a] = 1$. This mass-like parameter can be chosen real, $a = \bar a = m$, if we take into account the U(1) phase transformations of the spinors, $\lambda'_\alpha = e^{i\varphi} \lambda_\alpha$, $\bar\lambda'_{\dot\alpha} = e^{-i\varphi} \bar\lambda_{\dot\alpha}$. Inserting (3.8), (3.9) in (3.7), we obtain the following representation of the constraints (3.7)-(3.9):

… (3.10)

where the derivatives entering (3.10) are the classical counterparts of the Maxwell-covariant derivatives (2.10) (see (3.11)). We see that the constraints (3.10) are the Maxwell-covariant generalization of the constraints leading to the so-called unfolded equations for HS fields [5,6]. We stress that the present model has an important difference in comparison with the HS particle of [5,6], because the vectorial constraints do not commute (they have nonvanishing Poisson brackets). The nonvanishing Poisson brackets of the vectorial constraints (3.7) are the following:

… (3.12)

The other tensorial constraints (3.8), (3.9) are the same as in [5,6] and commute with all the constraints (3.7)-(3.9). Thus, the tensorial constraints (3.8), (3.9) are first class, whereas the vectorial constraints (3.7) are the superposition of two first class and two second class constraints. To perform the quantization of our model it is important to project out the first and second class constraints present in (3.7). If we wish to preserve Lorentz covariance, this separation requires the use of additional spinorial variables. In order to have a basis in the two-dimensional spinor space, we introduce a second spinor, $u_\alpha$, as first proposed in [5]. This auxiliary spinor satisfies the normalization condition (3.13). The nonvanishing PBs of $u_\alpha$,

$$\{u_\alpha, y^\beta\}_P = u_\alpha u^\beta\,, \quad (3.14)$$

preserve the normalization (3.13). Using this spinorial basis $(\lambda_\alpha, u_\alpha)$ we can introduce, in a Lorentz-covariant way, the following projections:

… (3.15)

Their unique nonvanishing PB is the following:

… (3.16)

and one can conclude that the constraints $T_{\lambda\bar\lambda} \approx 0$ and $i(T_{\lambda\bar u} - T_{u\bar\lambda}) \approx 0$ are first class, whereas $T_{u\bar u} \approx 0$ and $T_{\lambda\bar u} + T_{u\bar\lambda} \approx 0$ are second class. The introduction of Dirac brackets for the second class constraints leads to a complicated structure of the quantum-mechanical algebra.
As a way out we use the conversion method [18], in which a canonical pair of second class constraints is considered as a system where one second class constraint is interpreted as a gauge-fixing condition for the gauge transformations generated by the other constraint. In the following we consider the constraint $T_{u\bar u} \approx 0$ as the gauge-fixing condition, and the constraint $T_{\lambda\bar u} + T_{u\bar\lambda} \approx 0$ as generating the new gauge degree of freedom. Finally, we consider the classical gauge-equivalent system described by the constraints (3.19), which replace the vectorial constraints (3.7). The constraints (3.19) are in fact linear combinations of the projections $T_{\alpha\dot\beta}\,\bar\lambda^{\dot\beta}$, $\lambda^{\beta}\,T_{\beta\dot\alpha}$ of the constraints (3.7) on the Weyl spinor components $\lambda_\alpha$, $\bar\lambda_{\dot\alpha}$. So we can avoid using the auxiliary spinor $u_\alpha$ in the definition of the constraints and, as a result, describe our model by the set of first class constraints (3.20)-(3.23). We observe that the four constraints (3.20), (3.21) are not independent, because they satisfy the relation (3.24), i.e. we get only three independent first class constraints. It appears that the condition (3.24) does not enter into the derivation of the spectrum of our model. In the transition to this new system of constraints, we should be careful not to lose any of the constraints. In particular, performing the projections (3.20), (3.21) we are omitting the contribution to the vectorial constraint (3.7) which does not depend on the spinor variables. Such a contribution is described by the new constraint (3.25). This quadratic constraint is first class. Indeed, the constraint (3.25) can be represented by the formula $T = T_{\lambda\bar\lambda}\,T_{u\bar u} - T_{\lambda\bar u}\,T_{u\bar\lambda} \approx 0$ and therefore, due to (3.19), it is first class. Thus, we should add the constraint (3.25) to the first class constraints (3.20)-(3.23). After quantization this constraint will provide the Maxwell extension of the Klein-Gordon (KG) equation. One can find a physical interpretation of the system of first class constraints (3.20)-(3.23), (3.25). For that purpose we look for the values of the Casimir operators of the symmetry algebra of our model, i.e. the Casimirs of the Maxwell algebra [16,19,20], $C^{\mathrm{Max}}_1 = P_{\alpha\dot\beta} P^{\alpha\dot\beta} + 4e\,\big(M^{\alpha\beta} Z_{\alpha\beta} + \bar M^{\dot\alpha\dot\beta} \bar Z_{\dot\alpha\dot\beta}\big)$ (3.26). Using the transformations (2.6), (2.7) and the corresponding transformations of the spinors, we obtain from the action (3.4) the Noether charges of our model.

First quantization of the particle model and interacting HS fields

After quantization, the phase space coordinates become operators; for simplicity we denote them further by the same letters (without hats). The Poisson bracket algebra (3.5), (3.6) generates the corresponding quantum-mechanical algebra. Further, we consider the Schrödinger-type representation in which the operators $x^{\alpha\dot\beta}$, $z^{\alpha\beta}$, $\bar z^{\dot\alpha\dot\beta}$, $y^{\alpha}$, $\bar y^{\dot\alpha}$ are realized as multiplications by c-numbers, whereas the operators of the quantized momenta are represented as partial derivatives. The physical spectrum of the wave function $\Phi = \Phi(x^{\alpha\dot\beta}, z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta}, y^{\alpha}, \bar y^{\dot\alpha})$ (4.5) is defined by the quantum counterparts of the first class constraints (3.20)-(3.22), eqs. (4.6)-(4.8), where $D_{\alpha\dot\beta}$ is the Maxwell-covariant derivative (see (3.11), (2.10)). The solutions of eqs. (4.8) are described by the compact formula $\Phi(x, z, \bar z, y, \bar y) = e^{-im\,(z^{\alpha\beta} \partial_{\alpha} \partial_{\beta} + \bar z^{\dot\alpha\dot\beta} \bar\partial_{\dot\alpha} \bar\partial_{\dot\beta})}\, \Phi_0(x, y, \bar y)$ (4.9), where $\partial_\alpha = \partial/\partial y^\alpha$ and $\bar\partial_{\dot\alpha} = \partial/\partial \bar y^{\dot\alpha}$. From expression (4.9) it follows that the tensorial coordinates $z^{\mu\nu} = (z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta})$ are auxiliary gauge degrees of freedom, and the gauge-independent degrees of freedom are present in the HS master field $\Phi_0(x, y, \bar y)$.
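The mechanism behind the compact formula (4.9) can be made explicit in a one-dimensional toy analog, where the constraint is $(\partial_z - c\,\partial_y^2)\Phi = 0$ and the exponential operator series terminates on polynomial data. The coefficient c stands in for the $-im$ factor of (4.9); this is an illustration, not the full spinorial construction:

```python
# Toy analog of (4.9): Phi = exp(c*z*d^2/dy^2) Phi0(y) solves
# (d/dz - c d^2/dy^2) Phi = 0, and the operator exponential is nilpotent on
# polynomials, so it can be applied term by term.
import sympy as sp

y, z, c = sp.symbols('y z c')
Phi0 = y**6 + 3 * y**3 + y          # any polynomial "master field" in y

Phi, term, k = sp.Integer(0), Phi0, 0
while term != 0:
    Phi += (c * z)**k / sp.factorial(k) * term
    k += 1
    term = sp.diff(Phi0, y, 2 * k)   # next term of the exponential series

residual = sp.diff(Phi, z) - c * sp.diff(Phi, y, 2)
assert sp.simplify(residual) == 0
print("Phi(z=0) == Phi0:", sp.simplify(Phi.subs(z, 0) - Phi0) == 0)
```

The same termination argument is what makes the z-dependence in (4.9) pure gauge: all of it is generated from the z-independent master field $\Phi_0$.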
The residual equations (4.6), (4.7) for the unconstrained field $\Phi_0(x, y, \bar y)$ take the form (4.10), (4.11). The spinor variables $y^\alpha$, $\bar y^{\dot\alpha}$ are, besides the space-time coordinates, the additional spinorial variables. We consider the Taylor expansion (4.12) of the HS master field with respect to these additional spinor variables, with the Maxwell-HS component fields appearing as expansion coefficients. Let us now find the solution of the equations (4.10), (4.11). In the beginning we present two simple equations which are a direct consequence of the equations (4.10), (4.11). In particular, by considering the difference of the equations (4.10) multiplied by $\bar\partial_{\dot\alpha}$ and the equations (4.11) multiplied by $\partial_\alpha$, we obtain (4.15). From (4.15) we get the corresponding equations for the component fields. The equations for the lowest component fields, described by the Lorentz spins (2,0) + (0,2), represent half of the Maxwell equations (the self-dual part) for the free electromagnetic field strength. The full set of equations for the component fields of the HS master field (4.12), generated by the conditions (4.6), (4.7), represents the Maxwell-invariant generalizations of the well-known Dirac-Pauli-Fierz equations [21,22]. To complete the analysis of the constraints it is necessary to impose on the wave function the scalar constraint (3.25). This additional condition leads to an additional equation for the first, scalar component of the wave function (4.12). The constraint (3.25) implies the equation (4.20) (see details in [7]) for the HS master field (4.12). The equation (4.20) provides an infinite set of field equations for the HS component fields, starting with the equation for the scalar field $\phi^{(0,0)}(x)$. In the dual representation of Sect. 5, the corresponding equations (5.13), (5.14) have the form of Dirac equations in a constant electromagnetic field, with electromagnetic potential $A_\mu = f_{\mu\nu} x^\nu$. As a generalization of the standard approach for the Dirac spin-half field, the wave functions in (5.13), (5.14) depend also on the continuous electromagnetic field strength coordinates $f_{\alpha\beta}$, $\bar f_{\dot\alpha\dot\beta}$. We do not yet see the relation of our description of HS fields interacting with a constant EM field to the approaches proposed in recent papers on the EM coupling of HS fields [23,24]; however, finding such a link would be desirable. The generalized spin-zero field $\Psi^{(0,0)}(x, f, \bar f)$ is described by a generalized Klein-Gordon equation, which follows from the constraint $\tilde T_{\alpha\dot\alpha}\,\tilde T^{\alpha\dot\alpha}\,\Psi \approx 0$ (5.15). Taking into account the relation (5.16), we obtain the generalized Klein-Gordon equation (5.17) for the "generalized spin zero" field. It should be emphasized that, due to the equations (5.14) and the constraints (5.12), the last term in the operator (5.16) does not contribute to the equation (5.17), and finally we obtain the standard Klein-Gordon equation with coupling to the constant EM field. From the equations (5.13), (5.14) and (5.17) it follows that the link between different spins is due to the EM coupling proportional to the electric charge e. Further, one can show that if the torsion in the six tensorial dimensions of Maxwell space-time depends only on the D = 4 space-time coordinates, it can be interpreted as a coupling to a D = 4 Abelian gauge potential. Let us introduce the "block-diagonal" 10-bein $E_{AB}{}^{CD} = \big(\delta_\alpha{}^{\gamma}\delta_{\dot\beta}{}^{\dot\delta},\ E_{\alpha\beta}{}^{\gamma\delta},\ \bar E_{\dot\alpha\dot\beta}{}^{\dot\gamma\dot\delta}\big)$ in the tensorial space $(x^{\alpha\dot\beta}, z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta})$ and the corresponding covariant derivatives, where $e A_{\alpha\dot\beta}(x) = E_{\alpha\dot\beta}{}^{\gamma\delta}(x)\, f_{\gamma\delta} + \bar E_{\alpha\dot\beta}{}^{\dot\gamma\dot\delta}(x)\, \bar f_{\dot\gamma\dot\delta}$.
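The statement that $A_\mu = f_{\mu\nu} x^\nu$ with constant antisymmetric $f$ describes a constant electromagnetic field strength is easy to verify symbolically; the normalization below is illustrative, since conventions for the factor of 2 vary:

```python
# Check that the linear potential A_mu = f_{mu nu} x^nu (constant,
# antisymmetric f) yields a constant field strength F ~ f.
import sympy as sp

x = sp.symbols('x0:4')
f = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'f{i}{j}'))
f = (f - f.T) / 2                                   # constant, antisymmetric

A = [sum(f[m, n] * x[n] for n in range(4)) for m in range(4)]
F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], x[m]) - sp.diff(A[m], x[n]))
assert (F + 2 * f).expand() == sp.zeros(4, 4)
print("F_{mu nu} = -2 f_{mu nu}: a constant EM field strength, as claimed")
```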
In the Maxwell tensorial space the additional tensorial coordinates are twisted by a constant torsion proportional to e; the functions $E_{\alpha\dot\beta}{}^{\gamma\delta}(x)$ and $\bar E_{\alpha\dot\beta}{}^{\dot\gamma\dot\delta}(x)$ are linear in x, and we obtain in (5.20) the Abelian gauge field four-potential $A_{\alpha\dot\beta}$ describing a constant electromagnetic field strength ($\bar f_{\dot\alpha\dot\beta} = (f_{\alpha\beta})^{\dagger}$): $A_{\alpha\dot\beta} = f_{\alpha}{}^{\gamma}\, x_{\gamma\dot\beta} + \bar f_{\dot\beta}{}^{\dot\gamma}\, x_{\alpha\dot\gamma}$, where the remaining terms have polynomial dependence on $z$, $\bar z$ and on the component HS fields.

Outlook

In this paper we considered the spinorial particle model in ten-dimensional tensorial space-time with torsion, described by the tensorial coset space $X = (x^{\alpha\dot\alpha}, z^{\alpha\beta}, \bar z^{\dot\alpha\dot\beta})$ of the D = 4 Maxwell group, and additional spinorial variables $\lambda_\alpha$, $\bar\lambda_{\dot\alpha}$. We performed the canonical quantization of the model with a supplemented kinetic term for $\lambda_\alpha$, $\bar\lambda_{\dot\alpha}$. Using the phase space formulation, we specified the set of first and second class constraints. It appears that in the first-quantized theory the first class constraints describe the set of field equations for new higher spin multiplets in the tensorial space X, defining new HS Maxwell dynamics. Such equations describe the generalization of the known "unfolded equations" [5,6,9] for massless HS free fields, with the flat space-time derivatives $\partial_{\alpha\dot\beta}$ replaced by the Maxwell-covariant derivatives $D_{\alpha\dot\beta}$ (see (2.10)). Note that the Maxwell-covariant description of D = 4 Maxwell-HS fields requires the presence of particular space-time-dependent coupling terms between different spin fields, which can also be interpreted as following from the electromagnetic covariantization of space-time derivatives in the presence of a constant EM background field strength. An interesting question which we plan to study is the derivation of massive HS fields from a twistorial model of Shirafuji type, with the two spinors which are necessary in D = 4 in order to define, in the spinorial framework, a time-like four-momentum. The idea of describing massive particles by multispinors is well known from the considerations of Penrose and his collaborators [25,26,27,28,29], and it was further developed, also in the supersymmetric case, in [30,31,32,33,34,35,36]. In order to illustrate the derivation of a massive spinorial model, we shall argue, following [30], that the D = 3 massive Shirafuji model can be obtained by dimensional reduction of the D = 4 spinorial model considered in [7]. We plan to consider the massive extension of the model of [7] in our next publication. The construction of the massive Shirafuji model can be carried out for D = 3 (real Majorana spinors), D = 4 (complex Weyl spinors) or D = 6 (quaternionic Weyl spinors), by using the respective complex and quaternionic generalizations of the formulae (6.1)-(6.4) (see also [37,38]).
The role of final-state interaction in tensor polarization effects of the reaction $\gamma d \rightarrow pn\pi^0$

Tensor analyzing-power components $T_{20}$, $T_{21}$, and $T_{22}$ for the reaction $\gamma d \rightarrow np\pi^0$ have been studied for the first time in the photon energy range from 280 to 500 MeV. The data are extracted from the experimental statistics accumulated at the VEPP-3 storage ring in 2002-2003. The measured asymmetries are compared with the results of statistical simulations performed with the $\gamma d \rightarrow np\pi^0$ amplitude from a spectator model, taking into account corrections for the final-state interaction. The comparison demonstrates quite good agreement between the experimental results and the theory.

Photoproduction of π mesons on nucleons and nuclei is one of the main sources of information about nucleon resonances. The special role of these processes in meson-nuclear physics is due to certain advantages of using photons as sensitive probes. First, the electromagnetic interaction is well understood within the framework of quantum electrodynamics. Second, photons can penetrate deep into nucleons and nuclei, thus making it possible to obtain more complete information about their internal structure. This property distinguishes photoproduction from, for example, scattering of pions, which experience intensive absorption in a nuclear environment due to strong coupling to inelastic channels. In the region of photon energies below 500 MeV, several theoretical models were developed to study pion photoproduction on a deuteron, where the impulse approximation is typically used. In this approximation, the deuteron is considered as a system of two nucleons on which the pion is produced as on free nucleons, apart from kinematic and binding corrections. The reason for this is the weak binding of the deuteron and its relatively large size. The reaction amplitude is then expressed in terms of photoproduction on a single free nucleon, whereas the second nucleon acts as a spectator.
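The kinematic content of the spectator picture can be illustrated numerically: the effective photon-nucleon invariant mass W seen by the active nucleon depends on the Fermi momentum carried by the bound nucleons. The masses and sample momenta below are illustrative round numbers, not values from this experiment:

```python
# Spectator-model kinematics sketch: the deuteron is at rest, one nucleon is
# an on-shell spectator with momentum -p_fermi, and the active nucleon takes
# the remaining energy-momentum. W is the invariant mass of photon + active
# nucleon. Masses in GeV; assumed values for illustration only.
import numpy as np

M_N = 0.9389           # nucleon mass, GeV
M_D = 1.8756           # deuteron mass, GeV
E_gamma = 0.400        # lab photon energy, GeV

def W_active(p_fermi):
    p_s = np.asarray(p_fermi, dtype=float)
    E_s = np.sqrt(M_N**2 + p_s @ p_s)              # on-shell spectator
    active = np.array([M_D - E_s, *(-p_s)])        # four-momentum balance
    photon = np.array([E_gamma, 0.0, 0.0, E_gamma])
    tot = photon + active
    return np.sqrt(tot[0]**2 - tot[1:] @ tot[1:])

for p in ([0, 0, 0], [0, 0, 0.1], [0, 0, -0.1]):   # GeV/c
    print(p, "->  W = %.3f GeV" % W_active(p))
```

For a nucleon at rest this gives W near the Δ(1232) region at these photon energies, which is why the single-nucleon resonance mechanism dominates.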
The final-state interaction effects associated with rescattering of the final particles are described in terms of two-body NN and πN t matrices. In most cases, models developed according to this scheme provide a consistent description of unpolarized differential and total cross sections, but demonstrate larger deviations for polarization observables. In this regard, polarization characteristics are often used as a sensitive test of various model approaches. It is also well known that polarization measurements give more complete information about the dynamics of the process under study, compared to what can be achieved with an unpolarized cross section. For these reasons, despite the general technical difficulties in conducting experiments with a polarized beam and/or target, measurements of polarization observables are among the most important parts of many research programs aimed at studying photonuclear reactions. From the theoretical point of view, the influence of the final-state interaction (FSI) has been discussed in detail in a fairly large number of publications [1-5]. It should be noted that, unlike, for example, deuteron photodisintegration $\gamma d \rightarrow np$, where two-body mechanisms are of crucial importance, the incoherent reactions $\gamma d \rightarrow NN\pi$ are dominated by the single-nucleon mechanism. Although the corrections due to final-state interaction are, apparently, most important for the spectator model, their contribution is usually at the level of a few percent of the total cross sections. In particular, as has been shown in the works cited above, interaction between the final nucleons in the neutral channel $np\pi^0$ leads to an approximately 15% decrease of the total cross section in the Δ(1232) region, which is in general agreement with the experimental results [6]. Other mechanisms in which both nucleons are involved (like, e.g., meson-exchange currents) play a minor role, unless the leading mechanisms are suppressed. Despite many theoretical analyses and the quite high precision of the available experimental results for $\gamma d \rightarrow np\pi^0$, so far there are no experimental data that could be used to study those FSI features which are directly related to the dynamical properties of the interaction between the final particles. The reason is that the noticeable FSI effects which can be distinguished by comparing theoretical predictions with experimental data are mainly an artifact generated by the plane-wave approximation. For example, the sizable decrease due to FSI of the pion angular distribution $d\sigma/d\Omega_\pi$ in the extreme forward direction [1] is simply a trivial consequence of the fact that the resulting cross section contains a non-physical contribution from the coherent channel $d\pi^0$ if the plane-wave approximation is used for the final np system. The latter appears due to the nonorthogonality of the plane wave $e^{i\mathbf{q}\cdot\mathbf{r}}$ and the wave function $\phi(\mathbf{r})$ of the coupled np system (the deuteron). As demonstrated in [3], after eliminating this ghost admixture, the remaining FSI effect turns out to be relatively small. In other words, the significant influence of FSI in the reaction $\gamma d \rightarrow np\pi^0$ is basically just an unavoidable consequence of using the plane-wave impulse approximation, so it does not provide any interesting information about the role of np rescattering in this process. One of the ways to minimize the influence of such ghost FSI effects is to study polarization asymmetries.
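The nonorthogonality at the root of the ghost admixture is easy to exhibit numerically: the overlap of a plane wave with an S-wave deuteron wave function is nonzero and largest at small momentum transfer, exactly where the spurious forward-angle effect appears. A Hulthén parametrization is assumed below purely for illustration; alpha is fixed by the deuteron binding energy and beta is a typical range parameter:

```python
# Overlap <exp(iq.r)|phi_d> (up to 4*pi and normalization) for an assumed
# Hulthen deuteron wave function phi(r) = u(r)/r. A nonzero overlap at small
# q is the "ghost" coherent-channel admixture of the plane-wave treatment.
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.232, 1.44          # fm^-1, illustrative Hulthen parameters

def u(r):
    return np.exp(-alpha * r) - np.exp(-beta * r)   # reduced radial function

def overlap(q):
    j0 = lambda x: np.sinc(x / np.pi)               # spherical Bessel j0
    val, _ = quad(lambda r: u(r) * j0(q * r) * r, 0.0, 60.0)
    return val

for q in (0.01, 0.5, 1.0, 2.0):                     # fm^-1
    print(f"q = {q:4.2f} fm^-1   overlap ~ {overlap(q):8.4f}")
```

The overlap falls off rapidly with q, consistent with the statement that the spurious effect matters mainly in the extreme forward direction.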
These asymmetries are expressed in terms of ratios of different Hermitian combinations of spin amplitudes, so that the undesirable effects, entering the numerator and denominator with approximately equal weights, are to a large extent cancelled. In addition, it is obvious that FSI should play a primary role in the kinematic regions that are characterized by a large momentum transfer, where the mechanisms with the participation of both nucleons become important. However, the available experimental data mainly cover the region of small momentum transfer, where the FSI effects are quite insignificant (after eliminating the mentioned influence of the non-orthogonality of the wave functions). The only exceptions are the data for $\gamma d \rightarrow \pi^+ nn$ [7], obtained near the threshold, and the data for the distribution $d^2\sigma/(d\Omega_\pi\, dE_{nn})$ in the same reaction [8], demonstrating a clear maximum at $E_{nn} \rightarrow 0$ coming from the $^1S_0$ virtual nn state. In this work, we present experimental results for the three components $T_{20}$, $T_{21}$, and $T_{22}$ of the tensor analyzing power for the reaction $\gamma d \rightarrow pn\pi^0$. The data are extracted from the experimental statistics accumulated on the VEPP-3 electron storage ring in 2002-2003. VEPP-3 is an accelerator-storage complex located at the Budker Institute of Nuclear Physics, Novosibirsk. It is designed to accumulate and accelerate electrons and positrons. Presently, VEPP-3 is mostly used in various nuclear physics experiments with internal gas targets and as an injector for the VEPP-4 accelerator. It contains the internal target equipment whose key element is the Atomic Beam Source with superconducting sextupole magnets, providing a flux of polarized deuterium atoms with a high degree of tensor polarization and a negligibly small vector polarization admixture. The results of the measurements are compared with the theoretical predictions provided by statistical simulation based on the spectator model, which also takes into account the contribution of the NN and πN interactions in the final state. The paper is organized as follows. In the next section, the method and the formalism used to obtain the components $T_{2M}$ are described. In section "Results", the data obtained in the present experiment are compared with the results of the statistical simulation. A brief discussion of the results and the conclusion are given in the last two sections.

Research method

The present experiment was performed with an internal target filled with gaseous deuterium, the only gas for which a high degree of tensor polarization can be obtained. The relatively small thickness of the target was compensated by a high beam current inside the accelerator chamber. A jet of polarized deuterium atoms entered the internal target from an atomic beam source (ABS) installed in the median plane of VEPP-3. At the exit of the ABS, the degree of deuteron polarization was close to 100%. A detailed description of the atomic beam source can be found in Ref. [9]. A general expression for the cross section of pion photoproduction on a tensor-polarized deuteron is given by Eq. (1), where $d\sigma_0$ is the corresponding unpolarized cross section, $T_{20}$, $T_{21}$, $T_{22}$ are the components of the tensor analyzing power, and $d^J_{M'M}$ are the Wigner d matrices (Eq. 2). The orientation of the target polarization axis with respect to the direction of the photon beam is specified by the polar and azimuthal angles $\theta_H$ and $\phi_H$.
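The Wigner functions $d^2_{M0}(\theta)$ that weight the $T_{2M}$ terms explain the choice of target orientations used below. Evaluating them (standard closed forms; sign conventions vary between references) at the three experimental settings shows the logic: $\theta_H = 180°$ isolates $T_{20}$, while the "magic angle" 54.7° removes the $T_{20}$ term entirely:

```python
# Small Wigner d-functions d^2_{M0}(theta) and their values at the three
# target-orientation angles used in the experiment.
import numpy as np

def d2_M0(M, theta):
    c, s = np.cos(theta), np.sin(theta)
    return {0: (3 * c**2 - 1) / 2,
            1: -np.sqrt(1.5) * s * c,
            2: np.sqrt(3 / 8) * s**2}[M]

for deg in (54.7, 125.3, 180.0):
    th = np.radians(deg)
    vals = [d2_M0(M, th) for M in (0, 1, 2)]
    print(f"theta_H = {deg:5.1f} deg:  d2_00={vals[0]:+.3f}  "
          f"d2_10={vals[1]:+.3f}  d2_20={vals[2]:+.3f}")
```

At 54.7° and 125.3° the $M = 1$ coefficient flips sign while the $M = 2$ coefficient is unchanged, which is what allows $T_{21}$ and $T_{22}$ to be disentangled from the pair of measurements.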
The target polarization is characterized by the degree of tensor polarization $P_{zz} = 1 - 3 n_0$ (Eq. 3), where $n_0$ is the relative population of the deuteron state having zero spin projection on the target-polarization axis. The present work reports on the analysis of data obtained from the experiment conducted in 2002-2003. The recoil proton and neutron were detected in coincidence by two systems of detectors, a lower and an upper one (Fig. 1). The lower system, which was used to detect recoil protons, consisted of a set of drift chambers and three plastic scintillators. Recoil neutrons were detected in the upper system using the time-of-flight method. Six thick scintillators were installed at a distance of 3 m from the target, and a thin scintillation counter was installed at a distance of 1.5 m. The polar angle for recoil protons and neutrons varied between 50° and 90°, with their azimuthal angles being within ±30° for the lower arm and ±12° for the upper arm. During data collection, the polar angle $\theta_H$ was periodically changed to one of the three values 54.7°, 125.3°, and 180°, whereas the azimuthal angle $\phi_H$ remained close to 0° in all cases. The sign of the tensor polarization was switched every 30 s. Such a procedure allowed the simultaneous measurement of three asymmetries with respect to the sign reversal of the tensor polarization (Eq. 4), where $N_i^{\pm}$ is the number of events detected for the i-th value (i = 1, 2, 3) of the angle $\theta_H$ and the target polarization $P^{+}_{zz}$ ($P^{-}_{zz}$). From Eqs. (1) and (4), the required expressions for all three components of the tensor analyzing power $T_{2M}$ (M = 0, 1, 2) are obtained as Eq. (5). The details of the experimental setup, the detectors for registering the reaction products, and the methodology adopted for identifying the reaction channel under study are given in Refs. [10-16].

Results

The experimental results obtained for all three components $T_{2M}$, M = 0, 1, 2, are presented in Fig. 2 as functions of the laboratory photon energy $E_\gamma$ and of the invariant mass $M_{\pi n}$ of the $\pi^0 n$ system. As seen, the asymmetries are quite small and do not exceed 0.2 in absolute value. Because the acceptance corrections for the experimentally observed events $N^+$ and $N^-$ cancel in the ratio (Eq. 4), they were neglected in the analysis of the experimental data, as well as for the simulated data. The magnitudes of the statistical and systematic uncertainties for each data point can be seen in Fig. 2, illustrating a strong dominance of the statistical uncertainties. The systematic uncertainty is dominated by the uncertainty in determining the degree of the deuteron tensor polarization. The corresponding statistical-simulation results are plotted in the same figure. The simulation was performed using the Monte-Carlo algorithm described in Refs. [17,18], which makes it relatively easy to take into account the complex boundaries of the experimental kinematic domain, as well as the inhomogeneity of the spatial distribution of the deuteron target. The statistical simulation was carried out in full accordance with the experimental conditions, including the same constraints on the energies and emission angles of the final-state nucleons. To match the experimental target conditions, the components of the deuteron density matrix were simulated with the same probability, 1/6, for each of the six polarization states.
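Since Eqs. (1), (4), and (5) are not reproduced above, the following is only a schematic illustration of the extraction idea: with $\phi_H \approx 0$, each target orientation yields one asymmetry that is a linear combination of $T_{20}$, $T_{21}$, $T_{22}$ with coefficients built from $d^2_{M0}(\theta_H)$, and the three orientations give a 3x3 system to invert. The overall normalization factors of Eq. (1) are absorbed into the coefficient matrix here:

```python
# Closure-test sketch of the T_2M extraction: build the coefficient matrix
# from d^2_{M0} at the three theta_H settings, generate "measured" asymmetries
# from made-up T_2M values, and recover them by solving the linear system.
import numpy as np

def d2_M0(M, theta):
    c, s = np.cos(theta), np.sin(theta)
    return {0: (3 * c**2 - 1) / 2,
            1: -np.sqrt(1.5) * s * c,
            2: np.sqrt(3 / 8) * s**2}[M]

thetas = np.radians([54.7, 125.3, 180.0])
A = np.array([[d2_M0(M, th) for M in (0, 1, 2)] for th in thetas])

T_true = np.array([0.10, -0.05, 0.15])       # invented T_2M for the test
a_meas = A @ T_true                          # "measured" asymmetries
T_rec = np.linalg.solve(A, a_meas)
print("recovered T_2M:", np.round(T_rec, 3))  # -> [ 0.1  -0.05  0.15]
```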
Similar to the experimental events, the components $T_{2M}$ were extracted by using the same formulas (4) and (5) and identical intervals for the averaging of the kinematic variables. Such an approach allows a direct comparison of the experimental and theoretical results. To calculate the reaction amplitude that was embedded into the Monte-Carlo algorithm, we used the $\gamma d \rightarrow NN\pi$ model developed in Ref. [3]. The amplitude is built within the approximation in which the process on the deuteron is reduced to the sum of single-nucleon photoproduction amplitudes. As already mentioned above, for a process like $\gamma d \rightarrow NN\pi$, where the deuteron breaks up, such an approach is called the spectator model [diagram (a) in Fig. 3]. The elementary amplitude $\gamma N \rightarrow N\pi$ (shown as a circle in Fig. 3) was taken from the MAID2007 model [19]. It contains contributions from the nucleon Born terms, t-channel vector-meson exchange, and a set of s-channel baryon resonances. The latter includes all resonances with masses up to 2 GeV that are classified with four stars in the Review of Particle Physics [20]. Of the two mechanisms generating final-state interaction, the most important in the kinematical region of the present experiment is nucleon-nucleon rescattering [diagram (b) in Fig. 3]. This is due to both the larger intensity of the NN interaction, compared to πN, and the fairly high kinetic energy of the final-state nucleons. An additional NN scattering effectively fills the missing-energy balance between the fast active nucleon and the spectator. The πN interaction is less significant here, primarily because the pion is unable to transfer any large amount of kinetic energy between the nucleons due to its small mass. For the deuteron wave function, as well as for the NN scattering state, the separable version of the Paris potential from [21] was adopted, in which the partial waves were taken into account up to $^{2S+1}L_J = {}^3G_3$. The calculation of the diagram with pion rescattering [diagram (c) in Fig. 3] was carried out using the separable model for the πN amplitude from [22], with all πN partial waves up to l = 2. Note that in the present calculation the final-state interaction effects were included only up to the first-order terms in the two-body NN and πN t matrices, neglecting the higher-order terms in the corresponding multiple-scattering series. The latter could be taken into account, for example, within a three-body πNN model, as is done in [23]. As shown in [23], despite the smallness of the contribution from higher-order multiple-scattering diagrams to the unpolarized cross section, it could still be important. The main reason for neglecting the higher-order terms is the significant increase in the time required to run statistical simulations with three-body calculations. Thus, the question of the importance of the terms beyond first order in the two-body NN and πN rescatterings remains open. This work also did not address the question of how much the simulation results depend on the model used to construct the elementary amplitude $\gamma N \rightarrow N\pi$. In general, such dependence should not be very crucial because, in the energy range under consideration, $E_\gamma < 500$ MeV, the existing multipole analyses for $\gamma N \rightarrow N\pi$ differ little from each other. However, because of the quite specific kinematic conditions of the experiment (in particular, a large momentum transfer to the spectator nucleon), the theoretical results may even be sensitive to small differences between the model amplitudes. As shown in Fig.
2, taking into account the final-state interaction effects significantly improves the agreement between the model predictions and the experimental data points, even if some features of the observed tensor asymmetries are not fully reproduced. This is especially important given the quite high sensitivity of $T_{2M}$ to various model details, as well as to the special kinematic conditions of the present experiment discussed in the text above. Such an observation can be viewed as an indirect confirmation of the general assumption that the spectator model, including NN and πN rescatterings as the next-order terms, may be considered an adequate theory of the process under study.

Conclusion

The present work reports on the first measurements of the tensor analyzing-power components $T_{2M}$, M = 0, 1, 2, for incoherent $\pi^0$ photoproduction on a deuteron in the range of incident-photon energies from 280 to 500 MeV. The experimental results were obtained from an analysis of the data accumulated on the VEPP-3 electron storage ring at the Budker Institute of Nuclear Physics in 2002-2003. The present results are compared to the predictions from statistical simulation performed with and without final-state rescattering, which demonstrates that taking such interactions into account significantly improves the agreement. The results presented in Fig. 2 seem so far to be the only case when the importance of including interaction effects in the incoherent photoproduction of π mesons on the deuteron is unambiguously and quantitatively demonstrated.
Cis-Effects Condition the Induction of a Major Unfolded Protein Response Factor, ZmbZIP60, in Response to Heat Stress in Maize

Adverse environmental conditions such as heat and salt stress create endoplasmic reticulum (ER) stress in maize and set off the unfolded protein response (UPR). A key feature of the UPR is the upregulation of ZmbZIP60 and the splicing of its messenger RNA. We conducted an association analysis of a recombinant inbred line (RIL) population derived from a cross of a tropical founder line, CML52, with a standard temperate line, B73. We found a major QTL conditioning heat-induced ZmbZIP60 expression located cis to the gene. Based on the premise that the QTL might be associated with the ZmbZIP60 promoter, we evaluated various maize inbred lines for their ability to upregulate the expression of ZmbZIP60 in response to heat stress. In general, tropical lines with promoter regions similar to CML52 were more robust in upregulating ZmbZIP60 in response to heat stress. This finding was confirmed by comparing the strength of the B73 and CML52 ZmbZIP60 promoters in transient maize protoplast assays. We concluded that the upstream region of ZmbZIP60 is important in conditioning the response to heat stress and was under selection in maize when adapted to different environments.

Summary: Heat stress has large negative effects on maize grain yield. Heat stress creates ER stress in maize and sets off the UPR. We searched for factors conditioning heat induction of the UPR in maize seedlings by conducting an association analysis based on the upregulation of unspliced and spliced forms of ZmbZIP60 mRNA (ZmbZIP60u and ZmbZIP60s, respectively). ZmbZIP60u was upregulated more robustly by heat stress in the tropical maize line, CML52, than in B73, and a major QTL derived from the analysis of RILs from a cross of these two lines mapped in the vicinity of ZmbZIP60. We conducted a cis/trans test to determine whether the QTL was acting as a cis regulatory element or in trans, as might be expected for a transcription factor. We found that the QTL was acting in cis, likely involving the ZmbZIP60 promoter. ZmbZIP60 promoters in other temperate and tropical lines similar to CML52 showed enhanced expression of ZmbZIP60u by heat. The contribution of the CML52 promoter to heat induction of ZmbZIP60 was confirmed by analyzing the CML52 and B73 promoters linked to a luciferase reporter and assayed in heat-treated maize protoplasts.
INTRODUCTION

Maize is the most widely produced crop in the world. It has been adapted to many different environments and now faces changing climate conditions. Adverse environmental conditions present a major constraint preventing crops such as maize from reaching their genetic potential. It has been estimated that each year 15-20% of the potential maize production is lost due to drought and heat (Lobell et al., 2011; Makarevitch et al., 2015) and that each degree increase in global mean temperature reduces maize yields worldwide by 7.4% (Wang et al., 2017). Through domestication, many agronomic, plant architecture, and seed quality traits in maize have been subject to selection at thousands of loci (Tian et al., 2009). In the process, small regions surrounding selected genes have been substantially reduced in genetic diversity (Meyer and Purugganan, 2013). The reduction in genetic diversity in regulatory elements limits the adaptability of maize to different environmental conditions (Freeman and Herron, 1998). Both cis- and trans-regulatory element variation contribute to this diversity; however, cis-regulatory variation is more common for both steady-state and stress-responsive expression differences (Waters et al., 2017). Limited adaptability can exacerbate the effects of the environment on fragile cellular processes such as the folding of proteins in the endoplasmic reticulum (ER), creating a condition called ER stress (Thomashow, 1999; Bray, 2004; Liu et al., 2007; Deng et al., 2011). This happens when the demand for protein folding under adverse environmental conditions exceeds the cell's capacity, setting off the unfolded protein response (UPR). The UPR is a homeostatic response to lighten the load of misfolded proteins in the ER and to protect plants from further stress (Deng et al., 2013a; Howell, 2013). The UPR communicates the status of protein folding in the ER to the nucleus.
In Arabidopsis, the UPR signaling pathway has two arms: one arm involves ER membrane-associated transcription factors (AtbZIP17 and AtbZIP28) that are mobilized to the nucleus in response to stress, and another arm involves a dual protein kinase/RNA-splicing factor, IRE1, and its target RNA, AtbZIP60 (Deng et al., 2013a; Howell, 2013; Korner et al., 2015). Both arms of the UPR are involved in abiotic stress responses, and they have been examined most extensively in Arabidopsis. Plants with loss-of-function mutations in AtbZIP17, or in factors involved in its processing such as S1P (Site-1 protease), are salt sensitive, while overexpression of AtbZIP60 confers more tolerance to salt stress (Liu et al., 2007; Deng et al., 2013b; Henriquez-Valencia et al., 2015). Double ire1a ire1b mutants in Arabidopsis are more sensitive to ER stress agents such as dithiothreitol (DTT), and pollen production in the double mutant is heat sensitive (Deng et al., 2016). The UPR in maize seedlings is induced by heat stress, i.e., by exposing seedlings to elevated temperatures (Li et al., 2012). There are many studies on heat stress in maize that differ in a number of ways, including whether heat stress is applied as heat shock or simply as growth at elevated temperature (Ristic et al., 1998; Qu et al., 2017). Reproductive development in maize is particularly vulnerable to heat stress. Elevated temperature during pollination has profound effects on maize pollen shed and viability (Schoper et al., 1987; Lizaso et al., 2018) and, at later stages, on grain filling (Badu-Apraku et al., 1983; Wilhelm et al., 1999). Heat stress also affects photosynthesis during vegetative growth (Berry and Bjorkman, 1980; Edwards et al., 2001; Crafts-Brandner and Salvucci, 2002; Sinsawat et al., 2004). The sensitivity of photosynthesis to heat stress is largely attributed to the inactivation of Rubisco and the denaturation of Rubisco activase at elevated temperature (Law and Crafts-Brandner, 1999; Salvucci et al., 2001; Crafts-Brandner and Salvucci, 2002; Salvucci and Crafts-Brandner, 2004). For reasons which have not been further explored, heat tolerance varies among inbred lines (Chen et al., 2012; Cairns et al., 2013; Naveed et al., 2016), particularly in the comparison between temperate and tropical lines (Edreira and Otegui, 2012). In this paper, we examined the induction of the UPR by heat stress in maize seedlings. We evaluated the ability of several maize inbred lines belonging to different sub-families to upregulate and splice ZmbZIP60 mRNA in response to heat stress. We found that the upstream region of ZmbZIP60 plays an important role in regulating the gene's response to heat stress and is under selection for maize adapted to different environments.

QTL Analysis of Variation in the UPR

The upregulation of the canonical ER stress response genes has served as a molecular signature for the UPR. We examined two other outputs of the UPR, the production of the unspliced and spliced forms of ZmbZIP60 mRNA [ZmbZIP60u and ZmbZIP60s (the spliced form, mobilized to the nucleus), respectively], which are upregulated by ER stress agents, such as tunicamycin and dithiothreitol, and also by heat (Li et al., 2012). We observed greater heat induction of ZmbZIP60u in four tropical inbreds, Ki3, CML69, CML103, and CML322, compared to the temperate lines B73 and IL14H (Supplementary Figure S1). We also observed higher levels of ZmbZIP60s in response to heat stress in the tropical lines.
Therefore, we utilized ZmbZIP60u and ZmbZIP60s as biomarkers to study the differences in the UPR in tropical vs. temperate lines. To search for factors conditioning ZmbZIP60 induction in response to heat stress, we analyzed a set of RILs from the nested association mapping (NAM) population derived from a cross of B73, a temperate line, × CML52, a lowland tropical yellow maize inbred (McMullen et al., 2009). The heat induction of ZmbZIP60u varied more widely among the NAM RILs than the induction of ZmbZIP60s (Figure 1A). The levels of ZmbZIP60u appeared to contribute in part to the levels of ZmbZIP60s, as demonstrated by the correlation between the induced expression of ZmbZIP60u and ZmbZIP60s (correlation coefficient of 0.30) (Figure 1B). An association analysis was conducted to map heat-induced ZmbZIP60 QTLs onto the maize B73 genome. Heat induction of ZmbZIP60u showed broad-sense heritability estimated to be 0.79, while the heritability of heat induction of ZmbZIP60s was estimated to be 0.83. The association analysis showed that 46 SNPs contributed significantly to the upregulation of ZmbZIP60u, while 48 SNPs contributed to ZmbZIP60s (Figures 1C,D). The phenotypic variation explained by individual SNPs ranged from 4 to 10% for ZmbZIP60u and from 3 to 17% for ZmbZIP60s. Interestingly, there was a significant association haplotype block located on chromosome 9, in the vicinity of ZmbZIP60, suggesting that polymorphisms, likely upstream of ZmbZIP60, in the tropical line contribute significantly to the upregulation of ZmbZIP60 expression in response to heat stress.

Cis/Trans Test

To determine whether the QTL in the vicinity of ZmbZIP60 functions in cis or in trans with respect to ZmbZIP60 induction, a cis/trans test was performed. To conduct the test, F1 hybrids were selected from crosses of B73 to five different RILs (Z008E0012, Z008E0105, Z008E0127, Z008E0135, and Z008E0143), which bore ZmbZIP60 from CML52 and which induce ZmbZIP60u to high levels in response to heat. In the F1 lines, one ZmbZIP60 allele was derived from B73 and the other from CML52 (Figure 2A). The test was to determine whether the induction conditions act in trans to elevate the expression of both alleles equally, or whether the response acts in cis to raise the expression of one allele over the other. To carry out the test, six SNPs in the 5′-UTR of ZmbZIP60 were used to distinguish the RNA transcripts derived from the two different alleles (Figure 2B). The ratios (CML52/B73 allele) at the six SNPs were used to calculate the relative abundance of the transcripts derived from the two different alleles. Before heat stress, ZmbZIP60u expression was quite low, but after heat stress, ZmbZIP60u expression increased 9.1- to 22-fold in the various lines. Both alleles were induced; however, the contribution was unbalanced: the CML52 allele contributed more to the heat-induced transcript population, providing nearly 70% of the total transcripts (Figures 2C,D). We interpret this to mean that the elevated heat induction response of ZmbZIP60 is mostly due to cis elements that regulate the expression of the gene.
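The allelic-imbalance arithmetic behind the cis/trans call is simple and worth making explicit. In the sketch below, the peak-area ratios are invented numbers chosen only to reproduce the roughly 70% CML52 contribution reported above:

```python
# For each of the six 5'-UTR SNPs, the sequencing-profile peak-area ratio
# r = (CML52 area)/(B73 area) converts to a CML52 transcript fraction
# r/(1 + r); averaging over SNPs gives the allele's overall contribution.
import numpy as np

ratios = np.array([2.2, 2.4, 2.1, 2.5, 2.3, 2.2])   # hypothetical CML52/B73

fractions = ratios / (1.0 + ratios)
print(f"CML52 allele contribution: {fractions.mean():.1%} "
      f"+/- {fractions.std(ddof=1):.1%}")            # ~70%, i.e. cis bias
```

A purely trans-acting factor would drive both alleles equally, giving fractions near 50%; a consistent departure toward one allele is the signature of a cis effect.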
The Analysis of the Upstream Region of ZmbZIP60

The contribution of cis effects to the heat induction of ZmbZIP60 gave us cause to look upstream from the gene for possible control elements. The upstream sequences of ZmbZIP60 in six reference maize lines (B73, B104, EP1, F7, CML247, and PH207) were quite different when comparing tropical and temperate lines (Figure 3A).

FIGURE 2 | Cis/trans test of heat-induced ZmbZIP60 expression. (A) F1 hybrid lines were obtained from crosses of B73 × high-expressing RILs with CML52-type promoters. ZmbZIP60 expression was heat induced and the RT-PCR product was sequenced to determine the relative expression of the two alleles. (B) DNA sequence analysis of the B73-type and CML52-type alleles. Six SNPs in the 5′-UTR were used to distinguish the contributions of the two different alleles to the RNA transcript population. Green highlights the coding region and purple highlights sequence variation between the alleles. Boxes indicate the location of the primers used in the RT-PCR analysis. (C) Sequence profile of RT-PCR products from the analysis of hybrid line 1, derived from a cross of B73 × Z008E0012. RNA extracted from seedlings incubated at 26 °C served as a control, while the experimental sample was extracted from seedlings incubated for 1 h at 39 °C. The areas under the peaks for the SNPs (indicated by arrows) were measured and the ratio of the two different alleles was calculated. (D) RT-PCR analysis of the RNA extracted from control hybrid seedlings incubated at 26 °C and those subjected to heat treatment (39 °C, 1 h). The bar graph represents the normalized contribution of the two alleles (CML type and B73 type) to the RNA population in the control and heat-treated seedlings.

In the B73 and B104 lines, the distance between ZmbZIP60 and the upstream gene ZmArid10 was about 56 kb, and it was 58 and 66 kb for the two European founder lines F7 and EP1, respectively. However, the distance from ZmbZIP60 to the upstream gene ZmArid10 was much longer in the tropical line CML247 (96 kb) and in PH207, belonging to the iodent germplasm (98 kb). The differences can be attributed to the various transposable elements (TEs) present in this region. For example, in CML247 and PH207, there are two gyma TEs, five huck TEs, one xilon-diguus TE, and one copia TE in the upstream region, all of which are Class I retroTEs. However, in the same region in B73 and B104, there are CACTA, a Class II TE, and copia TEs. A 12-bp sequence (-CTTTGCCGAGTG-) was also found repeated 10 times from −2304 to −2588 bp upstream in B73, followed by the CACTA TE, which was disrupted by a Class I copia TE. In EP1 and F7, a copia TE was inserted −5842 bp upstream from the start of translation, and a gyma element was inserted at −6235 bp in CML247 and PH207. The near upstream region of ZmbZIP60 (∼1.8 kb) in the reference lines and CML52 was marked by a number of indels. Ten major indels in this region were found in the comparison between B73/B104 and the other four lines, and they were named motifs 1-10 according to their distance from the start of translation (Figures 3B,C and Supplementary Figure S2). The ZmbZIP60 promoters in 25 different NAM founder lines (Liu et al., 2003; McMullen et al., 2009) were characterized in the same way using three sets of primers, specific to B73 or to CML247 (Figure 4A). Based on the presence or absence of the CACTA TE and motifs 4 and 5, all 32 maize lines could be categorized into one of three groups: B73 type, CML247 type, and other. Maize germplasm can be categorized as belonging to Iowa Stiff Stalk Synthetic (BSSS) and Lancaster, represented by B73 and Mo17 (Mumm and Dudley, 1994), or to one of four groups: non-Stiff Stalk, Tropical or Semitropical, Stiff Stalk, and a mixed group (Liu et al., 2003).
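The motif-based grouping just described can be written as a small decision rule. The rule below is a plausible reading of the text, not the authors' exact decision procedure, and the example genotypes are taken from lines discussed in the next paragraph:

```python
# Schematic promoter classifier: score each line for the CACTA TE and for the
# B73- vs CML247-type versions of motifs 4 and 5, then assign a class.
def classify_promoter(cacta_te, motif4, motif5):
    """cacta_te: bool; motif4/motif5: 'B73', 'CML', or None (unscored)."""
    if cacta_te and motif5 == 'B73':
        return 'B73 type'
    if not cacta_te and motif4 == 'CML' and motif5 == 'CML':
        return 'CML247 type'
    return 'other'

examples = {
    'B73':    (True,  'B73', 'B73'),
    'CML52':  (False, 'CML', 'CML'),
    'CML228': (False, None,  'B73'),   # lacks the CACTA TE, B73-type motif 5
}
for line, geno in examples.items():
    print(f"{line:7s} -> {classify_promoter(*geno)}")
```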
In this study, the inbred lines represent non-Stiff Stalk, Tropical or Semitropical, Stiff Stalk, and mixed-group backgrounds. According to the presence of the CACTA TE and motif 5, five lines (Ky21, Ms71, Mo18w, B104, and B73) were classified as B73 type, while 18 maize lines belonged to the CML type, without the CACTA TE and the CML247 motif 5 in their promoters. They included five non-Stiff Stalk lines (B97, M162w, Oh43, Oh7B, and Mo17), two mixed lines (Mo37w and Tx303), one other line (IL14H), eight tropical lines (Ki3, Ki11, CML52, CML247, CML277, CML333, Tzi8, and the iodent-germplasm line PH207), and two European founder lines (F7 and EP1).

FIGURE 4 | Genotyping and classifying ZmbZIP60 promoters. (A) The 5000 bp upstream region of ZmbZIP60, showing the location of primers for genotyping and classifying promoters as B73 or CML type. Note the CACTA TE and the 12 bp repeats in the B73 upstream region. (B) Genotyping the NAM founder and various inbred maize lines. Primer set 1 was used to determine the presence of the CACTA TE, primer set 2 was employed to identify motif 4, and primer set 3 was used for motif 5. Based on the presence or absence of these elements, all 32 maize lines could be categorized into one of three groups: B73 type (blue), CML247 type (red), and other type (green). The boxes outlined in yellow show the B73-type motif 4 in lines NC350, NC358, and W22; however, they are located differently from the start of transcription than in B73. The designation "Other" indicates the promoter could not be classified by this criterion.

The presence of the CACTA TE was linked with motif 5, except in one line, CML228. This line lacked the CACTA TE but had a B73 motif 5 in its promoter. Eight lines were classified as other (green) based on these two sets of primers. Using the motif 4-specific primers, nine lines could be classified as B73 type (with the B73-type motif 4) and 18 lines as CML type (with the CML247-type motif 4); three lines (NC350, NC358, and W22) had the B73-type motif 4, but the distance from the ATG to motif 4 was somewhat different from that in B73; nonetheless, they were classified as B73-like. Only CML69 and CML322 could not be classified by these criteria.

Association of Heat Stress Response With ZmbZIP60 Promoter Features

Following heat treatment of seedlings at 37 °C for 1 h, significant differences in ZmbZIP60u upregulation were observed between the B73 and CML promoter types. Among the 19 lines that belong to the CML type, based on the CACTA TE- and motif 5-specific primers, 16 lines had higher expression levels (Figure 5). Among the B73 type (blue), all five lines had lower expression levels. Using motif 4 as a guide, among the 13 lowest-expressing lines, nine were B73 type (blue). The correlation between ZmbZIP60 induction levels and the presence of the CACTA TE/motif 5, motif 4, and the interaction of motif 4 with the CACTA TE/motif 5 was evaluated using a linear model (sketched after this paragraph). In general, the classification of the promoter region was highly correlated with the induced expression levels of ZmbZIP60u (Figure 5). The presence of motif 4 in the promoter was more highly correlated with induced expression than motif 5, but that may be partially due to differences in the population size for the different motifs (including the three additional B73-like lines NC350, NC358, and W22). The nine lines with the B73-type motif 4 had lower expression levels than the other lines, especially when compared to the CML type, i.e., the lines with the same motif as CML247 (Figure 5).
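The linear-model evaluation mentioned above amounts to regressing induced expression on indicator variables for the promoter features plus their interaction. The sketch below uses synthetic response values; only the structure of the design matrix is the point:

```python
# Ordinary least squares with motif indicators and an interaction term, as a
# schematic of the linear model used to test promoter-feature effects.
# Expression values are simulated, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 30
motif5_cml = rng.integers(0, 2, n)          # 1 = CML-type motif 5 present
motif4_cml = rng.integers(0, 2, n)          # 1 = CML-type motif 4 present
expr = (1.0 + 0.8 * motif5_cml + 1.2 * motif4_cml
        + 0.5 * motif5_cml * motif4_cml + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), motif5_cml, motif4_cml,
                     motif5_cml * motif4_cml])
beta, res, rank, sv = np.linalg.lstsq(X, expr, rcond=None)
print("OLS estimates [intercept, motif5, motif4, interaction]:",
      np.round(beta, 2))
```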
The presence of the CACTA TE, motifs 4 and 5, and their interaction was highly correlated with the enhanced expression levels of ZmbZIP60 after heat treatment (Supplementary Figure S3).

Promoter Activity in Maize Protoplasts

The intrinsic activity of the different ZmbZIP60 promoters in their response to heat stress was tested in transient assays using maize B73 and CML52 leaf protoplasts. Two different-sized ZmbZIP60 promoter fragments (extending 893 and 2121 bp upstream from the start of transcription) were amplified from CML52 (which has the same upstream sequence as the reference line CML247) and from B73. They were inserted into a pGreenII vector to drive firefly luciferase (LUC) expression. The construct also carried the Renilla luciferase gene driven by the CaMV 35S promoter, which was used as an internal control. The uninduced activity of the two promoters in either B73 or CML52 protoplasts was relatively low. Following heat stress at 37 °C for 45 min, the LUC levels were induced 1.5- to 4.6-fold. LUC levels were most highly induced when either the 893 or the 2121 bp CML52 promoter was used to drive expression in CML52 protoplasts. The lowest induction was observed when the B73 promoter was used to drive LUC expression in either B73 or CML52 protoplasts (Figure 6).

FIGURE 6 | ZmbZIP60 promoter activity in maize protoplasts. (A) The promoter regions from B73 (blue) or CML52 (red; identical to CML247) were inserted into a pGreenII dual-luciferase vector carrying a CaMV 35S promoter driving Renilla luciferase as an internal control. The constructs were transformed into maize mesophyll protoplasts from B73 or CML52, as indicated. (B,C) The 893 and 2121 bp ZmbZIP60 promoters, respectively, were used to drive luciferase in maize protoplasts according to the scheme below. Bar graphs show means from three biological replicates. Error bars = SD.
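The dual-luciferase readout reduces to a ratio of ratios: firefly counts are normalized to the Renilla internal control within each sample, and induction is the heat-to-control ratio of the normalized activities. The count values below are invented for illustration:

```python
# Dual-luciferase fold-induction arithmetic with made-up luminometer counts.
def fold_induction(fluc_ctrl, rluc_ctrl, fluc_heat, rluc_heat):
    """Fold change of promoter activity, (Fluc/Rluc)_heat / (Fluc/Rluc)_ctrl."""
    return (fluc_heat / rluc_heat) / (fluc_ctrl / rluc_ctrl)

# e.g. a CML52 promoter construct in CML52 protoplasts, 37 C for 45 min
print(f"fold induction = {fold_induction(1.2e4, 8.0e4, 5.0e4, 7.2e4):.1f}")
```

Normalizing to Renilla before taking the fold change is what removes well-to-well differences in transfection efficiency and protoplast viability.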
DISCUSSION

bZIP60 is a powerful transcription factor in plants, driving the expression of UPR genes to alleviate ER stress (Iwata and Koizumi, 2005; Iwata et al., 2009; Nagashima et al., 2011). Heat activates plant IRE1, empowering it in maize to splice ZmbZIP60 mRNA to make ZmbZIP60s, which, in turn, encodes the active ZmbZIP60 TF (Gao et al., 2008; Deng et al., 2011, 2016). ZmbZIP60 is thought to autoregulate by a feed-forward mechanism, such that ER stress leads to the production of more bZIP60u RNA (Iwata and Koizumi, 2005; Humbert et al., 2012; Moreno et al., 2012). In our study, we found that the levels of ZmbZIP60s depend not only on the splicing activity of IRE1, but also on the levels of ZmbZIP60u in the different maize lines. When subjected to heat stress, most of the inbred lines with ZmbZIP60 upstream regions comparable to CML52 had higher ZmbZIP60u expression levels than those with upstream sequences comparable to B73. We found that most tropical or subtropical maize inbred lines were more proficient than temperate lines in upregulating the expression of ZmbZIP60 in response to heat stress. Although maize is believed to have been domesticated along the tropical Pacific coast of southwest Mexico, present-day growth of commercial corn is optimal at 32 °C during the day and 27 °C or below at night (Keeling and Greaves, 1990). During the first 10-12 days after pollination, when most cell division and differentiation occurs, each 1 °C increase in temperature above the optimum (25 °C) results in a reduction of 3-4% in grain yield (Shaw, 1983). Heat stress damages cellular components by generating reactive oxygen species and destabilizing proteins and membranes (Mittler et al., 2012; Hasanuzzaman et al., 2013; Ohama et al., 2017). Heat stress activates the cytoplasmic heat shock system, a complex transcriptional network composed of a number of transcriptional regulators and their target heat shock proteins. Although the UPR is not the only heat response system in plants, defects in the UPR demonstrate the importance of this pathway in conferring heat tolerance at various phases of plant growth. A major challenge in maize breeding is to breed for environmental stress adaptation to improve yield and seed quality. Much of today's germplasm originated from seven progenitor lines: B73, LH82, LH123, PH207, PH595, PHG39, and Mo17 (Mikel and Dudley, 2006). The lack of genetic diversity in the inbred lines used for maize breeding limits the selection of germplasm. Since the UPR plays roles not only in the environmental stress response but also in yield and seed quality traits, understanding the natural variation in the UPR is important. The inbred lines with higher IRE1-bZIP60-mediated UPR levels identified here can be used to improve Stiff Stalk lines, such as B73, or others by using marker-assisted selection or gene modification strategies. The molecular markers for the ZmbZIP60 upstream region and for ZmbZIP60u and ZmbZIP60s expression can be used to evaluate the UPR in maize germplasm and help breeders in crop improvement.

Plant Material and Culture Conditions

Thirty-two maize inbred lines were used to correlate ZmbZIP60 expression with the structure of the upstream region of the gene. The lines included 25 NAM founder lines and seven maize inbred lines that have been sequenced (Mo17, F7, EP1, W22, B104, CML247, and PH207). The NAM subfamily RILs from the cross of B73 × CML52 were used in the association analysis of the factors conditioning heat induction of ZmbZIP60u. Five F1 hybrids derived from the RILs (Z008E0012, Z008E0105, Z008E0127, Z008E0135, and Z008E0143) crossed with B73 were used to test for cis/trans effects. The cis/trans test measures the balance in expression of the two ZmbZIP60 alleles in the hybrid in response to heat stress. Maize seedlings were grown in soil in small pots with 13 h light/11 h dark and 26 °C day/20 °C night; relative humidity was set to 60%. Seedlings were tested at the three-leaf stage for variation in the production and splicing of ZmbZIP60 mRNA in response to heat stress (37 °C for 1 h). For the analysis of the F1 lines, 39 °C for 1 h was used to induce higher levels of ZmbZIP60u for the allelic analysis. Leaves were flash frozen and stored at −80 °C for RNA extraction.

RNA Extraction and RT-PCR Assay

RNA was extracted by grinding harvested tissues to a powder in liquid nitrogen and using a Plant RNeasy Mini Kit (Qiagen, www.qiagen.com/us/) according to the manufacturer's instructions. 500 ng of total RNA was used for cDNA synthesis (iScript cDNA Synthesis Kit, Bio-Rad, www.bio-rad.com), which in turn was utilized as template for the RT-PCR analysis. The primers used in this study are listed in Supplementary Table S1. Relative gene expression levels were quantified using ImageJ (https://imagej.nih.gov/ij/) analysis; the expression in different lines was normalized to ZmUbi as an internal control. Mean values ± SD were determined from 3 or 4 biological replicates, as indicated.
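The normalization step in this quantification is a simple per-lane ratio; the sketch below uses illustrative band intensities, not measured values:

```python
# Band-intensity quantification sketch: ImageJ intensities for the target are
# normalized to the ZmUbi internal control lane by lane, and heat induction is
# the ratio of the mean normalized values.
import numpy as np

def rel_expression(target, ubi):
    """Normalize target band intensities to ZmUbi, element-wise."""
    return np.asarray(target, float) / np.asarray(ubi, float)

ctrl = rel_expression([120, 110, 130], [1000, 980, 1020])     # 26 C replicates
heat = rel_expression([1400, 1500, 1350], [990, 1010, 1000])  # 37 C replicates
fold = heat.mean() / ctrl.mean()
print(f"mean fold induction = {fold:.1f}  (n = 3 biological replicates)")
```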
Cis/Trans Analysis

Total RNA extraction and RT-PCR were performed as described above on F1 hybrids from a cross of B73 with one of the RILs bearing the CML52 version of the ZmbZIP60 allele. The RT-PCR products were extracted from the gel and purified using a QIAquick Gel Extraction Kit (Qiagen). After DNA sequencing, the areas under the peaks in the sequencing profile were measured using ImageJ, and the ratio between the contributions of the two alleles was calculated. The average over the six SNPs in this region was used to calculate the relative levels of allele-specific expression.

DNA Extraction and Genotyping

Genomic DNA was isolated from maize leaves at the 3-leaf stage using a cetyltrimethylammonium bromide (CTAB) extraction method (Orkin, 1990). Primers were designed according to the sequences of the reference lines (Supplementary Table S1).

Transient Assay Using Maize Protoplasts

Plasmids were constructed by introducing ZmbZIP60 promoter segments into the KpnI and SpeI sites of plasmid pGreenII 8000 (SnapGene). Transient expression assays were performed using maize mesophyll protoplasts as described by Sheen (2001). Luciferase activity measurements were carried out according to the kit manufacturer's instructions (E1910, Promega, www.promega.com) and analyzed in a Berthold Centro 960 microplate luminometer.

Sequence Analysis

DNA sequences were downloaded from MaizeGDB, and TEs in the ZmbZIP60 upstream region were identified using the Maize TE Database. Multiple sequence alignment of the reference line promoters was performed using Clustal W. Primers were designed using NCBI Primer-BLAST.

QTL Analysis

The relative expression levels of ZmbZIP60u and ZmbZIP60s were quantified with ImageJ, as described for the RT-PCR assay, and used as phenotypes for the QTL mapping analysis performed with the R/qtl package (Broman et al., 2003). Composite interval mapping (CIM; Zeng, 1994) implemented in the package was used along with the default LOD score. The 1,144 legacy SNPs used in the NAM population (McMullen et al., 2009) were used as markers for QTL mapping.

AUTHOR CONTRIBUTIONS

ZL and SH designed the study and wrote the manuscript. ZL, RS, and JT performed all of the experiments. ZL and ZZ analyzed the data. All authors read and approved the manuscript for publication.

FUNDING

This study was supported by the National Science Foundation Plant Genome Research Program (IOS1444339) and by the Plant Sciences Institute at Iowa State University.

ACKNOWLEDGMENTS

We thank Patrick Schnable and Lisa Coffey for their help and advice and the generous supply of materials for this study.

SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2018.00833/full#supplementary-material

FIGURE S1 | Preliminary analysis of ZmbZIP60 (Zm00001d046718) as a biomarker for the UPR. 10-day-old maize seedlings from the temperate lines (B73 and IL14H) and subtropical lines (Ki3, CML69, CML103, and CML322), as indicated, were incubated at 26 °C or heat treated for 1 h at 37 °C. RNA was extracted and analyzed by RT-PCR using primers specific for the unspliced form of ZmbZIP60 (ZmbZIP60u) or the spliced form (ZmbZIP60s). (A) RT-PCR analysis of the expression of ZmbZIP60u and ZmbZIP60s in the temperate lines (B73 and IL14H) and subtropical lines (Ki3, CML69, CML103, and CML322). (B) The levels of expression were evaluated using ImageJ with ZmUbi as an internal control.

FIGURE S2 | Ten different sequence motifs representing indels in the near upstream region (2200 bp) of ZmbZIP60 in the seven different maize inbred lines.
AUTHOR CONTRIBUTIONS
ZL and SH designed the study and wrote the manuscript. ZL, RS, and JT performed all of the experiments. ZL and ZZ analyzed the data. All authors read and approved the manuscript for publication.

FUNDING
This study was supported by the National Science Foundation Plant Genome Research Program (IOS1444339) and by the Plant Sciences Institute at Iowa State University.

ACKNOWLEDGMENTS
We thank Patrick Schnable and Lisa Coffey for their help and advice and generous supply of materials for this study.

SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2018.00833/full#supplementary-material

FIGURE S1 | Preliminary analysis of ZmbZIP60 (Zm00001d046718) as a biomarker for the UPR. 10-day-old maize seedlings from the temperate lines (B73 and IL14H) and subtropical lines (Ki3, CML69, CML103, and CML32) as indicated were incubated at 26 °C or heat treated for 1 h at 37 °C. RNA was extracted and analyzed by RT-PCR using primers specific for the unspliced form of ZmbZIP60 (ZmbZIP60u) or the spliced form (ZmbZIP60s). (A) RT-PCR analysis of the expression of ZmbZIP60u and ZmbZIP60s in temperate lines (B73 and IL14H) and subtropical lines (Ki3, CML69, CML103, and CML32). (B) The levels of expression were evaluated using ImageJ with ZmUbi as an internal control.

FIGURE S2 | Ten different sequence motifs representing indels in the near upstream region (2,200 bp) of ZmbZIP60 in the seven different maize inbred lines. Sequences from six public lines and the inbred lines used in this study were used for the multiple sequence alignment.
6,995.8
2018-06-29T00:00:00.000
[ "Biology", "Environmental Science", "Agricultural and Food Sciences" ]
Generation of cold Rydberg atoms at submicron distances from an optical nanofiber

We report on a highly controllable, hybrid quantum system consisting of cold Rydberg atoms and an optical nanofiber interface. Using a two-photon ladder-type excitation in Rb, we demonstrate both coherent and incoherent Rydberg excitation at submicron distances from the nanofiber surface. The 780 nm photon, near resonant to the 5S → 5P transition, is mediated by the cooling laser, while the 482 nm light, near resonant to the 5P → 29D transition, is mediated by the guided mode of the nanofiber. The rate of atom loss from the cold atom ensemble is used to measure the Rydberg population rate. A theoretical model is developed to interpret the results and link the population rate to the experimentally measured effective Rabi frequency of the process. This work makes significant headway in the study of atom-surface interactions at submicron scales and the use of cold Rydberg atoms for all-fibered quantum networks and quantum simulations.

In recent years, Rydberg atoms have emerged as leading candidates for neutral atom based quantum information processing [1][2][3][4] and quantum simulations [5][6][7]. The long-lived quantum states and the precisely tunable dipolar interaction, leading to blockade [8], can be used to prepare a mesoscopic atomic ensemble exhibiting quantum correlations and entanglement [9]. Such systems have already been used to demonstrate a quantum phase gate [10] and simulator [7] in free space. Interfacing interacting Rydberg atoms with microfabricated devices is a very attractive choice for building compact and scalable hybrid quantum devices. Coherent Rydberg excitation has been reported for an atom chip [11], a µm-sized vapor cell [12], and a hollow-core photonic crystal fiber [13,14], and has been proposed for a superconducting resonator [15], with each platform having its own advantages and disadvantages. Here, we present an alternative, but highly viable, platform for atom-based quantum networks by interfacing cold Rydberg atoms with a single-mode optical nanofiber (ONF). To date, ground state cold atom-ONF hybrid systems [16,17] have shown tremendous potential for a new generation of quantum devices. The small cross-section of the evanescent field from an ONF, a result of the exponential radial decay from the fiber waist, leads to a large co-operativity [18,19], the key to many quantum optics experiments. The high intensity and field gradient can be used to optically trap atoms in a one-dimensional array [20,21], thereby enabling the study of one-dimensional, many-body physics, or can be exploited for quadrupole transition enhancement [22]. Cooperative effects, such as the generation of a collective entangled state [23], have been demonstrated in such a system using ground-state, neutral Cs atoms. In addition, atoms in the evanescent field region are intrinsically coupled to an optical bus in the form of the fiber-guided mode. This can lead to low-loss transfer of information to and from the interaction region [24], a prerequisite for Rydberg-based quantum repeaters in fiber-coupled cavities [25]. Aside from endeavours to combine ONFs and ground state, neutral atoms, work on Rydberg excitation next to ONFs has, to date, been limited to theoretical proposals due to the difficulty in generating highly excited atom states within a few hundred nm of surfaces, e.g.
dielectrics or metals, and the problem of electric fields induced by the adsorption of atoms on the surface, even at distances as large as ∼100 µm [26,27]. In this Letter, we report on evanescent-field-assisted Rydberg excitation at submicron distances from the surface of an ONF, which is embedded in a 87Rb atomic ensemble in a magneto-optical trap (MOT). A ladder-type, two-photon excitation scheme [28] is used to excite the atoms to a Rydberg state and a trap loss method [29] is used to probe the Rydberg excitation. We implement a rate equation model [30] to determine the rate of population transfer to the Rydberg state. Both coherent two-photon excitation and incoherent two-step excitation are demonstrated. A density matrix based model is developed for the three-level ladder-type system. The experimental results are consistent with the developed model. The experiment consists of an ONF, with a waist diameter of ∼400 nm that is single-mode for 780 nm, embedded in a cold atomic ensemble of 87Rb. A schematic of the experimental setup is given in Fig. 1(a). The ONF is fabricated by exponentially tapering a section of SM800-125 fiber (cut-off wavelength 697 nm) using an H2:O2 flame-brushing technique [31]. To guarantee a low-loss ONF in the ultrahigh vacuum (UHV) chamber, we ensure that the tapering process is adiabatic and that the fiber itself is very clean. During the experiment, 100 µW of 1064 nm light is passed through the ONF from each side to keep the fiber hot and avoid atom deposition on the ONF [32]. The 87Rb atoms are cooled to ∼120 µK using a standard MOT configuration of three pairs of retro-reflected cooling and repump beams. The 780 nm cooling beam is 2π × 14 MHz red-detuned from the 5S1/2(F = 2) → 5P3/2(F = 3) transition and the repump is on resonance with the 5S1/2(F = 1) → 5P3/2(F = 2) transition. The total powers in the cooling and repump beams are 42 mW and 2 mW, respectively. Using a magnetic field gradient of ∼24 G/cm at the center of the MOT, typically 10^7 atoms are trapped, and the typical Gaussian FWHM of the MOT is 0.5 mm. The free-space MOT fluorescence is collected by an achromatic doublet (f = 150 mm) and is divided into two parts using a 50:50 beam-splitter. One half of the signal is collected by a PMT to measure the instantaneous number of atoms in the MOT. The other half is imaged by an EMCCD camera to obtain the atom cloud density profile. The MOT fluorescence that is coupled to the guided mode of the ONF is separated from light of other wavelengths using an assembly of dichroic mirrors and bandpass filters, before being delivered to an SPCM. The overlap between the cold atom cloud and the ONF is optimized by maximizing the photon count at the SPCM using three pairs of Helmholtz coils. With an optimized overlap, the SPCM count is proportional to the atom density and, hence, to the PMT signal, for a given Gaussian full-width-half-maximum (FWHM). The Rydberg excitation is driven by a ladder-type, two-photon process, shown in Fig. 1(b). The 780 nm light is provided by the cooling beams, while the 482 nm light is derived from a Toptica TA SHG Pro system and coupled into the nanofiber. The frequency of the 482 nm laser is stabilized to a vapor cell electromagnetically induced transparency (EIT) signal [28] and frequency shifted using an electro-optic modulator. The details of the locking scheme are given in the Supplemental Material [33]. The 482 nm light can be switched on and off using a combination of an AOM and a mechanical shutter to ensure complete cut-off.
This light is coupled to the ONF using a pair of dichroic mirrors (DMLP650) and interacts with the atoms in the MOT via the evanescent field. The output light power is measured using a power meter to provide an estimate of the 482 nm power at the nanofiber waist; typical output values of about 16 µW were recorded. Note that, at 482 nm, the ONF supports the fundamental mode, HE11, as well as the TE01, TM01, and HE21 higher order modes. The experimental sequence is simple. First, the MOT is loaded to saturation for 8 seconds. The normalized number of atoms in the MOT at any time, t, can be obtained from the PMT signal and is expressed as

N(t) = N_tot(t)/N_0 = 1 − exp(−γ_m t),   (1)

where N_tot(t) is the total number of atoms in the MOT at time t and N_0 = L/γ_m is the steady-state number of atoms in the MOT. L is the loading rate of atoms into the MOT from the background vapor and γ_m is the loss rate of atoms from the MOT. A typical loading curve is shown in Fig. 1(c). Fitting the PMT signal with Eq. 1 gives γ_m, the atom loss rate from the MOT. When the MOT is loaded to saturation, the 482 nm laser propagating in the ONF is turned on. Only those atoms in the evanescent field of the nanofiber can interact with both the 780 nm and 482 nm light and, therefore, participate in the two-photon Rydberg excitation. The newly formed Rydberg atoms leave the cooling cycle and escape the MOT; this introduces a new loss rate, γ_R, which includes any other loss processes, such as ionization of atoms post Rydberg excitation. The new loss mechanism starts bleeding the MOT of atoms and a new equilibrium is established over time, determined by the total loss rate γ_m + γ_R. The time-dependent decay of the MOT can be written as

N(t) = γ_m/(γ_m + γ_R) + [γ_R/(γ_m + γ_R)] exp[−(γ_m + γ_R)(t − t_0)],   (2)

where t_0 is the time at which the 482 nm laser is switched on. The time evolution of the atom number in the MOT is fitted with Eq. 2 to obtain γ_R. Assuming all atoms excited to the Rydberg state are lost from the cooling cycle, N_0 γ_R is the rate of formation of Rydberg atoms at the moment the 482 nm laser is switched on. For a given detuning, ∆, of the 482 nm laser from the 5P3/2 → 29D Rydberg transition, the experiment is repeated for 8 cycles to obtain mean values of γ_m and γ_R. The variation of γ_m and γ_R as a function of ∆ is investigated for two Rydberg levels, namely 29D5/2 and 29D3/2. The results are presented in Fig. 2. Note that γ_m does not change during the experiment, ensuring that the experimental conditions also do not change. We can clearly see that γ_R shows two peaks as a function of ∆ for both of the 29D levels. The two peaks, i.e., P1 at ∆ = 11.7 MHz and P2 at ∆ = −4.3 MHz, correspond to two different mechanisms for the Rydberg excitation. P1 is obtained from a coherent, two-photon excitation, where a fraction of the ground-state atom population is coherently transferred to the Rydberg state without populating the intermediate, P3/2, excited state. In contrast, P2 corresponds to an incoherent, two-step excitation process. The cooling laser transfers a fraction of the ground-state population to the intermediate state, P3/2. The second photon, at 482 nm, then transfers a fraction of the P3/2 population to the Rydberg state. In this process, the intermediate state is populated and there is no coherence established between the ground and Rydberg states. Ideally, the peaks should appear at ∆ = 0 and δ (the detuning of the 780 nm cooling laser).
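The γ_m and γ_R extraction described above amounts to two curve fits against Eqs. 1 and 2. A minimal Python sketch with synthetic data follows; the rates, times, and noise level are illustrative, not the measured values, and Eqs. 1 and 2 are used in the reconstructed normalized form given above:

```python
# Minimal sketch of extracting gamma_m and gamma_R from normalized MOT
# fluorescence, assuming Eqs. 1 and 2 as given above; synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def loading(t, gamma_m):
    """Eq. 1: normalized MOT loading curve."""
    return 1.0 - np.exp(-gamma_m * t)

def decay(t, gamma_R, gamma_m, t0):
    """Eq. 2: normalized decay after the 482 nm light is switched on."""
    g = gamma_m + gamma_R
    return gamma_m / g + (gamma_R / g) * np.exp(-g * (t - t0))

rng = np.random.default_rng(0)

# Synthetic loading phase (gamma_m = 0.5 s^-1, illustrative).
t_load = np.linspace(0.0, 8.0, 200)
n_load = loading(t_load, 0.5) + rng.normal(0.0, 0.01, t_load.size)
(gamma_m_fit,), _ = curve_fit(loading, t_load, n_load, p0=[1.0])

# Synthetic decay phase after switch-on at t0 (gamma_R illustrative).
t0 = 8.0
t_dec = np.linspace(t0, 16.0, 200)
n_dec = decay(t_dec, 0.2, gamma_m_fit, t0) + rng.normal(0.0, 0.01, t_dec.size)
fit_fn = lambda t, gamma_R: decay(t, gamma_R, gamma_m_fit, t0)
(gamma_R_fit,), _ = curve_fit(fit_fn, t_dec, n_dec, p0=[0.1])

print(f"gamma_m = {gamma_m_fit:.3f} s^-1, gamma_R = {gamma_R_fit:.3f} s^-1")
```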
The deviation of the peaks from the expected position and separation values may arise from many factors, such as a light shift, a van der Waals shift due to the fiber surface [20,34], or a shift due to stray electric charge on the fiber surface [14], experienced by the energy levels involved in the excitation process. These effects have not been incorporated in the model presented in this Letter. A detailed explanation of a similar mechanism affecting the ratio between coherent and incoherent excitations can be found in Ref. [35]. Figure 3 shows the variation of γ_R as a function of ∆ for three different values of the cooling laser detuning, δ. The position of the coherent peak, P1, changes with δ to satisfy the two-photon resonance condition. As we expect, the position of P2 does not change for δ = 10 MHz and 14 MHz; however, there is a noticeable shift when δ = 18 MHz. A detailed theoretical model would be needed to determine the reason for this observed shift. Finally, we investigated the dependence of γ_R on the Rabi frequency of the 5P3/2 → 29D transition for both the coherent and incoherent processes. To perform this experiment, δ was set to 14 MHz and ∆ was set to the maximum of either P1 or P2. The power of the 482 nm laser was then varied and γ_R was measured for both peaks and both transitions. The results are shown in Fig. 4. We compare the dependence of the observed decay rates on the frequencies and intensities of the driving optical fields by considering a three-level density matrix model for a population of atoms driven by two coherent optical fields of constant intensity. The basic model includes a mixing rate, A, at which atoms enter and leave the evanescent field of the 482 nm light, coherence dephasing due to the thermal motion of atoms, and mutual incoherence of the optical fields. It ignores other aspects of the experimental geometry. The cooling field Rabi frequency, Ω_p, is fitted from the splitting of P1 and P2, and the 482 nm Rabi frequency, Ω_r, is fitted to the P1 and P2 height data. The decoherence rates are fitted to the widths of P1 and P2. The full model is included in the Supplemental Material [33]. The mixing rate, A, is an important parameter for describing this experiment. The introduction of atoms in a mixed state to the interaction region boosts the incoherent production of Rydberg atoms well beyond that expected in a coherently driven system. The mixing rate may be changed experimentally by changing the cooling field detuning and thus the temperature of the atoms. The effect of this can be seen in Fig. 3, where a closer-detuned trap with hotter atoms has a higher mixing rate and a larger incoherent peak than a further-detuned, cooler trap. The population of Rydberg atoms in the model, σ_rr, is related to the experimental decay rate, γ_R, by estimating the proportion of atoms in the MOT that are also in the evanescent field, P, and multiplying by the mixing rate: P A σ_rr ≈ γ_R. The value of P is set to P = (3 ± 1) × 10^−4 to fit the model to the experimental data in Fig. 4, and this value also agrees with other experimental observations. It corresponds to an interaction region extending about 100-200 nm from the fiber surface, noting that the 1/e decay length of the evanescent field of the fundamental mode of the 482 nm light is 125 nm. The mixing rate, A, is set to 0.6 MHz to fit the ratio of the incoherent to coherent peak heights in Fig. 4.
This value is also consistent with the average flight time of atoms at 120 µK through an interaction region with a diameter d = 200 nm, where A ≈ v̄/d, with v̄ the average speed of the atoms. The experimental decay rate clearly follows the theoretical curve at low γ_R but diverges from the model above rates of 5 Hz. This may be due to production of Rydberg atoms beginning to saturate as σ_rr → 1. This is not observed in the model, with a Rydberg population at P1 of σ_rr = 0.033 for Ω_r = 1.25 MHz, which is well below saturation. The model only includes one cooling field, with a Rabi frequency equal to that of one cooling beam. Ω_r is assumed to be constant, whereas, in the experiment, atoms are subject to a time-varying interaction as they travel through the evanescent field. Averaging over these trajectories may give a more accurate relationship between the 482 nm intensity and the effective Ω_r. The model also assumes that all atoms leaving the interaction region while in the Rydberg state are lost from the MOT. However, considering the lifetime of the Rydberg states and the temperature of the atoms, a significant proportion of these atoms could actually be recaptured directly into the MOT if they decay into the cooling cycle. Experimental determination of the saturation rates under different conditions and for different n could allow any processes interfering with recapture, such as ionization, to be observed. In this work we have reported on the generation of n = 29 Rydberg atoms in a 87Rb cold atom ensemble surrounding an optical nanofiber and have shown that this is a very viable system for hybrid atom-nanofiber devices. Excitation of the Rydberg atoms is mediated via the optical nanofiber and they are estimated to be generated no more than a couple of hundred nm from the surface, where the overlap between the two excitation fields (780 nm and 482 nm) is maximum. This is a significant advance on previous Rydberg generation close to dielectric surfaces [26,27] and opens up many avenues of research, such as all-fibered quantum networks using Rydberg atoms; excited atom-surface interactions at submicron distances, including van der Waals interactions; effects on the Rydberg blockade or facilitation phenomena [36] in this new regime; stray electric field effects, for example from the dielectric nanofiber, on energy levels and lifetimes of the atom; and, perhaps even more intriguing, the limitation of the maximum excited state (n value) that can be generated close to the nanofiber due to the atom size increasing quadratically with n. The versatility of this atom-nanofiber hybrid system could be extended to explore three-step Rydberg excitations [37], where the fiber would be single-mode for the light used to drive the atomic transitions. Hence, the mode overlap in the evanescent field would be increased and the Rydberg generation efficiency should be improved. A loss in detection of the 420 nm light in the 6P → 5S decay channel could also provide an alternative, nondestructive mechanism for Rydberg atom detection. In addition, a comprehensive study of the coherent interactions in MOT-embedded nanofibers could extend this experimental technique beyond a qualitative confirmation towards an investigation of the behavior of Rydberg or other exotic states, e.g. Rydberg polarons [38], adjacent to optical nanofibers.
Future work will focus on trapping atoms at defined distances from the nanofiber to explore limitations on n and on a study of the influence of the nanofiber on Rydberg levels using EIT signals [28]. A standard vapor cell EIT setup [39], as shown in Fig. 5, is used to lock the frequency of the 482 nm laser, although a modification is implemented to shift the frequency of the 482 nm light. First, the 780 nm probe laser is locked to the 5S1/2(F = 3) → 5P3/2(F = 2, 3) crossover transition of 85Rb. An EOM is used to generate sidebands at 1.06632 GHz, one of which is resonant with the 87Rb 5S1/2(F = 2) → 5P3/2(F = 3) transition. In the vapor cell, which is enriched with 87Rb, only the resonant sideband participates in the EIT process and the resultant EIT peak is used to lock the frequency of the 482 nm laser. The frequency of the 482 nm laser can now be changed easily just by changing the frequency of the sideband.

THEORETICAL MODEL OF INTERACTION REGION
We use the Maxwell-Bloch equations for thermal atoms in a small interaction volume. We define the Rabi frequencies, Ω_p and Ω_r, and the detunings from resonance, ∆_p and ∆_r. The ground-state population, for example, evolves as

∂_t σ_ss = [i (Ω_p/2) σ_ps + C.C.] + Γ_p σ_pp + Γ_r σ_rr − A(σ_ss − 1/2),

where γ_0 is the motional dephasing of the coherence σ_rg; γ_0 = 600 ± 200 kHz for a MOT temperature of 120 µK and a coherence σ_rg generated by a cooling beam perpendicular to the nanofiber axis. Other motional dephasings are ignored. The +C.C. terms indicate that the complex conjugates of the terms in [ ] are also included. The movement of atoms into and out of the interaction volume removes atoms from the populations and coherences at a rate A and puts them into a mixture of roughly (σ_ss + σ_pp)/2.
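Two of the consistency checks quoted in the Letter can be reproduced directly: the mixing rate A ≈ v̄/d for 87Rb at 120 µK, and the relation P A σ_rr ≈ γ_R. A minimal Python sketch follows, using only numbers stated in the text plus the standard Maxwell-Boltzmann mean-speed formula (an assumption here, as the Letter does not spell out which average it uses):

```python
# Consistency checks for the interaction-region model, using values
# quoted in the Letter; the Maxwell-Boltzmann mean speed is assumed.
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K
m_Rb87 = 86.909 * 1.66054e-27   # 87Rb mass, kg

# Mixing rate A ~ v_bar / d for T = 120 uK and d = 200 nm.
T, d = 120e-6, 200e-9
v_bar = math.sqrt(8 * k_B * T / (math.pi * m_Rb87))  # mean thermal speed
print(f"v_bar ~ {v_bar * 100:.1f} cm/s, A ~ {v_bar / d / 1e6:.2f} MHz")
# ~0.9 MHz, the same order as the fitted value A = 0.6 MHz.

# Loss rate implied by P * A * sigma_rr with the fitted parameters.
P, A, sigma_rr = 3e-4, 0.6e6, 0.033
print(f"gamma_R ~ {P * A * sigma_rr:.1f} s^-1")
# ~5.9 s^-1, near the ~5 Hz scale where data and model begin to diverge.
```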
4,667.6
2019-07-25T00:00:00.000
[ "Physics" ]
Neutron beam line design of a white neutron source at CSNS

China Spallation Neutron Source (CSNS), which is under construction, is a large scientific facility dedicated mainly to multi-disciplinary research on material characterization using neutron scattering techniques. The CSNS Phase-I accelerator will deliver a proton beam with an energy of 1.6 GeV and a pulse repetition rate of 25 Hz to a tungsten target, and the beam power is 100 kW. A white neutron source using the back-streaming neutrons through the incoming proton beam channel was proposed and is under construction. The back-streaming neutrons, which are very intense and have a good time structure, are very suitable for nuclear data measurements. The white neutron source includes an 80-m neutron beam line, two experimental halls, and six different types of spectrometers. The physics design of the beam line is presented in this paper, including beam optics and beam characterization simulations, with the emphasis on obtaining an extremely low background. The first-batch experiments on nuclear data measurements are expected to be conducted in late 2017.

Introduction
China Spallation Neutron Source (CSNS) is a large scientific facility dedicated mainly to multidisciplinary research on material characterization using neutron scattering techniques [1,2]. The CSNS Phase-I, as shown in Fig. 1, is under construction and expected to be completed in early 2018. The CSNS accelerator complex is designed to deliver the proton beam with an energy of 1.6 GeV and a pulse repetition rate of 25 Hz to a tungsten target, and the beam power at Phase-I is 100 kW. The beam power will be upgraded to 500 kW at Phase-II. A white neutron source using the back-streaming neutrons through the incoming proton beam channel was proposed and is under construction [3,4]. The back-streaming neutrons, which are modestly moderated by the cooling water passing through the target slices, have a very wide energy spectrum (so-called white neutrons) and also a good time structure, and they are very suitable for nuclear data measurements.

Layout of back-streaming white neutron source
The back-streaming White Neutron Source (Back-n) at CSNS includes an 80-m neutron beam line (the first 20 m of which are common with the proton beam line) and two experimental halls. Two endstations with six detector systems (or spectrometers) in total, with one in use at a time for nuclear data measurements, are planned in order to carry out different types of experiments. As shown in Fig. 2, the near endstation (ES#1) at 56 m from the target is a high-flux experimental hall. The far endstation (ES#2) at 76 m from the target is a high-resolution experimental hall. A preparation room is provided for preparing the experiments or maintaining the detector temporarily. Besides, there is also a local control room on the ground for remotely manipulating and monitoring the experiment process.
Thanks to the multiple apertures of the shutter (also functioning as a collimator) and two collimators, one can control the neutron beam intensity and beam spot sizes. According to the requirements of different experiments, one of the three beam spot settings listed in Table 1 will be used. Due to space limitations, an in-room neutron dump has to be located at the rear of ES#2 [5]. The neutron beam window at about 26 m from the target separates the different vacuum conditions of the proton beam line (10^-6 Pa) and the neutron beam line (10^-4 Pa). The appended motion system controls the Cd and/or B4C filters, which absorb the low-energy neutrons to avoid their influence on the following repetition periods.

Neutron energy spectrum and time resolution
CSNS uses a sliced tungsten target with water cooling. The target size is 7 cm (height) × 17 cm (width) × 65 cm (length), including the water layers. The production, moderation, and transport of neutrons in the target have been simulated using FLUKA [6]. A proton beam window (PBW) of about 1.5 mm in thickness and a neutron beam window (NBW) of 1 mm in thickness, both in aluminium alloy, are placed at 1.5 m and 26 m from the target, respectively. The simulated back-streaming neutron spectrum and time resolution are shown in Figs. 3 and 4. Different operation modes have been developed for the CSNS accelerator to provide neutron beams for different nuclear data measurements [7]. The smallest bunch width is 1.5 ns (rms). The transport of the back-streaming neutrons is simulated by FLUKA. In general, we can obtain a time resolution under 1% for the whole usable energy range (1 eV to 100 MeV) with the combination of different operation modes (N-mode for normal neutron scattering, two D-modes for dedicated white neutron modes with beam power reduction). As shown in Fig. 4, for the D-Mode 2 the time resolution at Back-n is comparable to, or even better than, that of the CERN n-TOF above 100 keV.

Neutron beam spots and fluxes
According to the requirements of different experiments, three beam-spot settings are available in ES#2. The shutter and collimators, using combined materials of iron and copper, are designed to satisfy the beam sizes. On the shutter, five aperture positions are configured. A no-hole position is used to block the neutron beam, serving the shutter function. A tiny-hole position is used to lower the beam intensity to satisfy the low-flux white neutron experiments. The other three positions correspond to the three beam spot settings. The collimation apertures and resultant fluxes for the three cases are listed in Table 1. There is also a dedicated aperture set for neutron imaging, which will not be discussed here.
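The time-of-flight relation behind the quoted sub-1% resolution can be sketched numerically. The Python snippet below uses relativistic kinematics (so it holds from 1 eV to 100 MeV) and considers only the 1.5 ns (rms) bunch width; moderation time and other contributions, which dominate at low energies, are ignored, and the sample energies are merely illustrative:

```python
# Time-of-flight kinematics for the 80-m Back-n flight path: neutron
# energy from flight time, and the resolution contribution of the
# 1.5 ns (rms) proton bunch width alone.
import math

C = 299792458.0      # speed of light, m/s
MN = 939.56542e6     # neutron rest energy, eV
L = 80.0             # flight path, m
DT = 1.5e-9          # bunch width (rms), s

def tof(energy_ev):
    """Flight time (s) of a neutron of given kinetic energy over L."""
    gamma = 1.0 + energy_ev / MN
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return L / (beta * C)

def dE_over_E(energy_ev):
    """Relative energy spread from the bunch width alone.
    dE/E = gamma*(gamma+1)*dt/t, reducing to 2*dt/t non-relativistically."""
    gamma = 1.0 + energy_ev / MN
    return gamma * (gamma + 1.0) * DT / tof(energy_ev)

for E in (1.0, 1e3, 1e6, 1e8):  # 1 eV, 1 keV, 1 MeV, 100 MeV
    print(f"E = {E:9.0f} eV: t = {tof(E) * 1e6:10.2f} us, "
          f"dE/E = {dE_over_E(E) * 100:.4f} %")
```

At 100 MeV this gives a bunch-width contribution of roughly 0.6%, consistent with the stated goal of keeping the overall resolution under 1% across the usable range.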
Experimental backgrounds
The experimental backgrounds include gammas and scattered neutrons. For ES#1, the neutron and gamma backgrounds include: 1) scattering in the inner wall of the beam tube; 2) leakage through the front wall; 3) leakage through the lateral wall, in which neutrons are produced by the proton beam loss on the nearby proton beam transport line. For the first, the beam tubes are designed with a very large diameter of 300 mm and aluminium is used. For the second, besides a thick concrete wall of 2 m, the gap between the beam tube and the wall is filled with iron sand, and additional in-tube collimation by boron-containing polythene bushings is also used. For the third, iron plates of 80 cm in thickness are put in the lateral wall to shield the neutrons from the proton beam line tunnel; in addition, a 5-cm thick boron-containing polythene layer is attached to the inner surface of the hall, except the ground, to absorb neutrons. For ES#2, besides the same sources 1) and 2) as for ES#1, the back-scattered neutrons and gammas from the dump, which is situated in the hall, are also important. In order to suppress this effect, a sophisticated in-room dump has been designed [5]. The neutron and gamma backgrounds in the two endstations, which have been simulated by FLUKA, are shown in Fig. 5. Another important background contribution, from the experimental sample, is difficult to suppress, but should be measured in detail for case-by-case experiments.

Radiation protection
The shutter is a key component on the neutron beam line when one needs to switch the neutron beam off to enter the experimental halls. The absorption block of the shutter is composed of combined 1.2-m iron and 0.3-m copper. As shown in Fig. 5, when the shutter is closed, the prompt radiation dose equivalent rate is much less than 1 µSv/h in ES#1, even in the case of 500 kW beam power (CSNS-II). It is safe for experimenters to enter the halls and carry out all necessary manipulations for experiments. When the beam is on target, entry to the neutron beam line tunnel before ES#1 is prohibited. When the neutron beam is on (shutter open), the prompt radiation dose equivalent rates in the Back-n tunnel with a 100-kW beam power for the three beam spot settings are still small, as shown in Fig. 6. The two red contours denote the dose values of 2.5 µSv/h and 25 µSv/h. For strict safety, the personnel protection system (PPS) is designed to prohibit personnel from entering the halls.

Conclusions and status
The physics design for the back-streaming white neutron source at CSNS has been accomplished, including the optics design, neutron beam profiles, and neutron/gamma backgrounds, with intensive FLUKA simulations. The white neutron beams and the experimental conditions are proved to be excellent for nuclear data measurements. At present, the main beam line components are under fabrication, and four detector systems are under development simultaneously. The first-batch experiments are expected to be conducted in late 2017, when the neutron target will receive the proton beam.

Figure 2. Layout of the Back-n neutron beam line. Figure 3. Back-n neutron energy spectrum at 80 m. Figure 4. The simulated time resolution with different operation modes at 80 m vs. that of CERN n-TOF. Figure 6. Prompt radiation dose equivalent rates in the Back-n tunnel with the 100-kW and 500-kW beam power of the proton accelerator when the neutron beam is switched off. Figure 7. Prompt radiation dose equivalent rates in the Back-n tunnel with a 100-kW beam power of the proton accelerator for the three beam spot settings. Table 1. Back-n neutron beam spots and corresponding fluxes.
2,070.6
2017-09-01T00:00:00.000
[ "Engineering", "Physics" ]
A NOVEL APPROACH TO GLAUCOMA SCREENING USING OPTIC NERVE HEAD THROUGH IMAGE FUSION AND FRACTAL GEOMETRY

Glaucoma is a common eye disorder that causes vision loss. It leads to visual impairment if it is not treated on time. Normally, vision loss is slow and not perceptible. Regular and systematic eye assessments are suggested for persons from middle age onwards to prevent further vision loss. The proposed system introduces a new technique in the field of ophthalmology to diagnose glaucoma effectively using image fusion and fractal geometry techniques. The optic cup and disc are extracted from fundus images using K-means and thresholding techniques. The optic cups generated by the two techniques are combined to obtain a better cup region using the image fusion technique to improve the glaucoma screening process. The same process is applied to the optic discs to obtain the fused disc area. The box counting fractal dimension estimation technique from fractal geometry is applied to the fused areas to classify each image as either healthy or glaucoma. The results of these two techniques are evaluated on the publicly available HRF dataset, obtaining an accuracy of 97%.

INTRODUCTION
Billions of people face an increased risk of visual deficiency or severe visual disability. It reduces the quality of human life if the condition is undiscovered, left untreated, and not diagnosed on time. Glaucoma is one cause of visual deficiency. Hence, early detection of glaucoma is important to strengthen the benefits of treatment, and it remains a challenge when it is undiagnosed in the community. Surveys show that the expense of treating glaucoma in many developed countries increases with the severity of glaucoma [1]. If glaucoma patients are not correctly identified and treated at the early stages, this leads to a reduction in reserve funds and places burdens on healthcare facilities. In the current era, the medical field is equipped with advanced instruments but requires new approaches to diagnose diseases in their initial phase. There are different clinical parameters and approaches to detect glaucoma in the early stage [2][3][4][5][6]. The following sections describe the different methods proposed by various authors to detect glaucoma. Ajesh et al. [7] developed a methodology for finding the glaucoma condition at the initial stage by examination of retinal features extracted from fundus images using image processing techniques. The authors presented an improved machine learning algorithm to discover the disease. A discrete wavelet transform (DWT) is used for classifying the disease. The proposed methodology provided better results and achieved 95% accuracy. J. Carrillo et al. [8] illustrated the basic concepts of glaucoma and different screening techniques. This work presented a computational tool to extract disc and cup areas from fundus images using a thresholding technique. The proposed method was tested on the Center of Prevention and Attention of Glaucoma in Bucaramanga, Colombia dataset and obtained an accuracy of 88.5%. Guangzhou An et al. [9] developed a convolutional neural network (CNN) and random forest (RF) based technique to diagnose open-angle glaucoma. The work is focused on the retinal nerve fiber layer thickness, optic disc, and macular ganglion cell complex (GCC) present in optical coherence tomography and fundus data. The proposed method generated deviation and thickness maps using segmentation approaches.
It is evaluated on 357 images and obtained an AUC of 0.963. In [10], the authors presented a new tool for the detection of glaucoma using fundus images. They extracted local configuration pattern (LCP) based features and textons from the images to analyze the glaucoma status. Textons are generated by applying adaptive histogram equalization and convolution operations on the images. The proposed method achieved an accuracy of 95.8%. Simonthomas et al. [11] presented a new computerized approach to glaucoma diagnosis. The authors extracted Gray Level Co-occurrence Matrix (GLCM) and Haralick based texture features to diagnose the glaucoma disease. Image pre-processing is performed to eliminate noise. The noise-free image is then used to extract the GLCM and thirteen Haralick texture features. Finally, all these features are fed to a k-nearest neighbors classification technique, achieving 97% accuracy. Abhishek Pal et al. [12] presented an auto-encoding system called G-EYENET to identify glaucoma. A modified U-net CNN is used to extract the region of interest consisting of the optic disc from fundus images. The authors considered the RIM-ONE v3, Drishti-GS, and HRF databases for training the system and DRIONS-DB for testing. G-EYENET achieved an AUC of 0.923. Juan J. et al. [13] developed a transfer learning approach to glaucoma detection. The optic disc is segmented by morphological operations, then a VGG-19 net is used for transfer learning. The approach achieved an AUC of 0.94. Alan Carlos de Moura Lima et al. [14] proposed a CNN-based residual network (ResNet-50) architecture to study the progression of glaucoma in patients. The approach achieved an accuracy of 90%. Annan Li et al. [15] presented an approach to glaucoma detection using a CNN. A deformable shape model is used to segment the optic disc from the image.

METHODOLOGY
The steps involved in the proposed method are illustrated in Figure 1. A color image captured from the fundus camera is the input to the proposed method. It works with two methods: 1) simple thresholding and 2) K-means clustering. In simple thresholding, the RGB channels are separated from the color image. The green channel image is selected for further processing because it has a high-density vascular architecture at the optic nerve head region. The region of interest (RoI) considered for glaucoma detection is the optic nerve head (ONH) area, which is extracted by the simple thresholding method. The reason for this RoI selection is that the ONH has a more damaged area in the glaucoma condition compared to a healthy one.

B. K-means clustering
K-means clustering is a segmentation algorithm. It divides the image into different clusters, each cluster consisting of pixels similar to other pixels in the same cluster and different from those in other clusters. The algorithm divides the image into K clusters; among these clusters, one represents the disc and another represents the cup. By experimental observation, the value of K selected is 4. Cluster 3 represents the disc area and cluster 4 represents the cup area. The working principle of the K-means algorithm is as follows.

Algorithm: K-means
Input: color image (I), K = 4
Output: segmented binary images (c1, c2, c3, c4)
Step 1: Randomly select K pixels as initial cluster centers
Step 2: Allocate each pixel in the image to the closest centroid
Step 3: Calculate the centers of the clusters
Step 4: For every cluster, find the distance between pixels and centers using the Euclidean distance
Step 5: Based on the calculated distances, reassign the pixels to the nearest clusters
Step 6: Again find the centers of the new clusters
Step 7: Repeat steps 4, 5, and 6 until pixels no longer change clusters

C. FD Estimation
Fractal dimensions are used to define the dimension of asymmetrical, irregular objects. The most widespread technique to estimate the irregularities in objects is the Box Counting Method [17].

RESULTS
The proposed approach is evaluated on the publicly available High-Resolution Fundus (HRF) database found at webpage [18]. It has 45 retinal fundus images, of which 15 are healthy and 30 are glaucoma affected. In this work, the optic disc and cup areas marked by one ophthalmologist are considered for the accuracy calculation. The Dice method [19] alone is not sufficient to measure the performance of the proposed method, because the area marked by the ophthalmologist does not accurately correlate with the area extracted by the proposed method due to the high level of pixel variation in glaucoma images. Hence, both the Dice coefficient and accuracy estimation [20] are used. The steps involved in the glaucoma image processing are illustrated in Figure 2. Figure 2a) illustrates the structure of a retinal fundus image [21]. b) Shows a healthy retinal image. c) Represents a glaucoma image. d) Illustrates the RoI extracted from the fundus image using the simple thresholding method with a threshold value of 170. e) Illustrates the optic cup extracted from the optic nerve head using the simple thresholding method with a threshold value of 150, after obtaining the RoI of size 252×252. f) Represents the boundaries extracted using morphological operations. g) Illustrates the FD calculation using the box counting method; the x-axis represents the log(r) value (the number of boxes in the vertical grid), the y-axis represents the log N(r) value (the number of boxes covering the cup boundaries), and the FD value obtained using (2) is 1.0438.

Table 2 compares the accuracy of the method with existing approaches: Ajesh et al. [7], DWT, 95%; Carrillo et al. [8], thresholding, 88.5%; Abhishek Pal et al. [12], G-EYENET, AUC of 0.923; Juan J. et al. [13], transfer learning, AUC of 0.94; proposed method, 97%. The proposed method correctly classified the glaucoma images as glaucoma, and the accuracy achieved is therefore 97%, better than that of the other approaches. Figure 6 illustrates the calculated FD values using the thresholding, K-means, and image fusion techniques on the HRF dataset. The threshold FD value for a healthy cup is below 1.035, and above 1.035 is considered glaucoma. The threshold FD value for a healthy disc is below 1.32, and above 1.32 is considered glaucoma.
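The box counting estimate behind Figure 2g) can be sketched compactly. Below is a minimal Python illustration on a synthetic one-pixel-wide circular boundary (a smooth curve, so the FD should come out near 1), not the authors' implementation; the 1.035 cup threshold is the one stated above:

```python
# Minimal box-counting fractal dimension sketch for a binary boundary
# image; synthetic test shape, not the authors' code.
import numpy as np

def box_counting_fd(binary, sizes=(2, 4, 6, 12, 18, 36)):
    """Estimate FD as the slope of log N(r) versus log(1/r)."""
    counts = []
    for r in sizes:
        h = (binary.shape[0] // r) * r
        w = (binary.shape[1] // r) * r
        tiles = binary[:h, :w].reshape(h // r, r, w // r, r)
        # A box is counted when it contains at least one boundary pixel.
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                          np.log(counts), 1)
    return slope

# Synthetic 252x252 "cup boundary": a thin circle, whose FD should be ~1.
yy, xx = np.mgrid[:252, :252]
ring = np.abs(np.hypot(yy - 126.0, xx - 126.0) - 80.0) < 0.7

fd = box_counting_fd(ring)
label = "glaucoma" if fd > 1.035 else "healthy"  # cup threshold from the text
print(f"FD = {fd:.4f} -> {label}")
```

The box sizes are chosen to divide 252 exactly so that the grid tiles the RoI without trimming artifacts.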
CONCLUSION
Early detection and diagnosis of glaucoma in everyday practice is essential in order to ensure the potential benefits of early treatment. In this work, a new technique is developed to improve glaucoma evaluation. Calculating accurate cup and disc areas using a single technique in glaucoma evaluation is a challenging task. Thus, the outputs of two feasible methods (thresholding and K-means) are combined to obtain accurate cup and disc areas, which in turn supports better glaucoma identification. A novel parameter, the fractal dimension, is calculated on the cup and disc areas using the box counting technique. The results show that the proposed system efficiently classified the images. This illustrates that the image fusion technique is a better approach to improve the accuracy of glaucoma detection. The proposed method is tested on the HRF dataset and provided better outcomes compared to existing approaches. The results obtained correlate with those of the ophthalmologist and provide an accuracy of 97%. Therefore, the proposed system can be used for the early detection of glaucoma and as a decision support system for ophthalmologists.

CONFLICT OF INTERESTS
The author(s) declare that there is no conflict of interests.
2,314.2
2021-01-01T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
Effect of Silicone Inlaid Materials on Reinforcing Compressive Strength of Weft-Knitted Spacer Fabric for Cushioning Applications

Spacer fabrics are commonly used as cushioning materials. They can be reinforced by using a knitting method to inlay materials into the connective layer, which reinforces the structure of the fabric. The compression properties of three samples that were fabricated by inlaying three different types of silicone-based elastic tubes and one sample without inlaid material have been investigated. The mechanical properties of the elastic tubes were evaluated and their relationship to the compression properties of the inlaid spacer fabrics was analysed. The compression behaviour of the spacer fabrics at an initial compressive strain of 10% is not affected by the presence of the inlaid tubes. The Young's modulus of the inlaid tubes shows a correlation with fabric compression. Amongst the inlaid fabric samples, the spacer fabric inlaid with highly elastic silicone foam tubes can absorb more compression energy, while that inlaid with silicone tubes of higher tensile strength has higher compressive stiffness.

Introduction
Cushioning materials can be found in many different types of apparel and wearable items that provide shock absorption and wearer protection. Traditionally, polymeric foams are used in insoles, bra cups, and protective apparel to deliver the cushioning function [1][2][3]. Recently, spacer fabrics which are weft or warp knitted have been used as an alternative to foam materials. They are now a viable option because they have a unique three-dimensional (3D) knitted structure and provide the products with higher air and water vapour permeabilities and breathability [4][5][6][7][8]. Spacer fabrics can also be readily found in daily life items, such as in shoes, chairs, car seats, carpets, mattresses, backpacks, etc. [9]. Compression behaviour is an important criterion for determining whether or not a material can offer suitable cushioning functions and hand feel for different end-uses. In terms of polymeric foam, the compression properties can be controlled by varying the composition and the density to accommodate various applications [10,11]. Spacer fabric consists of two surface layers that are connected by spacer yarns which form a connective layer. Variations in any component of spacer fabric can affect its mechanical properties and wear comfort. The compression properties of spacer fabrics have often been studied. The elasticity of yarns used in the surface layer is one of the factors that contributes to the compression properties [12][13][14]. The compression properties of weft-knitted spacer fabric have been found to be affected by the number of tuck stitches in the surface layers [15]. However, it has also been shown that the compression properties of spacer fabric are related to the connective layer [16][17][18]. Monofilament and multifilament yarns are commonly used as spacer yarns and can impart entirely different properties to the fabric [19][20][21]. The thickness, composition, connecting distance, inclination angle, and pattern of the spacer yarns can be used to control the thickness and compressive stiffness of the spacer fabric [22][23][24]. The previous studies related to the compression behaviour of spacer fabrics mainly focus on the structural properties. However, it can be challenging to produce a thin fabric with high compression strength and high energy absorption when using a conventional spacer fabric structure.
A spacer fabric that is thin enough for use in insoles or protective garments (less than 1 cm in thickness) could easily collapse from the stress produced by the human body [25]. However, if a more compact connective layer is used, the spacer fabric becomes heavy and stiff, thus reducing its cushioning ability. Hamedi et al. proposed the use of nickel-titanium alloy wires as the spacer yarn [26]. That spacer fabric shows an improvement in compression energy absorption; however, the cost of the fabric is also largely increased. Moreover, investigation into applying materials other than polyester or polyamide filament yarns in the connective layer of the spacer fabric is still limited. In this study, a composite structure that consists of additional silicone-based materials was investigated so as to enhance the cushioning properties of spacer fabric. A novel sandwich structure with inlaid silicone tubes in the connective layer of spacer fabric has previously been developed [27]. Silicone is a synthetic polymer with a silicon-oxygen main chain [28]. Silicone is flexible, flame retardant, and relatively inert. Silicone inlaid tubes offer extra support to reinforce the spacer structure so that the fabric can withstand pre-stress from the body during application without the flexibility and energy absorption properties being sacrificed. In order to further investigate the effect of inlaid materials on the properties of spacer fabric, samples made with three different kinds of silicone-based tubular materials were fabricated and evaluated. The main purpose was to understand the relationship between the mechanical properties of the inlaid tubular materials and the compression properties of spacer fabric. The findings can contribute to furthering the development of sandwich structured textile materials and enhance wearable cushioning products. The inlaid materials can become a new parameter in adjusting the compression and cushioning properties of knitted spacer fabrics, which allows the fabric to provide the desired energy absorption ability for various end-uses.

Materials
The yarns for knitting the surface layers of the spacer fabric samples included 450D 3-ply 100% polyester drawn textured yarn (LS 1/20, Amossa, Osaka, Japan) and 140D spandex yarn (Heng Jing Limit, Jiangsu, China). The spacer yarn was 100% polyester monofilament with a diameter of 0.12 mm (Nantong Ntec Monofilament Technology Co., Ltd., Nantong, China). Three types of silicone-based tubular materials, including silicone foam rods, silicone rods, and silicone hollow tubes (Yuema, Shanghai, China), were inlaid in the connective layer of the spacer fabrics, as listed in Table 1. The three types of inlay materials have a similar thickness but a different linear density. T1, which incorporates silicone foam as the tubular material, is relatively light in weight. The good elasticity and low density of silicone foams make them suitable for challenging applications such as shock absorbers, wound dressings, and joint sealants [29,30]. T2 and T3 both incorporate silicone rubber. Silicone rubbers can be made into tubes, hoses, gaskets, and seals [31]. T2 incorporates silicone in a solid rod form, while T3 is a hollow tube form.

Preparation of the Inlaid Spacer Fabric
Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan).
The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). 
The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Preparation of the Inlaid Spacer Fabric Three samples inlaid with each type of tubular material and one sample without any inlaid material were produced by using a 10-gauge v-bed flat knitting machine (SWG091N210G, Shima Seiki, Wakayama, Japan). 
The two surface layers were knitted with a single jersey structure and the connective layer was a spacer structure with a linking distance of 6 needles for all the samples. One course of the spacer fabric consisted of 2 courses of knit stitches on the surface and 6 tuck stitch courses of spacer yarn. Following a previous study, the tubular materials were inlaid into the connective layer with miss stitches [27]. Therefore, the tubular materials did not come into contact with the knitting needles and floated between the front and back needle beds. The inlaid course was carried out between the tuck courses of spacer yarn and hence the spacer yarns acted as a net to hold the silicone tubes in place. One course of inlay was inserted in every 4th knitting course of the spacer fabric ( Figure 1). The weight, thickness, and cross-sectional views of the four fabric samples are shown in Table 2. Evaluation of Mechanical Properties of Inlaid Tubular Materials Tensile and compression tests were conducted on the three tubular samples by using a universal testing machine (EZ-S, Shimadzu, Kyoto, Japan). The tensile test was conducted in accordance with ASTM D2731-15, the standard test method for elastic properties of elastomeric yarns. The tubular sample was mounted between a pair of jaws (Figure 2a). The gauge length was set at 50 mm with a pre-tension of 2.55 g. The sample was subjected to 5 loading and unloading cycles. The sample was extended at a rate of 500 mm/min, held at the maximum extension limit for 30 s, and returned at a rate of 100 mm/min. The maximum extension was set at 300% of the gauge length. As T1 and T2 failed to extend to the 300% strain, 75% of the elongation at first break was used as the maximum extension instead. Therefore, T1 and T2 were extended up to 202% and 224% of the gauge length, respectively. The compression test was carried out by using a pair of compression plates. The sample was mounted onto the centre of the plate at a length of 118 mm. Double-sided tape was used to hold the sample in place during testing (Figure 2b). The compression speed was 20 mm/min with a maximum compression displacement of 0.6 mm. The samples were conditioned under a standard environment (20 ± 2 • C, 65 ± 2% relative humidity) for 24 h before they were tested. Evaluation of Mechanical Properties of Inlaid Tubular Materials Tensile and compression tests were conducted on the three tubular samples by using a universal testing machine (EZ-S, Shimadzu, Kyoto, Japan). The tensile test was conducted in accordance with ASTM D2731-15, the standard test method for elastic properties of elastomeric yarns. The tubular sample was mounted between a pair of jaws ( Figure 2a). The gauge length was set at 50 mm with a pre-tension of 2.55 g. The sample was subjected to 5 loading and unloading cycles. The sample was extended at a rate of 500 mm/min, held at the maximum extension limit for 30 s, and returned at a rate of 100 mm/min. The maximum extension was set at 300% of the gauge length. As T1 and T2 failed to extend to the 300% strain, 75% of the elongation at first break was used as the maximum extension instead. Therefore, T1 and T2 were extended up to 202% and 224% of the gauge length, respectively. The compression test was carried out by using a pair of compression plates. The sample was mounted onto the centre of the plate at a length of 118 mm. Double-sided tape was used to hold the sample in place during testing ( Figure 2b). 
The compression speed was 20 mm/min with a maximum compression displacement of 0.6 mm. The samples were conditioned under a standard environment (20 ± 2 °C, 65 ± 2% relative humidity) for 24 h before they were tested. Evaluation of Compression Properties of the Spacer Fabrics A compression test on the fabric samples was carried out by using the same testing machine along with a pair of compression plates with a diameter of 118 mm. The fabric samples were prepared with dimensions of 50 mm × 50 mm. The compression rate was 12 mm/min with a maximum compression stress of 60 kPa. Four specimens of each sample were tested. The compression energy of each sample was calculated as the integral of the compressive loading (WC) and unloading (WC'). All of the fabric samples were allowed to relax for one week after released from the knitting machine and stored in a standard environment (20 ± 2 °C, 65 ± 2% relative humidity) before testing. ANOVA was adopted to analyse the effect of the inlaid materials on the compression strain and compression energy. A Sidak post hoc test was used to analyse the effect between pairs. The alpha level was set at 0.05 for statistical significance. Evaluation of Compression Properties of the Spacer Fabrics A compression test on the fabric samples was carried out by using the same testing machine along with a pair of compression plates with a diameter of 118 mm. The fabric samples were prepared with dimensions of 50 mm × 50 mm. The compression rate was 12 mm/min with a maximum compression stress of 60 kPa. Four specimens of each sample were tested. The compression energy of each sample was calculated as the integral of the compressive loading (WC) and unloading (WC'). All of the fabric samples were allowed to relax for one week after released from the knitting machine and stored in a standard environment (20 ± 2 • C, 65 ± 2% relative humidity) before testing. ANOVA was adopted to analyse the effect of the inlaid materials on the compression strain and compression energy. A Sidak post hoc test was used to analyse the effect between pairs. The alpha level was set at 0.05 for statistical significance. Analysis of the Tensile Properties of the Inlaid Tubular Materials The plotted loading and unloading curves of the first and the fifth cycles of the tensile test of the three tubular samples are presented in Figure 3a,b. The force at 100% and 200% elongation of the tubes at the first and fifth loading cycles and the fifth unloading cycle, together with the Young's modulus measured from the first extension loading are shown in Figure 3c,d. Figure 3e shows the displacement-force curves obtained from the compression test. The three tubular samples show very different non-linear elastic behaviours. Silicone foam is a porous viscoelastic polymer foamed from silicone rubbers [32,33]. Silicone foam has the properties of silicone combined with foam properties, light weight, and good flexibility [34]. T1 is the most elastic and has the lowest Young's modulus amongst the three tubular samples. A relatively small force is required to extend T1 and the loss in elastic hysteresis is also small. T2 is solid rod form of silicone, has the highest Young's modulus, requires the largest force for extension, and shows a large hysteresis, especially in the first cycle of extension. T3 is hollow tube form of silicone and therefore has a lower weight and tensile strength than T2. T3 can be extended to the longest length at break. 
Analysis of the Tensile Properties of the Inlaid Tubular Materials
The plotted loading and unloading curves of the first and the fifth cycles of the tensile test of the three tubular samples are presented in Figure 3a,b. The force at 100% and 200% elongation of the tubes at the first and fifth loading cycles and the fifth unloading cycle, together with the Young's modulus measured from the first extension loading, are shown in Figure 3c,d. Figure 3e shows the displacement-force curves obtained from the compression test. The three tubular samples show very different non-linear elastic behaviours. Silicone foam is a porous viscoelastic polymer foamed from silicone rubbers [32,33]. It combines the properties of silicone with those of a foam: light weight and good flexibility [34]. T1 is the most elastic and has the lowest Young's modulus amongst the three tubular samples. A relatively small force is required to extend T1 and the loss in elastic hysteresis is also small. T2, a solid silicone rod, has the highest Young's modulus, requires the largest force for extension, and shows a large hysteresis, especially in the first cycle of extension. T3, a hollow silicone tube, has a lower weight and tensile strength than T2 and can be extended to the longest length at break. In comparison to T1, T3 requires a slightly higher force of 0.1 N to extend to a strain of 100% but 0.1 N less to extend to a strain of 200%. T2 has the highest compressive stiffness, followed by T3, whereas T1 is the softest material and most easily compressed. By inlaying these three tubular samples, which differ in tensile strength, elasticity, and stiffness, into the spacer fabric, the effect of the mechanical properties of the inlaid materials on the fabric compression properties can be identified.
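As a hedged sketch of how the Young's modulus from the first extension loading can be extracted, the force readings, gauge length handling, and the tube cross-sectional area below are assumed for illustration, not measured values:

```python
import numpy as np

# Hypothetical force-extension readings from the first loading cycle (mm, N);
# the gauge length of 50 mm matches the test setup above, the area is assumed.
extension_mm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
force_n = np.array([0.0, 0.05, 0.11, 0.16, 0.22, 0.27])

gauge_mm = 50.0
area_mm2 = 4.5  # assumed cross-sectional area of the tube, mm^2

strain = extension_mm / gauge_mm      # dimensionless
stress_mpa = force_n / area_mm2       # N/mm^2 == MPa

# Young's modulus as the slope of a linear fit over the initial region
slope, intercept = np.polyfit(strain, stress_mpa, 1)
print(f"Estimated Young's modulus: {slope:.2f} MPa")
```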
Effect of the Inlaid Tubes on the Compression Behaviour of Spacer Fabric
The compression stress-strain curves, fabric strain at a stress of 60 kPa, and the compression energy of the four fabric samples are shown in Figure 4. At a compressive strain of 0 to 10%, the compression behaviour of the four fabrics is very similar because they are constructed with the same surface and connective structures made of the same materials. The initial stress of up to 3.5 kPa compresses the loose surface layers and tightens the spacer structure. This shows that the initial softness of the spacer fabric is not affected by the presence of the inlaid tube in the connective layer. When the compression stress is further increased, the monofilament yarns start to deform and buckle. F0 starts to collapse, thus showing a decrease in the slope of the stress-strain curve and entering a plateau stage at a stress of 8.5 kPa. In the plateau stage, the monofilament yarns can no longer hold the connective structure, which leads to the shearing of the fabric layers and rotation of the yarns. The fabric can be easily compressed with a small increase in stress. FT1, FT2, and FT3 contain inlaid tubes that give extra support to the structure and withstand some of the applied stress. As shown in Figure 4a, the inlaid spacer fabrics can withstand a higher compression stress than the one without inlay. This supports the finding that inlaid yarns decrease the deformation ability of knitted fabrics [35]. The inlaid tubes have different Young's moduli and mechanical properties, and thus different compression properties are found for the three fabrics. FT1 reaches the plateau stage at 11.1 kPa of compression. The plateau stage of FT1 covers a smaller range of strain when compared with F0, because the inlaid silicone foam rods act as a buffer to absorb a certain amount of the compression energy applied to the connective layer. On the other hand, there is no prominent plateau stage for FT2 and FT3. The inlaid tubes T2 and T3 are relatively stiff and can withstand most of the compression stress acting on the connective layer, exceeding the energy absorption capacity of the monofilament yarns. Therefore, the plateau stage that typically appears in the compression curve of spacer fabric is not shown for FT2 and FT3. The ANOVA results show significant differences (p < 0.05) between the four samples in the fabric strain at a stress of 60 kPa, WC, and WC'. For the fabric strain at 60 kPa, F0 and FT1 show significant differences with all the other samples, while there is no significant difference between FT2 and FT3. At a stress of 60 kPa, F0 was compressed to the highest strain of 61%. With the silicone foam rod inlay, the strain at 60 kPa of stress for FT1 decreases by 58%.
The fabric structure of FT2 and FT3 is supported by the relatively stiffer inlaid tubes and hence even smaller strains are shown at a stress of 60 kPa. Moreover, the spacer fabrics with inlay have a significantly higher WC than the conventional spacer fabric with no inlay. This shows that the inlaid tubes help to absorb more compression energy than a regular spacer fabric, so the inlaid structure can provide better support against impact forces when used as a padding or cushioning material. Although no significant difference (p > 0.05) in WC was shown in the pairwise comparisons of the three spacer fabrics with inlay, FT1 shows a significant difference with FT2 and FT3 in WC'. The compression behaviour and the compression energy of the spacer fabric can thus be affected by the inlaid material used.
Relationship between Properties of Inlaid Tubes and Spacer Fabric Properties
The relationship between the elasticity of the inlaid tubes and the compression properties of the spacer fabric was further investigated. In Figure 5, the logarithmic relationships between the Young's modulus of the inlaid tube samples and both the WC of the spacer fabric samples and the fabric strain at a stress of 60 kPa show a high coefficient of determination (R2 > 0.9). The inlaid tubular materials show a significant effect on the compression behaviour of the spacer fabric. Amongst the three types of inlaid spacer fabrics, FT1 shows the highest WC. By inlaying a softer and more elastic material such as the silicone foam rods, the spacer fabric can absorb a larger amount of compression energy. T2, however, is heavy and has a high tensile strength. Therefore, FT2 is stiffer than FT1 and FT3, especially when the compressive strain is above 35%, where the monofilament yarns have buckled and the inlaid tubes mainly support the fabric against compression forces. The fabric compression behaviour of FT2 and FT3 is similar. The hollow silicone tube, T3, has a Young's modulus and compressive stiffness that range between those of T1 and T2. Therefore, FT3 has a slightly higher WC than FT2. As only three samples were studied, it is difficult to generalise the results to all the different types of inlay materials and inlaid spacer fabrics. However, a correlation between the tensile properties of the silicone foam rod, silicone rod, and silicone hollow tube used in this study and the compression properties of the corresponding inlaid spacer fabrics is observed: the mechanical properties of the inlaid materials can affect the compression properties of the inlaid spacer fabric.
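A minimal sketch of the logarithmic fit behind Figure 5, with invented (Young's modulus, WC) pairs standing in for the measured ones; it fits WC = a·ln(E) + b and reports the coefficient of determination:

```python
import numpy as np

# Hypothetical (Young's modulus, compression energy) pairs for the three
# inlaid fabrics; the paper reports only that the logarithmic fit gives R^2 > 0.9.
E = np.array([0.4, 2.1, 5.6])    # MPa, assumed
wc = np.array([9.8, 7.9, 7.1])   # kPa (stress integrated over strain), assumed

# Fit wc = a*ln(E) + b, then compute the coefficient of determination R^2
a, b = np.polyfit(np.log(E), wc, 1)
pred = a * np.log(E) + b
ss_res = np.sum((wc - pred) ** 2)
ss_tot = np.sum((wc - wc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"wc = {a:.2f}*ln(E) + {b:.2f}, R^2 = {r2:.3f}")
```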
Conclusions
The effect of inlaying tubular materials in the connective layer of spacer fabric on compression reinforcement has been investigated. Three weft-knitted spacer fabric samples inlaid with different tubular materials and one conventional spacer fabric without inlaid material as the reference were fabricated. The mechanical properties of the tubular samples, the compression properties of the fabric samples, and the relationship between them were evaluated. The following conclusions were made based on the findings:
• The compression behaviour of the spacer fabric at an initial compressive strain of up to 10% is not affected by the presence of inlaid tubes in the connective layer.
• The inlaid spacer fabrics require higher stress to enter the plateau stage than the conventional spacer fabric. When an inlaid material with higher tensile strength and compression strength is used, no obvious plateau stage can be found in the compression stress-strain curves of the fabric.
• The inlaid spacer fabrics not only have higher compression strength but can also absorb more compression energy than the conventional spacer fabric. The inlaying of elastic materials such as silicone foam or silicone rods effectively reinforces the spacer fabric.
• Different inlay materials with different Young's moduli and tensile behaviours can affect the compression energy and stiffness of the resultant fabrics. The spacer fabric inlaid with silicone foam rods, which have a lower tensile strength and compression strength than silicone rods and silicone hollow tubes, can absorb more compression energy. On the other hand, the spacer fabric inlaid with silicone rods, which have a high tensile strength and compression strength, has the highest compressive stiffness amongst the fabric samples.
A better understanding of the effect of different types of inlaid tubes on the compression properties of weft-knitted spacer fabric is provided. The findings can be used as a reference in the design and development of spacer fabrics to meet the requirements of various cushioning applications. Conflicts of Interest: The authors declare no conflict of interest.
7,893.2
2021-10-22T00:00:00.000
[ "Materials Science" ]
Blocks with Equal Height Zero Degrees
We study blocks all of whose height zero ordinary characters have the same degree. We suspect that these might be the Broué-Puig nilpotent blocks.
Introduction
The celebrated nilpotent blocks of finite groups introduced by M. Broué and L. Puig in 1980 ([8]) are locally defined in terms of the Alperin-Broué subpairs ([1]). There is a general consensus that nilpotent blocks are the most natural blocks from the local point of view. It is not easy, however, to check if a block is nilpotent or not, and to have a global characterization of them, especially one that can be detected in the character table of the group, would be quite interesting. Here we propose to study blocks B of a finite group G all of whose height zero characters χ ∈ Irr 0 (B) have the same degree d. This property of blocks, which can easily be detected in the character table of G, seems to appear quite naturally in block theory, and it deserves some consideration. The blocks all of whose irreducible characters have the same degree were already considered by T. Okuyama and Y. Tsushima in [35]. In a nilpotent block B all height zero degrees are equal, and we suspect that the converse might be true. In this paper, we are able to prove this in some cases, with quite different arguments. If B is the principal block of G, or if the defect group D of B is normal in G, or if D is abelian (and we assume the Height Zero Conjecture), then the blocks with equal height zero character degrees are nilpotent. These results constitute Sections 3, 4, and 5 below. The most difficult result in this paper, to which a large extent of it is devoted, is to prove that the blocks of simple groups with equal height zero degrees have abelian defect groups and satisfy Brauer's Height Zero Conjecture. By our previously mentioned result, this implies that blocks with equal height zero degrees are also nilpotent. This certainly agrees with the recent work of J. An and C. Eaton in which they prove that nilpotent blocks of simple groups have abelian defect groups for p > 2 [2]. The study of blocks of p-solvable groups with equal height zero degrees, which we do in the last section of the paper, leads to a variation of a classical large orbit question which does not seem easy to solve and which has interest in its own right. (Some recent partial results are given in [14].) This new type of orbit problem has connections with delicate questions on the p ′ -character degrees of finite groups. Finally, let us mention that the blocks B such that all character degrees χ(1) are p-powers for χ ∈ Irr(B) give another example of blocks with equal height zero character degrees. These blocks were proved to be nilpotent by work of G. R. Robinson and the second author ([32]).
Date: September 23, 2009. The first author thanks the Isaac Newton Institute for Mathematical Sciences, Cambridge, for its hospitality during the preparation of part of this work.
EHZD blocks and Nilpotent Blocks
Suppose that G is a finite group, p is a prime, and B is a p-block of G. In general, we use the notation in [29]. Hence Irr(B) are the irreducible complex characters in B, IBr(B) are the irreducible Brauer characters in B, and Irr 0 (B) are the height zero characters of B. For the sake of brevity, let us say that B is EHZD (equal height zero degrees) if there is an integer d such that χ(1) = d for all χ ∈ Irr 0 (B).
Recall that a block B is nilpotent if whenever (Q, b Q ) is a B-subpair (that is, b Q is a block of QC G (Q) such that (b Q ) G = B), then N G (Q, b Q )/C G (Q) is a p-group. If B is nilpotent, then we know that IBr(B) = {ϕ} by Theorem (1.2) of [8]. Also, if χ ∈ Irr(B) has height zero, then by (3.11) on page 126 of [8], we have that χ(1) = ϕ(1). It then follows that all irreducible height zero characters in B have the same degree. Thus, as we mentioned in the introduction, nilpotent blocks are EHZD blocks. (We also notice here that in a nilpotent block all height zero characters are modularly irreducible. This condition, if not equivalent, seems also closely related to nilpotency, as we shall point out in several places of this paper.)
Principal blocks
If B is the principal block of G, then (Q, b Q ) is a B-subpair if and only if b Q is the principal block of N G (Q) (by the Third Main Theorem). Since the principal block b Q is N G (Q)-invariant, we conclude that B is nilpotent if and only if N G (Q)/C G (Q) is a p-group for every p-subgroup Q of G. Hence B is nilpotent if and only if G has a normal p-complement, by a classical theorem of Frobenius. Theorem 3.1. Let G be a finite group, let p be a prime and let B be the principal block of G. Then the following conditions are equivalent: (a) All height zero χ ∈ Irr(B) have the same degree. (b) χ 0 ∈ IBr(G) for every height zero χ ∈ Irr(B). (c) B is a nilpotent block. Proof. In Section 2 we have pointed out that (c) implies (a) and (b). Suppose now that all height zero characters in B have the same degree. Hence all non-linear characters in B have degree divisible by p. Then G has a normal p-complement by Corollary 3 of [23] and so B is nilpotent. Now, suppose that all the height zero (that is, p ′ -degree) characters in B lift an irreducible Brauer character of G. We are going to use a theorem of Pahlings which asserts that if ϕ ∈ IBr(G) is linear and all nonlinear characters χ ∈ Irr(G) with decomposition number d χϕ ≠ 0 have degrees divisible by p, then G has a normal p-complement. (See Theorem 2 of [33].) Write ϕ = 1 G ∈ IBr(G) for the trivial Brauer character of G, and suppose that χ is non-linear with d χϕ ≠ 0. If χ has p ′ -degree, then by hypothesis, χ 0 = 1 G and therefore χ is linear. This is not possible. Hence, we conclude that p divides χ(1). It follows that G has a normal p-complement by Pahlings' theorem.
Abelian Defect Groups
In this section we prove that EHZD blocks with abelian defect groups are exactly the nilpotent blocks (assuming Brauer's Height Zero Conjecture). Theorem 4.1. Let B be a block with an abelian defect group, and assume that Irr(B) = Irr 0 (B). Then the following conditions are equivalent: (a) All height zero χ ∈ Irr(B) have the same degree. (b) B is a nilpotent block. Proof. In [8] it is stated that blocks with abelian defect group and inertial index one are nilpotent. Now, suppose that B is nilpotent with defect group D. Then N G (D, b D )/C G (D) is also a p-group, and we conclude that N G (D, b D ) = DC G (D). That is, B has inertial index one.
Normal Defect Groups
In this Section we prove that EHZD blocks with a normal defect group are nilpotent. The following should be well-known. Lemma 5.1. Let B be a block with defect group D ⊳ G and let b D be a block of DC G (D) covered by B. Let B̂ be the Fong-Reynolds correspondent of B over b D . If B̂ is nilpotent, then B is nilpotent. Proof. Let (Q, b Q ) be a B-subpair; we need to show that N G (Q, b Q )/C G (Q) is a p-group. For this we may replace (Q, b Q ) by any G-conjugate. Since (b Q ) G = B, we have that Q ⊆ D (Theorem (4.14) of [29]). Now let e = (b Q ) DC G (Q) . We have that e G = B by the transitivity of induction.
Now, if f is a block of DC G (D) covered by e, we have that e = f DC G (Q) by Corollary (9.21) of [29]. Hence e G = B and f G = B. Hence e covers (b D ) y , and therefore (b D ) yz = b D for some z ∈ C G (Q). Since T = DC G (D), we see that yz ∈ DC G (D) and therefore y ∈ DC G (Q). Thus N G (Q, b Q )/C G (Q) is a p-group and B is nilpotent. Theorem 5.2. Let B be a block of G with a normal defect group D. Then the following conditions are equivalent: (a) All height zero χ ∈ Irr(B) have the same degree. (b) χ 0 ∈ IBr(G) for every height zero χ ∈ Irr(B). (c) B is a nilpotent block. Proof. We already know that (c) implies (a) and (b). We prove by induction on |G| that (a) implies (c). In an analogous way, we could prove that (b) implies (c). Let b D be a block of DC G (D) inducing B with defect group D. Let T be the stabilizer in G of b D , and let B̂ be the Fong-Reynolds correspondent of B over b D . If all height zero χ ∈ Irr(B) have the same degree, then the same happens in B̂ by the Fong-Reynolds correspondence [29, Theorem (9.14)]. By Lemma (5.1), we may assume that T = G. Now by Reynolds
Quasi-Simple Groups
From now until the last section of the paper, we are devoted to proving the following result. Theorem 6.1. Let S be a finite non-abelian simple group, G a quasi-simple group with G/Z(G) ∼= S, and p a prime. Assume that B is a p-block of G such that all characters in Irr 0 (B) have the same degree. Then the defect group of B is abelian and thus B is nilpotent, unless possibly one of the following holds: (1) B is a faithful block for the 2-fold covering group 2.A n of the alternating group A n (n ≥ 14) (a so-called spin-block), or (2) B is a quasi-isolated block for an exceptional group of Lie type and p is a bad prime. Theorem 5.2 shows that in order to check Theorem 6.1 it suffices to prove the following for any block B of G all of whose characters in Irr 0 (B) have the same degree: (1) the defect group of B is abelian, and (2) B satisfies the Height Zero Conjecture. In order to do this, we invoke the classification of finite simple groups as well as the Deligne-Lusztig theory of characters of finite reductive groups and the fundamental results of Bonnafé-Rouquier and Cabanes-Enguehard on blocks. It will be given in several steps in the subsequent sections. The proof in the case of alternating groups will lead to a relative hook formula for the character degrees in p-blocks of the symmetric group.
Unipotent blocks
In this section we consider the unipotent blocks of finite groups of Lie type. We introduce the following standard setup: Any non-exceptional Schur covering group of a finite simple group of Lie type can be obtained as G := G F , where G is a simple algebraic group of simply-connected type over the algebraic closure of a finite field, and F : G → G a Frobenius map with finite group of fixed points G F , with the sole exception of the Tits group 2 F 4 (2) ′ , which will be treated later in Proposition 9.4. Let G * denote a group in duality with G and with corresponding Frobenius map F * : G * → G * and fixed points G * := G * F * . Let r denote the defining characteristic of G and q the absolute value of all eigenvalues of Frobenius on the character lattice of an F -stable torus of G, a half-integral power of r. We will then also write G = G(q) in order to indicate the corresponding value of q. By the fundamental work of Lusztig, the irreducible characters of G can be partitioned into so-called Lusztig series indexed by conjugacy classes of semisimple elements s in G * . The characters in the Lusztig series E(G) := E(G, 1) corresponding to the trivial element in G * are the so-called unipotent characters. These can be viewed as being the building blocks of the ordinary character theory of finite groups of Lie type.
Again by results of Lusztig, the unipotent characters can be parametrized by a set depending only on the type of G, that is, on the Weyl group of G together with the action of F on it, not on q or r. Moreover, their degrees are given by the value at q of polynomials in one indeterminate of the form where n is either a power of 2 or a divisor of 120, and Φ i (x) denotes the ith cyclotomic polynomial over Q (see for example Chapter 13 of [10]). We write Deg(γ) ∈ Q[x] for the degree polynomial of a unipotent character γ ∈ E(G).
7.1. Specializations of degree polynomials. We start by investigating specializations of degree polynomials of unipotent characters. We first discuss the question when two different degree polynomials f 1 , f 2 can lead to the same character degree f 1 (q) = f 2 (q). Proposition 7.2. Let f 1 , f 2 be the degree polynomials of two unipotent characters of an exceptional group of Lie type G = G(q). If f 1 (q) = f 2 (q) for some prime power q > 1 (respectively, square root of some odd power of a prime for the Suzuki or Ree groups), then f 1 = f 2 , up to finitely many exceptions. Proof. According to [10, Chap. 13], n j | 120 for G = E 8 , and n j | 24 else. Now f 1 (q) = f 2 (q) implies that q^(a 1 −a 2 ) ∏ i=1,…,m Φ i (q)^(a i,1 −a i,2 ) ∈ Q has numerator and denominator a divisor of 120. Since the second factor is coprime to q, this holds in fact for both factors. Then Lemma 7.1(b) shows that q ≤ 121 for G = E 8 , and q ≤ 25 for the other types. For these finitely many values of q and finitely many types the assertion can be checked from the tables of degree polynomials. In fact, the additional restrictions in Lemma 7.1(b) allow us to restrict the number of necessary computations even further. Note that none of the exceptions is a perfect group. Unfortunately, there are infinitely many exceptions to the conclusion of the previous proposition in the case of classical groups, so we will choose a different approach for those.
7.2. e-symbols and degrees. We briefly recall the notion of e-symbols and their associated degrees; see [26]. Let e ≥ 1 be an integer. An e-symbol is a sequence S = (S 1 , . . . , S e ) of e strictly increasing sequences S i = (s i1 < . . . < s im ) of non-negative integers of equal length m. The rank of an e-symbol S is defined as We define an equivalence relation on e-symbols as the reflexive, symmetric and transitive closure of the relation ∼ given by (s i1 , . . . , s im ) ∼ (0, s i1 + 1, . . . , s im + 1). There is a natural 1-1 correspondence between e-tuples of partitions π = (π 1 , . . . , π e ) ⊢ r of r and equivalence classes of e-symbols of rank r, as follows: by adding zeros we may assume that all π i = (π i1 ≤ . . . ≤ π im ) have the same number of parts. It is easily verified that S(π) = (S 1 , . . . , S e ), with S i := (π i1 , π i2 + 1, . . . , π im + m − 1) for 1 ≤ i ≤ e, has indeed rank r, and is well-defined up to equivalence. Let (v; u 1 , . . . , u e ) be indeterminates over Q. For an e-symbol S we define a rational function f S as in [26, (5.12)]. It can be checked that the rational function f S only depends on the equivalence class of the e-symbol S. We shall also write f π for f S with S = S(π). The following connection to the imprimitive complex reflection group G(e, 1, r) ∼= C e ≀ S r will be important for us. The irreducible complex characters of the wreath product G(e, 1, r) can be parametrized by e-tuples of partitions (π 1 , . . . , π e ) of r (see for example [26, (2A)]), hence by equivalence classes of e-symbols of rank r. Now let H = H(W, u) denote the cyclotomic Hecke algebra for W = G(e, 1, r) with parameters u = (v; u 1 , . . . , u e ).
This carries a canonical symmetrizing form. By the main result of Geck-Iancu-Malle [18] the Schur element (with respect to this form) of the irreducible character of H indexed by the multipartition (π 1 , . . . , π e ) ⊢ r is f S −1 , where S = S(π 1 , . . . , π e ) (see Conjecture 2.20 in [26]). In particular, specializing v to 1 and u j to the eth roots of unity ζ j := exp(2πij/e) we obtain where d S denotes the degree of the irreducible character of G(e, 1, r) indexed by S. For later use let's record the following special cases. If r = 1, so G(e, 1, r) is the cyclic group C e , then a multipartition π = (π 1 , . . . , π e ) ⊢ r is uniquely determined by the unique i such that π i = (1). The corresponding e-symbol S has S i = (1), S j = (0) for j ≠ i, and More generally the e-symbols with S i = (r), S j = (0) for j ≠ i parametrize linear characters ϕ i of G(e, 1, r), for 1 ≤ i ≤ e. Evaluation of the defining formula shows that then
7.3. d-Harish-Chandra series and cyclotomic Hecke algebras. The blocks of finite groups of Lie type are closely related to so-called d-Harish-Chandra series. Let G be as above, the group of fixed points of a simple algebraic group G under a Frobenius map. For any d ∈ N, there is a notion of d-split Levi subgroup L of G (an F -stable Levi subgroup of G), and of d-cuspidal unipotent character of L := L F ; see for example [5]. A pair (L, λ) consisting of a d-split Levi subgroup L ≤ G with a d-cuspidal unipotent character λ ∈ E(L) of L is called a d-cuspidal pair. Its relative Weyl group is then defined as W G (L, λ) := N G (L, λ)/L. The set of unipotent characters of G admits a natural partition into d-Harish-Chandra series E(G, (L, λ)), where (L, λ) runs over the d-cuspidal pairs in G modulo conjugation. Furthermore, for each d-cuspidal pair (L, λ), there is a bijection between its d-Harish-Chandra series and the irreducible characters of its relative Weyl group W G (L, λ). The degree polynomials are then given by the following d-analogue of Howlett-Lehrer-Lusztig theory (Theorem 7.3): for any ϕ ∈ Irr(W G (L, λ)) there exists a rational function D ϕ (x) ∈ Q(x) with zeros and poles only at roots of unity or zero, but not at primitive dth roots of unity, satisfying Deg(γ) = Deg(λ) D ϕ for the unipotent character γ ∈ E(G, (L, λ)) corresponding to ϕ; see [28, Thm. 4.2] and the references given there. In fact, the D ϕ (x) are inverses of Schur elements of a cyclotomic Hecke algebra attached to W G (L, λ) with respect to its canonical symmetrizing form. For example, if W G (L, λ) ∼= G(e, 1, r), then D ϕ (x) is a suitable specialization of f ϕ (u) as defined above. We first determine for which parameters (u 1 , . . . , u e ) all Schur elements of the cyclotomic Hecke algebra for the cyclic group G(e, 1, 1) are equal (Lemma 7.4), where ζ ∈ K is a primitive dth root of unity. Proof. Equivalently we may assume that and ′ denoting the derivative with respect to x, Since a e = 1 we conclude b = e, and thus a j = 0 for j = 1, . . . , e − 1. The claim follows. The following result will allow us to show the existence of different height zero degrees in blocks of classical groups, that is, groups of type A n , B n , C n , D n , 2 A n or 2 D n . Proposition 7.5. Let G be of classical type and let (L, λ) be a d-cuspidal pair of G with W G (L, λ) ≠ 1. Then not all unipotent characters in E(G, (L, λ)) parametrized by linear characters of W G (L, λ) have the same degree. Proof. We will show that W G (L, λ) has linear characters ϕ 1 , ϕ 2 such that D ϕ 1 (q) ≠ D ϕ 2 (q). The claim then follows from Theorem 7.3. In groups of classical type, there are three essentially different possibilities for the structure of the relative Weyl group (see [4, (3B)]). Firstly, W G (L, λ) could be a symmetric group S n . This happens if and only if either G = SL n (q) and d = 1, or G = SU n (q) and d = 2.
In both cases, all of E(G) is just one d-Harish-Chandra series and we may take the trivial and the Steinberg character, which correspond to the two linear characters of S n and have distinct degrees. The second possibility is that W G (L, λ) ∼= G(m, 1, r) for some m ≥ 2. This occurs for all other d-Harish-Chandra series E(G, (L, λ)) in classical groups for which λ is not parametrized by a so-called degenerate symbol. Let ϕ i denote the linear character of G(m, 1, r) parametrized by the multipartition (π 1 , . . . , π m ) with π i = (r). According to (1), D ϕ i = c f i (q) for some non-zero c not depending on i, and the parameters q are certain powers of q, up to sign, as follows (see [4, Bem. 2.10, 2.14, 2.19]): (I) for G = SL n (q), d = 1, we have m = d and q = (q d ; 1, q b 1 d+1 , q b 2 d+2 , . . . , q b d−1 d+d−1 ), (I') for G = SU n (q), d = 2, we have m = d * , and q is obtained from the parameters in case (I) for d * by replacing q by −q, (II) for G of type B n , C n , D n , 2 D n and d odd we have m = 2d, e = d and q = (q e ; 1, q b 1 e+1 , . . . , q b e−1 e+e−1 , −q b e e , . . . , −q b 2e−1 e+e−1 ), (II') for G of type B n , C n , D n , 2 D n and d ≡ 2 (mod 4) we have m = d, and q is obtained from the parameters in case (II) for d * by replacing q by −q, (III) for G of type B n , C n , D n , 2 D n and d ≡ 0 (mod 4) we have m = d, e = d/2 and where the b i are non-negative integers which are determined by λ. Here, for d ∈ N, d * is defined by Note that it can never happen that v k u i − u j = 0 for the above choices of parameters. Furthermore, we claim that there is at least one i with |u i | > 1. Indeed, otherwise we are necessarily in cases (II) or (II') and e = 1. But then, since λ is not parametrized by a degenerate symbol, b 1 > 0 by the definition of the b i in [4, (2B)], so |u 2 | > 1, a contradiction. By our above reductions it suffices now to show that not all f i (q) are equal. For this, we estimate the q-power in f i (q) for two choices of i. For i = 1 we have u 1 = 1, and v k u 1 − u j is at most divisible by the q-power u j (at least when q is odd), and not divisible by q if k = 0, so f 1 (q) is at most divisible by the q-power ∏ j=2,…,m u j r−1 . Now let i be such that |u i | is maximal among the {u 1 , . . . , u m }. Then v k u i − u j is divisible by at least the q-power u j , so f i (q) is at least divisible by By our above observation we have For fixed j, this can only happen for at most one value of k, and only when j ≡ 1 + e (mod 2e) and we are in cases (II), (II') or (III). Thus, we get an additional factor at most 2 in the q-part of f 1 (q). It follows that the q-parts of f 1 (q) and f i (q) can only agree if |u i | = q = 2 and all other u j have absolute value 1. Thus e = 1, we are in cases (II) or (II') and q = (q; 1, −q). But then (or the same with q replaced by −q). Finally, we consider the case where λ is parametrized by a degenerate symbol, which can only happen in types D n and 2 D n . Then W G (L, λ) ∼= G(2m, 2, r), for some m ≥ 1. We denote by ψ 1 , . . . , ψ m the m distinct linear characters contained in the restrictions of the linear characters ϕ 1 , . . . , ϕ 2m from G(2m, 1, r) to its normal subgroup G(2m, 2, r) of index 2. Evaluation of [26, (5.12)] shows that D ψ i = c̃ g i (q) −1 for some constant c̃. Clearly, unless d = 1, we can argue as before to conclude that g 1 (q) ≠ g i (q) for a suitable index i. If d = 1 then W G (L, λ) ∼= G(2, 2, r) is the Weyl group of type D r , and we are in the principal 1-series of G.
Here instead we take the trivial and the Steinberg character, which have distinct degrees.
7.4. Unipotent blocks. After these combinatorial preparations we are ready to investigate unipotent blocks of groups of Lie type G = G(q); here a p-block of G is called unipotent if it contains at least one unipotent character of G. Theorem 7.6. Let B be a unipotent p-block of G = G(q), where p is not the defining characteristic r. Then either B is of defect zero, or B contains two unipotent characters of height 0 and of different degrees, which moreover have different r-parts unless r = 2. Proof. First assume that p is a good prime for G, odd, and not equal to 3 when G is of type 3 D 4 . Then by Cabanes-Enguehard [9, Thm. 22.9] the intersections of unipotent p-blocks with E(G) are just the d-Harish-Chandra series, where d is the multiplicative order of q modulo p. Let B be a unipotent p-block corresponding to the d-Harish-Chandra series of the d-cuspidal pair (L, λ). If L = G, so λ is a d-cuspidal character of G, then the defect group of B is trivial by [9, Thm. 22.9(ii)], whence B is of defect 0. If L < G, then W G (L, λ) ≠ 1. Now for γ ∈ E(G, (L, λ)), where ϕ := ρ(L, λ)(γ), and for ϕ ∈ Irr(W G (L, λ)), by Theorem 7.3(b). As p divides Φ d (q) by definition, this implies the same congruence (mod p). By the description in [9, Thm. 22.9(ii)], some unipotent character in B is of height zero. This shows that the unipotent characters in B of height 0 are precisely those γ ∈ E(G, (L, λ)) with ϕ = ρ(L, λ)(γ) of degree prime to p, for example the linear characters of W G (L, λ). For G of classical type it is shown in Proposition 7.5 that not all unipotent characters in B parametrized by linear characters of W G (L, λ) have the same degree. If G is of exceptional type and W G (L, λ) is cyclic, we may invoke Lemma 7.4 together with the parameters in [4, Tab. 8.1] to conclude. The relative Weyl groups W G (L, λ) for exceptional groups which are non-cyclic are listed in [4, Tab. 3.6] and [6, Tab. 1]. It is easy to check that these have two distinct character degrees prime to p, for all primes p which are good for G. But then the corresponding unipotent degrees must be distinct by Proposition 7.2, and of height 0 by Theorem 7.3(b). Next, if G is of classical type and p = 2, then all unipotent characters of G lie in the principal p-block of G, by [9, Th. 21.14]. Here, the trivial character and the Steinberg character have p-height 0 and different degrees. It remains to consider the case where G is of exceptional type and p is a bad prime for G (including the case of 3 D 4 with p = 3). There is no bad prime for 2 B 2 . The 2-blocks for 2 G 2 and the 3-blocks of 2 F 4 have been determined by Fong [17] resp. Malle [25, Bem. 1]: unipotent characters lie either in the principal block or are of defect zero. In the principal block the trivial and the Steinberg character have different degree.
Table 1. Non-principal p-blocks of positive defect for bad p
For the other types of exceptional groups, we use the description of unipotent blocks for bad primes p obtained by Enguehard [15, Th. A]. Here, again any unipotent p-block is either of defect 0, or it contains at least one non-trivial d-Harish-Chandra series. According to loc. cit. and the tables in [15, pp. 347-358], the non-principal unipotent blocks not of defect zero are as listed in Table 1 (up to Ennola duality and algebraic conjugacy; the notation is as in loc. cit.). In each case either the relative Weyl group has two distinct character degrees prime to p, in which case we may conclude as above, or we list two unipotent characters γ 1 , γ 2 in the corresponding d-Harish-Chandra series which are of p-height 0 and have distinct r-parts in their degrees (see [6, Tab. 2] for the list of d-Harish-Chandra series and [10, Ch.
13] for the degrees of unipotent characters). This completes the proof of Theorem 7.6. Note that the results hold even when the finite group G is not perfect, or even solvable. We now extend the result to unipotent blocks of arbitrary finite connected reductive groups. Theorem 7.7. Let G be a connected reductive group with a Frobenius map F : G → G and group of fixed points G := G F . Let B be a unipotent p-block of G, where p is not the defining characteristic r of G. Then either B is of central defect, and all characters of B have the same degree, or B contains two height 0 characters of different degrees. Moreover, these two degrees have different r-parts unless possibly if r = 2. Proof. The derived group [G, G] is semisimple, hence a central product G 1 • . . . • G r of simple algebraic groups. We assume the G i ordered such that G 1 , . . . , G s , for some s ≤ r, is a system of representatives for the F -orbits on {G 1 , . . . , G r }. Then, where m i is the size of the F -orbit of G i . Note that, in general, G ′ will be larger than the commutator subgroup of G. Modulo Z([G, G]) F = Z(G ′ ) we obtain a direct product Ḡ = Ḡ 1 × . . . × Ḡ s . Since unipotent characters restrict irreducibly to the F -fixed points of the derived group [24], B covers a unique block B ′ of G ′ . Furthermore unipotent characters have the center in their kernel, so the same holds for unipotent blocks. Thus B ′ corresponds to a unique block B̄ of the direct product Ḡ. This is a direct product B̄ = B̄ 1 × . . . × B̄ s of blocks B̄ i of Ḡ i . Now assume that one of the B̄ i is not of defect 0 for Ḡ i . Since Ḡ i is a central factor group of a group as in Theorem 7.6, B̄ i then contains two height 0 unipotent characters of different degrees, with different r-parts if r ≠ 2. Thus, the same is true for B̄, hence also for B ′ . By the above-mentioned irreducibility of restrictions, this then also holds for B. On the other hand, if all B̄ i are of defect 0, then so is B̄, so B ′ is of central defect, contained in Z([G, G]) F . But Z([G, G]) ⊆ Z(G) as G = [G, G]Z(G), so the block B is also of central defect in G, as claimed. Moreover, as each B̄ i contains a unique ordinary character, we also have Irr(B ′ ) = {χ ′ } for some ordinary (unipotent) character χ ′ of G ′ . Since this is the restriction of an irreducible character of G, and G/G ′ is abelian, all characters of G above χ ′ have the same degree, hence the height zero conjecture holds for B in this case. Proof. The first assertion is well-known. For the second, let ν i be a character of B lying above χ i , for i = 1, 2. Since p does not divide the index |G : N|, ν i is again of height 0. Furthermore, the assumption that gcd(r, |G : N|) = 1 and G/N is cyclic or Klein four implies by Clifford theory that ν 1 (1) ≠ ν 2 (1).
Blocks of groups of Lie type
Proposition 8.1. The assertion of Theorem 6.1 holds when S is a simple group of Lie type and p is the defining characteristic. Proof. By the result of Humphreys [21] the covering group G of S has exactly one p-block of defect zero, consisting of the Steinberg character, and all other p-blocks are of full defect, in one-to-one correspondence with the irreducible characters of Z(G). For the principal block, it is clear that there exist two height 0 characters of distinct degree (viz. the trivial character and at least one further non-linear character). For the remaining blocks, some more work is needed. By the above we may now assume that Z(G) ≠ 1, so in particular p is odd for classical groups not of types A n or 2 A n .
Recall that Z(G) is naturally isomorphic to the commutator factor group G * /[G * , G * ] of the dual group G * of G. If s ∈ G * is semisimple, the corresponding semisimple character χ s ∈ Irr(G) is of p ′ -degree given by χ s (1) = |G : C G * (s)| p ′ (see for example [28, (2.1)]; note that C G * (s) is not necessarily connected). So we are done if we can find two semisimple elements s 1 , s 2 ∈ G * whose centralizer orders have different p ′ -part.
Table 2. Tori and Zsigmondy primes for classical groups
Type                   | Order of T 1           | Order of T 2           | ℓ 1      | ℓ 2
A n (n ≥ 3 odd)        | (q^(n+1) − 1)/(q + 1)  | q^n + 1                | l(n + 1) | l(2n)
B n , C n (n ≥ 2 even) | q^n + 1                | (q^(n−1) + 1)(q + 1)   | l(2n)    | l(2n − 2)
B n , C n (n ≥ 3 odd)  | q^n + 1                | q^n − 1                | l(2n)    | l(n)
D n (n ≥ 4 even)       | (q^(n−1) − 1)(q − 1)   | (q^(n−1) + 1)(q + 1)   | l(n − 1) | l(2n − 2)
D n (n ≥ 5 odd)        | q^n − 1                | (q^(n−1) + 1)(q + 1)   | l(n)     | l(2n − 2)
2 D n (n ≥ 4)          | q^n + 1                | (q^(n−1) + 1)(q − 1)   | l(2n)    | l(2n − 2)
In Table 2 we have listed two maximal tori T 1 , T 2 of G * for each type of classical group G (by giving their orders, which determines them uniquely). Except for types B n , C n with n even this is Table 3.5 in Malle [27]. We write l(m) for a Zsigmondy prime divisor of q^m − 1. Then |T i | is divisible by the Zsigmondy prime ℓ i as indicated in the table, which exists unless G is of type A 1 , or G is of type A 2 , 2 A 2 or B 2 and i = 2. (Note that the case that q = 2 and G of type A 5 , A 6 or 2 A 6 does not concern us here, since then the center of G is trivial.) If |T i | is divisible by a Zsigmondy prime ℓ i , then there exist regular semisimple elements s i of order ℓ i in G * , that is, elements with centralizer order |C G * (s i )| = |T i |. If both Zsigmondy primes exist, this yields two semisimple characters of different degrees, and we are done. The only exceptional simply-connected groups with non-trivial center are those of types E 6 , 2 E 6 and E 7 . For these, we may argue as above using the maximal tori and Zsigmondy primes listed in Table 3. The proof is complete.
Table 3. Tori T 1 and T 2
We now turn to the non-defining primes for groups of Lie type. According to the work of Broué-Michel [7], for any p-block B of G there exists a unique G * -conjugacy class [s] of semisimple p ′ -elements of G * , such that some irreducible representation of B is in the rational Lusztig series attached to [s]. Let's write E p (G, s) for the union of all p-blocks of G associated with the class of the p ′ -element s ∈ G * . The blocks in E p (G, 1) are called unipotent. More generally, if G is disconnected, then a block of G F is called unipotent if it covers a unipotent block of (G • ) F . We need the following crucial result of Enguehard [16, Th. 1.6]: Theorem 8.2 (Enguehard). Assume that p is good for G, and different from 3 if F induces a triality automorphism on G. Let s ∈ G * be a semisimple p ′ -element, and B a p-block in E p (G, s). Then there exists a reductive group G(s) defined over F r , with corresponding Frobenius map again denoted by F , and a unipotent p-block b of G(s) := G(s) F , such that the defect groups of B and b are isomorphic and there is a height-preserving bijection Irr(B) → Irr(b). Here, G(s) • is a group in duality with C G * (s) • , and G(s)/G(s) • ∼= C G * (s)/C G * (s) • . In the case of p = 2 for classical groups, he proves [16, Prop. 1.5]: Theorem 8.3 (Enguehard). Assume that G is of classical type in odd characteristic. Let s ∈ G * be a semisimple p ′ -element. Then all 2-blocks in E 2 (G, s) have defect group isomorphic to a Sylow 2-subgroup of C G * (s) • . If moreover G is of type B n , C n or D n , then E 2 (G, s) is a single 2-block.
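As an aside (not from the paper), the Zsigmondy primes l(m) used in Tables 2 and 3 are easy to compute in small cases; the following sketch uses trial-division factoring, which is fine for small q and m, and returns None in the exceptional cases of Zsigmondy's theorem:

```python
def zsigmondy_prime(q, m):
    """Smallest prime dividing q**m - 1 but no q**k - 1 for 1 <= k < m,
    or None if no such prime exists, e.g. (q, m) = (2, 6)."""
    n = q**m - 1
    # collect the prime factors of n by trial division
    primes, t, d = [], n, 2
    while d * d <= t:
        if t % d == 0:
            primes.append(d)
            while t % d == 0:
                t //= d
        d += 1
    if t > 1:
        primes.append(t)
    for p in primes:
        if all((q**k - 1) % p != 0 for k in range(1, m)):
            return p
    return None

print(zsigmondy_prime(2, 4))  # 5: divides 2^4 - 1 = 15 but no smaller 2^k - 1
print(zsigmondy_prime(2, 6))  # None: the classical Zsigmondy exception
```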
Proposition 8.4. The assertion of Theorem 6.1 holds if G is quasi-simple of Lie type. Proof. By Proposition 8.1 we may assume that p is not the defining characteristic for G, and by Proposition 9.4 we may assume that S is not the Tits group 2 F 4 (2) ′ . Furthermore, by the remarks at the beginning of Section 7 we have that G = G F for some simple, simply connected algebraic group G with Frobenius map F : G → G. Let B be a p-block of G and s ∈ G * semisimple such that B ⊆ E p (G, s) (see above). First assume that s is not quasi-isolated in G * , that is, C G * (s) is a Levi subgroup of G * . Then by the result of Bonnafé-Rouquier [9, Th. 10.1] the block B is Morita-equivalent to a block b ⊆ E p (L, 1) where L is a Levi subgroup of G in duality with C G * (s), and Jordan decomposition gives a height preserving bijection from B to b. We may then conclude by Theorem 7.7. Next assume that p is good for G, different from 3 if G is of type 3 D 4 . Then by Theorem 8.2 there is a group G(s) in duality with the centralizer C := C G * (s) of s in G * and a height preserving bijection between B and a unipotent block b of G(s) := G(s) F with the same defect group as B. By [3, Cor. 2.9] the order a(s) := |C : C • | of the component group of C is prime to the defining characteristic r and divides the order of s. As s is a p ′ -element, this implies that a(s) is prime to p as well. Moreover, by loc. cit. C/C • is isomorphic to a subgroup of the fundamental group of G, hence either cyclic or a Klein four group. Now let b ′ be a p-block of the normal subgroup N := (C • ) F of C F = C G * (s) lying below b. We showed in Theorem 7.7 that any unipotent block of the connected group N with non-abelian defect group contains two height 0 characters whose degrees are divisible by different powers of the defining prime r. Thus, Proposition 7.8 applies in this case and the claim follows. Now assume that p = 2 and G is of classical type B n , C n , D n or 2 D n . Then E 2 (G, s) is a single 2-block by Theorem 8.3. By Jordan decomposition the character degrees in E 2 (G, s) are obtained from those in E 2 (C G * (s), 1) by multiplication with a common constant. If C G * (s) • is not a torus, the trivial character and the Steinberg character in E 2 (C G * (s), 1) have distinct degrees prime to p and the claim follows. On the other hand, if C G * (s) • is a torus, that is, s is a regular element in G * , then again by Theorem 8.3 of Enguehard the defect group of B is isomorphic to a Sylow 2-subgroup of C G * (s) • , hence abelian. Moreover, by the result of Lusztig [24], all characters in B have the same degree, whence B satisfies the height zero conjecture. Thus we may assume that G is of exceptional type, p is a bad prime and s is quasi-isolated. There are no quasi-isolated elements for 2 B 2 . The p-blocks for 2 G 2 , G 2 , 2 F 4 and 3 D 4 have been determined by Fong [17], Hiss-Shamash [19,20], Malle [25] and Deriziotis-Michler [12], respectively. The claim can be easily checked from those results. The remaining cases are the possible exceptions mentioned in the theorem.
Alternating and sporadic groups
In order to prove our main result for the alternating groups, we first derive a similar statement for blocks of the symmetric group. Recall that the irreducible characters of S n as well as the unipotent characters of GL n (q), where q is any prime power, are parametrized by partitions λ ⊢ n. We write χ λ resp. γ λ for the corresponding character of S n resp. of GL n (q).
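To make the combinatorics below concrete, here is a small sketch (not from the paper) computing character degrees of S n via the hook length formula and p-cores via beta-sets; as recalled next, two characters χ λ and χ µ lie in the same p-block of S n exactly when their p-cores agree:

```python
from math import factorial, prod

def hook_lengths(la):
    """All hook lengths of the Young diagram of the partition la
    (a weakly decreasing list of positive integers)."""
    conj = [sum(1 for part in la if part > j) for j in range(la[0])]
    return [la[i] + conj[j] - i - j - 1
            for i in range(len(la)) for j in range(la[i])]

def degree(la):
    """chi_lambda(1) by the hook length formula: n! / (product of hooks)."""
    return factorial(sum(la)) // prod(hook_lengths(la))

def p_core(la, p):
    """p-core of la, computed on the beta-set (first-column hook lengths):
    repeatedly replace a bead b by b - p while b - p is a free position,
    which corresponds to removing a p-hook on the p-abacus."""
    beta = {la[i] + len(la) - 1 - i for i in range(len(la))}
    moved = True
    while moved:
        moved = False
        for b in sorted(beta):
            if b >= p and (b - p) not in beta:
                beta.discard(b)
                beta.add(b - p)
                moved = True
    parts = [b - i for i, b in enumerate(sorted(beta))]
    return sorted((x for x in parts if x > 0), reverse=True)

print(degree([3, 1]))                              # 3
print(p_core([3, 1], 2), p_core([2, 2], 2))        # [] [] : same 2-core,
# so chi_(3,1) and chi_(2,2) lie in the same 2-block of S_4
```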
The following important connection between their degrees is well-known: χ λ (1) is obtained by specializing q to 1 in the degree polynomial for γ λ (see for example the formula in [10, 13.8] and compare to the hook formula for χ λ (1)). This is sometimes referred to by saying that S n is 'the general linear group over the field with one element'. Furthermore, χ λ and χ µ for two partitions λ, µ ⊢ n lie in the same p-block of S n if and only if λ and µ have the same p-core, which in turn happens if and only if γ λ and γ µ lie in the same d-Harish-Chandra series of Irr(GL n (q)), where d = p. Thus, the degrees of irreducible characters of S n in a fixed p-block are specializations at q = 1 of degree polynomials of unipotent characters in a fixed p-Harish-Chandra series. Let S = (S 1 , . . . , S d ) be a d-symbol. A hook of S is a pair h = (s, t) where for some 1 ≤ i, j ≤ d. We then also write i(h) := i, j(h) := j, and l(h) := s − t. For 1-symbols, that is, β-sets of partitions, this is just the usual notion of hook. We can now formulate the following relative hook formula for characters in a fixed p-block of a symmetric group, which seems to be new: Theorem 9.1. Let p be a prime. Let π ⊢ n be a partition with p-core µ ⊢ r and p-quotient (ν 1 , . . . , ν p ) ⊢ w, with corresponding p-symbol S. Let b i denote the number of beads on the ith runner of the p-abacus diagram for µ, and c i := pb i + i − 1. Then, where ψ ν denotes the irreducible character of C p ≀ S w parametrized by ν. Proof. Let γ be the unipotent character of GL n (q) parametrized by π, for q a prime power. Set d := p. Then γ lies in the d-Harish-Chandra series above (L, λ), where L ∼= GL r (q) × GL 1 (q d ) w , with λ parametrized by µ ⊢ r and n = r + dw. Let S = (S 1 , . . . , S d ) be the d-symbol corresponding to (ν 1 , . . . , ν d ). According to Theorem 7.3 and [26, (2.19)], with (v; u 1 , . . . , u d ) = (q d ; 1, q c 2 , . . . , q c d ) (see (I) in Sect. 7.3 for the parameter values). By our above remarks, specialization at q = 1 gives the corresponding character degrees for S n . Note that numerator and denominator of the expression for Deg(γ)/Deg(λ) are indeed divisible by the same power of (q − 1), viz. n + w + me 2 , so that the specialization makes sense. We obtain as claimed. Let's note the following special case of p-quotients (ν 1 , . . . , ν p ) ⊢ w such that the corresponding p-symbol S has S i = (w), S j = (0) for j ≠ i. Since these correspond to linear characters of the relative Weyl group in GL n (q), they parametrize characters of height 0 in B by the congruence in Theorem 9.1. A p-block B of S n labelled by a p-core µ ⊢ n − wp is said to be of weight w. So w denotes the number of p-hooks which must be removed from any partition π indexing a character in B to obtain its core µ. The block is said to be self-dual if µ is a self-dual partition. Proposition 9.2. Let G = S n , n ≥ 5, p a prime, and B a p-block of G. Then one of the following occurs: (a) B is of weight (and hence defect) 0, (b) p = 2 and B is of weight 1, (c) p = 3, B is of weight 1 and self-dual, or (d) B contains two height 0 characters of different degrees d 1 < d 2 , either both indexed by non-self-dual partitions or with d 2 ≠ 2d 1 . Proof. We use the relative hook formula in (4) for the character degrees of S n for certain height 0 characters in B. We may assume that the weight w of B is positive. Let µ ⊢ n − pw denote the p-core associated to B, let 0 = e 1 < e 2 < . . .
< e p be the ordered set of the c i as in Theorem 9.1, and f i := ∏ k=0,…,w−1 ∏ j≠i |pk + e i − e j |, for 1 ≤ i ≤ p. Note that by [34, Prop. 3.5] none of the partitions λ i corresponding to the p-quotients S i is self-dual, unless w = 1 in which case at most one of them is. Clearly, f p > f p−1 unless p = 2 and w = 1 (which is case (b)), which yields two distinct height 0 degrees d 1 , d 2 . If both corresponding partitions are self-dual, then w = 1. But by Theorem 9.1 we have d i ≡ ±1 (mod p), and then d 1 = d 2 /2 implies that p = 3. Corollary 9.3. Let p be a prime, B a p-block of A n . Then one of the following holds: (a) B is of defect 0, (b) p = 3, B is of weight 1 (hence with cyclic defect group C 3 ), self-dual, and all χ ∈ Irr(B) have the same degree, or (c) B contains two height 0 characters of different degrees. In particular, the assertion of Theorem 6.1 holds when S is an alternating group. Proof. Let B be a p-block of S n , containing all characters χ λ for which λ has fixed p-core µ ⊢ (n − pw). According to [34, Prop. 12.2], for example, if w > 0 then B covers a unique block B 1 of A n . First assume that p is odd. Let χ 1 , χ 2 ∈ B be two height zero characters of different degrees, parametrized by non-self-dual partitions, according to Proposition 9.2. These restrict irreducibly to characters of A n in B 1 of height 0. Similarly, if χ 1 , χ 2 ∈ B have different degrees d 1 < d 2 with d 2 ≠ 2d 1 , then the restrictions of χ 1 , χ 2 to A n contain characters of B 1 of height 0 and of different degrees. If p = 3 and B is of weight 1 and self-dual, then two characters of B have the same irreducible restriction and one splits into two constituents for A n . We obtain a block B 1 with defect group of order 3 and three equal character degrees. For p = 2, restriction of characters from G to B 1 either preserves or decreases heights, by [34, Prop. 12.5]. Thus, we may conclude by Proposition 9.2 unless w = 1. Here, the two irreducible characters in B have the same restriction to A n , so B 1 is a block with a unique ordinary character, that is, a block of defect zero. Note that case (b) of Corollary 9.3 occurs if and only if there is a self-dual 3-core for n − 3. The conditions for this to occur have been worked out in [2, Lemma 3.1]. It can be checked from the known character tables that the assertion of Theorem 6.1 remains true for the faithful blocks of 2.A n when n ≤ 13. We complete our investigation of blocks of quasisimple groups by showing: Proposition 9.4. The assertion of Theorem 6.1 holds when S is sporadic or a simple group of Lie type with exceptional Schur multiplier, or S = 2 F 4 (2) ′ . Proof. The ordinary character tables of all quasi-simple groups such that S is as in the assumption are contained in the Atlas [11]. From this, or using the electronic tables available in GAP, it can be checked that whenever B is a p-block of G with all height zero characters of the same degree then the defect group satisfies |D| ≤ p 2 , hence must be abelian.
p-Solvable Groups
Our main result in this section is to reduce the study of EHZD blocks of general p-solvable groups to groups with p ′ -length one. This latter case naturally leads us to consider a variation of a classical large orbit problem. Question 10.1. Suppose that V is a finite faithful completely reducible F G-module, where F has characteristic p and G has a normal p-complement K > 1. Let P ∈ Syl p (G). Does there exist v ∈ C V (P ) such that |C K (v)| 2 < |K|? Question 10.1 is not trivial, even if P = 1.
In this case, it has an affirmative answer if K is solvable (by [13]). Also, Question 10.1 has an affirmative answer if K is nilpotent, and this constitutes the main result of [14]. In some sense, it is unfortunate that our only way to prove that EHZD blocks of p-solvable groups are nilpotent is via large orbits. On the other hand, Question 10.1 has interest in its own right and is closely related to the study of p ′ -degrees of p-solvable groups, so it might deserve some consideration. Our main result in this Section is the following. Theorem 10.2. Suppose that Question 10.1 has an affirmative answer. If B is an EHZD block of a p-solvable group G, then B is nilpotent. Now we have that Irr(V ) is a completely reducible, finite, and faithful Ḡ-module. By using the affirmative answer to Question (10.1), there exists β ∈ Irr(V ) centralized by P such that |K β /U| 2 < |K/U|, where K β is the stabilizer in K of β. In other words, |K : K β | 2 > |K/U|. Now, since K β /V is a p ′ -group, there exists a unique extension β̂ ∈ Irr(K β ) of β, by using Corollary (8.16) of [22], which has p-power order. In particular, this linear character has Z in its kernel, and by uniqueness is P -invariant (because β is P -invariant). Let λ̂ = 1 V × λ ∈ Irr(U). Since K β /U is a p ′ -group and λ̂ is P -invariant, we may find some γ ∈ Irr(K β | λ̂) which is P -invariant (this is because λ̂ induced to K β has p ′ -degree). Now we have that γ β̂ ∈ Irr(K β ) (because β̂ is linear) lies over β. By the Clifford correspondence, we have that ρ = (γ β̂) K ∈ Irr(K). This character ρ is P -invariant, has p ′ -degree |K : K β |γ(1) and lies over λ. Also ρ K 0 ∈ Irr(K 0 ) is P -invariant, has p ′ -degree, and therefore it has an extension χ ∈ Irr(G) with χ K 0 = ρ K 0 , by using Corollary (8.16) of [22] and the fact that K 0 = O p (G). Hence d = |K : K β |γ(1) ≥ |K : K β |. Therefore, d 2 ≥ |K : K β | 2 > |K/U|. Now, let H be a p-complement of G. Hence HV = K and H ∩ V = 1. Finally, using that λ̂ is P -invariant and (λ̂) K has p ′ -degree, we can find a P -invariant ξ ∈ Irr(K | λ̂) of p ′ -degree. Arguing as before, we have that ξ extends to G, and therefore ξ(1) = d. However, ξ H ∈ Irr(H|λ). Hence, d 2 ≤ |H : Z| by elementary character theory. However, |H : Z| = |K : U| and this is a contradiction. (A similar argument gives the same conclusion of Lemma 10.3 if we assume that χ 0 ∈ IBr(G) for all χ ∈ Irr(G|λ) of p ′ -degree.) Proof of Theorem 10.2. We argue by induction on |G|. Let Z = O p ′ (G), and let λ ∈ Irr(Z) be covered by B. If T is the stabilizer of λ in G and b is the block of T which corresponds to B via the Fong-Reynolds correspondence ([29], Theorem (9.14)), then b is an EHZD block. If T < G, then b is nilpotent by induction. Thus B is nilpotent by Lemma 1 of [30], for instance. Hence, we may assume that T = G. In this case, Irr(B) = Irr(G|λ) by Theorem (10.20) of [29]. Now we conclude that G has a normal p-complement by Lemma 10.3.
13,291.6
2009-09-23T00:00:00.000
[ "Mathematics" ]
THE FORMATION OF POWER MULTI-PULSE EXTREME ULTRAVIOLET RADIATION IN A LOW-PRESSURE PULSE PLASMA DIODE

In this paper, results are presented on experimental studies of the temporal characteristics of spike extreme ultraviolet (EUV) radiation in the spectral range of 12.2 ÷ 15.8 nm from the anode region of high-current (I = 40 kA) pulsed discharges in tin vapor. It is observed that the intense multi-spike radiation in this range arises at the inductive stage of the discharge. It is shown that the radiation spikes correlate with sharp increases of the active resistance and of the pumped power, due to plasma heating by an electron beam formed in the double layer of charged particles. It is also observed that for a large number of spikes the conversion efficiency of pumped energy into radiation at double layer formation is essentially higher in comparison with collisional heating.

Introduction

This study is devoted to the investigation of phenomena occurring in a high-current pulse plasma diode, where a plasma with multi-charged tin ions generates extreme ultraviolet (EUV) radiation. One of the methods for creating high-power EUV sources for nanolithography is the application of high-current discharges in tin vapor [3,11]. An emission source based on the dense plasma of multi-charged tin ions is advantageous over gas-filled systems because of the expected higher conversion efficiency [7] and is capable of operating at ultralow pressures. The latter is important for reducing the probability of parasitic breakdowns and the losses of radiation in optical paths. Since such a source is required to have a rather high output power [9], increasing the conversion efficiency of the supplied electric power into radiation is an important goal. A large conversion efficiency of the source is predicted for tin [9]. For the production of high-contrast nonlinear photoresists operating in the EUV range for nanolithography, above-threshold pulse intensities are required [9]; these can be provided by narrow peak pulses (spikes) of radiation.

This work is aimed at investigating the processes affecting the efficiency of radiation generation in high-current tin-vapor discharges. Results are presented on studies of the generation of intense radiation in the 12.2 ÷ 15.8 nm wavelength range from an extended plasma diode operating in the regime of a self-sustained plasma-beam discharge [1]. The dense high-temperature plasma of multi-charged tin ions is produced by pulsed evaporation of the anode material, fast ionization of the vapor, and heating of the resulting plasma by the current and by the electron beam formed by the double electric layer in the anode region of the diode.

Experiments

The experimental setup dedicated to studies of the EUV yield from the plasma of multi-charged tin ions is presented in Fig. 1.
The setup consists of the pulsed high-current plasma diode with an igniting electrode, a photo-detector for measurement of the integral radiation intensity, and the semiconductor detector AXUV-20 for radiation intensity measurements in the selected wavelength range. The AXUV-20 detector (fabricated by International Radiation Detectors Incorporation, California), with an input Mo−Si optical filter for the 12.2 ÷ 15.8 nm wavelength range, was intensity-calibrated with the help of synchrotron radiation. The damped alternating current in the diode is excited between the cylindrical electrodes due to the discharge of the low-inductance capacitor bank C0 of capacity 2.0 µF at a starting pressure of 2 × 10⁻⁶ Torr. The features of the applied discharge-gap scheme are the use of electrodes with working surfaces of different sizes and the initial application of a positive voltage to the electrode with a small effective surface (the anode) and of a negative voltage to the electrode with a larger effective surface (the cathode). At a discharge current of 10 ÷ 40 kA and an effective anode surface of 2 ÷ 20 mm², the current density reaches values of about 0.2 ÷ 2.0 MA cm⁻². This leads to dense plasma formation near the anode and to the necessary stabilization of the position of this plasma. In addition, the small effective anode area provides suitable conditions for electrical double layer formation exactly in the near-anode region. Double layer formation has been observed in [10,12]. In those articles, when the discharge current reaches some critical value, all the voltage becomes concentrated in a narrow region and a high-current electron beam is formed, which carries the whole discharge current. This narrow region divides the discharge into two parts. The conditions facilitating double layer and high-current electron beam formation in this type of discharge were further studied by the authors of [4-6]. In our article, double layer formation and its dynamics are not investigated in detail. Based on the known methods for forming and stabilizing the double layer, we have created experimental conditions such that the double layer forms near the electrode with a small effective surface. The use of a double layer is an effective method for local plasma heating. Thus, the presence of the dense plasma and its heating source in the same location makes it possible to form dense plasma with multi-charged ions in the near-anode region.

The length of the discharge space can be varied from 3 to 10 cm; the diameter of the cathode is 10 mm, and the anode diameter equals 1.5 mm, 2.5 mm, or 5 mm. The working surfaces of the electrodes are covered with a 0.5 mm thick tin layer. The side surface of the rod anode is enveloped by a tubular ceramic insulator to increase the current density at the anode. The discharge voltage is from 4 to 15 kV, the current amplitude from 10 to 40 kA, the current density at the anode reaches 0.2 ÷ 2.0 MA cm⁻², and the half-cycle of the current oscillations is 1.7 µs. The discharge current and voltage are measured with a Rogowski coil I_d and a balanced voltage divider V_d, respectively. The current through the diode is excited after the discharge interval is filled by preliminary plasma due to a surface discharge on the cathode, using an igniting electrode. A pulsed voltage of 0.2 ÷ 5.0 kV is supplied to the igniting electrode from a 0.025 µF capacitor through a thyratron and an inductance of 400 µH.
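The circuit figures quoted above admit a quick consistency check. Assuming a weakly damped series RLC discharge (an idealization on our part; the paper does not quote the circuit inductance, and the lossless peak-current formula overestimates the real amplitude), the half-period fixes the effective inductance and the stored energy follows from E = ½CV0². A short Python sketch:

```python
import math

C = 2.0e-6            # capacitor bank, F (from the text)
half_period = 1.7e-6  # half-cycle of the current oscillations, s (from the text)

# For a weakly damped series RLC circuit, T/2 = pi * sqrt(L * C),
# so the effective circuit inductance is:
L = (half_period / math.pi) ** 2 / C
print(f"effective inductance ~ {L * 1e9:.0f} nH")   # ~ 150 nH

for V0 in (4e3, 15e3):                  # discharge voltage range, V
    E = 0.5 * C * V0 ** 2               # stored energy, J
    I_peak = V0 * math.sqrt(C / L)      # lossless peak-current estimate, A
    print(f"V0 = {V0/1e3:.0f} kV: stored energy {E:.0f} J, "
          f"peak current <= {I_peak/1e3:.0f} kA")

# Current density at the anode, j = I / A, for the stated extremes:
for I, A_mm2 in ((40e3, 20.0), (40e3, 2.0)):
    j = I / (A_mm2 * 1e-2)              # A per cm^2 (1 mm^2 = 1e-2 cm^2)
    print(f"I = {I/1e3:.0f} kA, anode {A_mm2} mm^2 -> j = {j/1e6:.1f} MA/cm^2")
```

At 15 kV this gives about 225 J stored and a peak current of a few tens of kA, consistent with the 10 ÷ 40 kA amplitudes and the 80-140 J operating energies discussed below.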
The integral radiation of the plasma is monitored by measuring the current of photoelectrons. The photoelectrons are collected by a fine-mesh grid placed at a distance of 0.5 mm from the photocathode. The grid is grounded, and the photocathode is at a negative potential of −50 V, supplied from an autonomous voltage source.

For the radiation intensity measurement in the selected wavelength range, special measures are used. To protect the AXUV-20 detector from the plasma and charged particle beams, the inlet diaphragm channel is located in a transverse magnetic field (strength 0.2 T, length 25 cm). To exclude the effect of the photoelectron current on the detector signal, we use a set of shading diaphragms, and the detector is biased by +20 V with respect to the channel.

The diode discharge develops in two stages. The first stage begins with a surface breakdown at the cathode and finishes when the primary plasma reaches the anode. In the first stage, which lasts for 2 ÷ 6 µs (for an electrode gap length of 5 cm), the discharge operates in the regime of a vacuum diode with a plasma emitter. In this case, the discharge current is carried by the electron beam. Thus, the working surface of the anode is preheated by the beam and the initial vapor envelope is formed. In the second stage, when the dense plasma has occupied the discharge gap, the discharge switches to the plasma diode regime, in which the discharge current is determined by the parameters of the plasma and the discharge circuit.

Probe measurements have shown that between the vacuum diode and plasma diode stages there exists a transition regime in which the electric double layer near the anode surface is formed. An intense electron beam is accelerated in the layer and acts on the anode surface. The high-current electric double layer exists for 0.5 µs. Under these conditions, the material is intensely evaporated from the anode surface, the vapor is quickly ionized, and the plasma is rapidly heated due to the beam-plasma interaction. The total energy pumped into the anode plasma and the anode itself during the first half-period of the discharge current reaches 80% of the energy stored in the capacitor bank. The formation of the double layer in the transition regime is determined by the inability of the discharge gap plasma to provide the high discharge current density [1,5]. As soon as the plasma density increases, the current is no longer limited and the double layer disappears. Then, the discharge operates in the inductive phase.

Results and Discussion

From the results of the experiments it follows that the intense radiation in the 12.2 ÷ 15.8 nm wavelength range arises both in the transition regime and at the inductive stage of the discharge. The peculiarity of this radiation is that there are powerful (up to 1 MW) short (100 ÷ 200 ns) spikes against the background of wide radiation pulses with a duration of about a half-period of the discharge current oscillations. The transverse dimension of the generation region is comparable with the diameter of the anode, and its length changes depending on the discharge voltage and equals 4 ÷ 7 mm. Typical waveforms of the discharge current and voltage, the radiation intensity in the selected wavelength range, and the integral radiation versus time are shown in Fig. 2.
It is seen that in the selected wavelength range there are several radiation spikes, correlating with the corresponding half-periods of the discharge current. In the first half-period there are both a wide pulse with relatively small amplitude and a powerful spike, whereas the pulses emitted during the second and third half-periods take only the form of powerful, shorter, narrow spikes.

It is clearly seen that the radiation spikes of the selected wavelength range coincide with spike pulses of the integral radiation (see Figs. 2c and 2d). Note that during the first half-cycle the narrow spike of 200 ns duration is observed only at discharge voltages above 7 kV. Its intensity grows with increasing discharge voltage. More than 70% of the energy radiated during the first half-cycle is concentrated in this peak pulse. The time of occurrence of the narrow peak pulse depends on the ignition voltage (for higher voltage the peak pulse appears earlier). In the second and third half-cycles these radiation spikes are always observed near the maximum of the current. The radiation spikes are registered at current amplitudes above 10 kA. Their intensities grow with increasing discharge voltage. Note that in the second half-cycle of the current oscillations an additional spike-satellite of 200 ns duration is observed at discharge voltages of 5 ÷ 8 kV (Fig. 2c). This pulse follows the basic spike by 200 ns. The intensity of the spike-satellite also grows with increasing discharge voltage. At voltages above 8 kV the spike-satellite disappears. Two similar spikes during one half-period have been observed in [8].

For effective generation of radiation in the selected wavelength range, additional energy pumping is required. In our case, additional energy pumping is provided by the electron beam, generated by the electric double layer, which forms periodically. Fig. 3 shows the time dependence of the power pumped into the discharge and of the corresponding radiation pulses. One can see that the spikes of radiation coincide with spikes of the power pumped into the discharge. The different time behavior of the radiation in this wavelength range in the first and subsequent half-periods can be explained as follows. In the first half-period, there is still an influx of neutral atoms into the anode region due to intense evaporation of the anode material. Moreover, since the discharge current is high, the energy distribution of the plasma electrons is fairly wide. This leads to significant collisional energy pumping; the broadening of the distribution function of ions over charge states; and, accordingly, the generation of ordinary recombination radiation within a wide spectral range with a relatively low intensity. By the end of the first half-period, the number of ions in the charge states corresponding to radiation in the 12.2 ÷ 15.8 nm wavelength range decreases.
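The figures quoted above give a feel for the energy balance of a single spike. A rough rectangular-pulse estimate in Python (the input numbers are those stated in the text; the estimate itself is ours):

```python
P_spike = 1e6       # peak spike power, W ("up to 1 MW")
t_spike = 200e-9    # spike duration, s (100-200 ns; take the upper value)

E_spike = P_spike * t_spike          # crude rectangular-pulse estimate
print(f"energy per spike ~ {E_spike:.2f} J")          # ~0.2 J

# If this spike carries >70% of the energy radiated in the first
# half-cycle, the whole half-cycle radiates at most about:
E_half = E_spike / 0.7
print(f"first-half-cycle radiated energy ~ {E_half:.2f} J")

# Against ~10 J pumped into a later half-period (see the Fig. 6b
# discussion below), a 0.2 J spike implies an efficiency of order:
print(f"implied conversion efficiency ~ {E_spike / 10.0:.1%}")
```

The resulting ~2% figure agrees with the "few percent" conversion efficiency reported below for the second and third half-periods.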
In the second and third half-periods, the plasma already exists. It is necessary only to raise the charge states (i.e., the plasma temperature) of the ions to the necessary values. Moreover, the plasma is relatively dense, having been focused during the previous half-period(s). In the second and third half-periods the neutral atom flux into the anode region is much smaller than in the first half-period, because the existing plasma shields the anode from electron beam bombardment. Thus, in the subsequent half-periods the energy expenditure on the formation of dense plasma with the necessary ion charge states is essentially smaller than in the first half-period. After formation of plasma with the necessary ion charge states in the second and third half-periods, the distribution function of ions over charge states is narrower than in the first half-period.

We examined how the radiation intensity and the conversion efficiency of the stored energy into radiation depend on the external conditions. Fig. 4a shows how the total (within the 2π solid angle) energy of the radiation pulses (all pulses: spike and wide) in the 12.2 ÷ 15.8 nm wavelength range depends on the energy stored in the capacitor bank. The dependence is seen to reach a maximum at stored energies larger than 140 J. It is necessary to note that this maximum in the dependence of the radiation energy on the stored energy is observed for anodes of different diameters (from 1.5 to 5 mm). As shown in Fig. 5, for a larger anode diameter this maximum shifts to the region of larger stored energy, and for a smaller anode diameter it shifts to the region of smaller stored energy. In Fig. 4 the diameter of the anode equals 2.5 mm. Figure 4a also shows the energies of the radiation pulses emitted during the first, second, and third half-periods of the discharge current. At low stored energies, radiation is mainly emitted during the first half-period. As the stored energy increases, the energies of the radiation pulses emitted in different half-periods approach one another. At stored energies exceeding 120 J, they become nearly equal. Figure 4b shows the relative energies of the radiation pulses emitted in each half-period of the discharge current as functions of the stored energy.

Figure 6a shows the relative energy pumped in each half-period of the discharge current as a function of the stored energy. It is seen that the energy is mainly pumped in the first half-period, during which the anode material is intensely evaporated and neutral atoms are multiply ionized. In other words, the stored energy is mainly spent on the production of dense plasma. Very little energy, in comparison with the plasma formation energy, is spent on plasma support during the next half-periods. Therefore, when the total stored energy is low, the rest of the energy pumped in the subsequent half-periods is low. However, the radiation intensity in those half-periods is relatively high. As the stored energy increases, the current in the second and third half-periods increases. The energy fraction pumped in the subsequent half-periods increases, thereby leading to an increase in the radiation energy.
The equalization of the absolute and relative radiation energies emitted over one half-period of the discharge current with increasing stored energy, as well as the fact that the radiation intensity in the second and third half-periods is high in spite of a much lower (as compared to the first half-period) energy pump, indicates that the number of particles in the dense radiating plasma remains nearly constant for a fairly long time (approximately 6 µs). This is confirmed by the results obtained in [2], where long-lived dense plasma was also observed. Thus, after the plasma has formed in the first half-period, it is supplied with energy in the subsequent half-periods, a fraction of the supplied energy being instantaneously converted into radiation.

Detailed analysis of the energy pumped into the discharge during one half-period and the energy radiated over the same time makes it possible to find the energy conversion efficiency for each half-period. Figure 6b shows the conversion efficiency of the energy pumped into the discharge during each half-period into the radiation energy in the 12.2 ÷ 15.8 nm wavelength range, and the total energy conversion efficiency η0 = W0r/W0, as functions of the total stored energy. It is seen that the conversion efficiency in the second and third half-periods reaches a few percent, the energy pumped into the discharge being 10 J. For a 2.5 mm diameter anode, there is an optimum at a stored energy of 80 J (see Fig. 6b).

A comparison of the radiation intensity in the selected wavelength range and the integral intensity shows that, when the stored energy is above a certain value, the integral radiation intensity increases sharply, whereas the radiation intensity in the selected wavelength range increases insignificantly. Presumably, this is related to the increasing number of tin ions with higher charge states, which leads to the generation of harder radiation and, therefore, to extra energy consumption.

Conclusions

Multi-spike radiation in the 12.2 ÷ 15.8 nm wavelength range from a tin vapor discharge, important for the use of high-contrast nonlinear photoresists in nanolithography, has been observed. The duration of the radiation spikes is found to be much shorter than the half-period of the discharge current. In a narrow energy range in the second half-cycle, a spike-satellite is observed.

An analysis of the experimental data indicates the presence of long-lived dense plasma. When this plasma is optimally supplied with energy, radiation spikes are generated in the selected wavelength range. The analysis also indicates that the intensity of these spikes is mainly determined by the discharge current and by the energy pumped into the discharge.

For the development of an efficient radiation source it is expedient that the radiating plasma be used repeatedly, because most of the stored energy is spent on plasma formation. In order to achieve quasi-steady emission, the discharge should be supplied with optimal portions of energy.

The possibility of generating high-power (several MW) extreme ultraviolet radiation as a train of spikes of 200 ns duration in the high-current pulse plasma diode at the inductive stage of discharge development, at anode current densities of 0.2 ÷ 2.0 MA cm⁻², has been shown. The power spikes of radiation are generated under conditions of plasma heating by the electron beam. The electron beam is formed by the electrical double layer.
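The per-half-period bookkeeping behind Fig. 6b is elementary and easy to reproduce. A sketch (the 10 J pumped energy and the few-percent efficiency are taken from the text; the remaining numbers are illustrative placeholders of ours, not measured values):

```python
# Per-half-period conversion efficiency eta_i = W_ir / W_i (cf. Fig. 6b).
# Pumped energies W_i and radiated energies W_ir in joules; the values
# below are illustrative placeholders, not measurements from the paper.
W_pumped   = [90.0, 10.0, 6.0]   # W1, W2, W3: most energy goes into the
                                 # first half-period (plasma formation)
W_radiated = [0.25, 0.20, 0.12]  # W1r, W2r, W3r in the 12.2-15.8 nm band

for i, (w, wr) in enumerate(zip(W_pumped, W_radiated), start=1):
    print(f"half-period {i}: eta_{i} = {wr / w:.1%}")

eta_total = sum(W_radiated) / sum(W_pumped)   # eta_0 = W0r / W0
print(f"total: eta_0 = {eta_total:.2%}")
```

With these placeholders the second and third half-periods come out near 2%, reproducing the qualitative picture that the later, beam-heated half-periods convert pumped energy far more efficiently than the plasma-forming first half-period.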
The conversion efficiency of pumped energy into radiation energy at double layer formation (the beam mechanism of plasma heating) is essentially higher in comparison with ordinary heating by the current, because in the first case approximately all the energy is pumped into a very small volume.

The use of a plasma diode with an anode of small dimensions (in comparison with the cathode) helps to control and spatially stabilize the dense plasma localization and the position of double layer formation. Under this condition, the region of dense plasma localization coincides with the source of its heating.

Figure 4. (a) Radiation energies (spike and wide pulse energies) in the 12.2 ÷ 15.8 nm wavelength range during the first (W1r), second (W2r), and third (W3r) half-periods; (b) fractions of the energy radiated in the selected wavelength range in different half-periods of the discharge current as functions of the stored energy W0; here W0r is the total radiation energy emitted in the selected wavelength range during a discharge into the 2π solid angle, and W1r/W0r, W2r/W0r, W3r/W0r are the energy fractions emitted in the first, second, and third half-periods, respectively; da = 2.5 mm, l_dis = 5 cm.

Figure 5. Radiation energies (spike and wide pulse energies) in the 12.2 ÷ 15.8 nm wavelength range as functions of the stored energy for different anode diameters: da = 1.5, 2.5, 5.0 mm.

Figure 6. (a) Energy fractions pumped during one half-period (W1, W2, and W3 are the pumped energies, and W1/W0, W2/W0, and W3/W0 are the energy fractions pumped in the first, second, and third half-periods of the discharge current, respectively); (b) conversion efficiency ηi = Wir/Wi (i = 1, 2, 3) of the energy Wi, pumped during one half-period, into the radiation energy Wir in the 12.2 ÷ 15.8 nm wavelength range, as functions of the stored energy W0; W0r is the radiation energy in the selected wavelength range over all half-periods; da = 2.5 mm, l_dis = 5 cm.
4,688.6
2013-01-02T00:00:00.000
[ "Physics" ]
Money supply, inflation and output: an empirically comparative analysis for Vietnam and China

Purpose – This study focuses on analyzing the relation between money supply, inflation and output in Vietnam and China.

Design/methodology/approach – Using the error correction model and the vector autoregression model (ECM and VAR) and the canonical cointegration regression (CCR), the study shows similar patterns of these variable relations between the two economies.

Findings – The study points out the difference in the estimated coefficients between the two countries, which have different economic scales. While inflation in Vietnam is strongly influenced by expected inflation and output growth, inflation in China is strongly influenced by money supply growth and output growth.

Originality/value – To the best of the authors' knowledge, this is the first empirical and comparative research on the relation between money supply, inflation and output for Vietnam and China. The study demonstrates that the relationship between money supply, inflation and output still holds in the case of transition economies.

Introduction

During economic transition, China has been considered a leader among socialist countries that have successfully transformed their economic model from a planned economy to a market-oriented economy. Economic reform became urgent and pressing as the economy suffered serious crises. The reform in Vietnam is similar, but it came 10 years later than the Chinese one (1978 in China, 1986 in Vietnam) (Ma, 1999; Dao and Vu, 2008). It can be said that economic reform in China has provided experience and created motivation for many countries to conduct similar transitions. However, China and Vietnam are the only two countries that have been transformed from a planned economy to a market-oriented economy while keeping their own orientations. In the 1980s, apart from changing their political systems, the Soviet Union and Eastern European countries shifted to the market economy. In the area of monetary policy, China and Vietnam also underwent a thorough transition from a one-tier bank system, which held full control of the national financial system, to a two-tier bank system, by splitting into central banks and commercial banks providing credit services for specific industries (Ma, 1999; Oanh, 2001; Dao and Vu, 2008). This change helped the instruments of monetary policy to be activated and gradually take effect. Monetary turmoil phenomena created by mixed economies (systems including a planned economy and a market economy at the same time) have narrowed. Inflation was lowered and controlled to be more stable than before the reform. Capital markets were formed after nearly a decade of economic reform (1990 in China, 2000 in Vietnam). On the other hand, Vietnam's accession to the World Trade Organization came 6 years later than China's (China in 2001, Vietnam in 2007). These facts show that China always implements important reform steps and achieves results before Vietnam. There are similarities as well as differences in the economies of Vietnam and China (Duong and Le, 2007). Both countries pursue a market-oriented economy, the same pattern of economic reform, and a similar development and integration process. In particular, Vietnam follows a socialist-oriented market economy, while China follows a "socialist market economy with Chinese characteristics". The political systems of the two countries have certain similarities.
Similarities may come from the success of China's economic reform policies, and these policies are always ahead of Vietnam's (Ma, 1999; Dao and Vu, 2008). Moreover, in terms of economics, if a country accepts and operates under a market mechanism, relations in that economy will also have to follow the rules of the market. This leads to similarities in results. However, the sizes of the two economies are different. Their capacities to influence economics and politics are also different (Duong and Le, 2007; VNEP, 2016). Furthermore, in terms of geography and history, China is less influenced by political and economic changes in the world than Vietnam. In fact, China is an important element which contributes to the establishment of international relations in general and in the economic field in particular. In the opposite direction, Vietnam is strongly affected by these relations.

In the quantity theory of money (QTM), the relation between the money stock (M) and the price level (P) can be expressed through the equation MV = PY (Mankiw, 2016), where M is the money supply, V is the velocity of circulation of money, Y is the real output and PY is the nominal output. The velocity of circulation of money is defined as the average number of times one unit of money circulates in the economy to pay for goods and services during a given period of time. Gross domestic product (GDP) is chosen as the variable representing output, and P is chosen as the deflator (GDP deflator). According to Chow and Shen (2005), who refer to the work of Friedman, there are limitations in the equation MV = PY because in practice this relation is not exact. In the equation, with Y held constant, P tends to increase as M increases; with M held constant, P tends to increase as Y decreases; and with P held constant, Y tends to increase as M increases. In the long run, the QTM is limited for several reasons. First, the interest rate affects V, and this effect may not be constant in the long run. Second, the equation can be transformed into M/P = Y/V. This equation describes a money demand equation responding to changes in income. In fact, the money demand equation is influenced not only by income but also by the interest rate and other factors: (m − p) = f(S, OC), where (m − p) denotes real money demand and S, OC represent variables that capture the opportunity costs of holding money.

This study focuses on analyzing the relation of three macro-variables: money supply, real output and price level. Although this issue seems simple and obvious, previous studies on it have only been conducted in other countries and China, but not Vietnam (Chow and Shen, 2005; Aksoy and Piskorski, 2006; Budina et al., 2006; Homaifar and Zhang, 2008; Haug and Dewald, 2010; Anh and Thuy, 2013; Truong, 2013). In addition, limited data may be a reason why empirical research on this issue had not been performed for Vietnam in previous studies. What is the relation between these three variables when two countries have many similarities in terms of economics and politics but different economic scales? This paper examines the relationship between money supply, inflation and output in Vietnam during the period 1986-2016 and in China during the period 1978-2008. After 30 years of reform, the study aims to demonstrate the existence of the relation between these variables and expects new findings from using new quantitative techniques in time series data processing.
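The link between the QTM identity and the growth-rate regressions used later can be made explicit by log-differencing (a standard derivation added here for clarity; it is implicit rather than spelled out in the paper):

MV = PY  ⟹  Δlog M_t + Δlog V_t = Δlog P_t + Δlog Y_t  ⟹  Δlog P_t ≈ Δlog(M_t/Y_t)  when Δlog V_t ≈ 0,

which is why inflation Δlog(P)_t is modelled below as a function of Δlog(M2/Y)_t.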
Accordingly, the study contributes to shedding light on the interaction between the variables mentioned in the area of monetary policy management. The paper is organized as follows. Section 1 briefly highlights the achievements of economic reform as well as the similarities and differences in the economies of Vietnam and China. This section also mentions the QTM, MV = PY, and its limitations. Section 2 presents the methodology: the error correction model (ECM) following a vector autoregression model (VAR) structure and canonical cointegration regression (CCR) for the multivariate setting. From this, the model specifications and data sources are described for the Vietnam and China case studies. The results of the empirical study are presented in Section 3, which discusses the outcomes. One interesting result is that the estimated parameters differ between the two countries, which have different economic scales. In Vietnam, expected inflation and output growth have a strong impact on inflation. In contrast, inflation in China is strongly affected by money supply growth and output growth. Another noteworthy point is that increasing the money supply to stimulate investment and boost economic growth is less effective in Vietnam than in China. In addition, the impact of income on money demand in Vietnam is much smaller than in China. The conclusion is given in Section 4, which emphasizes some remarkable findings.

Research model

Based on the equation MV = PY, the study proposes models with variables that can interact with each other. The error correction model and the vector autoregression model (ECM-VAR) are used, and then canonical cointegration regression (CCR) is applied with the expectation that the regression results are reliable once serial correlation and endogeneity are adjusted for. (1) In China, variables including the M2 money supply, the retail price index and GDP are used to represent money supply M2, price level P and output Y. The data are derived from the study of Chow and Shen (2005) for the period 1952-2002 and from the China Statistical Yearbook of the National Bureau of Statistics (NBS) of China for the period 2003-2008. The data for the period 1952-2002 are taken from the NBS, with estimates following methods similar to those the authors used when the data were not directly available (see the data description of Chow and Shen (2005)).

Empirical result

3.1 Unit root test and Johansen cointegration test

The study uses unit root tests to test the stationarity of the variables for Vietnam and China (1978-2008). At the first difference, the results show that the null hypothesis of a unit root can be rejected for all variables considered. This means that the variables are stationary at the first difference (Table 1). After testing for stationarity, the Johansen cointegration test is conducted. The trace statistics exceed the critical values at the 1% (or 5%) level for both Vietnam and China (Tables 2 and 3). The results indicate that there exists a long-term relationship between the variables log(M2_t), log(P_t) and log(Y_t). This is the basis for further analysis.

Volatility of price level and inflation

The equation MV = PY can be rewritten as the formula P = V(M/Y). Accordingly, the price level P is influenced by two factors, namely money supply and output. The M variable is M2, and the Y variable is real GDP. The M2 money supply is chosen because the interest rate may have a stronger impact on M1 money demand than on M2 money demand.
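The testing sequence of Section 3.1 can be reproduced with standard tooling. A minimal Python sketch using statsmodels (the CSV file and column names are hypothetical stand-ins; the actual data come from Chow and Shen (2005) and the statistical yearbooks):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Hypothetical annual series: money supply M2, price level P, real output Y.
df = pd.read_csv("vietnam_macro.csv", index_col="year")  # hypothetical file
data = np.log(df[["M2", "P", "Y"]])

# Step 1: ADF unit root tests on levels and on first differences.
for col in data:
    for series, label in ((data[col], "level"),
                          (data[col].diff().dropna(), "1st diff")):
        stat, pval, *_ = adfuller(series)
        print(f"{col} ({label}): ADF stat {stat:.2f}, p-value {pval:.3f}")

# Step 2: Johansen trace test for the cointegration rank
# (det_order=0: constant term; k_ar_diff=1: one lagged difference).
res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (trace, crits) in enumerate(zip(res.lr1, res.cvt)):
    print(f"rank <= {r}: trace {trace:.2f}, crit (90/95/99%) {crits}")
```

Cointegration at a given rank is indicated when the trace statistic exceeds the chosen critical value, mirroring the conclusion drawn from Tables 2 and 3.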
An increase in the interest rate will make M1 money demand likely to decrease due to its relationship with the profitability of deposits. In the case of Vietnam, Figure 1 shows that log(P) has a long-term positive and nearly linear relationship with log(M/Y). The price level P is the consumer price index in Vietnam with the base year 2010. In the case of China, although the starting point of the curve in Figure 2 is different from Vietnam's, the positive relationship between the price level P and M2/Y is still quite obvious. The price level P is the retail price index with the base year 1978 and is selected similarly to the research of Chow and Shen (2005).

According to Fisher's QTM, price level changes are based on changes in the quantity of money: changes in the price level P and changes in the money supply are proportional. However, in fact, there is an impact lag between the time a policy is enacted and the time such a policy influences the economy under changing economic conditions. The formula does not mention the period of time required from the moment the central bank begins to implement the monetary policy instruments that affect macroeconomic factors in the economy. The study of Chen (2006) on the relation between the lag of money supply and inflation for the Chinese economy indicates that inflation is affected by money supply with at least a five-month lag. Because the model is estimated by year, an increase in money supply will affect inflation in that year. The study of Chow and Shen (2005), which estimates inflation for the Chinese economy during the period 1952-2002, also reaches the same conclusion. However, there is a difference in the adjustment coefficient towards the equilibrium in these two countries. In Vietnam, the adjustment coefficient of the ECM model is negative and not statistically significant. This suggests that the model, in the long run, is not self-adjusting to the equilibrium. Meanwhile, the inflation estimation for the Chinese economy indicates that in the long run the model adjusts to the equilibrium with an adjustment factor of −0.223 at the 1% significance level. The regression shows no big difference between China and Vietnam when considering the effect of last year's inflation on current inflation. Specifically, the elasticities for China are 0.558 and 0.656 for the period 1952-2002 (Chow and Shen, 2005) and the period 1978-2008, respectively. In Vietnam, the estimated coefficient (0.439) is not significantly different from those for China. However, there are differences in the impact on inflation in these two countries when assessing the impact factors (0.617 is much higher than 0.216). This shows that inflation in Vietnam reacts more strongly to changes in monetary policy than in China. The estimated ECM for Vietnam is:

Δlog(P)_t = −0.0145 + 0.6168 Δlog(M2/Y)_t + 0.4392 Δlog(P)_{t−1} − 0.2253 Δlog(M2/Y)_{t−1} − 0.0732 u_{t−1}

For example, while it is often argued in monetary policy discussions that an increase in money supply brings upward pressure on inflation (Friedman, 1970), there is no such evidence in the ECM outcomes. The study continues by running the CCR model with the expectation that the regression results are reliable once serial correlation and endogeneity are adjusted for (Wang and Wu, 2012). The regression results suggest that there is a similarity in the relation between the above-mentioned three variables for both economies. The interaction is consistent with the QTM.
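An ECM of this kind can be estimated as a vector error correction model; a sketch continuing the previous snippet (the cointegration rank, lag order and deterministic specification are our assumptions, chosen to mirror the single long-run relation reported above):

```python
from statsmodels.tsa.vector_ar.vecm import VECM

# coint_rank=1: one cointegrating relation; k_ar_diff=1: one lagged
# difference; deterministic="ci": constant inside the cointegration
# relation. These are assumptions, not the paper's stated settings.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
fit = model.fit()

print(fit.alpha)   # adjustment (error-correction) coefficients per equation
print(fit.beta)    # cointegrating vector (long-run relation)
print(fit.gamma)   # short-run coefficients on lagged differences
```

The alpha coefficient of the inflation equation plays the role of the −0.0732 (Vietnam) and −0.223 (China) adjustment factors discussed above.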
An increase in money supply leads to an increase in inflation and promotes growth. Inflation and growth have impacts on inflation and growth in the future. Money demand is affected by income. The results of the CCR model are shown in Table 8. In terms of inflation, all variables, including M2 money supply growth, inflation and output growth in the previous year, influence inflation in the current year in both the Vietnam and China cases. First, the regression results show that inflation in Vietnam is strongly affected by inflation in the previous year. If inflation in the previous year is on an upward trend, it is likely that inflation in the following year will increase. On the other hand, due to the time lag in monetary policy implementation, inflation is difficult to control and may reverse in the next year. For the Chinese economy, last year's inflation also affects inflation in the current year, but this effect is weaker than in Vietnam (0.180 < 0.381). This suggests that using the lagged value of inflation as expected inflation is inadequate, since the estimated coefficient for Vietnam is almost two times higher than that for China. Second, output growth in the previous year increases pressure on inflation in the current year. This effect can be explained by the aggregate supply-aggregate demand (AS-AD) model. When the economy has not reached potential output, an increase in the level of output will lead to an increase in the price level. The impact factor of output growth on inflation in Vietnam is approximately half that of China (0.332 < 0.657). This implies that growth in Vietnam only partly influences inflation, whereas growth seems to have a huge impact on inflation in China. Third, a rise in M2 money supply growth in the previous year exerts upward pressure on the current year's inflation. In the money market, an increase in money supply will lead to a decrease in the base rate. A reduced interest rate helps stimulate investment and contributes to a rise in aggregate demand. In the AS-AD model, an increase in aggregate demand results in an increase in the price level, which in turn raises inflation. Moreover, this is consistent with Friedman's finding that inflation is a monetary phenomenon that happens when the quantity of money increases more rapidly than output (Friedman, 1970). There is a difference in the regression results for Vietnam and China. In Vietnam, the effect of Δlog(M2_{t−1}) on Δlog(P_t) (0.137) is smaller than the effect of Δlog(P_{t−1}) on Δlog(P_t) (0.381). In contrast, in China, the effect of Δlog(M2_{t−1}) on Δlog(P_t) (0.330) is greater than the effect of Δlog(P_{t−1}) on Δlog(P_t) (0.180). This interesting result shows that while money supply shocks influence inflation in Vietnam, there exist other factors that have strong impacts on inflation.

Table 7. Error correction model following the VAR structure of Δlog(M2_t), Δlog(P_t), Δlog(Y_t) in Vietnam (1986-2016) and China (1978-2008); *1, *5 and *10 denote significance at the 1%, 5% and 10% levels, respectively.

Table 8. Canonical cointegration regression model of Δlog(M2_t), Δlog(P_t), Δlog(Y_t) in Vietnam (1986-2016) and China (1978-2008); *1, *5 and *10 denote significance at the 1%, 5% and 10% levels, respectively.
Studies suggest that this result is appropriate for developing countries like Vietnam. The regression results show that other effects come from expected inflation. In Vietnam, inflation is often volatile, and sometimes the evolution of annual inflation is far from the inflation target (Do and Huong, 2014). This creates an expectation that past inflation will not be corrected soon and will persist or even increase in the next year. On the other hand, Vietnam has a highly open economy (VNEP, 2016). Inflation is affected not only by the implementation of fiscal and monetary policy and internal macroeconomic variables inside the economy but also by external factors. External factors may be the exchange rate of the dong against other currencies, world economic crises, political instability, etc. For instance, when China devalues its currency to boost exports, Vietnam also has to devalue the dong (VND) to increase the competitiveness of its exports. Meanwhile, currency devaluation raises the price of imported inputs and hence domestic production costs, which can raise inflation. At the same time, forecasting scenarios and macroeconomic policies are difficult to anticipate. In the opposite direction, China is the second-largest economy in the world, and its inflation is less affected by such factors than the inflation of small economies. Obviously, if Vietnam devalues its currency first, it is unlikely that this will affect other major countries or make them reconsider their macroeconomic policies.

In terms of output, output growth is positively affected by M2 money supply growth and output growth in the previous year and is negatively affected by last year's inflation. First, the results show that increasing the money supply to stimulate investment and boost growth is less effective in Vietnam than in China (0.0296 < 0.378). This implies that the quantity of money injected into the economy for investment growth is restricted, or that investment efficiency is not high. Second, the negative impact of inflation and the positive impact of M2 money supply growth in the previous year are very small in the case of Vietnam. Meanwhile, the regression results for China show that the signs of these two estimated coefficients are the same as those for Vietnam, but the impacts are large. Previous studies suggest that inflation has a negative impact on growth (Ghosh and Phillips, 1998; Dammak and Helali, 2017). In Vietnam, the elasticity of expected inflation on output growth is low. According to the regression results, this may be due to the strong impact of expected output growth (0.533) and the impact of factors other than money supply and inflation. Third, there is a similarity in the impact factor of expected output growth on output growth for both Vietnam (0.533) and China (0.537).

Conclusion

After 30 years of reform, both Vietnam and China have made a successful transition from a planned economy to a market economy, creating tremendous economic development. Through empirical evidence, the study demonstrates that the relationship between money supply, inflation and output still holds in the case of transition economies. The law of the market is correct, though the orientation of certain market economies is different from that of others.
4,726.4
2021-10-11T00:00:00.000
[ "Economics" ]
Neutrophil to Lymphocyte Ratio in Maternal Blood: A Clue to Suspect Amnionitis

There is no information about whether maternal neutrophil to lymphocyte ratios (NLRs) progressively increase with the progression of acute histologic chorioamnionitis (acute-HCA), and whether increased maternal NLR is a risk factor for amnionitis, known as advanced acute-HCA, in pregnant women at risk for spontaneous preterm birth (PTB). The objective of the current study is to examine this issue. The study population included 132 singleton PTBs (<34 weeks) due to either preterm labor or preterm-PROM with both placental pathology and maternal CBC results within 48 h before delivery. We examined maternal NLRs according to the progression of acute-HCA in extra-placental membranes (EPM) (i.e., group-0, inflammation-free EPM; group-1, inflammation restricted to decidua; group-2, inflammation restricted to the membranous trophoblast of chorion and the decidua; group-3, inflammation in the connective tissue of chorion but not amnion; group-4, amnionitis). Maternal NLRs significantly and progressively increased with the progression of acute-HCA (Spearman's rank correlation test, γ = 0.363, p = 0.000019). Moreover, increased maternal NLR (≥7.75) (odds ratio 5.56, 95% confidence interval 1.26-24.62, p < 0.05) was a significant independent risk factor for amnionitis even after correction for potential confounders. In conclusion, maternal NLRs significantly and progressively increased according to the progression of acute-HCA, and increased maternal NLR (≥7.75) was an independent risk factor for amnionitis in spontaneous PTB. The evaluation of the performance of NLR will clearly require a prospective description of this parameter in a cohort of patients with either threatened PTL or preterm-PROM.

Introduction

Ascending intrauterine infection is one of the major pathophysiologic mechanisms in spontaneous preterm birth (PTB) (i.e., preterm labor and intact membranes (PTL) and preterm premature rupture of membranes (preterm-PROM)) [1,2]. Micro-organisms from the vaginal and cervical canal ascend to the chorio-decidua and advance to the amnion in the extra-placental membranes (EPM) [2]; this eventually results in fetal infection [1-3]. During the progression of ascending intrauterine infection, maternal neutrophils sequentially migrate from the decidua through the membranous trophoblast of chorion to the connective tissue of chorion and finally infiltrate into the amnion in EPM [4]. Acute histologic chorioamnionitis (acute-HCA), generated by neutrophil infiltration into the EPM, is considered a maternal inflammatory response because neutrophils in EPM are derived from maternal vessels of the decidua parietalis [5,6]. Recently, the neutrophil to lymphocyte ratio (NLR), as a biomarker for systemic inflammatory conditions in adults, has been shown to be positively correlated with disease activity in rheumatic disease [47-53] and to be associated with the prognosis (i.e., survival) of sepsis, systemic inflammatory response syndrome (SIRS), and septic shock [54-58] in patients. Moreover, some researchers have demonstrated that increased neonatal NLR is a marker or predictor for significant neonatal morbidities (i.e., early-onset neonatal sepsis (EONS), broncho-pulmonary dysplasia (BPD), and necrotizing enterocolitis (NEC)) [59-61].
What is noteworthy is that maternal NLRs are reported to be elevated in cases with preeclampsia [62-64], which is associated with exaggerated inflammatory responses in the maternal vascular system [65]. However, there is no information in the current body of research on the relationship between maternal NLRs and the progression of acute-HCA among pregnant women at risk for PTB. We hypothesized that maternal NLRs progressively increase according to the progression of acute-HCA and that increased maternal NLR is a risk factor for amnionitis, known as advanced acute-HCA, among pregnant women at risk for spontaneous PTB. We additionally examined maternal high-sensitivity C-reactive protein (hs-CRP) concentrations to demonstrate the usefulness of maternal NLR for the identification of amnionitis. The objective of the current study is to examine this issue.

Study Design and Patient Population

The study population included 132 singleton pregnant women who met the following criteria: (1) Korean; (2) GA at delivery between 20.6 weeks and 33.9 weeks; (3) PTB due to either PTL (63 cases) or preterm-PROM (69 cases); (4) available placental pathologic slides; (5) maternal complete blood count (CBC) profile available within 48 h before delivery. The last criterion was used to preserve a meaningful temporal relationship between maternal CBC profiles and placental pathologic findings at delivery. At our institution, the maternal CBC test and placental pathologic examination after delivery were routinely recommended and performed for all pregnant women hospitalized with either PTL or preterm-PROM. PTL and preterm-PROM were diagnosed in accordance with previously published criteria [8,9]. Written informed consent was obtained from the entire study population. The Institutional Review Board of our institute specifically approved the current study.

Clinical Characteristics and Pregnancy Outcomes

Clinical characteristics and pregnancy outcomes were obtained from medical records. Data included maternal age, parity, clinical history of antenatal vaginal bleeding or evidence of placenta previa, cause of preterm delivery, gender of newborn, delivery mode, GA at delivery, birth weight, 1 min and 5 min Apgar scores, meconium staining, antenatal use of corticosteroids, antenatal use of antibiotics, and antenatal use of tocolytics. Placental tissue samples for pathologic examination included the EPM (i.e., chorio-decidua and amnion), chorionic plate, and umbilical cord. These samples were fixed in 10% neutral buffered formalin and embedded in paraffin. Sections of prepared tissue blocks were stained with hematoxylin and eosin (H&E). Clinical information regarding the placental tissues was not disclosed to the pathologists. Acute-HCA in EPM was defined as the presence of neutrophil infiltration in either the chorio-decidua or the amnion. Acute inflammation in the chorio-decidua and amnion was diagnosed according to previously published criteria: (1) chorio-deciduitis was diagnosed in the presence of at least one focus of >5 neutrophils in the chorio-decidua; (2) amnionitis was diagnosed in the presence of at least one focus of >5 neutrophils in the amnion.
The progression of acute-HCA in EPM was divided according to the outside-in neutrophil migration in EPM as follows: (1) group-0, inflammation-free EPM; (2) group-1, inflammation restricted to decidua; (3) group-2, inflammation restricted to the membranous trophoblast of chorion and the decidua; (4) group-3, inflammation in the connective tissue of chorion but not amnion; (5) group-4, amnionitis.

Maternal Neutrophil to Lymphocyte Ratio (NLR)

Maternal blood was collected in ethylenediaminetetraacetic acid (EDTA) tubes by venipuncture of the antecubital vein within 48 h before delivery, and a CBC with differential leukocyte count was performed. NLR is defined as the absolute neutrophil count divided by the absolute lymphocyte count. We additionally examined maternal hs-CRP concentrations within 48 h before delivery to demonstrate the usefulness of maternal NLR for the identification of amnionitis.

Statistical Analysis

Continuous and categorical variables were compared with the Kruskal-Wallis test and Pearson's chi-square test, respectively. Multiple comparisons of continuous and categorical variables between the groups according to the progression of acute-HCA in EPM were performed with 1-way ANOVA with post-hoc Tukey test and Fisher's exact test with Bonferroni's correction, respectively. Spearman's rank correlation test was used to examine the relationship between maternal NLRs and acute-HCA in EPM. The receiver operating characteristics (ROC) curve was used to estimate the best cut-off value (maximum sum of sensitivity and specificity) and to classify maternal NLRs as raised or not raised for the detection of amnionitis. Using this cut-off value, we compared the frequency of increased maternal NLR according to the progression of acute-HCA in EPM with Pearson's chi-square test. Moreover, linear-by-linear association was used to investigate the trend in the frequency of increased maternal NLR (≥7.75) according to the progression of acute-HCA in EPM. Diagnostic indices (i.e., sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio) were determined for increased maternal NLR for the identification of amnionitis. We performed multiple logistic regression analysis to explore the relationship between various variables and amnionitis. We analyzed maternal hs-CRP with the same statistical methods to demonstrate the usefulness of maternal NLR for the identification of amnionitis. Statistical significance was defined as p < 0.05.

Clinical Characteristics and Pregnancy Outcomes According to the Progression of Acute Histologic Chorioamnionitis (Acute-HCA) in Extra-Placental Membranes (EPM)

Group-0, group-1, group-2, group-3, and group-4 were present in 36.4% (48/132), 14.4% (19/132), 20.5% (27/132), 17.4% (23/132), and 11.4% (15/132) of the study population, respectively (Table 1). Table 2 demonstrated that GA at delivery and birth weight significantly decreased according to the progression of acute-HCA in EPM, and there was a significant difference in the frequency of antenatal use of antibiotics among the five groups according to the progression of acute-HCA in EPM (Table 2).

Table 1. Clinical characteristics and pregnancy outcomes according to the progression of acute histologic chorioamnionitis (acute-HCA) in extra-placental membranes (EPM).

A ROC curve was constructed to choose the cut-off value at which maternal NLR identifies amnionitis, and a cut-off of 7.75 was chosen (Figure S2, red line).
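The cut-off selection just described (maximum sum of sensitivity and specificity, i.e., Youden's index) and the Spearman trend test are straightforward to reproduce. A sketch with SciPy and scikit-learn (the arrays are hypothetical stand-ins for the study data, not the actual measurements):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_curve, auc

# Hypothetical stand-ins: maternal NLR values and acute-HCA stage (0-4).
nlr   = np.array([3.1, 5.4, 2.8, 9.2, 7.9, 12.5, 4.0, 8.3, 6.1, 10.7])
stage = np.array([0,   1,   0,   3,   2,   4,    1,   4,   2,   3])
amnionitis = (stage == 4).astype(int)   # group-4 = amnionitis

# Trend of NLR across the five histologic groups (Spearman's rank test).
rho, p = spearmanr(nlr, stage)
print(f"Spearman rho = {rho:.3f}, p = {p:.4g}")

# ROC curve for amnionitis; best cut-off by max(sensitivity + specificity).
fpr, tpr, thresholds = roc_curve(amnionitis, nlr)
youden = tpr - fpr                      # = sensitivity + specificity - 1
best = thresholds[np.argmax(youden)]
print(f"AUC = {auc(fpr, tpr):.3f}, best cut-off = {best:.2f}")
```

With the real study data, this procedure is what yields the 7.75 threshold used throughout.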
Moreover, for comparison with maternal NLR, we constructed a ROC curve to choose the cut-off value at which maternal hs-CRP (AUC, 0.581; SE, 0.086; p = 0.323) is classified as raised or not raised for the diagnosis of amnionitis, and a cut-off value of 1.035 mg/dL was chosen (Figure S2, blue line). Table 2 displays the diagnostic indices, predictive values, and likelihood ratios of increased maternal NLR (≥7.75) within 48 h before delivery for the identification of amnionitis. Moreover, we demonstrated the diagnostic indices, predictive values, and likelihood ratios of maternal hs-CRP ≥ 1.035 mg/dL within 48 h before delivery for the identification of amnionitis in cases with either PTL or preterm-PROM (Table S2). However, these positive and negative likelihood ratios were not significant (Table S2).

The Frequency of Increased Maternal Neutrophil to Lymphocyte Ratio (NLR) According to the Progression of Acute Histologic Chorioamnionitis (Acute-HCA) in Extra-Placental Membranes (EPM)

There was a significant stepwise increase in the frequency of increased maternal NLR (≥7.75) according to the progression of acute-HCA in EPM (Pearson's chi-square test, p = 0.014; linear-by-linear association, p = 0.000833) (Figure 2). Moreover, Table 3 demonstrated that increased maternal NLR (≥7.75) was a significant independent risk factor for amnionitis even after correction for potential confounding variables. We additionally demonstrated the frequency of increased maternal hs-CRP (≥1.035 mg/dL) according to the progression of acute-HCA in EPM (Figure S3). However, increased maternal hs-CRP (≥1.035 mg/dL) was not an independent risk factor for amnionitis (Table S3). Figure 3 shows representative images of inflammation-free EPM (a, group-0), inflammation restricted to decidua (b, group-1), inflammation restricted to the membranous trophoblast of chorion and the decidua (c, group-2), inflammation in the connective tissue of chorion but not amnion (d, group-3), and amnionitis (e, group-4) in H&E-stained histologic sections of EPM. Figure 3f is a schema depicting the progression of acute-HCA generated by outside-in neutrophil migration through the entire sub-divisions of EPM.

Principal Findings of This Study

Maternal NLRs significantly and progressively increased according to the progression of acute-HCA (Figure 4), and increased maternal NLR (≥7.75) was an independent risk factor for amnionitis in spontaneous PTB. This finding suggests that maternal NLR may be used as a non-invasive antenatal marker for amnionitis.

The Usefulness of Neutrophil to Lymphocyte Ratio (NLR) as a Maternal Inflammatory Blood Marker during Pregnancy

What is noteworthy is that the absolute counts of neutrophils and lymphocytes, rather than their percentages as relative ratios within leukocytes, should be interpreted cautiously, because leukocytosis usually occurs during normal pregnancy [66] and the normal range of the leukocyte count is widely variable among pregnant women [67-71]. Therefore, it is reasonable that the percentages of neutrophils and lymphocytes, rather than their absolute counts in maternal blood, be used for the differentiation between an inflammation-free placenta and acute-HCA during the antenatal period.
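The diagnostic indices reported in Table 2 all derive from the 2×2 table of test result versus amnionitis status. A small helper illustrates the definitions (the counts below are made-up placeholders chosen only to match the study's 15/132 amnionitis prevalence, not the actual cell counts):

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and likelihood
    ratios from a 2x2 table of test result vs. disease status."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv  = tp / (tp + fp)
    npv  = tn / (tn + fn)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg)

# Hypothetical counts for an NLR >= 7.75 test against amnionitis
# (15 amnionitis cases among 132 deliveries, as in the study population):
for name, value in diagnostic_indices(tp=8, fp=20, fn=7, tn=97).items():
    print(f"{name}: {value:.2f}")
```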
Biologic Plausibility of Increased Maternal Inflammatory Blood Markers According to the Progression of Acute Histologic Chorioamnionitis (Acute-HCA) in Extra-Placental Membranes (EPM)

We previously demonstrated that intra-amniotic infection and inflammation recruit maternal neutrophils to the feto-maternal interface of the chorio-decidua from maternal decidual vessels in both a preterm rhesus model and human spontaneous PTB [72]; moreover, intra-amniotic inflammatory responses are more severe according to the outside-in neutrophil migration in the chorio-decidua of EPM in human spontaneous PTB (i.e., 'inflammation restricted to decidua', 'inflammation restricted to the membranous trophoblast of chorion and the decidua', and 'inflammation in the connective tissue of chorion') [12]. Given that 'leukocyte integrin lymphocyte function-associated antigen 1 (LFA-1)' and its endothelial ligand 'intercellular adhesion molecule (ICAM)-1' play an important role in the endothelial adhesivity and transmigration of neutrophils in the capillaries of in vivo and in vitro inflammation models [73,74], we should find evidence for the expression of LFA-1/ICAM-1 in both maternal blood and EPM in the context of acute-HCA to explain the biological plausibility of the positive correlation between maternal NLRs and the progression of acute-HCA generated by outside-in neutrophil migration in EPM. Indeed, maternal blood ICAM-1 was reported to be a reliable indicator of acute-HCA among cases with either PTL [28,42] or preterm-PROM [42], in spite of the above-mentioned limitations of those studies [28,42]. Moreover, EPM shows about a five-fold elevation of LFA-1 and about a three-fold elevation of ICAM-1 in mRNA sequencing profiles in preterm rhesus macaques delivered 48 h after intra-amniotic lipopolysaccharide (LPS) infusion in our previous study (unpublished data). Therefore, one can expect that maternal NLRs significantly and progressively increase according to the progression of acute-HCA generated by outside-in neutrophil migration in EPM.

Major Strengths and Limitations of This Study

Firstly, the current study analyzed the progression of acute-HCA in the whole sub-divisions of EPM (i.e., decidua, the membranous trophoblast of chorion, the connective tissue of chorion, and amnion). Secondly, this study demonstrated that increased maternal NLR is an independent risk factor for amnionitis, known as advanced acute-HCA in EPM, even after adjustment for potential confounding variables including GA at delivery. Thirdly, this study recommended maternal NLR as a maternal inflammatory blood marker for the identification of acute-HCA, using a simple CBC that is widely available in every medical institution. Although we did not compare the specificity and sensitivity for the identification of amnionitis between maternal NLR and other tests such as cytokines and chemokines, measurements of cytokines and chemokines are not generally and widely available in every hospital. A limitation of this study is that the positive and negative LRs of the maternal NLR cut-off of 7.75 for the identification of amnionitis remained low. However, we did not find any other non-invasive maternal blood biomarker for amnionitis (Table S1), and, therefore, maternal NLR may be promising for future trials for the identification of amnionitis.
Significance of This Study This is the first human study reporting that maternal NLRs are significantly and positively correlated with the progression of acute-HCA in the whole sub-divisions of EPM (Figure 4) and that increased maternal NLR is an independent risk factor for amnionitis, known as advanced acute-HCA, even after correction for potential confounding variables. This finding suggests maternal NLR may be used as a non-invasive antenatal marker for amnionitis. Unanswered Questions and Proposals for Future Study It is not yet known whether maternal inflammatory blood markers (i.e., NLR) can be used for the prediction of early acute-HCA in EPM (i.e., inflammation restricted to the decidua and inflammation restricted to the membranous trophoblast of chorion). This kind of study would improve the value of non-invasive maternal blood inflammatory markers for the early identification of pregnant women at risk of spontaneous PTB. However, the evaluation of the performance of NLR clearly requires a prospective description of this parameter in a cohort of patients with threatened PTL or preterm-PROM, including the subset of patients who remain undelivered, as is observed in real life. Conclusions Maternal NLRs significantly and progressively increased according to the progression of acute-HCA, and increased maternal NLR (≥7.75) was an independent risk factor for amnionitis in spontaneous PTB. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10122673/s1, Figure S1: Maternal high sensitivity C-reactive protein (hs-CRP) (mg/dL) according to the progression of acute histologic chorioamnionitis (acute-HCA) in extra-placental membranes (EPM), Figure S2: A receiver operating characteristics (ROC) curve was constructed to select the cut-off values at which to identify maternal NLR as being raised or not raised for the identification of amnionitis, Figure S3: Frequency of increased maternal high sensitivity C-reactive protein (hs-CRP) (≥1.035 mg/dL) according to the progression of acute histologic chorioamnionitis (acute-HCA) in extra-placental membranes (EPM), Table S1: Previous studies reporting the relationship between maternal inflammatory blood markers and acute histologic chorioamnionitis (acute-HCA) in extra-placental membranes (EPM), Table S2: Diagnostic indices, predictive values, and likelihood ratios of maternal high sensitivity C-reactive protein (hs-CRP) ≥ 1.035 mg/dL within 48 h before delivery for the identification of amnionitis in cases with either preterm labor and intact membranes (PTL) or preterm premature rupture of membranes (preterm-PROM), Table S3: Relationship of various independent variables with amnionitis analyzed by overall logistic regression analysis. Informed Consent Statement: Written informed consent was obtained from the entire study population. Conflicts of Interest: The authors declare no conflict of interest.
3,844.4
2021-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Systematic Assessment of Carbon Emissions from Renewable Energy Access to Improve Rural Livelihoods One way of increasing access to electricity for impoverished unconnected areas without adding significant amounts of CO2 to the atmosphere is by promoting renewable energy technologies. However, decision-makers rarely, if ever, take into account the level of in-built energy requirements and consequential CO2 emissions found in renewable energy, particularly photovoltaic cells and related equipment, which have been widely disseminated in developing countries. The deployment of solar panels worldwide has mostly relied on silicon crystalline cell modules, despite the fact that less polluting materials, in particular thin film and organic cells, offer distinct technical, environmental and cost advantages. A major scientific challenge has thus been the design of a single decision-making approach to assess local and global climate change-related impacts as well as the socio-economic effects of low-carbon technology. The article focuses on the functions of the multi-criteria-based tool SURE-DSS and an environmental impact analysis focused on the greenhouse gas (GHG) emissions balance to inform the selection of technologies in terms of their impact on livelihoods and CO2eq. emissions. An application in a remote rural community in Cuba is discussed. The results of this study show that while PV silicon (c-Si), thin film (CdTe) and organic solar cells may each equally meet the demands of the community and enhance people's livelihoods, their effect on the global environment varies. Introduction Access to energy is a fundamental component of development because it contributes to gross domestic product (GDP) and improves the Human Development Index (HDI) [1][2][3]. Increasing access to modern clean energy services is essential to raising living standards and attaining the eight Millennium Development Goals. Greater access to modern electricity, preferably from renewable sources, is necessary to meet the challenges associated with both the adaptation to and mitigation of climate change [7]. Since the 1970s, the release of greenhouse gases (GHG) from energy generation has grown more quickly than that from other sectors, i.e., over 145% compared to 120% in the transport sector [8]. While developing countries have historically contributed small amounts of atmospheric GHGs, since 2004 their emissions have increased significantly and now exceed those originating in the developed economies. Projections indicate that by 2030 the annual average emissions will increase by 2.6 per cent in developing countries compared to only 0.8 per cent in developed countries [9]. CO2 makes up about 75 per cent of all GHG emissions [9]. It is thus paradoxical that substantial numbers of people in developing countries still lack access to electricity, i.e., 1.3 billion worldwide, 85 per cent of whom live in rural areas [10].
The use of renewable energy technologies addresses three contemporary concerns simultaneously: economic growth and a positive impact on livelihoods; local environmental protection; and the saving of global CO2 emissions which would otherwise be released if fossil fuel sources were used instead. As to the local environment, while the addition of modern energy can facilitate development, it may also have negative impacts or drain scarce financial resources, both of which could be mitigated if technology solutions were selected more carefully [11]. Solar Home Systems (SHS) in particular are able to generate enough electricity to increase people's survival rates by powering medical facilities, improving living conditions, and protecting the natural surroundings and the global atmosphere. Yet, policy and other decision-making processes to roll out solar systems worldwide, including in less developed regions, have mostly overlooked the CO2 emissions that manufacturing, transport and decommissioning contribute throughout their lifetime. The range of existing approaches to promoting energy for the poor is such that tools are either overly qualitative and geographically focused on, e.g., households, enterprises and communities [12,13], or focus on large regions [14], so that their capacity to match the technology to users' needs, demands, resources, and decision-makers' priorities is limited. In addition, only a few studies have investigated the environmental impact of increasing the supply of clean energy technology for poor populations in developing countries [15][16][17], and also in developed countries such as the UK [18]. To tackle the knowledge gap in the assessment of CO2 footprints when promoting energy access, SURE-DSS employs life-cycle analysis (LCA) to estimate the energy consumed and the amount of CO2 emitted prior to any installation of SHS. The article explains why this type of footprint analysis is necessary, and discusses how to undertake it without compromising or interfering with the main objective of upholding energy access to improve sustainable livelihoods in developing countries. First-hand information from a rural community in Cuba with no access to on-grid electricity has been used to test the model [15,16]. The SURE-Decision Support System is a tool to assist decision-makers in promoting energy access in poor areas of developing countries. The tool matches needs and demands to suitable energy technologies. It aspires to promote sustainability, improved livelihoods, local and global environmental protection, and equity. However, the SURE-DSS previously did not work out the global environmental impact of increasing the supply of clean energy technology to poor populations in developing countries. This article reports on the new function of SURE-DSS to calculate previously unaccounted-for CO2 emissions from renewable energy technology.
Life Cycle Analysis of Solar Home Systems Electricity produced from solar home systems often enables valuable services such as the pumping of potable water, refrigeration of food and medicine, provision of additional hours of light to prolong day-time activities, and power for radio, television and mobile phones. Solar home systems (SHS) have been piloted and advanced in rural communities in developing countries [19][20][21]. Implementation is, however, still limited and there is room to significantly upscale installation. Small stand-alone SHSs merit a significant share in the deployment of future energy solutions where there is no competing modern energy source available, especially in remote locations. SHS generate electricity without detrimental environmental effects at the point of use. Yet, SHS consume considerable quantities of energy and generate varying degrees of air pollution during their manufacture, transportation and disposal. There are, therefore, sufficient reasons to take into account the in-built embedded energy in SHS intended to promote local socio-economic and environmental sustainability. Life cycle analysis (LCA) evaluates the environmental impacts of products and services and has been used to examine often overlooked carbon-related aspects of energy systems; most notably, it has been applied to photovoltaic technology (for silicon [22][23][24][25][26]; thin film [27,28]; and organic technologies [29,30]). LCA keeps track of different components and identifies the most energy-intensive and environmentally costly materials, production processes, installation, maintenance and decommissioning steps. When using LCA to compare PV grid-connected systems with non-renewable sources, the former's lower global warming emissions make it a favourable technology (based on average southern European insolation levels, solar energy technology generates between 21 and 37 grams of CO2 eq. emissions per kWh of electricity; in comparison, coal produces 900 g, a combined gas cycle 439 g, and nuclear 40 g CO2 eq. per kWh, respectively; see [22,25,26]). LCA offers the advantage of being able to evaluate more than one factor at a time. Early work on maximization of supply and reliability of energy systems also measured efficiency, economic cost and environmental impact [31][32][33]. This study does not apply a full LCA to evaluate every possible category of impact. Instead, a set of relevant criteria has been selected to focus the environmental impact analysis on avoided greenhouse gas (GHG) emissions, as described in the following subsections. Criteria of the Life Cycle Analysis The nine LCA criteria used to assess silicon, thin film and organic solar energy cells are: I. energy efficiency; II. embedded energy; III. energy pay-back time; IV. avoided emissions; V. real lifetime; VI. balance of system; VII. cost of energy system; VIII. system dependability; and IX. decommissioning. They are briefly described below; details regarding their calculation and parameters are provided in the Supplementary Materials. I. Energy efficiency is the efficiency of energy conversion from a renewable (or fossil) source into practical work. For photovoltaic technology, it is defined as the power conversion efficiency (PCE, the ratio of power delivered by a solar module to the incident solar irradiance on the active area of the module) under standard conditions (1 kW/m2 at the AM1.5 spectrum, with a cell temperature of 25 °C and wind speed below 1 m/s).
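As a minimal sketch of criterion I, the PCE definition just given translates directly into code; the module figures below are illustrative assumptions, not values from the study:

```python
# Criterion I: power conversion efficiency under standard test conditions.
def power_conversion_efficiency(p_out_w: float, irradiance_w_m2: float,
                                active_area_m2: float) -> float:
    """PCE = power delivered by the module / solar power incident on its active area."""
    return p_out_w / (irradiance_w_m2 * active_area_m2)

# A hypothetical module delivering 160 W from 1 m2 of active area at STC (1,000 W/m2)
print(f"PCE = {power_conversion_efficiency(160.0, 1000.0, 1.0):.1%}")  # PCE = 16.0%
```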
II. Embedded energy is the total amount of energy required to manufacture, transport, install, operate, and decommission the energy system. For SHS, this fluctuates between 45 and 56 GJ/kWp. The amount of embedded energy in solar systems depends on technical aspects and system boundaries, such as whether the decommissioning of batteries and panels includes their recycling [22,26,[34][35][36][37][38][39][40]. Research has demonstrated that the embedded energy in organic solar cells is relatively high, at 56.02 GJ/kWp for 5% power conversion efficiency and 28.01 GJ/kWp if 10% power conversion efficiency is assumed [29]. For example, the embedded energy of dye-sensitized solar cells (DSSCs) is about 100-280 kWh per square metre of active solar cell area, and the related CO2 emissions have been calculated at 19 g to 47 g CO2 eq. per kWh of energy produced under different insolation levels, efficiencies and lifetimes [41]. In summary, the lower this figure, the lower the level of embedded pollution. III. Energy pay-back time (EPBT) describes how long it takes a solar panel or solar system to generate the same amount of energy that was required to manufacture, transport and install it, and perhaps also to decommission and recycle it. The shorter the period, the more attractive the technology. Solar energy systems require between 4 and 7 years to generate the same amount of energy that was employed for their creation; this period is significantly shorter than the expected technical life of the systems, which is 20 to 25 years [23,24,34,[36][37][38][39]. IV. Avoided emissions denotes the total CO2 eq. (when referring to CO2 emissions, the unit used is CO2 'equivalent', which takes into account emissions of other greenhouse gases and quantifies their impact in units equivalent to CO2) that could be saved if electricity were generated by cleaner alternatives to fossil fuels. The avoided emissions of PV systems are, by and large, location-dependent, because their calculation relies on physical indicators such as irradiance, average temperature, and the energy mix of the particular region or country where the SHS is installed. Moreover, the embedded energy of the solar modules (see II above) is taken into consideration. For example, if a PV system replaces a diesel generator, the avoided emission is 1.27 kg of CO2 eq./kWh [23]. When electricity from PV panels is used instead of electricity from the national grid in the USA, the avoided emissions have been calculated as 0.522 kg CO2 eq./kWh, whereas the figure is 0.900 kg CO2 eq./kWh in Cuba [42,43]. The energy mix represents, for each country, all the electricity systems and their anticipated associated CO2 emissions. V. Real lifetime refers to the expected number of years that a PV system may remain in working order. The real lifetime of SHS stated in any manufacturer's guarantee is usually at least 25 years. If no technical failures occur, PV panels may generate electricity beyond the guaranteed lifetime. Whereas the lifetime of SHS is considered primarily a technical concern, the effective operational lifetime (i.e., the operation ratio of solar systems) is often cut short by non-technical, location-dependent factors: for example, regular maintenance, commercial networks that guarantee access to spare parts, socio-economic conditions such as education and social organization, and supportive government regulations and markets that enable access and equipment upkeep.
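Criteria II-IV can be illustrated with a minimal sketch. The conversion of electricity output to primary energy via a grid efficiency factor is an assumption made here for illustration (a convention in PV LCA), and all numbers are illustrative, not SURE-DSS inputs:

```python
# Criteria II-IV: embedded energy, energy pay-back time, avoided emissions.
GJ_TO_KWH = 277.78  # 1 GJ ~ 277.78 kWh

def energy_payback_time_yr(embedded_gj_per_kwp: float,
                           annual_yield_kwh_per_kwp: float,
                           grid_efficiency: float = 0.35) -> float:
    """Years until the system has generated the primary energy spent on it.
    Electricity output is converted to primary energy via an assumed
    grid conversion efficiency."""
    primary_equiv_per_year = annual_yield_kwh_per_kwp / grid_efficiency
    return embedded_gj_per_kwp * GJ_TO_KWH / primary_equiv_per_year

def avoided_emissions_kg(annual_yield_kwh: float, lifetime_yr: float,
                         grid_factor_kg_per_kwh: float,
                         embedded_kg: float) -> float:
    """Lifetime CO2-eq saved versus grid electricity, net of embedded emissions."""
    return annual_yield_kwh * lifetime_yr * grid_factor_kg_per_kwh - embedded_kg

# Hypothetical 1 kWp SHS: 50 GJ/kWp embedded, 1,500 kWh/yr yield, 25-yr life,
# on a grid with the 0.900 kg CO2eq/kWh factor quoted for Cuba above.
print(energy_payback_time_yr(50, 1_500))              # ~3.2 years
print(avoided_emissions_kg(1_500, 25, 0.900, 4_000))  # net kg CO2-eq avoided
```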
System dependability (see VIII below) is also used to achieve a deeper understanding of system lifetime and failure, because it encompasses parameters such as users' attitudes and satisfaction, as well as maintenance routines [33,42,43]. VI. Balance of system (BoS) refers to the additional parts that accompany a SHS and are necessary to achieve greater efficiency: the battery that stores power, the charge regulator, and the inverter if AC electricity is used. The embedded energy and CO2 emissions from the BoS can be substantial, and they are therefore included in any LCA. The energy requirements of inverters for a 3 kWp residential PV system have been estimated at 0.5 MWhth (i.e., 0.17 MWhth/kWp) [22]. Alsema (2000a) and Rydh & Sanden (2005a, 2005b) [40,44,45] suggest values of 1 MJ/Wel (0.277 MWhth/kWel) for inverters and charge regulators. It has been estimated that the power requirements of poor households range between 500 W and 1500 W [46,47]. The SURE-DSS model discussed in this article uses the highest values to calculate the energy pay-back time of the BoS required for typical Solar Home Systems. The costs associated with the BoS vary according to the purpose of installation, e.g., whether it is a roof-top, building-integrated, or ground-based design; the country of installation; and technical characteristics such as size, surface area, and module efficiency. By 2013, the price of the BoS for roof-top systems was between €0.9 and €1.1/Wp. It has been estimated that this will fall further, to €0.75/Wp by 2020, and to less than €0.5/Wp by 2030 [48]. A particular feature of the BoS is that its cost is calculated independently of the rest of the SHS. For example, replacement of the power storage batteries, which have the shortest lifetime of all the system's components, necessitates additional costs. Also, to a large degree, the choice of device varies depending on the user. Cost reductions for the BoS correlate with increased efficiency and reduced size of solar modules. The SURE-DSS model employs the BoS standard approved by the European Commission in 1998 [49,50], which has an emission factor of 66 g CO2 eq./Wh for both batteries [51] and charge regulators [52]. Also, as SHS may employ small inverters to feed alternating current (AC) appliances, the embedded energy for a 500 W inverter is used, with values drawn from the Ecoinvent Database (2012) [53] (Table 1). VII. Levelized cost of electricity represents the price per kWh of delivered electricity throughout the entire lifetime of the solar panel system; the levelized cost of electricity (LCOE) compares this value with the market cost of generating this energy. For example, in order to compete with electricity from fossil fuels, energy generation from PV should cost less than US$0.50 per Wp of installed solar panel; yet, the cost is still significantly higher, at US$1/Wp to US$1.30/Wp [54]. Cost comparisons between small stand-alone solar installations and other off-grid energy systems, such as the diesel generators widely used in rural areas, have favoured photovoltaic panels [49,55]. Most cost projections for solar technology have focused on silicon-based solar panels, and results indicate that prices will continue to decrease at historic rates [56,57]. Chinese manufacturers, such as SunTech, YingLi Solar and Trina Solar, are close to achieving a market low of between €0.75/Wp and €1/Wp by 2025, and if BoS costs were included, the price would range from €2.5/Wp for small stand-alone systems to €1.5/Wp for grid-connected systems larger than 100 kWp [58].
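A minimal sketch of the LCOE idea in criterion VII follows. A full LCOE discounts both costs and output over time; this simplified version assumes discounting is folded into the lifetime totals, and all figures are illustrative assumptions:

```python
# Criterion VII: simplified levelized cost of electricity (no discounting).
def lcoe_eur_per_kwh(capex_eur: float, annual_opex_eur: float,
                     lifetime_yr: float, annual_yield_kwh: float) -> float:
    """Total lifetime cost divided by total lifetime electricity output."""
    total_cost = capex_eur + annual_opex_eur * lifetime_yr
    total_energy = annual_yield_kwh * lifetime_yr
    return total_cost / total_energy

# Hypothetical 1 kWp roof-top system: 1,000 EUR module plus 900 EUR BoS
# (the ~0.9 EUR/Wp BoS price quoted above), 25 EUR/yr upkeep, 25-yr life,
# 1,500 kWh/yr yield.
print(f"{lcoe_eur_per_kwh(1_900, 25, 25, 1_500):.3f} EUR/kWh")
```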
If the LCOE of thin film and organic cells were taken into consideration, additional advantages might well emerge in favour of solar energy solutions, because solar thin film technologies require significantly less material and energy than conventional crystalline silicon modules. Further cost reductions for amorphous silicon (a-Si:H), cadmium telluride (CdTe) and copper indium diselenide (CIS) are forecast as the technology advances [27,40,57]. Other thin film cells have been priced similarly, ranging from €0.9 to €1.1/Wp. Yet, lower prices, of between €0.6 and €0.7/Wp, were obtained in 2016 for both a-Si:H and microcrystalline-Si modules, with efficiencies of 10% and 11% respectively. Thin film technologies have virtually attained the efficiency target set by the European Photovoltaic Technology Platform of above 10%, with associated production costs below €0.7/Wp on rigid substrates [48,58]; the corresponding targets for flexible substrates are 10% and €0.6/Wp respectively. Intensive R&D and low-cost, high-volume production of thin-film PV modules are required to achieve €0.5/Wp by 2025 [48,58]. A main challenge facing thin film technologies is up-scaling global production capacity. Japan, the USA and Europe already deploy advanced thin film R&D infrastructure, with factory facilities, and generation via this means is forecast to reach 10 GWp/year to 13.3 GWp/year by 2017 [48]. The prospects for improved efficiency, cost and production of organic solar technologies, including dye-sensitized solar cells (DSSC) [59] and fully organic cells, were predicted more than two decades ago [60][61][62][63][64]. A remarkable efficiency of 10.6% has been achieved for organic tandem cells [65], and over 20% for a new family of cells based on organic perovskites [65,66]. However, the stability of the manufactured cells remains limited, and lifetimes are still well below those of Si cells. Prices ranging between €0.5/Wp and €0.1/Wp by 2020 have been predicted for a 1 GWp production scale [67,68]. Organic solar cells have been demonstrated in a lighting project in Africa. Although the cost of organic solar cells and hybrid technologies has not been compared to that of other off-grid systems, initial estimates of photovoltaic power conversion also promise cost reductions [68]. The LCOE of organic solar cells, at €0.19/kWh to €0.50/kWh, is significantly lower than that of silicon cells. A hybrid tandem technology with a potential 20% efficiency and a cost of $0.50 per Wp could significantly reduce the LCOE [69]. VIII. System dependability (SD). With the exception of cost, the factors discussed above are location-dependent, which means that elements such as where the solar systems are manufactured, where to and how they are transported, and the energy mix and level of CO2 emissions in the country of installation are all important. SD ultimately affects the amount of embedded energy, avoided emissions, and pay-back time of any PV installation. For example, the intensity of natural irradiance and the average daily temperature at a particular location affect the capacity of solar panels to generate energy per installed unit of surface. Location is crucial in determining the total energy produced by a PV system during its lifetime. Moreover, non-technical factors relating to SHS such as users' behaviour, views and expectations can be equally sensitive to geographical location (see V. Real lifetime above).
Assessment of system dependability aims to achieve a better understanding of system failure and to point to ways of extending systems' real lifetime to their technical limit [33,42]. Lastly: IX. Decommissioning is the process whereby the equipment that is left behind at the end of the system's operational life is disposed of. Recycling PV modules could save two-thirds of the energy expended in their assembly [70], while battery recycling plays a positive role in reducing the environmental impact of stand-alone solar systems [71]. Advances have been made in the recycling of crystalline silicon and thin film modules. However, only preliminary results are available regarding the treatment of the aluminium and glass found in thin film and organic modules. Organic polymers, nanoparticles and other electrodes (Ca/Al) contained in the systems are not currently recycled but deposited in landfills [35,72]. Energy supply to the poor in developing countries could potentially be increased through solar energy installations. So far, the silicon solar cell type has dominated the provision of solar energy in poor regions. The proposed systematic assessment seeks to assist decision-makers to also take into account the overall levels of CO2 embedded in the technology when they plan to increase energy access in developing countries. Approach to Estimate the Socio-Economic Impact, Energy Requirements, and Global CO2 Emissions of Solar Home Systems (SHS) The functions performed by SURE-DSS fall into three main categories: (i) it identifies the extent of the livelihoods capitals available to communities, creates a resource baseline, and generates a set of potential energy solutions that takes such a baseline into account; (ii) it captures the demands and priorities of both local beneficiaries and decision-makers, and itemizes the bearing of each pre-selected energy technology on five livelihoods capitals (social, human, financial, natural, physical); and, finally, (iii) SURE calculates the global environmental impact and mitigation potential of pre-selected energy technologies using life-cycle analysis as its main approach [16,17,73]. This section discusses the mathematical approaches used to assess the impact of solar photovoltaic home systems on the local community and natural surroundings, and on global CO2 emissions.
The Resource Baseline and Pre-Selection of Energy Technologies In order both to facilitate the selection of energy technologies to promote energy access which have an impact on livelihoods, and to proceed with the calculation of the global CO2 footprints of those renewable energy technologies, the SURE-Decision Support System works on the principle that populations have access to, or own, a certain amount of resources or capitals, namely: physical (e.g., infrastructure such as houses, roads, schools, energy installations); financial (e.g., wages, savings, access to credit, remittances); natural (e.g., water, land, flora, wind, sun irradiance, organic waste, landscape); social (e.g., friendship networks and affiliation to political organizations); and human (education, health, skills); see, e.g., [74][75][76]. It follows that the robustness of a community's livelihoods depends on access to or ownership of each and all five capitals. The contribution of SURE has been to quantitatively calculate the available five capitals as well as to assess the potential impact that the supply of additional energy would have on them. Drawing on the "full-energy menu", e.g., solar, biogas, diesel generation, micro-hydro, wind power, hybrid options, and the national grid, SURE models and compares the impacts of such a set of energy technology alternatives on the community's baseline. The selection of appropriate energy technology based on impact on the above five livelihood capitals represents a multi-criteria decision-making problem. This type of problem seeks to assess the performance of a predefined set of energy alternatives in the light of various criteria of a different nature, and to recommend the technology with the highest score. A set of criteria or attributes, i.e., the five livelihood capitals (and the factors that define each one), and a set of alternative energy technologies have been defined. In order to assess the performance of the energy options in the light of the chosen factors of the five capitals, data from a community in a developing country have been collected. Because the criteria and attributes of the livelihoods capitals have different units of measure, their performance has been scaled, or normalized, so that results can be compared and aggregated. Finally, the different performances or impacts of the technology alternatives have been totalled to generate an overall score. SURE-DSS employs the multi-criteria approach known as compromise programming [77,78] because it enables the impacts of the energy alternatives to be aggregated across all capitals and the best performing technologies to be recommended. The alternative with the largest score will be recommended. SURE-DSS draws on a standardised non-dimensional metric to calculate the extent to which these pre-selected energy technology alternatives Ai (i = 1, ..., n) may bring about changes to the livelihoods capitals Cj (Cj(Ai), j = 1, 2, ..., 5); see Equation (1): Cj(Ai) = αj(Xj(Ai)), j = 1, ..., 5, (1) where Cj(Ai) represents the overall impact of the i-th energy technology alternative (Ai, i = 1, ..., n) on capital j, j = 1, ..., 5
(1 = physical; 2 = financial; 3 = natural; 4 = social; and 5 = human); Cj(Ai) takes values between 0 and 100 and indicates how the energy option i impacts capital j ("0" is for the least desirable effect; "100" indicates the most desired effect of i on capital j); Xj represents the group of factors that constitutes each capital j (e.g., for natural capital, the factors refer to the amounts of water, land, wind, sun irradiance and organic waste available). Hence, Xj(Ai) represents the effects of the i-th energy alternative on the factors of the corresponding capital j. Finally, αj is an arithmetic mean function that normalises the various types of impacts of the i-th energy technology option on the factors Xj(Ai) onto a common scale, so that the different impacts across the five livelihoods capitals may be compared; see [16,73]. Hence, Equation (1) is the first step in solving the multi-criteria problem set by the selection of technology to increase energy access. It scales and aggregates the performance of each energy alternative across the various factors that constitute each capital, and facilitates the comparison of impacts across the five livelihood capitals. The outcome of this procedure is a payoff matrix where each energy alternative is assessed against each of the five capitals (see Table 2 below and Ref. [17]). * For EEI, see [16]; technology appropriateness has been normalized to 100. It is assumed that the greater the score, the more appropriate the technology. The next section shows the procedure employed in SURE to aggregate the various impacts of each energy alternative across all capitals. Selection of Energy Technology for Livelihoods Improvement Following a pre-selection stage, where a choice is made between different energy alternatives according to available resources and the characteristics of the technologies, the most relevant solution is recommended, taking into account the demands and priorities of future beneficiaries and decision-makers. Equation (2) indicates the procedure for aggregating the expected impact of the energy alternative Ai across all five capitals. It measures the gap between the expected impacts of each implemented energy alternative Ai on the five capitals and a hypothetical or "ideal" state of development for the community (whereby the five livelihoods capitals have been fully developed = 100). Having calculated all the gaps, the approach recommends the most satisfactory energy technology solution as the one closest to the ideal or hypothetical state of development [73]:
Dp(Ai) = [ Σ_{j=1..5} Wj^p ( (Cj,ref − Cj(Ai)) / (Cj,ref − Cj,min) )^p ]^{1/p}, (2) where Dp(Ai) is the gap between the ideal state and the value resulting from modelling the implementation of the i-th energy alternative (Ai); Cj(Ai) is the expected impact of the energy alternative i on asset j; Cj,ref is the ideal value of asset j (Cj,ref = 1); Cj,min is the lowest possible value given to asset j (Cj,min = 0, a total depletion of the asset); Wj represents the relative weight factor of importance assigned to asset j; and p is a distance parameter that reflects the attitude of the decision-maker regarding deviations from the ideal state of development (typical values for p are 1 and 2) [16,73]. Finally, SURE-DSS assists decision-makers in assigning values to the weight factors of the capitals (Wj). It does so by calculating three features of the payoff matrix: the interdependence among the five capitals, the entropy within each capital, and the chance of selecting unsatisfactory energy solutions [74]. Further, the sensitivity analysis of the photovoltaic technologies which the system has selected indicates that their scores are almost fully independent of the sets of selected weights, confirming the proposed best solutions for the supply of energy. For details of the sensitivity analysis performed for this study, see Table S2 and Figure S1 in the Supplementary Materials. Energy for Sustainable Livelihoods and Global Emission Mitigation While Equations (1) and (2) above ascertain the most suitable energy technologies for achieving optimal developmental impact on poor local livelihoods, Equations (S1)-(S8) (shown in the Supplementary Materials) go on to appraise the selected technologies from the perspective of global environmental impact. With a focus on solar home systems, when calculating the improvement to livelihoods and the local natural environment in developing countries, the proposed advanced operation calculates the amounts of consumed energy and CO2 emissions hidden in the solar systems. To account for such concealed energy and CO2, the SURE-DSS processes information drawing on life cycle analysis. The LCA-related parameters described above assess the global environmental impact of producing, transporting, and installing solar home systems. They address the energy pay-back time, embedded energy, energy return factor and avoided emissions criteria, whose detailed calculation is provided in the Supplementary Materials. Additionally, this section has discussed the systematic model proposed to assist decision-makers in analysing the local and global impacts of photovoltaic technologies for improving livelihoods in poor areas of developing countries. The approach is applied to the case study presented in the next section.
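Before turning to the case study, the selection logic of Equations (1) and (2) can be shown in a minimal sketch. The payoff values and weights below are hypothetical placeholders, not the Table 2 data:

```python
# Compromise-programming selection: normalize capital impacts Cj(Ai), then
# pick the alternative with the smallest weighted distance Dp to the ideal.
import numpy as np

# Rows = alternatives Ai, columns = capitals j (physical, financial,
# natural, social, human); entries already normalized to 0-100 per Eq. (1).
payoff = np.array([
    [80, 55, 90, 85, 88],   # e.g., silicon PV (hypothetical scores)
    [80, 55, 90, 85, 88],   # e.g., thin-film PV
    [60, 70, 30, 75, 70],   # e.g., diesel generator
], dtype=float)

weights = np.array([0.2, 0.2, 0.2, 0.2, 0.2])  # Wj, assumed equal here
c_ref, c_min, p = 100.0, 0.0, 1                # ideal, worst, distance parameter

# Eq. (2): Dp(Ai) = [ sum_j Wj^p * ((c_ref - Cj(Ai)) / (c_ref - c_min))^p ]^(1/p)
gaps = ((c_ref - payoff) / (c_ref - c_min)) ** p
dp = (weights ** p * gaps).sum(axis=1) ** (1.0 / p)

print(dp, "-> recommended alternative:", int(np.argmin(dp)))
```

With p = 1 the distance reduces to the simple weighted average of gaps used for the case study below; larger p penalises large single-capital shortfalls more heavily.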
Case Study: Las Calabazas, Cuba The approach described above has been tested in a small rural community, Las Calabazas, in Villa Clara province, Cuba. Administratively, Las Calabazas is part of the municipality of Manicaragua, in the mountain range of Guamuhay at the Escambray Sierra National Park. Las Calabazas is characterised by extended periods of drought and high temperatures, particularly in recent decades, as a result of changes in the global climate. Its population size and geographical features match the Cuban government's definition of a dispersed and isolated rural community. The closest settlement, Guinía, with 4,688 inhabitants, is only 5 km away, while the nearest town, Manicaragua, with 22,266 inhabitants, is 20 km distant. The municipality administers an area of 1,063 km2 with a population density of 67.8 inhabitants per km2. In 2012, a structured household questionnaire was distributed to all 12 households then in Las Calabazas. The survey gathered information on the availability of energy and on access to financial, social, human, natural and physical resources, and uncovered the priorities and demands of the population (37 inhabitants). To complement the information from the questionnaire, semi-structured interviews were undertaken with the president of the municipal government council of Manicaragua and the manager of the hydro-electricity company for the province of Villa Clara. The country's energy mix and its related CO2 emissions data, used for calculating the amount of avoided CO2 emissions from prospective solar energy installations in Las Calabazas, have been drawn from national statistical data [79]. Results: Renewable Energy for Livelihoods Improvement and Reduction of CO2 Emissions This section presents the results on renewable energy for livelihoods improvement and global CO2 mitigation, and examines the prospect of developing solar technology at Las Calabazas by looking into: (i) its baseline; (ii) livelihoods and energy priorities; and (iii) energy for livelihoods and global emissions mitigation.
Baseline Resources and Energy The financial capital owned by the population of Las Calabazas was notably small, and unemployment, especially among women, was high. Working on coffee plantations and in forest management were the main sources of income. Seemingly, the government had no plans to improve or increase the current capabilities of Las Calabazas [79,80]. Moreover, the physical infrastructure of the community was minimal. It consisted of 12 houses made of durable materials; a one-room primary school; a small video game room, which was also used for communal activities; a cold drinks stall; and a communal warehouse. The only road to Guinía was in a state of serious disrepair. Although the national grid was only 4 km away, the houses were not connected. The only modern sources of electricity were two small silicon photovoltaic systems, deploying 0.5 kW of power capacity and lacking battery storage. These two roof panels provided electricity to the video games room and the primary school building. Persistent drought over recent years had affected the landscape, which showed signs of desertification, with low wind speeds. Interviewees reported that the natural resources in the area were gradually dwindling. Soil stress and water scarcity were reducing crop yields, and people were clearly distressed by the situation. The area is nonetheless endowed with abundant solar radiation, which makes it suitable for producing solar energy, as evidenced by the presence of the two small photovoltaic panels. Human resources were found to be relatively high according to education indicators. A large proportion (64.8%) of the surveyed residents had completed primary school; about one quarter of the population (24.3%) had attended secondary school; and, remarkably, technical training was reported by 5%. Overall, women had slightly lower levels of education. The findings match the typical pattern of higher educational levels among rural populations in the province of Villa Clara when compared to national levels. Given the low financial and infrastructure indicators and substantial environmental degradation in this part of the Escambray Mountains, such high educational levels are noteworthy. In summary, the baseline points to relatively pronounced poverty and, with the exception of solar radiance, limited availability of natural resources, particularly water. Human capital was nonetheless high as a consequence of unexpectedly high education levels. Solar Energy for Improving Livelihoods: Demands for Energy and Technologies An additional 5.4 kW of installed capacity would be required to provide the services demanded by the 12 households. Five different energy technology scenarios were modelled to provide electricity in Las Calabazas: the current two silicon photovoltaic modules; additional silicon photovoltaic systems; thin-film panels; organic solar panels with batteries for energy storage; and a diesel generator (Table 1). An interesting situation is revealed in relation to connection to the national grid. Because the grid is only 4 km away from Las Calabazas, connection could potentially have been an appropriate solution. However, extending the grid was impossible due to the impenetrable mountainous terrain in which the community is situated and the consequent prohibitive costs of the necessary engineering work [79].
Equation (1) was applied to calculate the scores of the impact of the technologies on the livelihoods capitals; it uses technical input obtained from local technicians and the literature, as well as data collected during field visits. The larger the score that a technology obtains, the more positive the impact on the capital. Note that the local impacts of all photovoltaic technologies are the same for all capitals, with the exception of the existing photovoltaic systems, which have a much more limited capacity for energy production (Table 2). Compared to the diesel generator, a new solar installation in Las Calabazas entails fewer benefits for financial capital due to the technology's high investment costs and the need for a technician to maintain the panels and replace batteries. The diesel solution implies greater operational costs and it, too, requires a technician to operate and repair the machinery. The existing two photovoltaic systems on their own do not contribute significantly to physical and financial capitals because they are unable to meet the community's energy demand and so impede business ideas (Figure 1). The community improves its social and human capacities due to increased energy availability. More hours of light mean that inhabitants can perform individual and collective activities such as studying, as well as lighting health facilities, streets and communal buildings, allowing for extra socialising at night. Such benefits are expected to arise from either the diesel generator or the photovoltaic technologies, as the latter include batteries for energy storage.
Finally, if a diesel generator were installed, it would be particularly damaging to the natural environment due to CO2 emissions, potential accidental spills of oil into the groundwater reserves or river, and noise. The photovoltaic technologies can have minor local impacts on the natural environment, the main impact being the visual alteration of the landscape due to the number of panels. Thus, the overall analysis of the local impacts of the various energy technologies shows that the best alternatives to meet the energy demand of Las Calabazas would be photovoltaic technologies, because these represent the largest positive impacts on the community's livelihoods capitals (Figure 2). Equation (2) was employed to calculate the overall score for all the alternatives with p = 1, which represents a simple weighted average function. Figure 2. Multicriteria score of five energy alternatives on overall livelihoods capitals, Las Calabazas, Cuba, 2012.
The high scores obtained for PV point to the appropriateness of solar technology for this rural community. If this option were installed on house roofs, it would eliminate the need to purchase fossil fuels, which would be necessary if a diesel generator solution were selected. As a result, expenditure of financial resources, damage to the local air quality, and disruptive noise from the generator could be avoided. Inevitably, however, the necessary maintenance and battery replacement costs will impact negatively on the financial assets of the community. Additional financing would thus be required to replace the batteries for the storage of solar power. There is a positive trade-off in that the anticipated reduction in financial capital would be compensated for by improvements accruing to the other four livelihoods capitals. In addition, the initial investment capital required to develop solar systems in such communities is high, and the beneficiaries themselves cannot afford to pay for it. The government, Cuban NGOs such as Cubasolar [81], or international aid organizations are the ideal institutions to fund, or find the funds, to cover these high initial costs [80]. As a further benefit, the installation and operation of the solar systems at Las Calabazas could generate up to five new employment opportunities. The foreseeable effect of solar energy installations on human capital here is thus considerable. Installation of any of the three varieties of PV would improve livelihoods. Moreover, residents would gain technical skills by learning how to use and maintain PV systems at their own homes. Further, it would bring the community together to transport and install the solar panels (i.e., enhance social capital). The installation of additional solar panels would further promote social cohesion and willingness to participate in social networks, due to the reported practice among PV users of meeting up to discuss how best to maintain the devices, and of collectively purchasing spare parts and replacing batteries. Such willingness to interact socially is an important feature of Cuban society [79]. Adding solar energy installations to this community would not exhaust the local natural resources or damage its environment. The landscape would be slightly altered due to the presence of solar panels on people's roofs. Special care would have to be taken to ensure correct battery maintenance and disposal, which might otherwise result in leaks of toxic compounds. The addition of solar energy installations to every household would augment the scanty physical infrastructure revealed by the baseline analysis.
Solar Energy for Sustainable Livelihoods and a Cleaner Global Environment: Life-Cycle Analysis The proposed systematic model refines the technology selection process for Las Calabazas in Cuba by also identifying which of the three types of solar photovoltaic cells has the optimum global emissions mitigation potential. As discussed, if silicon, thin-film or organic solar cells were installed in poor areas, their impact on the livelihoods of people would be identical, despite their different technical specifications. This subsection shows, however, that these technical differences are important where a global environmental assessment is carried out. Modelling the life cycle of solar home systems with SURE-DSS enables tracing of the energy and CO2 pollution of local SHS. It does so by highlighting the energy pay-back time (EPBT), embedded energy and avoided CO2 eq. emissions of each potential installation. In order to calculate these parameters for silicon, thin-film and organic technologies, the model takes into account the size of the systems. Table 3 shows the input data needed to calculate the global impact of the three types of solar photovoltaic cells. SURE-DSS assesses system dependability (SD) by addressing irradiance and the distance from manufacturing facilities to the place of installation, in addition to the CO2 emissions in the country of origin or installation. The amount of CO2 emissions per kWh of energy produced (primary or electricity) in eight different countries was compared; solar insolation for those countries was also compared, so as to put the solar technology option in a wider context. The third highest irradiance levels are to be found in Cuba, where large amounts of CO2 are also emitted by electricity generation. SURE takes into account that this type of geographical difference gives different results in terms of environmental impact depending on the site of installation (Table 4; sources: Cuba Solar [82]; IEA (2012) [83]). The total installed capacity of 5.4 kWp is generated by several Solar Home Systems (SHS) (see above). The panels must be bought in, and transported to Las Calabazas from, the country of manufacture. In order to choose the least polluting equipment as per the LCA, it is necessary to take into consideration that avoided CO2 varies according to both the type of solar cells selected and their country of origin and manufacture (Table 5). The largest difference in the levels of avoided CO2 emissions between the various types of solar panels is relatively small, at 5.3%. Cuba makes an ideal case for the installation of SHS due to the high insolation levels over most of the island and the presence of a highly polluting electricity generation mix, with 755 g CO2/kWhel. Installation of SHS would therefore avoid consumption of electricity from the grid; this saving of electricity can be calculated through the avoided emissions (Table 5).
Contrary to what might be expected, that is, that if panels were manufactured in Cuba the avoided CO2 emissions would be greatest because the distance travelled to transport the devices is shortest, the photovoltaic technology that saves the largest amount of CO2 emissions is manufactured in Germany. This is because avoided emissions are calculated by taking into account the country of manufacture and the country of installation in terms of the emissions generated during manufacture and transportation [84]. The small variation among the four countries arises from the amount of embedded emissions found in the solar devices, that is, the parameter that takes into account the energy mix of the country where the panels have been manufactured and the transport distances from each place to Las Calabazas. The solar system that exhibits the greatest global mitigation potential, by saving the most emissions when supplying the SHS to Las Calabazas, would come from the Germany-Cuba combination, which avoids 133.44 tonnes of CO2 during the system's lifetime. Of the three types of solar cells, organic PV emerges as the optimum technology for Las Calabazas (Table 5), since it exhibits the lowest EPBT, the highest ERF, and the largest amount of avoided emissions during its lifetime. There is nonetheless a drawback, because the surface needed to install 5.4 kWp of organic PV is considerably greater, at 87.6 m2, than that required to install the same nominal power using silicon PV and CdTe PV: 24.33 m2 and 31.28 m2 respectively. Besides, organic modules need to be replaced more often because their lifetime is shorter. Yet, even taking all these factors into account, the calculations still favour organic cells. The second best PV technology is thin-film (CdTe) PV. Its environmental impacts are slightly lower than those of silicon PV, but the system requires somewhat larger surfaces, too. By applying the global emissions mitigation analysis, differentiation within the cluster of PV alternatives is revealed. The more environmentally friendly means of electricity production is the organic cell, followed by thin film technology. Both win out over conventional crystalline silicon technology when an aggregate of three global environmental parameters is considered. The importance of the geographic origin of the panels, for all three PV technologies, is evident (see Table 6); although the impact is small, it is due to the difference in the emissions embedded during the modules' manufacture arising from the different energy mix of each country. The impact of transporting the PV systems from the location of manufacture to the location of installation was also taken into account, assuming different countries of origin for the equipment: China, Europe, the USA and local manufacture in Cuba, with final installation and operation in Las Calabazas, Cuba.
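The origin comparison just described can be illustrated with a minimal sketch. The embedded-emission figures, shipping distances and transport factor below are illustrative assumptions, not the study's inputs; only the 5.4 kWp capacity and the 755 g CO2/kWh Cuban grid factor come from the text:

```python
# Net lifetime avoided emissions by country of manufacture (illustrative).
def net_avoided_t(lifetime_kwh: float, grid_factor_kg_kwh: float,
                  embedded_kg: float, transport_km: float,
                  transport_kg_per_km: float) -> float:
    """Tonnes of CO2-eq avoided over the system lifetime, net of
    manufacturing (embedded) and transport emissions."""
    gross = lifetime_kwh * grid_factor_kg_kwh
    return (gross - embedded_kg - transport_km * transport_kg_per_km) / 1000.0

LIFETIME_KWH = 5.4 * 1_500 * 25   # 5.4 kWp, assumed 1,500 kWh/kWp/yr, 25 yr
CUBA_GRID = 0.755                 # kg CO2/kWh_el quoted in the text

# country of manufacture: (assumed embedded kg CO2, assumed distance in km)
origins = {"Germany": (20_000, 8_200), "China": (30_000, 14_000),
           "USA": (24_000, 600), "Cuba": (28_000, 50)}

for country, (embedded, km) in origins.items():
    tonnes = net_avoided_t(LIFETIME_KWH, CUBA_GRID, embedded, km, 0.1)
    print(country, round(tonnes, 1), "t CO2-eq avoided")
```

Under these assumptions the embedded emissions dominate the transport term, which is why a distant but cleanly manufactured module can outperform a locally made one.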
Discussion The SURE-DSS systematic tool to assess both the local and global impacts of solar (photovoltaic) home systems is useful for supporting policy, investment and other decision-makers in evaluating the impact of additional energy access intended to alleviate poverty and protect the natural surroundings of the poor in developing countries. The capacity to assess, select and promote appropriate energy solutions to improve livelihoods and protect the local and global environment is enhanced by the addition of the SURE-DSS multi-criteria approach and tool. The model supports the computation of life-cycle analysis indicators to account for contemporary avoided emissions and, importantly, historical CO2 emissions which date back to the manufacture and transportation processes. By focusing on pay-back time, embedded energy, energy return factor and avoided emissions, the tool determines the CO2 mitigation potential of solar technologies that are used for sustainable development, contributing to international efforts to mitigate emissions and combat climate change, and identifies which of the silicon, thin film or organic cells has the greatest chance of achieving cleaner energy and a higher socio-economic impact for poverty reduction. To find optimum energy arrangements for rural communities, SURE-DSS utilises a large amount of technical data already contained in the system, identifies primary technical and non-technical information, and operates a computing program and multi-criteria approach [16]. A main contribution of the SURE modelling approach is that it identifies, in quantitative terms, potential energy supply changes to reduce the gap between a current, often deprived, socio-economic situation and improved livelihoods (as per beneficiaries' demands and decision-makers' priorities). The SURE model can provide evidence that generating additional electricity is worthwhile if it is aimed at improving the livelihoods of the poor and that, to contribute to global emissions mitigation, it works to select the least contaminating low-carbon technologies, such as solar home systems. The analytical design and evidence presented in this article relating the local and global environmental impacts of solar technologies for home use strengthen, rather than question, any plan to increase the supply of electricity to the poorest regions of the developing countries. While the approach of the systematic tool assumes that increasing solar installations will positively affect the socio-economic and environmental conditions of poor populations, as discussed above, it also highlights the likely global environmental impact of such additions. The systematic tool contributes to the process of decision-making by offering information and solutions and by calculating CO2 emissions savings. Yet, while calculating the amount of emissions that can be saved by generating electricity for the poor from renewable sources is not, per se, a totally novel procedure, the systematic tool is innovative in that it also brings to light the embedded energy and pay-back time of these technologies.
SURE-DSS is an optimization tool that takes into consideration local resource availability, users' demands and environmental impact at most points of the SHS's life, including power generation. The addition of SHS to all households in Las Calabazas will benefit social and human capitals, because solar panels enable additional light hours that can be used for economic, educational or leisure purposes. The SURE-DSS combined analysis of livelihoods resources, energy demand and supply, and life-cycle analysis, designed here for photovoltaic technologies, can now be developed and applied to other renewable energy technologies. The inbuilt flexibility of the computational program SURE-DSS enables assessment of different technologies as well as comparison between energy technologies for more precise selection.

In the light of the foreseeable expansion of solar energy installations in developing countries, particularly in rural areas where access to the grid is difficult or unavailable, a comprehensive technical and non-technical analysis of sustainability applied to different types of solar technology is more useful than assuming that all solar panels are, or should be, made of crystalline silicon cells. Bringing together the analysis of the global mitigation and livelihoods impact of solar technology is a step forward in supporting both decision-makers and prospective beneficiaries in poor areas. Moreover, because the poorest in less developed countries are the most exposed to the malign effects of climate change, any measure to avoid further global warming is to be welcomed. National governments in developing countries thus have a vested interest in reaching emission reduction targets [11]. The SURE systematic tool has a role to play in assisting governments to comply with international mitigation targets. The SURE approach provides useful information for stakeholders and facilitates decision-making, but a full analysis of impacts on a large community, and even more so in a broader area (a full municipality, a department or even a country), should be addressed using alternative approaches such as system dynamics or "systems thinking", as proposed by Gonzalez et al. [85].

This article has demonstrated how the model addresses, in equal measure, concerns over poverty, sustainable development, global climate change and the low-carbon society. The SURE-DSS could be used by policy decision-makers who aim to increase the rate of electrification among the poor. The model offers the option to choose the least polluting renewable energy technology for small rural communities, as long as the impact on poverty reduction is equally achieved. In Cuba, the choice of the specific solar home system for Las Calabazas will contribute towards improving livelihoods, reducing national CO2 emissions, and increasing renewable energy technology's share of the country's energy mix. In view of developing countries having a greater say in the definition of a post-Kyoto agreement, the SURE systematic tool can assist governments to gather and process information that is in line with national and international aspirations.
Conclusions

The article has explored and explained the technical capacity and functionality of the SURE Decision Support System. This tool provides information to stakeholders to promote appropriate and effective energy development in poor areas by assessing local sustainable livelihoods and the global environmental impact of such development. It has been demonstrated that energy access for the poor in less developed countries need not jeopardize climate emissions goals, as long as global carbon emissions are also monitored. The integration of the life-cycle analysis categories of embedded energy and pay-back time, in addition to saved CO2 emissions, significantly widened the global environmental analysis the SURE-DSS is able to perform. Life-cycle analysis is a valuable tool that enables assessment of a broad range of impacts, e.g. human toxicity, soil contamination and resource depletion. Notwithstanding that these impacts could affect the health of local populations, and thus their livelihoods, this study has focused on the environmental bearings of energy technologies. The exclusive attention to the impact of greenhouse gas emissions, via the CO2eq. balance, to calculate the global impact of solar technologies on the atmosphere has enabled a detailed and solid addition to the SURE-DSS.

The findings discussed in this article show that the apparently conflicting objectives of increasing energy supply while also controlling global CO2 emissions can be reconciled by using the systematic and combined approach accomplished by the SURE-DSS methodology, which evaluates the best option for rural energy supply and quantifies the historical CO2 and saved emissions of the optimal solutions for improving livelihoods.

The livelihoods impact analysis demonstrated that, while silicon, thin-film or organic solar modules could all equally meet the community's electrification demands with a photovoltaic system, not all would affect the global environment equally, since the organic photovoltaic technology avoids the most emissions. Selecting appropriate energy technology is just the beginning of a progression towards sustainable poverty reduction and the enhancement of economic livelihoods prospects. The IPCC (2001; 2008; 2014) [7] estimated, and later confirmed, that without the near-term introduction of supportive and effective policy actions by governments, energy-related greenhouse gas (GHG) emissions, mainly from fossil fuel combustion, are projected to rise by over 50% by 2030. As a result, governments committed to reducing poverty are also compelled to give increased attention to CO2 emissions. Based on a study of solar technologies, this article provides decision-makers with robust evidence that supports the addition of energy supply to improve livelihoods while also reducing global emissions. The multi-criteria systems approach discussed in this study is necessary at a time when the controversy over developing countries generating additional energy, and the repositioning of …

Figure 1. Local impact of solar energy technologies and diesel on livelihoods capitals, Las Calabazas, Cuba, 2012.

Table 2. Multicriteria payoff matrix: impact of energy technologies on sustainable livelihoods capitals (Energy Impact Index, EEI)* to assess overall technology appropriateness, Las Calabazas, Cuba, 2012.
Table 3. Technical input values to assess the global environmental impact of photovoltaic installations for improving livelihoods.

As explained above, organic photovoltaic technologies have achieved more than 10% efficiency in the laboratory and on small modules. Large modules can reach efficiencies of between 3% and 6%. The systematic model employs an efficiency of 5% for organic PV modules.

Table 4. CO2 per kWh of produced primary energy (electricity) and insolation levels in selected countries, 2012.

Table 5. Potential avoided CO2 emissions of a 5.4 kWp SHS installation in Cuba, by country of manufacture.

Table 6. Mitigation potential of three types of solar cells for a 5.4 kWp SHS for Las Calabazas, Cuba.
12,969.6
2016-12-19T00:00:00.000
[ "Engineering", "Environmental Science" ]
Environmental Interfaces in Teaching Economic Statistics

The objective of this article is, based on the assumptions of Critical Statistics Education, to highlight some environmental interfaces in the teaching of Statistics through modeling projects. To this end, we present a practical case in which we address an environmental issue, placed in the context of the teaching of index numbers within the Statistics discipline of an undergraduate course in Economic Sciences. In this project, we discuss the Human Development Index (HDI) and propose the creation of an environmental index to evaluate countries' level of concern in following ecological and/or preservation practices.

Introduction

Political awareness and the discussion of social issues related to the students' reality are the main goals of Critical Education at any schooling level. In our view, as in the opinion of the main organizers of this theory (Freire, Giroux, Skovsmose, etc.), such a goal can be pursued regardless of the syllabus of a discipline. We understand that educators can make adaptations to embrace themes that facilitate the discussion of social-political issues relevant to the students' reality. Nevertheless, it seems important to point out that practising Critical Education pedagogy does not mean searching for methods or rules; as Giroux [1] said, "Educators should dissuade individuals who reduce teaching to the implementation of methods from entering the teaching profession" (p. 8).

Theoretical Framework

Critical Statistics Education, as presented by Campos [2], by connecting the fundamentals of Statistics Education and Critical Education, shows, through mathematical modeling projects, the possibilities of integration and combination of objectives between these pedagogical approaches. In this context, we present a fragment of the theoretical framework of Critical Statistics Education and show, through a mathematical modeling project, how it is possible to achieve positive results within this integration.

Critical Education

As a development of critical thinking, Critical Education emerged in opposition to traditionalism in the educational system, and its foundation can be credited mainly to Jürgen Habermas, in Germany, and Paulo Freire, in Latin America, among others. D'Ambrosio [3] emphasized that education should enable students to learn and use communicative and analytical instruments, which are essential for them to exercise all the rights and duties inherent to citizenship. According to him [4], the major challenge of education is to promote citizenship and creativity. The Brazilian educator Freire [5,6] outlined the bases of a truly democratic pedagogy that fights authoritarian relations and founds its principles on an essential task. His work was marked by the special conditions of Latin American society at the time (the 1960s and 1970s), but his educational effort is certainly valid in other places and at other times. Freire's work, which proposes emancipatory ways of knowledge, inspired Giroux [1], who extended the idea of the democratization and politicization of education within a vision of the teacher as a transformative intellectual. "Central to the category of transformative intellectual is the necessity of making the pedagogical more political and the political more pedagogical" (p. 127).
In such a perspective, "critical reflection and action become part of a fundamental social project to help students develop a deep and abiding faith in the struggle to overcome economic, political and social injustices, and to further humanize themselves as part of this struggle" (ibid., p. 127). Skovsmose [7,8], in turn, incorporated these concepts and advanced the development of Critical Education. The relationship between teacher and students is, for him, of fundamental importance. He breaks down the figure of the teacher as owner of knowledge and establishes instead the presence of one who teaches and is taught, in a dialectical relationship with the students, who become co-responsible for an educational process in which all grow. He [8] stated, "The ideas concerning the dialogue and the student-teacher relationship are developed from the general point of view that education must belong to a process of democratization" (p. 350). Skovsmose considers that the first important aspect of Critical Education is the involvement of the students in the control of the educational processes, where both students and teachers are attributed a critical competence. According to him, another important aspect of Critical Education is the problem orientation of the teaching-learning process. Thus, he [8] stated, "it is essential that the problems have to do with fundamental social situations and conflicts, and it is important that the students simultaneously can recognize problems as their own problems" (p. 353). Centred on the question of democracy, Skovsmose has worked towards a Critical Mathematics Education, in which working with modeling projects is valued.

Statistics Education

Developed mainly since the 1990s, Statistics Education was conceived in a context of unease, seeking to question and reflect on problems related to the teaching and learning of this discipline. This educational perspective was ignited by the difficulties that students have in thinking or reasoning statistically even when they show calculation skills. Seeking to differentiate the pedagogical problems presented by Statistics from those presented by the teaching of Mathematics, several authors, such as Gal, Chance, Garfield and Ben-Zvi, among others, converged on the idea that the teaching of Statistics should focus on the development of three specific skills: statistical thinking, statistical reasoning, and statistical literacy. Statistical literacy has been well characterized by Gal [9], who emphasized two interrelated components: (a) people's ability to interpret and critically evaluate statistical information, arguments relating to data from research, and stochastic phenomena found in different contexts; (b) people's ability to discuss or communicate their reactions to this statistical information, along with their interpretations, opinions, and understandings. In turn, statistical thinking is linked to the idea of globally evaluating the statistical problem, understanding how and why statistical analyses are important. Thus, statistical thinking is related to the ability to identify the statistical concepts involved in the investigations and problems dealt with, including the nature of data variability, uncertainty, and how and when to properly use the methods of analysis and estimation. According to Chance [10], this capacity gives the student the ability to explore the data in order to extrapolate beyond what is given in the texts and to generate new questions beyond those indicated in the research.
The way in which people reason with statistical concepts composes what is generally called statistical reasoning. According to Garfield [11], to reason statistically means making appropriate interpretations of a given data set, correctly representing or summarizing the data, making connections between the concepts involved in a problem, and combining ideas involving variability, uncertainty, and probability. The development of statistical reasoning should lead the student to be able to understand, interpret, and explain a real process based on statistical data. Ben-Zvi [12] emphasized the importance of this capability and stated that all citizens should have it and that it should be a standard ingredient in education.

Critical Statistics Education

Mainly in Campos [2] and in Campos et al. [13], we have brought together Critical Education and Statistics Education, thus composing what we call Critical Statistics Education. In order to develop the competences of Statistics Education in students, Campos [2] suggests:

- working with real data and relating it to the context in which it is involved;
- encouraging students to interpret, explain, criticize, justify, and evaluate the results, preferably working in groups, discussing and sharing opinions.

To address the major aspects of Critical Education, Campos et al. [13] suggest:

- problematizing teaching, working on Statistics through contextualized projects within a reality consistent with the student's;
- promoting debates and dialogues among students and between them and the teacher, assuming a democratic pedagogical attitude;
- thematizing teaching by prioritizing activities that enable the discussion of important social and political issues;
- using technology in teaching, valuing skills of an instrumental character;
- adopting a flexible pace for developing the themes;
- discussing the curriculum and the pedagogical structure adopted.

By adopting these actions in the educational process, we will be practising a Critical Statistics Education that runs counter to the traditional teaching model.

Mathematical Modeling

The closeness of Statistics to Mathematics enhances the possibility of using some aspects of mathematics education in the design and analysis of statistical activities in the classroom. In this context, we have argued [14] that working through mathematical modeling projects is an appropriate pedagogical strategy for carrying out Critical Statistics Education, as it is an efficient way to articulate theory and practice and favours breaking down arbitrary boundaries between disciplines, allowing a broader and more effective scope. The construction of models, or the presence of mathematical modeling in the context of mathematics education, arises mainly in situations that aim to represent and mathematically study a problem that comes from the real world, whose solution should allow its analysis, reflection, awareness, discussion and validation. We understand that mathematical modeling is consistent with the assumptions of Statistics Education, as it combines the idea of learning Statistics through the study, research, analysis, interpretation, criticism and discussion of real situations, preferably arising from a reality consistent with the student's. In this way, Statistics and reality can be connected through modeling activities.
This interactive connection can be made by using known statistical procedures in order to study, analyze, explain and predict situations arising from reality. Thus, modeling can be a way to amplify students' interest in statistical content, to the extent that they have the opportunity to study, through research, problem situations that have practical application and that value their critical sense.

Environment Description

In a Statistics discipline taught in an undergraduate course in Economic Sciences by the first author of this paper, held at a private university in São Paulo, Brazil, one of the program contents is Economic Indices. This topic includes index numbers and other socioeconomic indices, such as Gross National Income per capita (GNIpc), infant mortality, the unemployment index, life expectancy, etc. In this context, we approached the Human Development Index (HDI), which is calculated by the United Nations or, more specifically, by the United Nations Development Programme (UNDP). The HDI is supposed to emphasize that people and their capabilities should be the ultimate criteria for assessing the development of a country. As such, the HDI should also be used to question national political choices and stimulate debates about governments' priorities. To achieve this goal, the HDI is a summary measure of achievement in some key dimensions of human development: a long and healthy life, access to knowledge, and a decent standard of living [15]. Discussing the breadth of this index, we criticized the fact that it does not include indicators that could evaluate questions like religious freedom, freedom of communication, human security, empowerment and democratic government choices. The students pointed out that the HDI also does not include an environmental index, which could measure the level of nature preservation of a country. As this is a controversial subject that generates much debate, we proposed and invited the students to carry out an activity related to this theme, organizing groups and selecting topics for each group to write a report and prepare a presentation.

Subject Framework

The HDI's health dimension is assessed by life expectancy at birth and is calculated by the United Nations Department of Economic and Social Affairs (UNDESA). The education dimension has two components: years of schooling for adults aged 25 years and expected years of schooling for children of school-entering age. These indicators are produced by the UNESCO Institute for Statistics. The two indices are combined into an education index using an arithmetic mean. The standard-of-living dimension is measured by the GNIpc, respecting the purchasing power parity (PPP) methodology, calculated by the World Bank (up to 2012) and the IMF (2013). Minimum and maximum values (goalposts) are set for each sub-index in order to transform the indicators, expressed in different units, into indices between 0 and 1. These goalposts act as the 'natural zeroes' and 'aspirational goals', respectively, from which the component indicators are standardized. UNDP has set the goalposts at specific values for each indicator (table 1). The basic formula for each sub-index is

I_i = (X_i - MIN(X_i)) / (MAX(X_i) - MIN(X_i)),    (1)

where I_i is the index of the referred variable, X_i is the observed value of the variable, and MIN(X_i) and MAX(X_i) are the lowest and highest values that the variable X can attain, respectively, according to table 1. For the income index, the natural logarithm of the actual, minimum and maximum values is used.
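A minimal sketch of the sub-index normalization in formula (1), together with the geometric-mean aggregation into the HDI described in the following paragraph. The goalpost values below are illustrative assumptions, not the official figures of table 1.

```python
# Sketch of the HDI sub-index normalization (formula (1)) and geometric-mean
# aggregation (formula (2)). Goalpost values are illustrative placeholders.
import math

def sub_index(x, x_min, x_max, log_scale=False):
    """Map an indicator onto [0, 1] between its goalposts."""
    if log_scale:  # the income dimension uses natural logarithms
        x, x_min, x_max = math.log(x), math.log(x_min), math.log(x_max)
    return (x - x_min) / (x_max - x_min)

i_health = sub_index(74.0, 20, 85)                             # life expectancy (years)
i_edu = (sub_index(11.0, 0, 15) + sub_index(13.5, 0, 18)) / 2  # arithmetic mean of the two components
i_income = sub_index(15000, 100, 75000, log_scale=True)        # GNI per capita (PPP $)

hdi = (i_health * i_edu * i_income) ** (1 / 3)                 # geometric mean
print(f"HDI = {hdi:.3f}")
```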
Thus, the HDI is given by the geometric mean of the normalized indices for each of these three dimensions:

HDI = (I_health × I_education × I_income)^(1/3).    (2)

There are other aggregated indices calculated by UNDP, namely the Inequality-adjusted Human Development Index, the Gender Development Index, the Gender Inequality Index and the Multidimensional Poverty Index [16], but none of them is related to environmental issues.

Project Development

After the students had organized themselves into groups, we discussed several topics that they could study and research. They selected the following topics:

- Recycling and reuse: economic impacts
- Sustainable cities
- Economic consequences of global warming
- Socio-environmental responsibility in enterprises
- Green economy
- Environmental index

Each group became responsible for one topic. The first five topics were to be presented 14 days after that class; the last topic would be presented one week later. Thus, the first five presentations should help the last group to construct the environmental index. In addition, every group was assigned to write a report on the subject researched. The first five presentations took place on the expected date. The groups gave PowerPoint presentations, and everyone discussed all the topics with great enthusiasm. Although the topics did not refer to statistical content, all groups presented real data summarized in tables and charts, which were included in the presentations as well as in the reports. As they were not used to writing reports, they revealed some difficulties in combining text and data, but with a little help from the teacher this was no problem at all. Later, the last group asked for the teacher's help in order to build a good index. One meeting with the students was held outside class hours, and further doubts were resolved at that meeting and by e-mail. On the expected date, the last topic was presented. Many discussions surrounded the presentation, and all students wanted to express their points of view. The index was proposed using the HDI formula (1), and several sub-indices were suggested:

- I1) sewage treatment index: measures the percentage of treated sewage released into the environment, relative to the total production of sewage;
- I2) emission rate of gases that cause the greenhouse effect;
- I3) atmospheric pollution index of the great cities;
- I4) native vegetation cover preservation index;
- I5) beach bathing index, given by the percentage of beaches unfit for bathing owing to pollution, relative to the total number of beaches;
- I6) recycling index, given by the percentage of effectively recycled materials;
- I7) energy index, given by the percentage of the total energy matrix that comes from clean and/or renewable sources;
- I8) animal species preservation index, given by the number of animal species of the local fauna threatened with extinction.

The aggregated environmental preservation index would be calculated not by formula (2) but through a simple arithmetic mean of the sub-indices, normalized as the HDI formula (1) proposes. A country would be considered more engaged in environmental preservation as the result gets closer to 1. There was a problem, pointed out by the teacher: some sub-indices score higher as a country's environmental preservation gets worse (I2, I3, I5, and I8). In those cases, the sub-index should be obtained by subtracting the normalized value from 1. The group understood the teacher's explanation and made the appropriate corrections.
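The students' proposal can be written out directly, as the sketch below shows: each sub-index is normalized as in formula (1), the sub-indices that grow as preservation worsens are inverted as 1 − I, and the aggregate is the arithmetic mean. All numerical values and goalposts here are invented for illustration.

```python
# Sketch of the proposed environmental preservation index. Sub-indices I2,
# I3, I5 and I8 score higher as preservation worsens, so they are inverted.
# All observed values and goalposts below are made up.

INVERTED = {"I2", "I3", "I5", "I8"}

def normalize(x, x_min, x_max, name):
    i = (x - x_min) / (x_max - x_min)      # formula (1)
    return 1 - i if name in INVERTED else i

# (observed value, goalpost min, goalpost max) per sub-index -- hypothetical
raw = {"I1": (62, 0, 100), "I2": (4.1, 0, 25), "I3": (55, 0, 300),
       "I4": (48, 0, 100), "I5": (12, 0, 100), "I6": (30, 0, 100),
       "I7": (45, 0, 100), "I8": (140, 0, 1000)}

subs = {k: normalize(x, lo, hi, k) for k, (x, lo, hi) in raw.items()}
env_index = sum(subs.values()) / len(subs)   # closer to 1 = more engaged
print(f"Environmental preservation index = {env_index:.3f}")
```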
In the debates that followed, the students seemed infuriated by the fact that an environmental preservation index does not officially exist, and they discussed ways of disseminating this information to the general population. It was not possible to estimate the index for the country, but several questions arose from the debates. Who would not be interested in calculating this kind of index? Would some developed countries be downgraded if this index became part of the HDI?

Regarding the Statistical Content

In this modeling project, the statistical content worked on was index calculation. Nevertheless, the first five groups worked on data summarization and representation, using appropriate tables and graphics. Data interpretation and analysis were also carried out. Students realized the importance of using technology to obtain reliable results, as the data were obtained via the internet, the graphs and tables were made with the help of an Excel spreadsheet, and the presentations used PowerPoint slides. Additionally, they were able to note the importance of working with real data. As for the last group, they went deeper into the subject of indices and detailed the calculation methodology. By creating a simple and objective index, they involved all students in the matter and helped all of them experience the knowledge related to this kind of calculation.

Regarding the Statistical Competences

In relation to the three capacities mentioned in the theoretical framework of Statistics Education, we observed that working with the real situations involved in index calculation allowed students to have a global view of the problem. Students were able to perceive the difficulties that surround the complexity of this index and followed some of the statistical tools used in its determination. We understand that this work tends to help the development of statistical thinking and statistical reasoning about data and measures. The interpretations that students made of the statistical tools used for index calculations can be seen as an ability to explore the content, extrapolating beyond what is prescribed in the books. In line with this, as pointed out by Chance [10], we can say that students progressed in the development of statistical thinking. As for statistical reasoning, Garfield [11] suggested that its development should lead students to be able to understand, interpret and explain a real statistical process. This was fully accomplished by the students, as they focused on the idea of creating a new index, were able to construct meanings for the formulas in a non-trivial way, and explained the whole process to the class. Regarding literacy, we believe that the preparation of reports, the use of typical statistical expressions and terminology, the work of elaborating the charts and tables presented, as well as the discussions involving the environmental preservation index theme, tend to assist the development of this competence. Moreover, we mention Gal [9], who pointed out that statistical literacy emerges when students are able to interpret and critically evaluate statistical information and arguments related to researched data. He also highlighted that literacy appears when people discuss or communicate their reactions to such statistical data, their interpretations, opinions and understandings.
Thus, we believe that our students achieved this goal, as they showed, discussed and communicated their interpretations, critical evaluations and understandings of the index calculations.

Regarding Critical Education and Critical Statistics Education

According to Giroux [1], Critical Education is not a method, so it is not a question of measuring its depth or following a set of steps. Nevertheless, we believe that in many ways the principles of Critical Education were present in this project. In the presentations and discussions, students were faced with the problem of environmental degradation, its political implications and its consequences. We were able to witness the emergence of a sense of outrage at the attitudes of people and governments towards a problem that affects everyone, rich or poor. The debates revealed a feeling of repudiation and revolt at the indifferent posture of the authorities, even the UN. Moreover, the pupils discussed ways of fighting the problem of environmental degradation and possible actions towards it, raising awareness of the importance of environmental preservation. Besides this aspect, it is important to mention the democratic way in which we developed this project. The theme emerged from the students; they chose the topics and organized themselves into groups. The teacher acted as a mediator, taking a position on the subjects discussed and giving voice to the students, encouraging the debates. As for Critical Statistics Education, we perceive that throughout the project we followed the path traced by the theoretical considerations of Campos, Wodewotzki and Jacobini [13], as we adopted most of the attitudes suggested for classroom work, mentioned here: (i) we problematized teaching, working on topics related to statistics through a contextualized project consistent with the students' reality; (ii) we encouraged debate and discussion among students and between them and the teacher, thus taking a democratic pedagogical position; (iii) we thematized teaching and focused on activities emphasizing the debate of several important social and political issues; (iv) we used technology in teaching and valued technical skills.

Conclusions

In carrying out the pedagogical activities related to the project described here, our objective was to show one possibility of inserting Critical Statistics Education into the content of a Statistics discipline in an undergraduate course. In this context, we tried to emphasize the social-political interfaces involved in the suggested theme, which emerged from the pedagogical environment experienced by the professor. Our interest in recounting this experience was to show that opportunities to insert themes related to social and political problems occur at several moments of a pedagogical action. We understand that it is up to the educator to take advantage of these situations in order to stimulate the critical, investigative and questioning spirit of the students, which stands out when they face a social problem that involves their own reality. We believe that, without losing focus on the statistical content, adopting Critical Statistics Education can greatly enrich the pedagogical process, giving students the chance to better understand their own reality. Thus, they can find ways to carry out actions that actually represent reactions against the unjust and sometimes immoral system in which they live.
In this way, we understand that the professor performs a much more comprehensive role and makes education more meaningful, more interesting and more genuine.
5,160.4
2016-01-01T00:00:00.000
[ "Economics" ]
Vorticity and divergence at scales down to 200 km within and around the polar cyclones of Jupiter

Since 2017 the Juno spacecraft has observed a cyclone at the north pole of Jupiter surrounded by eight smaller cyclones arranged in a polygonal pattern. It is not clear why this configuration is so stable or how it is maintained. Here we use a time series of images obtained by the JIRAM mapping spectrometer on Juno to track the winds and measure the vorticity and horizontal divergence within and around the polar cyclone and two of the circumpolar ones. We find an anticyclonic ring between the polar cyclone and the surrounding cyclones, supporting the theory that such shielding is needed for the stability of the polygonal pattern. However, even at the smallest spatial scale (180 km) we do not find the expected signature of convection—a spatial correlation between divergence and anticyclonic vorticity—in contrast with a previous study using additional assumptions about the dynamics, which shows the correlation at scales from 20 to 200 km. We suggest that a smaller size, relative to atmospheric thickness, of Jupiter's convective storms compared with Earth's can reconcile the two studies.

The extraction of the wind pattern, vorticity and divergence down to a scale of 200 km from the cluster of cyclones at Jupiter's north pole shows evidence of an anticyclonic ring, which is needed to keep the system stable, around the central cyclone. No signatures of convection are observed at 200 km scales.

At Jupiter's north pole there are eight cyclones that form an octagon, with one cyclone at each vertex and one additional cyclone in the centre 1,2. The centres of the cyclones are at latitudes of 83 ± 1°, which is about 8,700 km from the pole. Jupiter's south pole is similar, except there are only five cyclones, which form a pentagon with one at the centre. The vertices are at latitudes of −83 ± 1°. The polygons and the individual vortices that comprise them have been stable for the 4 years since Juno discovered them [3][4][5]. The polygonal patterns rotate slowly, or not at all. The peak azimuthal wind speeds around each vortex range from 70 to 100 m s−1, and the radial distance r from the peak to the vortex centre is about 1,000 km (ref. 6). In contrast, Saturn has only one vortex, a cyclone, at each pole 7. The peak winds are 150 m s−1, and the radius at the peak is 1,500 km (refs. 8,9). Saturn has a six-lobed meandering jet at 75°, but it has no cyclones associated with it. Both laboratory and theoretical models treat the hexagon as a stable wave-like pattern [10][11][12][13][14].

There have been a handful of theoretical studies that specifically address the origin of polar cyclones on Jupiter and Saturn [15][16][17][18]. They comprise one- and two-layer models that introduce small-scale motions either as an initial condition or as continuous forcing balanced by dissipation. The small-scale vortices merge and become the large-scale vortices. The cyclones drift polewards and the anticyclones drift equatorwards, as they do on Earth. In some cases the cyclones merge into one big cyclone at the pole. In other cases, with different parameter settings, the cyclones wander about without forming polygons. Only one theoretical study obtained stable polygons from random initial conditions, and only when the wavelengths of the initial random disturbances were less than 300 km (ref. 19).
A Fourier analysis of Juno data reveals that flows with wavelengths larger than 215 km gain energy from smaller-scale flows, an example of an upscale energy transfer 20. Therefore, one goal of this Article is to measure vorticity and divergence at scales much smaller than the main cyclones and determine how the upscale energy transfer takes place. Another theoretical study 21, which used shallow water equations, introduced cyclones that have the observed gross properties (maximum velocity and radius) and arranged them into different polygonal patterns around the pole to see which ones are stable. The stable ones have shielding (a ring of anticyclonic vorticity surrounding each of the cyclones) and the unstable ones do not. Some models with small-scale forcing develop shielding, but they do not organize into polygons [15][16][17][18]22. So another goal of this Article is to measure the vorticity inside and outside the large cyclones and see whether they are shielded.

The small-scale forcing in the one- and two-layer models is a crude representation of convection. There are 3D models that treat convection explicitly, in some cases with the Boussinesq (quasi-incompressible) approximation [23][24][25] and in other cases with density varying vertically by up to five scale heights [25][26][27][28]. Some treat fluid in a box with periodic boundary conditions, and others use full spherical geometry. All the 3D models have small-scale convective plumes. The convective plumes produce large-scale vortices by mergers, an upscale transfer of kinetic energy, and some of the vortices arrange themselves into polygonal patterns 23,28. Although a relation between divergence and vorticity is not discussed in any of these models, a negative correlation is expected for convection on a rotating planet. Therefore, a third goal of this Article is to measure divergence and vorticity at scales down to 180 km and search for this signature of convection.

A relevant length scale is the Rhines scale L_β = (U/β)^1/2, where U is a characteristic eddy velocity and β = df/dy = 2Ω sinθ/a is the latitudinal gradient of the planetary vorticity f = 2Ω cosθ. Here θ is colatitude, y is the northward coordinate, and a and Ω are the planet's radius and angular velocity, respectively. L_β plays a role in the stability of the zonal jets. On both Jupiter and Saturn, 2π/L_β is approximately equal to the wavenumber of the zonal jet profile with respect to latitude when U is the root mean squared speed [29][30][31][32][33].
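A quick numerical illustration of this scale, assuming the standard (U/β)^1/2 form and taking U = 80 m s−1 from the text; it also shows why the scale diverges towards the pole, which motivates the alternative scaling introduced next.

```python
# Sketch: evaluate the Rhines scale L_beta = sqrt(U/beta) at several
# latitudes. Note beta -> 0 towards the pole, so L_beta diverges there.
import math

a = 7.1492e7                      # Jupiter equatorial radius (m)
Omega = 1.7585e-4                 # rotation rate (rad/s)
U = 80.0                          # root mean squared speed (m/s)

for lat_deg in (45, 75, 85, 89):
    theta = math.radians(90 - lat_deg)        # colatitude
    beta = 2 * Omega * math.sin(theta) / a
    L_beta = math.sqrt(U / beta)
    print(f"lat {lat_deg:2d} deg: L_beta = {L_beta/1e3:7.0f} km")
```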
However, L_β is infinite at the poles, as β linearly approaches zero there. We therefore introduce a different scaling 19, one based on the (negative) gradient of β at the pole, γ = −dβ/dy = 2Ω/a². The associated length scale is L_γ = (U/γ)^1/3, and for U = 80 m s−1 it is about 10,500 km (we ignore the oblateness and use Jupiter's equatorial radius throughout this paper). L_γ represents the radius of the circle around the pole inside which the effect of the vortices (large-scale turbulence) is greater than the effect of β and the zonal jets. Note that L_γ is the distance from a specific point (the pole), whereas L_β is not. The value of L_γ is close to the 8,700 km size of the polygons on Jupiter.

The radius of deformation L_d is c/f, where c is the gravity wave speed of the gravest vertical mode, the one spanning Jupiter's weather layer, which extends from the base of the stratosphere down to the base of the water cloud. The value of c depends on the degree of stratification of the weather layer 34 and is assumed to be independent of latitude. Different assumptions about the vertical stratification put the average L_d at the poles in the range 350-1,300 km (refs. [34][35][36]). This brackets the 1,000 km radius of the cyclones. Originally L_β was defined as the length scale where the flow transitions from turbulence to zonal jets as the scale of the flow increases 37. Observations of Jupiter suggest that a similar transition occurs as the latitude of the flow decreases. The critical latitude, below which zonal jets dominate, was shown to be a decreasing function of L_d (refs. 38,39). Here we argue that a critical latitude, π/2 − L_γ/a, exists even for arbitrarily small values of L_d.

We discuss the observations using parameters of the shallow water equations, where a single layer of fluid of thickness h floats hydrostatically on a much thicker fluid, which we assume is at rest 40,41. The two dependent variables are the horizontal velocity v and the gravitational potential ϕ = g_r h, where g_r is the reduced gravity (the gravitational acceleration times the fractional density difference Δρ/ρ between the two layers 41). Here ϕ is the column density in the 2D continuity equation and √ϕ is the gravity wave speed c; ϕ controls vortex stretching and enters in the expression for potential vorticity (PV), a dynamical scalar that is conserved in fluid elements. For the shallow water equations, PV is (ζ + f)/ϕ, where ζ = (∇ × v) · k is the relative vorticity (the curl of the horizontal velocity). Three-dimensional effects are not completely ignored: they enter through L_d, which is proportional to the square roots of h and of the fractional density difference Δρ/ρ. Even these quantities are uncertain, so given the paucity of information about vertical structure, it is best to discuss our observations with the shallow water equations.

Vorticity and divergence. Figure 1 shows the octagon of cyclones surrounding the north pole 2. Features in the clouds are visible at scales down to ~100 km, which is much smaller than the 1,000 km radius where the azimuthal velocity is greatest. Figure 2 shows vorticity and divergence maps for two independent determinations of the wind field. The measurement required clouds in a pair of JIRAM images separated in time to be tracked to obtain the velocity; closed line integrals were then taken to get vorticity and divergence. The magnitude of the vorticity is larger than that of the divergence.
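For concreteness, the polar scalings just defined can be evaluated with round numbers. The sketch below uses U = 80 m s−1 from the text; the gravity wave speed c = 250 m s−1 is an assumed value consistent with the quoted 350-1,300 km bracket on L_d.

```python
# Sketch: evaluate L_gamma = (U/gamma)^(1/3) and L_d = c/f near the pole,
# with c an assumed gravity wave speed (the text only brackets L_d).
import math

a = 7.1492e7          # Jupiter equatorial radius (m)
Omega = 1.7585e-4     # rotation rate (rad/s)
U = 80.0              # characteristic speed (m/s)

gamma = 2 * Omega / a**2                 # polar gradient of beta
L_gamma = (U / gamma) ** (1 / 3)         # ~10,500 km, the polygon scale

theta = math.radians(90 - 85)            # colatitude of 85 deg latitude
f = 2 * Omega * math.cos(theta)          # planetary vorticity near the pole
c = 250.0                                # assumed gravity wave speed (m/s)
L_d = c / f                              # deformation radius for that c

print(f"L_gamma = {L_gamma/1e3:,.0f} km, L_d = {L_d/1e3:,.0f} km")
```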
The persistence and movement of vorticity features, even those ~180 km in size, show that the small-scale features are not measurement noise. The motion is visible when one toggles between the left and right vorticity maps in Fig. 2.

Planetary signal and measurement noise. In Fig. 3, the top two panels show covariances between n0103 and n0204, which are the two independent determinations of the wind fields in Fig. 2. Vorticity is top left, and divergence is top right. If the two measurements gave exactly the same values at each point, the arrays of points would lie on a straight line running from the lower left corner to the upper right corner. Deviations from this line, that is, a correlation coefficient less than 1.0, come partly from measurement noise and partly from cloud motions on the planet. The vorticity measurements have a correlation coefficient η of 0.729. As the measurement noise is uncorrelated with cloud motions on the planet, the noise and planetary variances add up (equation (1)), and given that the noise from n0103 is uncorrelated with that from n0204, the covariations do not contain the variance of measurement noise (equation (2)):

⟨x²⟩ = ⟨y²⟩ = σ_p² + σ_n²,    (1)

⟨xy⟩ = σ_p²,    (2)

where x and y denote the two independent measurements of the same quantity at each point.

Fig. 1 | Infrared image of the northern hemisphere as seen by JIRAM. The circle at 80° latitude is about 12,000 km from the pole. The lines of constant longitude are 15° apart. The radiances have been corrected for nadir viewing, with bright yellow signifying greater radiance and dark red signifying lesser radiance. The average brightness temperature is in the range 215-220 K. Fig. 2 covers the central cyclone and the two cyclones at 135° and 315° east longitude, respectively. The two dark features at 120-150° E, 86° N, whose filaments spiral towards their centres in a clockwise direction, are anticyclones. Figure reproduced with permission from ref. 2, Springer Nature Limited.

The quantities involving x and y are measurements and are known, so the measured correlation coefficient η allows one to separately determine the variance σ_p² of vorticity on the planet and the variance σ_n² of measurement noise. The same reasoning applies to the divergence (top right panel in Fig. 3). Table 1 shows the results for different values of the dimensions of the box used to measure vorticity and divergence. For the upper right panel of Fig. 3 and the 180 km × 180 km box, η = 0.299018, meaning that there is some divergence on the planet, but its variance is less than the measurement noise. Note that the noise values for vorticity and divergence are about the same for the same box size. The difference is that divergence decreases by a factor of 4.34, whereas vorticity decreases only by a factor of 1.61, from the 90 km × 90 km box to the 360 km × 360 km box. This difference is an indication that the divergence is a small-scale phenomenon that averages out for the larger box sizes. For the lower two panels of Fig. 3, divergence is plotted on the y axis against vorticity on the x axis, and η is essentially zero. Supplementary Table 2 shows that the noise estimates in Table 1 are best fitted by assuming that the measurement uncertainty for each component of velocity is 7.8 m s−1.

Potential vorticity and shielding. Figure 4 shows the azimuthal mean v of the azimuthal velocity around the central cyclone as a function of radius r out to 6,000 km. Also shown are the mean relative vorticity ζ, the mean gravitational potential ϕ and the mean potential vorticity PV. In each panel there are three smooth curves.
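The noise-separation logic of equations (1)-(2) is simple to demonstrate numerically: with two independent determinations of the same field, the covariance isolates the planetary variance, while each total variance also contains the noise. The data below are synthetic stand-ins for the JIRAM maps.

```python
# Sketch of the variance separation in equations (1)-(2), on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
planet = rng.normal(0.0, 2.0e-4, n)         # "true" vorticity signal
x = planet + rng.normal(0.0, 1.2e-4, n)     # determination n0103 + noise
y = planet + rng.normal(0.0, 1.2e-4, n)     # determination n0204 + noise

var_total = 0.5 * (x.var() + y.var())       # sigma_p^2 + sigma_n^2, eq. (1)
var_planet = np.cov(x, y)[0, 1]             # sigma_p^2, eq. (2)
var_noise = var_total - var_planet          # sigma_n^2
eta = var_planet / var_total                # correlation coefficient

print(f"sigma_p = {np.sqrt(var_planet):.2e}, "
      f"sigma_n = {np.sqrt(var_noise):.2e}, eta = {eta:.3f}")
```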
The middle one, coloured orange, was derived by a linear least squares fit to the velocity data. The basis functions are given in the Methods. The profile of v(r) agrees with earlier estimates 6, including the profile at r > 2,000 km falling off faster than 1/r, implying negative vorticity in that region. Note that the fitted curve fits the data even where the velocity becomes negative (clockwise) at 4,000-6,000 km radial distance. The tabulated data are in Supplementary Data 3.

Bottom left: the ϕ values in Fig. 4 were computed from an integral and are therefore uncertain by an additive constant. However, ϕ = g_r h is proportional to the thickness, and the thickness cannot be negative. The local maximum of ϕ is at r = 4,075 km, and Fig. 4 was computed with ϕ = 124 × 10³ m² s⁻² there. Having ϕ > 0 at the origin requires ϕ > 69 × 10³ m² s⁻² at r = 4,075 km. This gave L_d > √ϕ/f = 749 km at r = 4,075 km, which is in the middle of the estimates obtained from lower latitudes when the variation in f with latitude was taken into account [34][35][36].

Discussion

In a shallow water model that starts with cyclones of the observed size and velocity arrayed in polygonal patterns around the Jovian pole, stability requires an anticyclonic ring (shielding) around each cyclone 21. With a peak azimuthal velocity of 80 m s−1 and a radius at the peak of 1,000 km, a single parameter (b) controls the shape of the velocity profile and the depth of the shielding. The other free parameter is L_d, but it has only a small effect on the results. The polygons are mainly stable in the range 1 < b < 2. Below this range, the shielding is too weak and the vortices merge. Above this range, the negative vorticity is too strong and the anticyclonic rings become two satellites orbiting around the cyclone 180° apart. At b > 3, these tripoles are unstable and the polygons fly apart chaotically. The blue curve in Fig. 4 has b = 1.35, which is safely in the stable zone.

The 200 km scale of vorticity and divergence is at least consistent with convection. Severe thunderstorms on Earth have diameters of 30-40 km, which is about five times the pressure scale height 43,44. Granules, which are the convective elements in the solar photosphere, are about 1,000 km in diameter 45, which is also about five times the scale height of the partially ionized hydrogen gas. The scale height on Jupiter is about 40 km at the water cloud base, so if the ratio of horizontal diameter to scale height were 5, as it is on Earth and the Sun, then convection elements on Jupiter would have diameters of 200 km. This is about the smallest scale we can measure.

At 80 m s−1, which is about the maximum speed of the clouds, a feature moves 38 km. This is considerably less than the 180 km box size used to measure vorticity and divergence. Thus, to a good approximation, the two determinations show the same cloud features at the same time. The small motion is still visible, however, when it takes place on a large scale and includes many small-scale, high-contrast features, as with the slight rotation of the features visible in Fig. 2 and Supplementary Figs 1-4.

Fig. 3 caption (bottom panels): points are plotted with divergence on the vertical axis and vorticity on the horizontal axis for n0103 (left) and n0204 (right).

The bottom row of Fig. 3 shows no correlation between divergence and vorticity, although both positive and negative values are present.
On the one hand, if a parcel conserves PV around a cycle of updrafts and downdrafts, the material derivative of ζ + f is proportional to the material derivative of ϕ throughout the cycle. But as ϕ is proportional to h, the material derivative of ϕ is also proportional to minus the divergence ∇ · v. Equivalently, the material derivative of negative (that is, anticyclonic) ζ is proportional to the divergence. As a result, negative vorticity lags the divergence by a quarter cycle and there is no measurable correlation. On the other hand, if a parcel has its vorticity reset to zero at the start of each updraft, then negative vorticity develops on rising trajectories, because they diverge at the top. In this case there is a correlation between divergence and negative vorticity, and that would be a sign of convection. Our feature-tracking approach yields vorticity and divergence at spatial scales of 200 km and larger. An entirely different approach 20 is to use infrared brightness itself as a dynamical variable, which extends the spatial scale down to wavelengths of ~15 km. The study 20 assumes that negative infrared brightness anomalies, which are related to cloud height, are upward displacements of pressure surfaces and therefore a measure of anticyclonic vorticity. This assumption is verified at scales from 250 to 1,600 km, which are smaller than the large cyclones but large enough that feature tracking is possible. It further assumes that the flow is quasigeostrophic, so the divergence is given by −1/f times the material rate of change of vorticity. This is the surface quasigeostrophic model 46,47 , which is used in meteorology and oceanography 48,49 . Applied to Jupiter 20 , one observes the signature of convection-a negative correlation between divergence and vorticity at 100 km scales and an upscale energy transfer from scales less than ~200 km to scales greater than ~200 km. These scales are just below the reach of our feature-tracking method. However, the quasigeostrophic approximation is based on Ro = ζ/f ≪ 1, where Ro is the Rossby number. The inequality is not strictly valid for Jupiter's polar cyclones (Fig. 4), and it is even less valid at smaller scales as Ro is predicted to increase with horizontal wavenumber 46 . At mid-latitudes a downscale energy transfer is observed 50 , from wavelength scales of 2,000 km to the shortest scale measured, which is about 500 km. That study 50 used Cassini visible light imaging; ours used Juno infrared imaging, and the surface quasigeostrophic model uses infrared brightness. More work is needed to reconcile these three datasets. A parallel study 51 of Jupiter's south polar vortices, focusing on vorticity and stability, represents a step in the right direction. Methods A series of 12 images was started every 8 min to cover the same region at the north pole of Jupiter. Ideally the images in a series would fit together like tiles in a mosaic with no overlap and no spaces in between. The spacecraft was approaching Jupiter, and the image resolution changed from 22 km per pixel in the middle of the first series to 14 km per pixel in the middle of the fourth series, 24 min later. The two maps on the left of Fig. 2 were made by measuring cloud displacements between the first and third series, which are 16 min apart, and the two maps on the right were made from the second and fourth series, which are also 16 min apart. Therefore, the left and right maps are separated in time by only 8 min, but use entirely different images. 
Supplementary Table 1 contains the archival filenames and our working names for the 48 images that were used in the analysis. The four series are named n01 to n04, each of which recorded roughly the same place on the planet 8 min after the one before. The first step in the processing was to determine the precise location on the planet of each resolution element in each image. This was done with NAIF/SPICE data from the spacecraft and the precise geometric calibration of the JIRAM instrument. The second step was to map the brightness patterns onto a gridded reference plane tangent to the planet at the pole. We used 15 km per pixel for this mapping. These data are provided in Supplementary Data 1.

Fig. 4 caption (excerpt): Under this assumption, the scaled PV at the pole is 4.18. The variations in PV are mostly due to variations in ζ and ϕ and less due to variations in f, which is the red line sloping gently down to the right. The other two curves (blue and green) were chosen to bracket the data and were used as initial conditions in a model study 21 that is described in the Discussion. The data points are given in Supplementary Data 3.

The third step was to measure cloud displacements in the reference plane using the Tracker3 software from JPL. The software automatically searches for the best correlation of brightness patterns between two images. This was done between images in series n01 and n03 and between images in series n02 and n04. Velocity was the displacement in kilometres divided by the time interval, which was always close to 16 min but depended on which image in each series was used. Correlation was done within a square box in the reference plane. After experimenting, we settled on a 15 × 15 pixel correlation box for the Tracker3 software. Thus, with 15 km per pixel in the reference plane, we were using squares 225 km on a side to define a feature. The resolution of the wind measurement was therefore ±112.5 km. We oversampled it by a factor of 2.5 to obtain wind vectors on a 45 km × 45 km grid. That dataset is Supplementary Data 2. We determined vorticity and divergence at every grid point by integrating around boxes of various sizes, using Stokes's theorem and Gauss's law, respectively. Table 1 gives results for boxes 2, 4, 6 and 8 pixels on a side, corresponding to 90, 180, 270 and 360 km on a side, respectively.

The error in the velocity estimate σ_v depended crucially on the granularity of the scene at the scale of the resolution element δ, which on average was about 18 km. Except for areas where there were no features at all, for which there were no estimates, the worst case was a single cloud feature of size ≤ δ, for which the variance is σ_v² = 2δ²/Δt², where Δt is the 16 min time step and the factor of 2 arises because we are subtracting positions in two images. Then σ_v = √2 δ/Δt, about 26.5 m s−1. However, if the velocity measurement is the average of N statistically independent estimates of velocity, the variance is 2δ²/(NΔt²). The best case is when N is the number of resolution elements in the correlation box of side L, such that N = (L/δ)². Then σ_v = √2 δ/(Δt√N) = √2 δ²/(LΔt), which is 2.1 m s−1 for L = 225 km. Thus σ_v is highly uncertain, but in Supplementary Table 2 we show that σ_v = 7.81 m s−1 gives a good fit to the noise column in Table 1.
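A sketch (not the authors' code) of the box-integral estimates described above: on a regular grid of wind vectors, Stokes's theorem gives vorticity from the circulation around a square box, and Gauss's law gives divergence from the outward flux through its sides. The winds below are synthetic, and the row index is assumed to increase northward.

```python
# Sketch of box-integral vorticity and divergence on a gridded wind field.
import numpy as np

dx = 45e3                                  # grid spacing (m)
rng = np.random.default_rng(2)
u = rng.normal(0, 10, (32, 32))            # eastward wind (m/s), synthetic
v = rng.normal(0, 10, (32, 32))            # northward wind (m/s), synthetic

def box_vorticity(u, v, j, i, half):
    """Counterclockwise circulation around a square box, over its area."""
    j0, j1, i0, i1 = j - half, j + half, i - half, i + half
    circ = (u[j0, i0:i1].sum()             # eastward along the south side
            + v[j0:j1, i1].sum()           # northward along the east side
            - u[j1, i0:i1].sum()           # westward along the north side
            - v[j0:j1, i0].sum()) * dx     # southward along the west side
    return circ / (2 * half * dx) ** 2

def box_divergence(u, v, j, i, half):
    """Outward flux of (u, v) through the box sides, over its area."""
    j0, j1, i0, i1 = j - half, j + half, i - half, i + half
    flux = (u[j0:j1, i1].sum() - u[j0:j1, i0].sum()
            + v[j1, i0:i1].sum() - v[j0, i0:i1].sum()) * dx
    return flux / (2 * half * dx) ** 2

# a 4-pixel box (half=2) corresponds to the 180 km x 180 km case in Table 1
print(box_vorticity(u, v, 16, 16, 2), box_divergence(u, v, 16, 16, 2))
```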
A quantitative measure of granularity is provided by the image entropy H (ref. 52). We define it for each 15 × 15 correlation box from the histogram of brightness values in the box:

H = −∑_k p_k log₂(p_k).

The input data were 32-bit numbers, but we only had 225 pixels. We divided the range from the brightest to the darkest pixel into 256 grey levels, and we counted the number of times that each grey level appeared in the image. That number divided by 225 is p_k, the frequency of occurrence of grey level k, normalized so that ∑ p_k = 1. The sum is over the 256 grey levels. If the brightness corresponding to a particular grey level k1 did not occur in the image, then p_k1 = 0. At least 31 of the p_k values must be zero. If all 225 pixels have brightnesses corresponding to grey level k2, then p_k2 = 1 and all the other p_k = 0, resulting in H = 0. If the brightness levels of all 225 pixels are different, then H = log₂(225) = 7.81. This is the maximum entropy for this problem. Low entropy is bad for feature tracking, and we experimented to find a value that eliminated the most suspicious data, such as the large pixel-to-pixel variations in the upper left and lower right corners of the divergence maps. We manually verified that the feature-tracking software was failing in those regions. Supplementary Fig. 5 is a histogram of entropy values, and Supplementary Figs 6 and 7 compare the vorticity and divergence maps with the low-entropy data present and with them masked out.

The data in Fig. 4 consisted of ~26,000 measured velocity vectors on the 45 km × 45 km grid at r < 6,010 km. Taking the azimuthal component v(r) of each vector, and knowing its r, we performed two separate least squares fits, one for the coefficients a_n and the other for the b_n, to get analytic expressions for v(r) and ∂ϕ(r)/∂r, respectively (equation (4)). This choice of functions had no physical significance. The functions were chosen simply to fit the data and provide analytic expressions for integration and differentiation. For a good fit, the parameter r_0 must be close to the radius of the velocity maximum. From visual inspections, it was chosen to be 1,060 km for ∂ϕ/∂r and 1,200 km for v. We analytically integrated the expression for cyclostrophic balance in equation (2) to get ϕ(r) in Fig. 4, and we analytically differentiated the expression (1/r)∂(rv)/∂r = ζ to obtain the vorticity. The measured azimuthal velocities are available in Supplementary Data 3.

Data availability

JIRAM data are available online at the Planetary Data System (PDS) at https://pds-atmospheres.nmsu.edu/data_and_services/atmospheres_data/JUNO/jiram. The filenames of the images are listed in Supplementary Table 1. Calibrated, geometrically controlled radiance data mapped onto an orthographic projection centred on the north pole, and velocity vectors derived from the radiance data, are available in Supplementary Data 1-2.
6,631.8
2022-09-22T00:00:00.000
[ "Physics", "Environmental Science" ]
Searching for motifs in the behaviour of larval Drosophila melanogaster and Caenorhabditis elegans reveals continuity between behavioural states We present a novel method for the unsupervised discovery of behavioural motifs in larval Drosophila melanogaster and Caenorhabditis elegans. A motif is defined as a particular sequence of postures that recurs frequently. The animal's changing posture is represented by an eigenshape time series, and we look for motifs in this time series. To find motifs, the eigenshape time series is segmented, and the segments are clustered using spline regression. Unlike previous approaches, our method can classify sequences of unequal duration as the same motif. The behavioural motifs are used as the basis of a probabilistic behavioural annotator, the eigenshape annotator (ESA). Probabilistic annotation avoids rigid threshold values and allows classification uncertainty to be quantified. We apply eigenshape annotation to both larval Drosophila and C. elegans and produce a good match to hand annotation of behavioural states. However, we find many behavioural events cannot be unambiguously classified. By comparing the results with ESA of an artificial agent's behaviour, we argue that the ambiguity is due to greater continuity between behavioural states than is generally assumed for these organisms. Introduction Automated analysis of behaviour is of increasing importance to biology and neuroscience. Behavioural control is the ultimate function of neural processing [1]. The recent expansion of tools for manipulating neural activity, such as optogenetics, has made it crucial to be able to screen rapidly and automatically for the behavioural consequences of these manipulations. Standardization of quantitative behavioural assays and reproducibility of analyses are thus key to progress in understanding neural circuits. Traditional manual annotation of behavioural data is not feasible for large datasets. As a consequence, automated high-throughput behavioural annotators have been developed. An example is the Janelia Automatic Animal Behaviour Annotator (JAABA) [2]. JAABA first requires hand annotation of a subset of the data, and then the software uses machine learning algorithms to find the same patterns in the unannotated data. Other researchers have developed classifiers that extract specific parameters from behavioural data and then register a state if a certain parameter (or parameter set) exceeds a user-defined threshold [3][4][5][6]. Note that for these classifiers both the set of possible behaviours and the description of those behaviours are encoded by the user. In contrast, our goal is to discover patterns in behaviour without reference to any user-defined thresholds or examples. Posture is the main observable component of behaviour, and the behavioural annotators mentioned above mainly use postural information as input to classify behavioural states. In this context, Stephens et al. introduced eigenworms [7], using principal component analysis to produce a low-dimensional representation of C. elegans midline shapes. For the unrestricted free behaviour of C. elegans, four eigenworms account for 92% of the animal's posture variance. This means that four numbers can describe any actual worm posture with high precision. Mathematically, postures are described by a superposition of eigenworms, i.e. θ(s, t) = Σᵢ aᵢ(t) uᵢ(s), (1.1) where aᵢ(t) is the coefficient associated with the ith eigenworm uᵢ at time t. Figure 1 shows the eigenworms and an example of posture reconstruction.
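Equation (1.1) is just a weighted sum of fixed shapes, which a short sketch can make concrete. The eigenworm curves below are synthetic stand-ins (real ones come from PCA of tracking data), so only the structure of the computation, not the shapes themselves, reflects the paper.

```python
import numpy as np

# Synthetic stand-ins for the first four eigenworms: each is a unit-norm
# profile of tangent angles sampled at 48 points along the midline.
s = np.linspace(0.0, 2.0 * np.pi, 48)
eigenworms = np.stack([np.sin((k + 1) * s / 2.0) for k in range(4)])
eigenworms /= np.linalg.norm(eigenworms, axis=1, keepdims=True)

def reconstruct_posture(a: np.ndarray) -> np.ndarray:
    """Posture (angle profile) as a superposition sum_i a_i * u_i (eq. 1.1)."""
    return a @ eigenworms

a_t = np.array([1.2, -0.4, 0.3, 0.05])   # coefficients a_i(t) at one frame
theta = reconstruct_posture(a_t)          # angles describing the midline
print(theta.shape)                        # (48,)
```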
Eigenshapes provide a compact representation of posture and hence clearly have potential use in behavioural annotation. Specifically, behaviour (change in posture over time) is represented by the time evolution of eigenshape coefficients, i.e. the time series of the aᵢ(t). This time series will be referred to as the eigenshape coefficient time series (ECTS) and forms the basis of our method. The technical aim of this paper is the unsupervised discovery of frequently repeated ECTS subsequences. In the data mining literature, frequently repeated subsequences are also known as motifs [8]. ECTS motifs correspond to frequently repeated sequences of posture that can be viewed as behavioural states or actions [9,10]. Previous attempts to extract ECTS motifs using a simple 'sliding window' motif discovery approach [11] suffer from two major problems. First, the window for any pass is of fixed length, hence this method considers only exactly equal duration sequences as potential matches. Second, the sliding window method defines a motif as a pair of closest neighbour sequences. However, motifs are understood intuitively not as a single pair of subsequences, but as a frequently repeated subsequence. Our motif finding methodology was designed to overcome these two problems. First, we derive the equivalent of eigenworms for larval Drosophila, termed eigenmaggots. The ECTS of both larval Drosophila and C. elegans are then analysed using our novel motif finding method. The ECTS motifs are used as the basis of a probabilistic behavioural annotator, the eigenshape annotator (ESA). We show that the resulting annotation corresponds well to hand annotation, although a number of behaviours cannot be unambiguously classified. The ESA analysis is also applied to the behaviour of a state-based simulated maggot to show that the ambiguity is not inherent in the method, but reflects a greater continuity between behavioural states in these organisms than is generally assumed. In summary, our new method both confirms the results of previous behavioural annotation and reveals some of its limitations. Overview Our aim is to go from video of a behaving animal to annotation of its behavioural states, where those states are determined using bottom-up discovery of motifs in the sequence of postures. We start by recording freely foraging Drosophila larvae, extract their midline as a set of angles, and apply principal component analysis to obtain a low-dimensional description of postures, the ECTS. Equivalent information for the worm is available from the C. elegans behavioural database (CBD). Discovering motifs in the multidimensional ECTS is a non-trivial problem, and there are no existing adequate tools. We developed a two-step process to first extract subsequences and then fit a statistical model to cluster the subsequences. Figure 1. Constructing eigenworms. In each video frame, thresholding is used to separate the animal from the background, then the resulting binary images are skeletonized. This skeleton, or midline, is used as a proxy for the animal's posture. Panel (a) shows a frame from the CBD with the worm's contour and midline highlighted, and panel (b) shows the corresponding midline. The skeleton has been rotated to remove the worm's overall rotation relative to the plate. Panel (c) zooms in on the midline, showing how a set of uᵢ angles provides a piecewise linear approximation to the midline curvature. This angular data forms a vector for each frame, or a matrix for a movie. The matrix's principal components are the eigenworms. Panel (d) shows an example of a posture reconstruction.
The blue shapes in the middle column are the eigenworms, which can be added together with different weights to reconstruct any actual worm posture. Briefly (details are given below), we use changes in the dynamics of the ECTS to divide the sequence into variable length subsequences, with the intent that each subsequence contains a single 'action'. The subsequences are aligned and then clustered using a spline regression model [12,13], a method for analysing curves analogous to Gaussian mixture models. The resulting clusters constitute motifs by which the animal's behaviour can be annotated. The results are compared with alternative annotation systems and with hand annotation provided by a human expert, which is treated as ground truth. Data collection Canton-S flies were maintained on conventional cornmeal-agar molasses medium at 22°C and kept in a 12 h dark-light cycle. For the behavioural experiments, larvae in their 3rd instar stage were placed on 3% agarose and were allowed to freely forage. Across 33 individuals, 14 h of video was recorded at 30 fps. The videos were segmented (see below) into a total of 11,613 actions. The tracking and data acquisition hardware used for this publication are described in detail in [14]. Briefly, the larva moving over a fixed stage was imaged using a camera (Basler A622f) from the top. The camera was mounted on a moving stage to follow the animal. The software for image capture and stage control was written in C using the OpenCV libraries. To analyse worm behaviour, we used data from the CBD [6]. The database consists of videos of worms (recorded at 30 fps) browsing in bacteria. For every video, there is a corresponding feature file, which contains many precalculated statistics of worm morphology. The feature files also contain the eigenworm coefficient time series. The worm analysis in this paper uses this precalculated ECTS. In total, 22,066 actions were analysed from 100 experiments with N2 worms, corresponding to 25 h of video. Constructing eigenmaggots In each video frame, the larva was separated from the background by a thresholding algorithm. The resulting binary images were skeletonized using the built-in MATLAB function [15]. Midlines were rotated such that the endpoints, corresponding to the head and tail of the animal, lie along the x-axis. This operation removes the overall rotation of the animal's body relative to the plate. The midlines were normalized such that they consist of 71 points placed equidistant from each other. The length of the larva can change, but this is neglected in this analysis, i.e. we treat every midline as if it were the same length. The eigenshapes in figures 1 and 2 have been reconstructed to reflect the average physical size of the midlines. The angles among consecutive points defining the midline were restricted to the interval −π < uᵢ < π. As a result of these operations, each frame is associated with a 70-dimensional vector, where the ith component is uᵢ (figure 1c). These vectors are concatenated to form an n × 70 data matrix, where n is the number of frames. Principal component analysis is applied to this data matrix to construct the eigenshapes and the associated ECTS. Eigenshape coefficient time series For both the larval and worm analysis, the coefficients of the three most significant eigenshapes were included in the ECTS, that is ECTS(t) = [a₁(t), a₂(t), a₃(t)], see equation (1.1).
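The construction just described (angle matrix → PCA → coefficient time series) can be sketched as follows. This is a generic PCA-via-SVD implementation under our own naming, not the authors' code, and the random input is a placeholder for real midline angles.

```python
import numpy as np

def eigenshapes(angle_matrix: np.ndarray, n_components: int = 3):
    """PCA of an (n_frames x 70) matrix of midline angles.

    Returns the leading eigenshapes (principal components), the ECTS
    (per-frame coefficients a_i(t)) and the explained-variance fractions.
    """
    centred = angle_matrix - angle_matrix.mean(axis=0)
    # SVD of the centred data gives the principal axes directly.
    _, sing, vt = np.linalg.svd(centred, full_matrices=False)
    var = sing**2 / (sing**2).sum()
    shapes = vt[:n_components]        # eigenmaggots/eigenworms
    ects = centred @ shapes.T         # coefficient time series a_i(t)
    return shapes, ects, var[:n_components]

frames = np.random.default_rng(1).normal(size=(1000, 70))  # placeholder angles
shapes, ects, var = eigenshapes(frames)
print(ects.shape)   # (1000, 3); for real postures var.sum() is ~0.9
```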
After principal component analysis, inspection of the eigenvalues reveals that for both organisms three coefficients account for approximately 90% of the posture variance [16], and thus provide an accurate description of posture. At the same time, a three-dimensional ECTS is small enough to avoid 'the curse of dimensionality' that could lead to difficulties during the clustering step [17]. Dropped frames Both the larval and the worm ECTS contain dropped frames. If a gap was short (less than 0.5 s), then the ECTS was linearly interpolated. After the interpolation, 1.1% of the Drosophila and 4.2% of the C. elegans frames were still missing. For both organisms, in a significant portion of the dropped frames the animal was curled up in a 'doughnut shape' from which it is difficult to extract a biologically meaningful skeleton. For C. elegans, more frames were dropped because the worms were browsing in food. The layer of bacteria can obscure the worm in the image, making separation of the body of the worm from the background more challenging. Note that the inability to analyse curled-up postures introduces a bias to the pipeline, as no posture with self-intersection is included. Segmentation The intuition behind the segmentation algorithm is that boundaries between windows should be located where the dynamics of the ECTS change. The ECTS was smoothed using a weighted running average filter with a window size of four frames and weights inversely proportional to the distance from the window's centre. Segmentation operates on a 'body score' time series that is created by calculating a weighted sum of the separate dimensions of the ECTS, where the weights are set by the eigenvalues associated with the eigenshapes. The segmentation algorithm scans the body score to find local minima and maxima. An action is defined as a local maximum in body score bounded by minima. The minima define the start and end of the segmented subsequence. Figure 3 shows the result of segmentation for Drosophila and C. elegans with the corresponding body score time series. The maxima/minima finding algorithm is controlled by a master parameter. The results are not strongly dependent on the precise parameter setting: adjusting it by ±25% leaves 92% of the annotation unchanged. The behavioural videos of C. elegans were recorded while the worms were browsing in food. In this environment, worms often show low activity. Our segmentation was designed to identify periods where the body score rapidly changes, hence the identification of low activity periods required an extra step. Low activity periods were identified as intervals where the time derivative of the body score remained under half of its average value for more than 0.5 s. These periods were added to the collection of actions prior to proceeding to the clustering step. If the two parameters (less than 50% of the average for more than 0.5 s) are adjusted by ±25%, then 97% of the actions' classifications are not altered. Thus, fine tuning of the parameters is not necessary. Curve alignment and clustering Segmentation produces a large set of subsequences, or actions, each of which is a continuous ECTS curve. Hence, splines, locally smooth piecewise polynomials, are a natural choice to parametrize actions. Spline regression [12,13] was used to assign the actions to clusters. This method is analogous to Gaussian mixture models, but instead of Gaussian distributions, clusters are parametrized by splines.
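The segmentation step can be illustrated with a minimal sketch. The exact body-score weighting and extremum detection are not fully specified above, so the absolute-value weighting, the smoothing kernel and the `order` parameter below are our assumptions, standing in for the authors' master parameter.

```python
import numpy as np
from scipy.signal import argrelextrema

def segment_actions(ects: np.ndarray, weights: np.ndarray, order: int = 5):
    """Split an (n_frames x 3) ECTS into actions at body-score minima.

    The body score is an eigenvalue-weighted sum over the coefficient
    dimensions; each action runs from one local minimum to the next and
    contains a single local maximum.  `order` plays the role of the
    master smoothness parameter.
    """
    score = np.abs(ects) @ weights
    # crude centre-weighted running-average smoothing over 4 frames
    kernel = np.array([1.0, 2.0, 2.0, 1.0])
    score = np.convolve(score, kernel / kernel.sum(), mode="same")
    minima = argrelextrema(score, np.less_equal, order=order)[0]
    return [(a, b) for a, b in zip(minima[:-1], minima[1:]) if b - a > 1]

rng = np.random.default_rng(2)
ects = np.cumsum(rng.normal(size=(600, 3)), axis=0) * 0.05
segments = segment_actions(ects, weights=np.array([0.6, 0.25, 0.15]))
print(len(segments), segments[:3])
```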
To improve the consistency of spline fitting, the ECTS subsequences were aligned in the time domain. The frame with the highest body score was used as a reference, and actions were shifted in time such that their points of highest body score coincide; see electronic supplementary material, figure S2 for illustration. Note that if ECTS(t) = [0, 0, 0], then the posture is a flat line (for both organisms). The higher the coefficients are, generally the more curved the postures are (although the bends caused by the coefficients can be in opposite directions and cancel each other). Therefore, the maximum of the body score corresponds to the frame with the most bent posture, and as such this frame is a rational choice to define a reference point in time by which subsequences of different lengths can be aligned. Splines had three internal knot points and each polynomial had an order of 3. An expectation-maximization (EM) algorithm [13] was used to learn model parameters. EM was initiated 500 times with random boundary conditions, and the solution with the highest likelihood was kept. The Bayesian information criterion (BIC) [18-20] was used to identify the optimal number of clusters. BIC is defined as BIC = −2 ln L_model + k ln n, where L_model is the likelihood of the fitted model, k is the number of free parameters and n is the number of observations. The first term reflects the goodness of fit of the model, and the second is a penalty term for the number of free parameters. Spline regression clustering produces a membership probability that a given action belongs to a cluster. Therefore, this method avoids rigid cluster assignments and also allows classification uncertainty to be quantified. To measure the classification uncertainty, Shannon entropy [21] was used, defined as H = −Σᵢ pᵢ log₂ pᵢ, where pᵢ is the probability that a given action belongs to cluster i. Note that the most uncertain situation is when the probability is equally distributed among the clusters; correspondingly, H has a maximum when all pᵢ = 1/i_max (i_max is the number of clusters). Comparison of behavioural annotations In the following, a 'behavioural event' means an interval of consecutive frames tagged with the same behaviour. A behavioural event marked by an automated annotator (ESA, JAABA or CBD) was counted as a true positive if at least 50% of it was also tagged by the ground truth annotation with the same behaviour. Otherwise, the event was either counted as a false positive (the automated annotator marked a behavioural event that had less than 50% overlap with an identically annotated behavioural event in the ground truth annotation) or a false negative (ground truth marked a behavioural event that had less than 50% overlap with an identically annotated behavioural event in the automated annotation). Furthermore, we had to consider the problem that different annotations used different behavioural state spaces. The behaviours were always matched to the closest behaviour in the ground truth annotation. Specifically, for larval Drosophila, ESA's turning manoeuvre was treated as a match to both stop cast and turn in the ground truth annotation. That is, if ground truth contained either a turn or a stop cast behaviour and at least 50% of the frames were tagged as a turning manoeuvre by ESA, then it was counted as a true positive. Run casts are the same behaviour across ground truth, JAABA and ESA. For C. elegans, the ground truth hand annotation's dwelling was treated as a match to CBD's pause and ESA's passive state.
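Both formulas above reduce to one-liners; the sketch below evaluates them under illustrative numbers (the log-likelihoods and parameter counts are invented for the example, not taken from the paper).

```python
import numpy as np

def bic(log_likelihood: float, k_params: int, n_obs: int) -> float:
    """BIC = -2 ln L + k ln n: goodness of fit plus a complexity penalty."""
    return -2.0 * log_likelihood + k_params * np.log(n_obs)

def classification_entropy(p: np.ndarray) -> float:
    """Shannon entropy H = -sum p_i log2 p_i of cluster-membership
    probabilities: maximal (log2 of the number of clusters) when the
    probability is spread evenly, and 0 for a certain assignment."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# e.g. compare 2- versus 3-cluster fits over 11,613 actions
print(bic(-8500.0, k_params=40, n_obs=11613))
print(bic(-8480.0, k_params=60, n_obs=11613))
print(classification_entropy(np.array([0.5, 0.5])))    # = H_max = 1 bit
print(classification_entropy(np.array([0.97, 0.03])))  # low uncertainty
```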
The CBD's Y and V turns were both treated as a match to the ground truth's turn behaviour. Parts of the time series were excluded from the analysis when the video frames could not be segmented and hence midline information was not accessible. Note that JAABA, CBD and ground truth annotation are available for these periods, as they do not exclusively rely on contour information. We modified the output of JAABA to avoid the problem of 'flickering annotation'. Flickering annotation occurs when single frames within a behavioural event are not classified as part of the event, e.g. the sequence 0011011100 (where 1 means that the frame corresponds to a given behaviour, 0 means it does not). JAABA works on a frame-by-frame basis, hence these sequences are present when an event is near the threshold value. To avoid the false positives caused by the small gaps, we have connected behavioural events that are less than three frames apart. Hence, the sequence above would become 0011111100. To summarize annotation accuracy, we report the precision (positive predictive value) and sensitivity (also known as recall and true positive rate) [22] in tables 1 and 2. Sensitivity is the percentage of events recognized by the annotator, and precision is the proportion of events tagged by the annotator that are true positives. Furthermore, these two measures are combined in the F-score, defined as F = 2 · precision · sensitivity/(precision + sensitivity), which is commonly used to quantify the goodness of classification. Visualization, density cross sections and feature histograms To produce figures 4b and 5b and figure S1, the standard MATLAB [15] implementation of metric multidimensional scaling was applied to a matrix of pairwise distances between actions computed with dynamic time warping (DTW), which uses a nonlinear time warping to find the optimal match between a pair of subsequences [24]. Note that the Euclidean distance among the points (corresponding to the actions) on the map correlates with the DTW distance among the subsequences, but the distances on the map are in arbitrary units. To construct each map, a random sample of 5000 actions was used. The algorithm was run 500 times with random initial conditions and the solution with the highest R² was kept. The density cross sections of aggregated ECTS curves were visualized to reveal possible density fluctuations (see §3.4). Sets of stereotypical curves would form high-density regions in the cross sections. Hence, the cross sections can be used to detect stereotypical curves corresponding to stereotypical posture sequences. Density cross sections are measured on aggregated and aligned ECTS curves at specific 'time slices' as shown in figure 6a. To estimate the density of curves, a kernel density estimation method was used [25]. Figure 6 only shows the cross section for one time slice; see electronic supplementary material, figure S3 for additional cross sections. To create the histograms of C. elegans behavioural features, data were directly imported from the CBD feature files. These features are defined in [6]. The hardware and software that were used to obtain the behavioural features for larval Drosophila are described in [4]. Eigenshapes The eigenworm analysis pipeline extracts a vector of angles between consecutive points along the animal's midline, and applies principal component analysis to reduce the dimensionality of this description. The same method was adapted to create the analogous set of shapes for Drosophila larva, the eigenmaggots (figure 2).
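The 50% overlap matching rule lends itself to a compact implementation. The sketch below is our reading of the rule with hypothetical events; the authors' actual evaluation code is not reproduced here.

```python
def match_events(auto: list, truth: list) -> tuple:
    """Count true/false positives and false negatives for behavioural
    events given as (start, end, label) frame intervals, using the 50%
    overlap rule described above."""
    def overlap_frac(e, f):
        inter = max(0, min(e[1], f[1]) - max(e[0], f[0]))
        return inter / max(1, e[1] - e[0])
    tp = sum(any(e[2] == f[2] and overlap_frac(e, f) >= 0.5 for f in truth)
             for e in auto)
    fp = len(auto) - tp
    fn = sum(not any(f[2] == e[2] and overlap_frac(f, e) >= 0.5 for e in auto)
             for f in truth)
    return tp, fp, fn

tp, fp, fn = match_events([(0, 100, "turn"), (150, 200, "run")],
                          [(10, 90, "turn"), (160, 210, "turn")])
precision = tp / (tp + fp)
sensitivity = tp / (tp + fn)
f_score = 2 * precision * sensitivity / (precision + sensitivity)
print(precision, sensitivity, round(f_score, 2))
```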
We find that eigenmaggots (figure 2b) are as efficient at describing larval postures as the eigenworms (figure 1d) are at describing worm postures. Inspection of the eigenvalues reveals that three eigenmaggots account for over 90% of the postural variance [16] (figure 2a). Thus, eigenmaggots provide an accurate low-dimensional description of larval postures. In contrast to eigenworms, eigenmaggots do not capture forward locomotion [7]. This difference is due to the different modes of locomotion. C. elegans propels itself by moving its body in a sinusoidal wave perpendicular to the direction of motion [26]. Larval Drosophila crawls forward using peristaltic contraction waves [27]. The peristaltic waves can be recognized by the contraction of the abdominal sections, but this contraction does not alter the animal's midline shape from the camera's top view, and therefore is not captured by the eigenmaggot description. It is noted here that we have experimented with supplementing the larval ECTS with the tail speed time series as an extra dimension. The idea is that tail speed captures the state of peristalsis. However, the additional information did not improve the classification when evaluated against the ground truth annotation. Motifs for Drosophila larva For foraging Drosophila larva, the BIC for the spline regression model gave the best fit when assuming the presence of two behavioural motifs. The first motif we call a run cast. A run cast is a low amplitude head cast while the larva is moving approximately straight [28,29]. Successive run casts make up the larva's typical forward locomotion. The second motif corresponds to high amplitude head casts that may or may not be followed by a sharp change of direction. Some previous analyses of larval behaviour distinguish 'stop casts' (or simply 'casts'), where the larva stops locomotion and sweeps its head laterally, from 'turns', which start in a bent body shape and end as the larva resumes locomotion in a new direction [4,9]. This classification scheme is not unique; others have proposed alternatives [30]. We do not find evidence to support the distinction between 'stop casts' and 'turns'; instead, our analysis describes these behaviours as a single motif, the turning manoeuvre. See electronic supplementary material, video S1 and figure 4 for an annotated trajectory and a visualization of the relationship among the motifs. Benchmarking against hand annotation produced an F-score of 0.68, also with many false positive events. See table 1 for the precision, sensitivity and F-score statistics for each behaviour for both JAABA annotation and ESA. Electronic supplementary material, video S3 shows the binary video of the larva, hand annotation, JAABA and ESA annotations next to each other, so that the reader can gain a good understanding of how the different annotations relate to the larva's behaviour. Typically, disagreements happen between ESA and hand annotation when an action has high classification uncertainty. Classification uncertainty is quantified by the Shannon entropy [21], denoted by H. Seventy-three per cent of the ESA actions have a low uncertainty, meaning H < H_max/4, where H_max = log₂ 2, because two states have been found. For these low uncertainty actions, hand annotation and ESA agree on 87%. When classification entropy is high, H > H_max/4, the agreement rate between the two annotations drops to 49%. In short, action labels typically differ where ESA is uncertain.
When hand annotation and ESA are in disagreement, it is often debatable which one is correct. In §3.4, we argue that the difficulty in resolving disagreements is due to an unbroken continuity between the two behavioural motifs. Motifs for Caenorhabditis elegans ESA was developed with the analysis of larval Drosophila in mind, but can also be applied to C. elegans. The worm behavioural data were obtained from the CBD. The database contains movies of worms browsing in bacteria, an environment where worms tend to pause for long periods. These pauses required an extra step in the segmentation process, see Methods for details. In this case, BIC for the spline regression model fit indicated the presence of three behavioural motifs, corresponding to locomotion, turns and passive periods. Segmentation divides locomotion into 'steps', where each step is a π/2 advancement of the locomotion wave. Multiple locomotion steps make up the characteristic undulatory motion of the worm. The turn behaviour as defined by ESA also includes classic V turns, lower amplitude turns and sharp pirouettes [23]. The passive periods are a mixture of pauses, dwelling and quiescence [31]. Figure 5 shows a visualization of the relationship between the motifs and an annotated trajectory, and electronic supplementary material, video S2 provides a dynamic illustration of the annotation. To benchmark ESA, its performance was compared against hand annotation. ESA produced an F-score of 0.82 (precision = 0.74 and sensitivity = 0.95), where the dominant source of error was a large number of false positive turn events. This finding is not surprising given that the turning behaviour as defined by ESA is very permissive. Existing automated behavioural annotation of the CBD resulted in an F-score of 0.88 (precision = 0.86 and sensitivity = 0.90). See table 2 for the precision, sensitivity and F-score statistics for each behaviour for both CBD annotation and ESA. Furthermore, see electronic supplementary material, video S4, which shows the video of the worm, hand annotation, CBD and ESA annotations next to each other. As for larval Drosophila, there is a significantly increased chance of a C. elegans action being labelled differently by ESA and hand annotation if the action has a high classification uncertainty (H > H_max/4, where H_max = log₂ 3, as three behavioural states have been detected) according to ESA. The probability that hand annotation labels these uncertain actions identically decreases to 39% from the population average of 77%. Do the larva and the worm exhibit discrete behaviours? For both animals, the above analysis produces a substantial proportion of actions (around 25%) for which classification uncertainty is high. This suggests that the identified behaviours are not discrete, where 'discrete' means clearly distinguishable and stereotypical. Rather, we see a continuous spectrum of behaviour. This is in contrast with the overwhelming majority of the literature, which treats the behaviour of these animals as a set of discrete states, although we are not the first to suggest a continuum among behavioural states for C. elegans [31]. To compare our results to what might be expected if there were discrete states, ESA was used to annotate the behaviour of an agent-based simulation of Drosophila larva which had been developed independently to study chemotaxis [32]. The agent's behaviour is controlled by a Markov chain model with three states: stop cast, run cast and straight run. Within each state, the precise motion (e.g. body bend) is determined by the current sensory conditions and so can vary significantly.
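For intuition, a three-state Markov-chain agent like the one used for this comparison can be sketched in a few lines; the transition probabilities below are invented for illustration and are not taken from the chemotaxis model [32].

```python
import numpy as np

# Minimal three-state Markov-chain agent (states as in the simulation:
# stop cast, run cast, straight run), with a made-up transition matrix.
STATES = ["stop_cast", "run_cast", "straight_run"]
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])   # rows sum to 1

def simulate(n_steps: int, seed: int = 0) -> list:
    rng = np.random.default_rng(seed)
    state, seq = 0, []
    for _ in range(n_steps):
        seq.append(STATES[state])
        state = rng.choice(3, p=P[state])   # sample the next state
    return seq

print(simulate(10))
```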
Videos were recorded of the agent in its virtual world, and the videos were put through the ESA pipeline (i.e. extracting the eigenshape representation, segmentation, clustering). In this way, we test the ESA pipeline for its ability to detect underlying discrete states. We also present several alternative analyses that reveal distinct actions in the simulation but suggest a continuum of actions in the real animals. Clustering results For the simulated agent, ESA produced three clusters and, for 94% of the time, it produced the same behavioural classification as the ground truth annotation. BIC indicated a difference between the agent and the animals. For the agent, BIC provided strong evidence to distinguish the three clusters (ΔBIC_min = 7.57). In contrast, for both Drosophila larva and C. elegans, there was weak statistical evidence to justify the number of clusters (in both cases ΔBIC_min < 3.75) [33]. In other words, BIC is confident that there are three distinct clusters among the agent's actions, but for the two animals, the cluster structure is statistically much less justified. Structure in aggregated eigenshape coefficient time-series segments We can directly examine this difference in cluster structure by visualizing the presence or absence of clear density bands in the aggregated ECTS subsequences (see §2.9). Sets of stereotypical curves form high-density regions in the cross sections, hence the cross sections can be used to detect stereotypical curves corresponding to stereotypical posture sequences. Figure 6a shows the aggregated ECTS curves for the first ECTS component of larval Drosophila. Figure 6b-d shows the density cross sections for larval Drosophila, C. elegans and the agent, respectively. Note that the positive/negative asymmetry of ECTS values along the x-axis corresponds to the left/right asymmetry in larval behaviour and to the dorsal/ventral distinction for C. elegans. For both organisms, there is a single band in each half of the x-axis. This profile is in contrast with the two distinct bands of the agent's density cross section. The curves forming each high-density band correspond to one Markov state of the agent. Seven cross sections at various x-values were examined in each dimension for both the C. elegans and Drosophila data (electronic supplementary material, figure S3), but they all had the same qualitative features as the cross section shown in figure 6, i.e. the animals do not have distinct bands that would support the inference of separable behavioural states. Structure in behavioural features Weathervaning, or klinotaxis, is a steering process that results in the animal's trajectory bending towards a higher concentration of odour [34]. For Drosophila larva, low amplitude head casts are hypothesized to be responsible for weathervaning [29]. These weathervaning casts are distinguished from head casts by the amplitude of the body angle [28,29], which is very closely related to the amplitude of the first ECTS component, see figure 2b. The agent's behaviour was coded with this distinction in mind, so head casts tend to cause a higher body angle than weathervaning casts. Figure 6e shows the histogram of the maxima of the first ECTS component during the agent's actions. The bimodal distribution clearly indicates two distinct behaviours.
Based on this observation, we examined the maxima and averages of a number of features of larval Drosophila (head speed, head angle, body angle, body angle speed and head angle speed) and C. elegans (eccentricity, head, midbody and tail angles) actions, see figure 6e-h and electronic supplementary material, figures S4 and S5. We hoped to find multimodal distributions and possibly sharp cut-off values, because these could be used as data-defined thresholds to distinguish actions. However, in all cases, a smooth, unimodal distribution was found. Multidimensional scaling A final way to examine this issue is to use multidimensional scaling to visualize the distance matrix of actions. DTW was used to measure distance, where the weights are set by the eigenvalue associated with each dimension of the ECTS. Figures 4b and 5b show the larval Drosophila and C. elegans maps, respectively. As can be seen, there is no clear boundary in either figure to unambiguously separate behavioural motifs. This is in contrast with the agent's map, electronic supplementary material, figure S1, where clearly separated regions can be seen. Discussion This paper introduces eigenshape annotation, a bottom-up unsupervised method that searches for frequently repeated posture sequences in behavioural data. This problem is closely related to behavioural annotation, but not identical to it. Most behavioural annotators recognize behaviours through user-defined thresholds or training data [2][3][4][5][6]. In both cases, the set of possible behaviours and the description of those behaviours are determined by the user. In contrast, ESA tries to discover the behavioural states directly from the data without any user input. Note that this task is considerably more challenging than behavioural annotation owing to the lack of a priori constraints. Thus, the novelty of this work is to create a data processing pipeline that discovers behavioural motifs in an unsupervised manner, where a behavioural motif is defined as a frequently repeated posture sequence. The behavioural motifs discovered were generally consistent with behaviours described in the literature. However, many ESA motifs were more permissive than the definitions in other studies. For example, the ESA 'turning manoeuvre' for the larva includes turns and high amplitude head casts [4], whereas the ESA 'turning behaviour' for the worm is a mixture of classic and wide V turns [6,23]. In both cases, there was no justification in the data for making any further subdivision of turns. Note that it can also be difficult for human observers to distinguish these behaviours consistently. ESA was also unable to unambiguously classify many actions. The seeming continuity of the action distance maps, figures 4b and 5b, motivated us to further consider whether there are 'defining features' that could objectively distinguish behaviours. In a simulated agent that was coded with distinct behavioural states, it is straightforward to find such features, for example, the amplitude of body bend (figure 6e). We searched for multimodal distributions in a variety of features of the Drosophila and C. elegans data, but failed in both cases. It remains possible that some feature we did not consider might reveal multimodality, or that discrete behaviours can be distinguished by considering a combination of multiple features. There is an extensive literature that treats the behaviour of these animals as a set of discrete states.
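The DTW-plus-MDS visualization just described can be sketched as follows. This uses a textbook DTW recurrence and scikit-learn's metric MDS rather than the MATLAB implementation the authors used, and the weights and actions are placeholders.

```python
import numpy as np
from sklearn.manifold import MDS

def dtw(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Dynamic time warping distance between two (len x 3) ECTS actions,
    with per-dimension weights w (standing in for the eigenvalues)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sqrt((w * (a[i - 1] - b[j - 1]) ** 2).sum())
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(3)
actions = [rng.normal(size=(rng.integers(10, 30), 3)) for _ in range(20)]
w = np.array([0.6, 0.25, 0.15])
dist = np.array([[dtw(a, b, w) for b in actions] for a in actions])
# metric MDS embeds the DTW distance matrix into 2-D for visualization
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)   # (20, 2)
```

Note that DTW naturally handles the unequal action durations that defeat fixed-window approaches, which is why it is the appropriate distance here.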
Despite our observation of continuity among behavioural states, our results are not necessarily in contradiction with the discrete treatment of behaviour. Discrete states can be seen as coarse graining (or binning) the continuous behavioural states. For example, the CBD defines V turns as a bend greater than π/6 propagating through the body. If the bend is between π/12 and π/6, then the event is called a Y turn. Thus, this classification scheme treats turning as a two-state variable (V/Y turn). In contrast, ESA produces a membership probability that an action is a turn, instead of discretizing non-turns, Y turns and V turns at arbitrary thresholds. Coarse graining simplifies the underlying postural dynamics, and it can be an appropriate simplification for many studies. For example, the CBD's turn annotation is appropriate for studies looking at the worm's biased random walk. On the other hand, if an analysis requires the precise characterization of the worm's turning behaviour, then the continuous classification scheme of ESA can be advantageous. However, adopting a coarse-grained description for convenience does not justify the widespread treatment in the research literature of behaviour as actually consisting of a set of discrete states, an assumption that needs to be independently evaluated. There is a risk that initially arbitrary distinctions between behaviours have become reified as qualitatively distinct behaviours of the animal, and treated as a set of actions between which it selects. For example, it is sometimes assumed that the underlying neural activity has a modularity that matches the behavioural states, and that this should guide investigation of neural circuits. In our results, the lack of stereotypical and distinguishable behavioural states suggests that the underlying neural activity is not stereotypical or modular. It remains possible that a highly stereotypical activity pattern of neurons implements a behavioural state, but owing to biomechanical effects, the resulting posture sequences are not so stereotypical. These alternative possibilities can only be addressed by studies of neural activity that do not exclusively depend on behavioural annotators that make a priori assumptions about the existence of discrete states. A further possibility is that the lack of discrete actions observed in our study was a consequence of the particular behavioural conditions in which the animals were tested. Both environments were free of stimulus gradients: larval Drosophila was crawling on plain agar, whereas C. elegans was browsing in bacteria (although the bacterial layer could have minor inhomogeneities leading to shallow gradients). In future work, we will examine whether the behavioural space changes under different environmental conditions, for example, during directed chemotaxis in larval Drosophila. ESA could be improved by advances in computer vision. Standard thresholding and skeletonizing algorithms fail when the animal intersects itself (§2.5). The exclusion of self-intersecting postures introduces a bias to the pipeline, as no posture with self-intersection is included in the analysis. It is possible that there are discrete elements of behaviour in the self-intersecting sequences of postures. The idea behind ESA is to find motifs in behaviour. We represented behaviour as posture, and posture as an ECTS, but the framework presented is not specific to either.
ECTS can be replaced with any time series capturing behavioural features, or alternatively the ECTS can be supplemented with such time series. Time series of higher-level features provide extra information for the classifier, potentially increasing its accuracy. For example, including a 'direction of locomotion' time series could lead to the detection of reversals as a separate state. Alternative motif finding algorithms could be used on the ECTS as well. For example, the subsequences yielded by segmentation can also be clustered using distance-based methods. We have experimented with several methods [35,36] in combination with standard distance measures (Euclidean and DTW), but they always led to results inferior to spline regression clustering in terms of the classification performance evaluated against hand annotation. We think that the performance difference is due to the ambiguous separation of clusters. Because of its probabilistic nature, spline regression clustering is better equipped to deal with datasets where many of the entries cannot be unambiguously classified. Finally, we note that motif discovery is a challenging problem and an area of intense research in the machine learning community. Owing to the abundance of sequencing data, most of the effort is focused on discrete, one-dimensional time series. To the best of our knowledge, the combination of segmentation and clustering is a novel approach to multidimensional motif finding. As discussed earlier, the framework is not specific to ECTS; therefore, we expect that with minor modifications the framework could also make contributions in other applications. Funding. This work was supported by grants nos. EP/F500385/1 and
8,741.6
2015-12-06T00:00:00.000
[ "Biology" ]
Effects of Endwall Fillet and Bulb on the Temperature Uniformity of Pin-Finned Microchannel Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-finned microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and radii of the fillet and bulb. The results show that at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and the normal secondary flow in the channel, eliminate the high temperature area in the pin-fin, improve the heat transfer performance of the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-finned microchannels, the flow resistance coefficient f of the pin-finned microchannels with fillet, as well as with a bulb of 2 μm or 5 μm radius, does not increase significantly, while f of the pin-finned microchannels with a 10 μm or 15 μm bulb increases notably. Moreover, Nu has a maximum increase of 16.93% for those with fillet and 20.65% for those with bulb, and the synthetic thermal performance coefficient TP increases by 16.22% at most for those with fillet and 15.67% at most for those with bulb. Finally, as the Reynolds number increases, the heat transfer improvement of the fillet and bulb decreases. Introduction Energy and mass transfer processes are involved in many industries, such as energy, transportation, aeronautics and astronautics, electronics and manufacturing, in which the heat exchanger plays a key role. With the development of science and technology and the increasing requirements of energy conservation and emission reduction, heat exchange systems have to improve their efficiency to meet the demand of higher heat intensity and thermal load. High-efficiency, energy-saving compact heat exchangers such as microchannel heat sinks have gained increasing attention in academic research and engineering applications [1]. Research on heat and mass transfer enhancement based on flow control structures, such as pin-fins [2], grooves [3], cavities [4], tip clearances [5], ribs [6], dimples/protrusions [7] and bifurcations [8], is well established. In particular, the microchannel with pin-fins is widely used because of its high heat transfer efficiency. Kosar et al. [9] studied the forced flow in microchannels with staggered and in-line circular/diamond pin-fin arrays by experiment, and found that at a low Reynolds number, endwall effects and fin density strongly influenced the flow resistance; on this basis, they proposed modified correlations to predict the pressure drop in the microchannel. The Nusselt number, friction factor of water flow and thermal resistance in a microchannel with staggered pin-fins over a large range of heat fluxes were studied next [10]. They found that compact fins at a high Reynolds number delayed flow separation and thus suppressed the endwall effect. Marques and Kelly [11] experimentally investigated the pressure loss and heat exchange of a micro heat exchanger with a pin-fin array, and the results indicated that its heat exchange performance was better than that of the corresponding parallel-plate heat exchanger. They then proposed a cooling-effectiveness prediction model for a micro pin-fin heat exchanger in a gas turbine blade cooling application. Vanapalli et al.
[12] analyzed the pressure penalty of a microchannel with a compact pin-fin array. The results showed that among different cross-section shapes, the sine-shaped pin-fin performed best when considering the pressure penalty. The heat transfer of a microchannel with pin-fins over a large range of Reynolds numbers was experimentally studied by Wang et al. [13]. The results showed that its heat transfer coefficient was twice that of the plate microchannel. Moreover, the pin-fin with a triangular cross section showed the best heat transfer performance. The above studies show that arranging pin-fins in a microchannel is an effective heat transfer enhancement technique under high heat flux conditions. As shown in the above research results, flow control structures enhance the overall heat exchange process in the microchannel, while on the walls in the wake flow, corner vortices and separation bubbles, local high temperature regions still cannot be eliminated, indicating the deterioration of local heat transfer. Additionally, for equipment under precision control, such as spacecraft and microelectronic devices, the deterioration of local heat transfer will increase the temperature gradient of the heat exchange surface, generate additional span-wise thermal resistance and strongly influence the modulation response of the system, which threatens the safety of the equipment and the economy and reliability of the system. Therefore, promoting the thermal uniformity of the heat exchange surface and eliminating local high temperature regions is significant for improving the efficiency and practicality of the microchannel heat sink with flow control structures. Chyu [14] studied the heat transfer performance of short pin-fin arrays with fillets, enveloped by a circular profile and arranged along the normal direction of the channel, under Re = 5000-30,000. The results showed that the pin-fin endwall fillet deteriorated the heat transfer slightly at large Re. Furthermore, from studies of three-dimensional passage flow in compressors, the flow separation and channel secondary flow were found to be greatly influenced by the fillet. Cuyrlett [15] experimentally found that a CDA (Controlled Diffusion Airfoil) blade with fillet increased the loss and secondary flow, while a DCA (Double Circular Arc) blade with fillet showed the opposite trend. Hoeger et al. [16] numerically presented a "chamfer vortex", which is the opposite of the horseshoe vortex. The vortex could transfer high-energy fluid to the rear surface, suppress the separation of the boundary layer in the corner area, and enlarge the operating range of the multi-stage compressor. Hoeger et al. [17] analyzed the influence of the fillet and bulb on the endwall flow in the compressor cascade, and the results showed that the separation on the rear surface decreased and even disappeared under the influence of the fillet structure. Kuegler et al. [18] numerically found that the fillet could decrease the corner separation and enlarge the range of the throttling process in a multi-stage compressor. Goodhand et al. [19] found that after removing the fillet, flow separation was enhanced considerably near the hub of the stator blades in a compressor and thus the loss was greater. Meyer et al.
[20] found in their experiment that the loss increased with the radius of the fillet. All this research reveals that appropriate design of the fillet structures can beneficially influence the interaction between the main flow and the near-wall flow, and the heat exchange process, by changing the separation flow structure of the boundary layer near the corner region. Therefore, to improve the uniformity of the temperature field, optimization of the microchannel with endwall filleted and bulbed pin-fins distributed on one side of the cylinder is carried out in this research. Laminar flow and heat transfer performances are studied with the Reynolds number ranging from 50 to 200 and the radius of the fillet and bulb structures ranging from 0 to 15 µm, respectively. The flow pattern, temperature distribution and variations of performance parameters are comparatively analyzed. Geometrical Model and Boundary Conditions Fully developed periodic velocity and temperature fields are obtained after several identical flow control devices are arranged along the stream-wise direction. For the flow and heat transfer process, most of the passage is in the periodic temperature field. Therefore, in our research a minimal repeating unit is selected as the computational domain to simplify the calculation. The microchannel cross section is 200 µm (W) × 50 µm (H) and the periodic length is 150 µm. In Figure 1, the coordinates x, y, z refer to the span-wise, stream-wise and normal-wise directions. A uniform constant heat flux of q″ = 5 × 10⁵ W·m⁻² and a no-slip boundary condition are set at the four external surfaces and the surfaces of the pin-fin device. A translational periodic boundary condition is applied at the inlet and outlet surfaces. The main flow temperature is 300 K with the inlet Reynolds number ranging from 50 to 200. Governing Equation Similar to our previous work [21], incompressible, steady, laminar flow of water is assumed in this research. The governing equations are therefore the continuity equation ∇·u = 0, the momentum equation ρ(u·∇)u = −∇p + μ∇²u and the energy equation ρc_p(u·∇)T = λ∇²T. The SIMPLE method is selected for coupling the pressure and velocity. The standard scheme is selected for pressure discretization and a second-order upwind scheme is selected for the momentum equation. The residuals of continuity, energy and velocity components are used to evaluate the convergence of the calculation, with a convergence criterion of 1 × 10⁻⁶. Parameter Definition In this research, the working substance is water; its thermo-physical properties are given in Table 1 (Thermo-physical properties of water). The Reynolds number is defined by Re = ρU_m,in D_h/μ, where U_m,in is the average velocity at the inlet surface and D_h is the hydraulic diameter, calculated by D_h = 2WH/(W + H). The Nusselt number is defined by Nu = hD_h/λ, where λ is the thermal conductivity of the fluid. The heat transfer coefficient h is described as h = q″/ΔT, where q″ represents the heat flux and ΔT refers to the difference between the average wall temperature T_w,ave and the average fluid temperature T_f,ave. The Fanning friction factor is calculated by f = ΔP·D_h/(2ρU_m,in²L), where ΔP is the pressure drop, ρ is the density of the fluid and L represents the stream-wise microchannel length. In forced convective flow with heat transfer, the entropy generation contains two parts, one caused by the heat transfer irreversibility and the other due to the fluid frictional irreversibility [22]; the entropy generation of the flow can be calculated following [23], where T refers to the temperature of the water.
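To make the parameter definitions above concrete, the following sketch evaluates them for the stated channel geometry; the water properties and the ΔT and ΔP values are illustrative assumptions, not results from the paper.

```python
# Minimal numeric sketch of the parameter definitions above, using the
# stated geometry (W = 200 um, H = 50 um, periodic length L = 150 um).
rho, mu, lam = 998.2, 1.003e-3, 0.6       # water near 300 K (typical values)
W, H, L = 200e-6, 50e-6, 150e-6           # channel geometry, m
q = 5e5                                    # wall heat flux, W/m^2

D_h = 2 * W * H / (W + H)                  # hydraulic diameter of the duct
U = 100 * mu / (rho * D_h)                 # inlet velocity giving Re = 100
Re = rho * U * D_h / mu

dT = 12.0                                  # assumed T_w,ave - T_f,ave, K
h = q / dT                                 # heat transfer coefficient
Nu = h * D_h / lam                         # Nusselt number

dP = 5e3                                   # assumed pressure drop, Pa
f = dP * D_h / (2 * rho * U**2 * L)        # Fanning friction factor

print(f"D_h = {D_h*1e6:.1f} um, Re = {Re:.0f}, Nu = {Nu:.2f}, f = {f:.2f}")
```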
As for the internal microchannel flow, the entropy generation rate per unit length is calculated following [23]. Model Validation In order to increase the accuracy and validity of our model, an all-hexahedral mesh is generated and refined. To balance accuracy and computational resources, we performed a grid independence validation to determine the optimal grid for the subsequent numerical analysis. Four different meshes are generated and studied, with 400,824, 1,329,900, 2,475,960 and 3,926,120 grid nodes (under the conditions q″ = 5 × 10⁵ W·m⁻², T_in = 300 K, Re = 100, r = 5 µm). The results are shown in Table 2. The relative discrepancies of Nu and f are 0.07% and −0.16%, respectively, when the mesh changes from Mesh 2 to Mesh 3. Hence, the optimal mesh is Mesh 2. Meshes for the other geometrical structures use a similar mesh size. Using the model above, a numerical simulation of the corresponding smooth microchannel with a height of 200 µm, a width of 50 µm and a length of 150 µm has been conducted, and Table 3 shows that the differences in fRe and Nu compared with the results in reference [24] are no more than 1%, which ensures the accuracy and reliability of the model. Flow Structure and Temperature Distribution The temperature distribution with limiting streamlines on the wall, and the temperature distribution with streamlines on the span-wise middle section, are used to analyze the detailed flow structures and heat transfer performance in this study. Figure 2 shows the temperature distribution with limiting streamlines on the pin-fin with fillet and bulb and on the surrounding walls when the Reynolds number is 50 and the radius of the fillet and bulb ranges from 0 µm to 15 µm. When Re = 50, as the radius increases, the high temperature region A on the rear side of the smooth pin-fin decreases gradually, and the temperature also goes down. When r = 15 µm, the temperature in region A is only slightly above the average temperature of the rear side, and the high temperature area disappears. It is obvious that the fillet and bulb weaken the symmetry of the flow field along the normal-wise direction, and the fluid from the rear side tends to flow upward, so the flow structure in region A is improved. As the radius increases, the fillet and bulb influence the flow field even more, and thus the high temperature region A decreases gradually. The temperature trend of region B on the rear side with the proposed flow structures is the same as that of region A, but because of the direct effect of the fillet and bulb, the effects in region B are more obvious, and the high temperature area already disappears when r = 10 µm. The flow in this area cools the wall surface more intensely, which significantly improves the heat and mass transfer in region B and thus greatly improves the local temperature uniformity. For region C on the side surface of the pin-fin, the temperature decreases with increasing radius, which means both the fillet and bulb strengthen the heat transfer process near the surface of the pin-fin at a low Reynolds number. In particular, the bulb makes the low temperature area move from the side surface of the pin-fin to the bulb when the radius is larger than 10 µm. For the fillet and bulb, the changes of the temperature distribution in the bottom wall region D, which is in contact with the fillet and bulb, differ from each other as the radius increases. For the pin-fin with fillet, the temperature gradient decreases gradually,
and the low temperature region gradually increases as the radius increases. When r = 15 µm, the low temperature area occupies almost the whole mainstream area, and the overall thermal uniformity of the bottom wall is improved. For the pin-fin with bulb, the increase of the radius decreases the temperature at all bottom walls, but it also gradually increases the temperature gradient. For the leading edge of the pin-fin, region E, the temperature decreases and approaches the low temperature zone on both sides of the pin-fin as the radius increases. The triangle-shaped high temperature area in this region shrinks with increasing radius, and this is more pronounced on the bottom side. From the above analysis, the proposed structures significantly lower the temperature of the rear side surface, leading edge and side surface of the pin-fin and shrink the high temperature area, and the effect increases with the radius. The proposed flow structures improve the temperature uniformity of the microchannel, which can be useful for laminar heat transfer enhancement research employing triangular, square and circular pin-fins [25][26][27][28]. As shown in Figure 3, when Re = 200, compared to Figure 2, the highest temperature region of the pin-fin with both fillet and bulb has moved from the rear of the cylinder to the vortex region near the corner as Re changes from 50 to 200. The high temperature region N shrinks and cools with increasing radius, which indicates that the local flow and heat transfer are improved. This is because the fillet and bulb enhance the mass transfer and mixing between the main flow and the near-surface flow, decrease the average temperature of the bottom surface and the temperature gradient, and intensify the heat and mass transfer near the vortex region. The same trend appears in the region near the side surface, which is expected and reasonable because, with the increase of the fluid velocity, the fluid washes the wall surface more vigorously and thus improves the heat transfer there. However, region M expands and heats up with increasing radius, which indicates that the flow and heat transfer deteriorate. The same trend appears on the top surface near the rear of the cylinder. This is because, at a high Reynolds number, the vortices in the corner and in the wake of the cylinder are stronger, which suppresses the influence of the fillet and bulb on the flow along the top surface and in region M, confines the effect of the proposed structure to the lower half of the microchannel, limits its development in the span-wise direction and keeps the low temperature flow away from the top surface. Besides, on the side surface of the cylinder, the temperature increases as the radius increases. With the increase of the radius, the geometrical structure becomes more asymmetrical in the z direction, which induces an asymmetrical flow field, with more vigorous flow washing the bottom. Thus, the asymmetrical flow field causes differences in heat transfer between region M and region N. The asymmetrical field makes it more difficult to evaluate the overall heat transfer performance. Therefore, more analysis is needed to find out whether the overall heat transfer performance is improved, considering that different regions show different trends. Different structures also make a difference. For the pin-fin with bulb, compared to the pin-fin with fillet, a relatively higher temperature area appears in region
For the pin-fin with bulb, compared with the fillet structure, a relatively higher-temperature area appears in region L. Additionally, in region K it is interesting to note that the back flow at the rear shifts forward in the bulb cases compared with the fillet cases. This phenomenon explains the flow-pattern differences in region L between the two structures: the back flow with the bulb causes more flow separation at the rear of the cylinder surface, which deteriorates heat transfer and thus expands region L in the bulb cases. Besides, the temperature distribution around the bottom area at Re = 200 is similar to that at Re = 50, except that the bulb cases show an undesirably larger temperature gradient.

As shown in Figure 4, the channel is full of strong secondary flow. When Re is 50, in the smooth microchannel the secondary flow is symmetrical in the normal direction. For the case with a 15 µm fillet, the asymmetrical geometry breaks this symmetry: the secondary flow is stronger at the bottom and weaker at the top. It is worth noting that, as the radius increases, the secondary flow at the bottom expands upwards and the secondary flows in these two areas gradually merge into one. A similar trend is observed in the microchannel with bulbed pin-fin. The main low-temperature region of the smooth channel is symmetrical, circular, and located in the center of the channel, while with fillet or bulb it is asymmetrical and moves downwards, approaching the fillet or bulb. The maximum temperature is lower with fillet and bulb than in the smooth channel, especially in the regions near the fillet and bulb, and this decrease of maximum temperature contributes to the decrease of the temperature gradient. The explanation is that, after adding the fillet or bulb, the secondary flow, merging from two separate cells into one, enhances the heat transfer near the vortex region and thus strengthens the uniformity of the temperature field. We can also find that in the microchannel with bulb, the secondary flow at the bottom is more vigorous than in the channel with fillet, while the interaction between the two secondary flows is weaker than in the channel with fillet. This is because the bulbed pin-fin introduces two abrupt changes in geometry: one at the edge between the bulb and the bottom wall, and the other at the edge between the bulb and the pin-fin surface. The first change enhances the secondary flow at the bottom, while the second hinders the upward development of the secondary flow and thus weakens the interaction between the two areas.
Figure 5 shows that when Re = 200, the span-wise secondary flow at both the bottom and the top is confined to the corner, and its intensity is lower than at Re = 50. The influence of the fillet and bulb on the flow structure decreases, and the interaction of the upper and lower secondary flows is no longer obvious. However, the fillet and bulb still strongly influence the temperature distribution. As the radius increases, the mainstream low-temperature core region becomes obviously asymmetric and gradually moves downwards. The lower secondary flow is more vigorous than in the smooth structure and thus enhances the heat transfer performance of the bottom wall. Besides, the stronger secondary flow confines the high-temperature area to the corner, reduces the average temperature, and enhances the heat transfer performance. However, because of the movement of the mainstream low-temperature core region, the temperature of the upper fluid rises, leading to heat transfer deterioration of the top wall, which is consistent with the result of Figure 3. All the effects mentioned above are strongest when the radius is 15 µm.

Performance Parameters Analysis

The fillet and bulb influence the flow pattern and the distribution of the temperature field. From the above analysis, it is obvious that the fillet and bulb can lower the maximum wall temperature, improve the thermal uniformity of the wall, and enhance the heat transfer efficiency. However, the change of the cross-sectional area and of the flow field also influences the flow resistance. The flow drag, the heat transfer performance parameters, and the comprehensive heat transfer performance of the channel with fillet and bulb are analyzed in the following.

Flow Resistance Analysis

The Fanning friction factor f, form drag C_p, and frictional resistance C_f are compared in order to analyze the flow resistance. The definition of the Fanning friction factor is given by Formula (9). The frictional resistance C_f and form drag C_p are defined from the integrated wall shear stress τ_w and the pressure force on the pin-fin, respectively; C_p/f and C_f/f indicate the proportions of form loss and friction loss in the Fanning friction factor.

Figure 6 shows the influence of the fillet and bulb on f. For the microchannel with filleted pin-fin, f is almost the same as in the smooth pin-fin microchannel, except for the cases r = 15 µm and r = 10 µm at Re = 50. When the radius of the fillet increases, the flow resistance increases slightly; the maximum increase is only 2.03%. This is mainly due to the slight reduction of the flow passage as the fillet radius increases. The results show that the fillet structure does not notably increase the flow resistance. For the microchannel with bulbed pin-fin, the radius strongly influences f: f/f_0 increases considerably as the radius increases and is much larger than in the other cases when the radius exceeds 10 µm. All these influences weaken as the Reynolds number increases. The maximum increase is 13.48% at r = 15 µm, Re = 50. The bulb therefore increases the flow resistance as the radius increases.

Figure 7 explains how the fillet and bulb influence f. The Fanning friction factor consists of the frictional resistance C_f and the form drag C_p, so the increment of f also consists of two parts: (C_p − C_p0)/f_0 represents the form drag increment, and (C_f − C_f0)/f_0 represents the frictional resistance increment. Figure 7 shows that the fillet has little influence on C_p and C_f, which explains its small influence on f.
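To make the decomposition concrete, the following Python sketch post-processes hypothetical CFD output into a Fanning friction factor and its form-drag and friction-drag parts. The variable names, channel dimensions, and array inputs are illustrative assumptions rather than values from this paper, and the exact reference quantities in Formula (9) may differ; only the split f = C_f + C_p and the normalization by f_0 follow the text above.

```python
import numpy as np

def fanning_friction_factor(dp, rho, u_m, d_h, length):
    """Fanning friction factor from the channel pressure drop (assumed form)."""
    return dp * d_h / (2.0 * rho * u_m**2 * length)

def drag_decomposition(tau_w, wall_area, p_front, p_rear, frontal_area, rho, u_m, a_c):
    """Split the total drag into frictional resistance C_f (integrated wall shear
    stress) and form drag C_p (pressure difference across the pin-fin), both
    normalized like a friction factor. Reference quantities are assumptions."""
    dynamic_head = 0.5 * rho * u_m**2
    c_f = np.sum(tau_w * wall_area) / (dynamic_head * a_c)
    c_p = (p_front - p_rear) * frontal_area / (dynamic_head * a_c)
    return c_f, c_p

# Illustrative numbers only (not from the paper).
rho, u_m, d_h, length, a_c = 997.0, 0.05, 100e-6, 3e-3, 1e-8
f = fanning_friction_factor(dp=1.2e3, rho=rho, u_m=u_m, d_h=d_h, length=length)
c_f, c_p = drag_decomposition(tau_w=np.array([0.8, 1.1]),
                              wall_area=np.array([4e-9, 4e-9]),
                              p_front=1.0e3, p_rear=0.4e3, frontal_area=2e-9,
                              rho=rho, u_m=u_m, a_c=a_c)
f0, c_p0 = 0.9 * f, 0.3 * f  # smooth-channel baselines, assumed
print("form-drag increment (C_p - C_p0)/f0:", (c_p - c_p0) / f0)
```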
However, for the microchannel with bulbed pin-fin, both C_p and C_f increase, and C_p increases more significantly. When the radius increases to 10 µm or 15 µm, the proposed structure advances and extends the reflux region, which increases C_p of the microchannel at relatively large radii: the larger the radius, the more form drag is generated. This is why (C_p − C_p0)/f_0 is larger than the other terms.

Heat Transfer Effect Analysis

The variations of Nu (defined by Equation (6)) for the different cases are shown in Figure 8. For microchannels with filleted pin-fin, Nu increases with the radius at low Re, and the opposite tendency is observed at relatively high Re. In low-Re flow, the heat transfer augmentation increases as the radius increases; however, the augmentation weakens as Re increases and disappears when Re reaches 120, after which heat transfer begins to deteriorate and the extent of deterioration grows with the radius. This is because the highest-temperature region moves from the rear of the cylinder to the vortex region near the corner as Re changes from 50 to 200. A similar tendency appears in the bulb cases, except that for microchannels with a 5 µm or 10 µm bulb the increment disappears when Re is 90, and for those with a 15 µm bulb the augmentation disappears when Re is 160.

Entropy Generation Analysis

Entropy generation indicates the fluid frictional irreversibility and the heat transfer irreversibility in forced convective flow with heat transfer, and the relative entropy generation is used in this work to evaluate flow loss and heat transfer performance, based on entropy generation theory and Equation (15). As shown in Figure 9, the total relative entropy generation increases with Re in the microchannel with filleted pin-fin. The greatest increments appear at r = 15 µm, rising from 0.86 to 1.04 as Re increases, while in the other cases the increase is smaller. It is worth noting that the increment magnitude grows with the radius. As revealed in Figures 4 and 5, the secondary flow is enhanced as the fillet radius increases, which causes a higher loss, and with increasing Re the secondary flows of the larger fillets change more. The growth rate of entropy generation decreases with Re in all cases, because secondary flows are suppressed at high Re and the associated loss therefore tends to decrease; however, increasing Re raises the flow loss itself, and this effect outweighs the decreasing tendency caused by the secondary-flow condition. As a result, the flow loss increases at a rate that decreases with Re. Besides, adding a fillet can improve the flow performance at low Re; this effect weakens as Re increases, disappears when Re is 120, and the flow performance then begins to deteriorate. For the microchannel with bulbed pin-fin, the range of entropy variation with Re is larger than for the fillet structures, revealing a greater loss, but the tendency of entropy generation is similar: the growth rate decreases with Re, and the increment magnitude grows with the bulb radius. Adding a bulb can also improve the flow performance at low Re; this effect disappears below Re = 100 for the 2 µm, 5 µm, and 10 µm bulbs, and above Re = 100 for the 15 µm bulb.
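As a sketch of how the relative entropy generation can be evaluated from simulation fields, the snippet below uses the standard volumetric split into a heat-transfer part and a frictional part. The field arrays and property values are placeholders, and this common textbook form is an assumption standing in for the paper's Equation (15), which is not reproduced here.

```python
import numpy as np

def entropy_generation(T, gradT_sq, phi, k, mu, dV):
    """Volumetric entropy generation split into a thermal part S_T and a
    frictional part S_F (a common formulation; assumed equivalent to Eq. (15)).
    T: temperature field [K]; gradT_sq: |grad T|^2; phi: viscous dissipation;
    k, mu: conductivity and viscosity; dV: cell volumes."""
    s_thermal = np.sum(k * gradT_sq / T**2 * dV)   # S_T
    s_frictional = np.sum(mu * phi / T * dV)       # S_F
    return s_thermal, s_frictional

# Placeholder fields on a tiny grid, for illustration only.
T = np.full(8, 310.0)
gradT_sq = np.full(8, 1e6)
phi = np.full(8, 5e3)
s_t, s_f = entropy_generation(T, gradT_sq, phi, k=0.6, mu=8.9e-4, dV=np.full(8, 1e-15))
s_total = s_t + s_f
s_0 = 1.1 * s_total  # smooth-channel reference, assumed
print("S/S0 =", s_total / s_0, "| thermal share =", s_t / s_total)
```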
Heat-transfer-induced entropy generation dominates in typical microchannel heat exchange systems [29]. As shown in Table 4, at low Re the heat-transfer-induced entropy generation reasonably constitutes the main part of the total entropy generation; its proportion decreases as Re increases, which indicates a growing frictional loss. The synthetic thermal performance TP takes both the heat transfer improvement and the frictional loss penalty into account and is frequently used to evaluate the heat transfer performance of devices. TP [30,31] is defined as TP = (Nu/Nu_0)/(f/f_0)^(1/3), where Nu_0 and f_0 refer to the data of the channel with only a cylinder. The results are shown in Figure 10. ζ in Figure 10 is the relative change of TP, where TP_0 and TP_r refer to the benchmark data without fillet or bulb and to the data with fillet or bulb, respectively. As shown in Figure 10, TP of both structures evidently worsens as Re increases. When Re is above 100, the thermal augmentation of the fillet cases and of the 15 µm bulb disappears and the heat transfer performance then begins to deteriorate; for the 2 µm, 5 µm, and 10 µm bulbs, the augmentation disappears when Re is lower than 100, which is consistent with the entropy generation analysis. The extent of deterioration increases as the proposed structures grow larger. For low-Re flow, the highest-temperature region is at the rear of the cylinder; both fillet and bulb improve the heat transfer there and thus improve the thermal performance. For high-Re flow, however, the highest-temperature region moves from the rear of the cylinder to the vortex region near the corner. Although the fillet and bulb still improve the region near the bottom of the cylinder, the heat exchange near the top of the cylinder and in the corner vortex region begins to deteriorate, especially for the bulb structures; the latter effect surpasses the former and thus degrades the synthetic thermal performance. It is worth noting that the thermal performance of the fillet structures is better than that of the bulb structures at relatively high Re, partly because of the greater increase of frictional loss in the bulb structures.

Conclusions

The heat transfer performance and flow structures of laminar flow in microchannels with endwall filleted pin-fin and bulbed pin-fin are investigated in this work. The conclusions obtained are as follows.
(1) Two separate symmetrical secondary flows develop in the microchannels with only a cylinder, one on top of the other. After adding the fillet or bulb, at low Re the lower span-wise secondary flow is enhanced as the radius increases, and the two span-wise secondary flows gradually merge into one that develops along the normal direction and improves the heat transfer performance in the channel. Both fillet and bulb strengthen the span-wise and normal secondary flows, eliminate the high-temperature area at the pin-fin, improve the heat transfer at the rear of the cylinder, and enhance the thermal uniformity of the pin-fin and the outside wall. At high Re, the mainstream low-temperature core region is obviously asymmetric and moves downwards gradually as Re increases.

(2) The flow resistance coefficient f of the microchannel with filleted pin-fin does not increase significantly compared with the traditional pin-fin microchannel, which means the fillet can enhance the heat transfer while f remains almost constant; the maximum increase of f is 2.03%. For the microchannel with bulbed pin-fin, f does not increase significantly at a radius of 2 µm or 5 µm, but it increases considerably when the radius grows to 10 µm or 15 µm; the maximum increase of f is 13.48%.

(3) For the microchannel with the proposed structures, Nu increases with the radius in low-Re flow, and the opposite tendency appears in relatively high-Re flow. Nu increases by 16.93% at most for the fillet cases and by 20.65% at most for the bulb cases. The heat transfer increment decreases gradually as Re increases and disappears at Re = 120 for the fillet cases. For microchannels with a 2 µm, 5 µm, or 10 µm bulb, the increment decreases rapidly and disappears at Re = 90; for the 15 µm bulb, the augmentation disappears at Re = 160.

(4) Compared with the conventional pin-fin microchannel heat sink, the synthetic thermal performance coefficient TP of the new pin-fin microchannels increases by 16.22% at most for the fillet cases and by 15.67% at most for the bulb cases.

Figure 1. Microchannel heat sink and flow domain: (a) model of microchannel; (b) geometrical structures of flow domain; (c) geometry detail.
Figure 9. Variation of S/S_0 in the microchannel with Re.
Table 4. Variations of S_T and S_F.
7,337.4
2017-11-15T00:00:00.000
[ "Engineering", "Physics" ]
Agent Based Simulation of Group Emotions Evolution and Strategy Intervention in Extreme Events

Agent based simulation has become a prominent approach to computational modeling and analysis of public emergency management in social science research. Group emotion evolution, information diffusion, and collective behavior selection make the study of extreme incidents a complex-system problem, which requires new methods for incident management and strategy evaluation. This paper studies group emotion evolution and intervention strategy effectiveness using an agent based simulation method. Employing a computational experimentation methodology, we construct group emotion evolution as a complex system and test the effects of three strategies. In addition, an events-chain model is proposed to model the accumulated influence of temporally successive events. Each strategy is examined through three simulation experiments, including two synthetic scenarios and a real case study. We show how various strategies can impact group emotion evolution in terms of complex emergence and emotion accumulation in extreme events. This paper also provides an effective example of how to use agent based simulation to study complex collective behavior evolution in the domains of extreme incidents, emergency, and security.

Introduction

Understanding how group emotion develops in extreme events is a critical issue in emergency management. Public emergencies often come with rumor propagation, malicious incitement, and emotion infection, which change the group emotion and can even trigger extreme behavior [1,2]. As a public security incident, an extreme group incident differs from general mass incidents in its information propagation manner [3]: the violence activity information spreads over the relation network in a relatively private way, as in the Urumqi incident (Xinjiang, China, 2009) [4]. It is difficult for governments or researchers to obtain realistic investigation data on extreme incidents because of privacy, the covertness of the interactions, and the complexity of individual interaction. On the other hand, group behavior and extreme emotion are hard to detect, which complicates strategy formulation and evaluation. Thus, modeling and analyzing the covert information spreading and the related group emotion evolution become important for understanding the incident process and evaluating intervention strategies.
Empirical and theoretical methods have both been used to study the collective behavior evolution process in group incidents [5][6][7][8][9]. The results of empirical studies differ from event to event, but they are useful for finding inherent evolution patterns in the incidents [6,9]. Theoretical studies analyze collective behavior with mathematical methods, which is effective for understanding the evolution process from a macro perspective [10]. However, both methods have limitations in modeling the complicated system emergence produced by individual interaction. Agent based Simulation/Modeling (ABS/ABM) provides an effective way to understand such complicated system evolution problems [11,12]. It includes models of behavior that observe the collective effects of agent behaviors and interactions [13]. As a simulation method, ABS is usually combined with other methods depending on the problem, such as social networks [14,15] and system dynamics [16]. Autonomous agents with heterogeneity and network structure can be used to study the diversity of agent behaviors, as well as the system complexity from the micro level. Social networks are widely used to describe the relational structure between agents, and "network agents" are an effective tool for studying the collective behavior evolution of agents on a relation network [17]. Based on the ABS method, strategy evaluation with computational models is widely used for strategy effectiveness analysis [18]. One of the advantages is that this approach allows strategy effectiveness to be analyzed with repeated experiments, which is impossible for most strategies in the real world. Simultaneously, influence factor analysis can be carried out to understand the inherent evolution process.

In this paper, we construct ABS models for studying the group emotion evolution process and examine the impacts of three strategies used in incident intervention: speakers controlling, interaction probability, and communication intervention. The paper is organized as follows. The problem description and research framework are given in Section 2, where the problem is described in mathematical form and the research framework gives an overview of this paper. The models used in this research are described in detail in Section 3, including the relation network model, the events-chain model, the information model, and the agent model. Section 4 gives the three intervention strategies, as well as evaluation criteria based on the simulation method. Simulation experiments are performed and the results are shown in Section 5, including two virtual scenarios and a real case study. Section 6 concludes the paper.
Problem Description and Research Framework

2.1. Problem Description and Definition. Emotion and cognition constitute a central element of the human repertoire, and the study of their functioning is a prerequisite for understanding individual and collective behaviors [19]. Computational modeling (such as ABS) provides a powerful tool to study individual or collective behaviors as a complex system, and this technology has been widely used in emergency management, counterterrorism, and national security. This paper focuses on group extreme incidents, which are caused by human or organizational planning. This type of event has some characteristic features, so we make three reasonable assumptions for the event description and the computational experiments.

The first assumption is that the events in the incidents do not occur at the same time. This means that the events are discrete in the simulation experiments: although the emotion evolution process may last for a period of time, the events (including related behaviors, such as attribute changes) are independent.

The second assumption is that, at any given moment, all actors act conditionally independently of each other. Such an assumption has already been proposed in many studies [20] as a basis for ABS research.

The third assumption is that the strategies are performed completely and ideally. Although strategy definition and construction depend on real-world possibility and feasibility, we assume that there is no case of failure in the experiments.

The general principle of these assumptions is to model the process in a formal framework. Based on them, we can treat the problem as a behavior evolution process on a network: the individuals are the nodes of the network, and the individual actions are agent behaviors based on the relations of the network.

The underlying network structure is the basic structure of the diffusion processes, consisting of agents (individuals) and the relationships between them. We use the definition of a dynamic network in [20]. The set A = {1, ..., n} is a set of social actors (individuals or agents); the relation R on A is defined mathematically as a subset of A × A and is represented by the n × n adjacency matrix X = (x_ij), where x_ij ∈ {0, 1} represents, respectively, that there is no link or that there is a tie between agents i and j (i, j = 1, ..., n). The agent attributes are assumed to be discrete with a preset interval of values, and z_ih denotes the value of agent i on the h-th attribute. The time dependence is indicated by writing z_h = z_h(t), where t denotes time and z_h is the column containing the n values z_ih.

This paper presents models and methods for studying the dynamic evolution process of (z_1(t), ..., z_H(t)). In principle, the adjacency relation matrix should be included in the dynamic analysis as well, but here we assume that the network remains unchanged over a short period (e.g., one event), as proposed in previous research [21]. In contrast to the individual attributes z_ih, we are more interested in the collective state, for example, the statistical results of the agents' attributes, which describe the characteristics of the whole collective behavior and state; these are also called covariates [20].

The intervention strategies are defined as a series of functions acting on the agent set A, the relation set R, and the attribute set Z.
We denote the strategy set by S = (s_1, ..., s_m), where the constant m is the number of strategies. For each strategy, there is a formulation s_k = f_k(A, R, z_1(t), ..., z_H(t), t), where t is the time at which the strategy is performed.

Based on this definition, the problem can be described as follows: the initial data include the agent set A, the relation set R, the attribute matrix of the agents (z_1(t_1), ..., z_H(t_1)), and the covariate set (c_1(t_1), ..., c_K(t_1)); the agents take actions (interactions) during each time step, which changes the values of the attributes and covariates; the purpose of this paper is to understand the dynamic evolution process of the collective emotions and to observe the effectiveness of different intervention strategies in the evolution process.

2.2. Research Framework. Based on the problem definition above, a research framework can be constructed for emotion evolution process analysis and intervention strategy evaluation. As shown in Figure 1, this work consists of the following four components. (1) The agent based artificial-society simulation framework, involving frequent interactions between individuals within the group and the resulting changes of the collective emotion state during extreme emotion diffusion. (2) The models for simulation, including the four models used in the experiments: the events-chain model as the system evolution driver, the social network model as the group structure (agent interaction environment), the information model as the collective emotion driver, and the interaction model as the interaction rules of the agent based simulation (ABS) method. (3) Simulation experiment results analysis, providing the statistical results of the collective emotion evolution z(t), the covariate results c(t) for intervention effectiveness, and the characteristics analysis of the evolution process. (4) The intervention strategies and their evaluation, including strategy construction in mathematical form and effectiveness evaluation. The core module of the method is the ABS structure, which is used to model the complex interaction behavior between individuals and to observe the emergence of collective emotion. The models for simulation describe the specific environment and behavior in the corresponding scenarios of extreme events, such as successive terrorism events, the information diffusion process, and individual interaction. As the simulation time advances, the statistical results change with the individuals' behaviors, and the collective emotion criteria capture the evolution dynamics from the macro perspective. Thus the simulation results analysis is a dynamic process describing the macro collective emotion state. The strategy effectiveness is analyzed by computing the covariates and analyzing the evolution characteristics, which provides the results for strategy evaluation. The strategies exert influence in two ways: model intervention (the data aspect) and process intervention (the interaction aspect).
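To fix ideas, the sketch below encodes the formal objects of Section 2, the agent set A, the adjacency matrix X = (x_ij), the attribute columns z_h(t), and a covariate as a statistic over an attribute column, as plain NumPy arrays. The variable names, sizes, and densities are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

n_agents, n_attrs = 100, 5            # |A| and H, illustrative sizes
rng = np.random.default_rng(0)

# Adjacency matrix X = (x_ij) with x_ij in {0, 1} and no self-ties.
X = (rng.random((n_agents, n_agents)) < 0.05).astype(int)
np.fill_diagonal(X, 0)

# Attribute matrix: column h holds z_h(t), i.e., the n values z_ih.
Z = rng.random((n_agents, n_attrs))

def covariate_mean_emotion(Z, emotion_col=3):
    """A covariate c(t): a statistic over an attribute column
    (here the mean of the extreme-emotion attribute z_4)."""
    return Z[:, emotion_col].mean()

print("c(t) =", covariate_mean_emotion(Z))
```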
Model Description

3.1. The Relationship Network Model. In extreme events, for covertness, information spreading is usually invisible and proceeds through relatively covert channels rather than public ones, such as email, instant messaging, or even word of mouth. In this process, personal relations play an important role in the selection of communication partners. In this method, the BA network model [22] is used as the relation structure, which can be seen as the interaction environment rather than a behavior model, given that it sets the rules of agent selection in interactions. During the period of one event, the network structure can be treated as static [21].

3.2. The Events-Chain Model. In order to simulate temporally successive events, two structures are proposed: the "events-chain" structure and the "meta-event" structure. The "meta-event" structure is a unit of a single event, or an episode, which is the component unit of the events-chain. For example, it can be a rumor-spreading process or an action of an organization, together with the related group response activities that follow. The "events-chain" structure consists of a series of temporal "meta-events", and this structure makes it possible to describe a complex event process in the simulation. The accumulated emotion of events is expressed through the agent model, which makes it possible to analyze event-related collective emotion and behavior evolution in a computational experimentation way.

Figure 2 illustrates the events-chain model of a temporal series of events. In the model, the nodes represent the meta-events, ordered by their temporal relations. The attributes of event e_k are denoted a_k, including the event criticality (a_k1) and the start time (a_k2). The criticality represents the "importance" level of the event to the individuals: when the criticality value is high, the group response is violent, and when it is low, the collective behavior is peaceful. Thus the successive events can be modeled as a set E = {e_1, e_2, ..., e_m}, with the criticality set {a_11, a_21, ..., a_m1} and the start time set {a_12, a_22, ..., a_m2}. The accumulated effect of events operates through the agent attributes and behavior, which is described in the agent model.

3.3. The Information Model. The events usually occur together with event information in the form of news, rumors, or extreme-activity information. Information transfer between agents changes agent attributes and behavior. The information spreading process of a meta-event is similar to a rumor-spreading process, but the difference is that the impact can accumulate during the event's influence time. According to the event information set, each agent i has an event information set I_i = {I_i1, I_i2, ..., I_im}, with I_ik ∈ {0, 1}.

We assume that after an individual receives the information, it is transmitted onwards, which differs from the classic rumor propagation model [5]. An agent randomly selects another connected agent to send the message to and exchange opinions with, rather than treating the information as several statuses. This is because we have found that extremists or radicals will send the violent activity information over and over again even if the recipient has already received it [4].
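A minimal sketch of the simulation scaffolding described above, a BA relation network plus an events-chain of meta-events, might look as follows. It uses networkx's barabasi_albert_graph for the BA model, and the dataclass fields mirror the criticality a_k1 and start time a_k2; the event times and criticalities match the two-event experiment reported later, while all other concrete values are illustrative.

```python
from dataclasses import dataclass
import networkx as nx

@dataclass
class MetaEvent:
    criticality: float   # a_k1: "importance" of the event to individuals
    start_step: int      # a_k2: start time, in simulation steps

# BA network as the (static-per-event) relation structure.
G = nx.barabasi_albert_graph(n=200, m=3, seed=1)

# Events-chain: temporally ordered meta-events (two correlated events).
events_chain = [MetaEvent(criticality=5.0, start_step=20),
                MetaEvent(criticality=5.0, start_step=60)]

# Information sets I_i: which agents have received each event's information.
informed = {k: set() for k in range(len(events_chain))}
speaker = max(G.degree, key=lambda d: d[1])[0]  # default speaker: max-degree node
informed[0].add(speaker)
```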
3.4. The Agent Model. The agent model describes the attributes and behavior rules in the simulation. As mentioned before, the attributes are the values of z_ih, and the agent behavior can be modeled as a series of functions on them.

Agent Attributes

Autonomy. Autonomy is an important variable when considering information propagation in agent based simulation [23]. The autonomy of an agent influences the acceptable level of external information, which expresses the impact on the behavior of the individual [8]. It determines the extreme emotion level after the individual receives the information and the impact of the environment formed by the other individuals; it is denoted z_i1.

Extreme Emotion. There is much research on emotion models and emotion agents [24,25]; however, it is difficult to build a uniform model of the emotion changing process because of the complexity of emotion and its influencing factors. Following the KISS rule (keep it simple and stupid) [26], we define three variables to model the extreme emotion in this specific application scenario: the individual self-generated emotion z_i2, the environmental influence emotion z_i3, and the individual extreme emotion z_i4. The self-generated emotion describes the individual's own level of radicalization, and the environmental influence emotion describes the conformity of an agent to the others.

Emotion Decay Rate. Individuals go through a process of mood self-adjustment, so the extreme emotion decays. If no external information arrives within a period of time, the individual's mood gradually calms down. This is another factor affecting the extreme emotion evolution and is denoted z_i5. Meanwhile, after receiving the information for a certain period of time, the emotions are hardly affected by other external individuals, meaning that z_i3 becomes zero after a time threshold, labeled T.

Personal Interaction Model

Self-Generated Emotion. After an agent receives the event information, the self-generated emotion is generated by the agent itself; it can be modeled as a function of the individual autonomy z_i1 and the event criticality a_k1, for which we use a multiplication rule: z_i2 = z_i1 · a_k1. In reality, the individual self-generated emotion is related to many factors, such as the content of the events, individuality, and other social attributes. These factors involve more complex questions about event classification and individual emotional tendencies, which are out of the scope of this paper; considering the simulation scenario, the multiplication rule is adopted.
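The snippet below sketches these update rules for one event: the multiplicative self-generated emotion, its exponential decay, and the autonomy-weighted combination with the neighbor-averaged environmental emotion formalized in the next subsection. The functional forms are reconstructed from the text's verbal descriptions, so treat them as assumptions, and the numeric inputs are placeholders.

```python
import numpy as np

def self_generated(autonomy, criticality):
    """z_i2 = z_i1 * a_k1 (the multiplication rule described above)."""
    return autonomy * criticality

def decayed(z2_initial, decay_rate, t, t0):
    """Exponential decay of the self-generated emotion (decay rate z_i5)."""
    return z2_initial * np.exp(-decay_rate * (t - t0))

def extreme_emotion(autonomy, z2, neighbor_z4):
    """z_i4: autonomy-weighted mix of self-generated and environmental emotion.
    The weighting by z_i1 is stated in the text; this exact form is assumed."""
    z3 = np.mean(neighbor_z4) if len(neighbor_z4) else 0.0  # environmental emotion
    return autonomy * z2 + (1.0 - autonomy) * z3

z2 = self_generated(autonomy=0.6, criticality=5.0)
z2_t = decayed(z2, decay_rate=0.05, t=30, t0=20)
z4 = extreme_emotion(0.6, z2_t, neighbor_z4=[1.2, 0.8, 2.0])
```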
Environmental Emotion. The extreme emotion contagion process occurs among the individuals through the relation network; thus, the emotion of an agent is affected by the agents connected to it. Assume the number of agents who connect with agent i and have received the event information is n_i; then the environmental emotion of agent i can be expressed as the average of those neighbors' extreme emotions, z_i3 = (1/n_i) Σ_j x_ij z_j4, with j ≠ i and 1 ≤ i, j ≤ n. The agent extreme emotion can then be calculated as z_i4 = z_i1 z_i2 + (1 − z_i1) z_i3, where the autonomy z_i1 is used as a weight between the self-generated emotion and the environmental emotion. The extreme emotion of agent i for the event at time t is represented as z_i4(t).

Extreme Emotion Evolution. Individual extreme emotion is a process of accumulation, and it can reach a value high enough for people to take extreme actions. On the other hand, the extreme emotion has a period of decay. As suggested in [25], the emotion decreases with time but does not vanish instantaneously, so the decay process is represented by an exponential function. We assume that only the self-generated emotion decays, while the environmental emotion depends on the emotions of the connected agents. Given the initial emotion z_i2(t_0), the current time t, and the decay rate z_i5, the decay function is z_i2(t) = z_i2(t_0) exp(−z_i5 (t − t_0)). The accumulation effect of the extreme emotion comes from the influence of successive events: for each event, every agent has an event emotion according to the information, and at time t the extreme emotion of agent i is calculated as the sum of its event emotions over all events whose influence periods cover t.

Intervention Strategies

The purpose of this study is to understand the evolution process of the collective emotion; in addition, we are especially interested in how to intervene in the spreading of extreme emotion. We call the intervention methods used in the spreading process intervention strategies. Valente surveys four strategies of network intervention: individuals, segmentation, induction, and alteration [27]. Based on the formal description given before, we detail the three strategies examined in this method.

4.1. Speakers Controlling. The speakers controlling strategy eliminates the speakers of the network who are the source of the event information; it may also be taken as a leader-focused strategy [18]. The speakers here are the people who spread the information and emotion first, such as the schemer of the event, the leader of the group, or a member of the terrorist organization. This strategy is commonly implemented as capturing targets, like the key players in social networks mentioned in [28]. The motivation of this strategy is to intervene in the event information spreading and emotion diffusion at the source, so as to control or prevent the event occurrence, with the understanding that eliminating the speakers in the network should have the strongest influence on the collective extreme emotion evolution process. Formally, we denote the speakers controlling strategy as s_1; removing the speakers can be treated as a transformation from the node set A and the link set R to the sets A_1 and R_1. Let D_1 denote the speakers set, so A_1 = A − D_1. We use R_D1 to denote the link subset of R that includes all links involving the agents in D_1, so R_1 = R − R_D1. In addition, the agents in D_1 become inaccessible for any other agent once they are removed.
4.2. Interaction Probability. The emotion contagion needs continuous interaction between the individuals [29]. This strategy is based on the understanding that if the agents interact with lower probability, the extreme emotion will build up more slowly and fewer people will be affected. This strategy is usually associated with event popularity: high popularity makes people exchange opinions and information more frequently. Empirical studies based on online social media have provided evidence of topic intervention [30], which is known as public opinion guidance. The probability in this strategy means the contact frequency within a fixed time period, and its interval is [0, 1]. We use p_in to represent the interaction probability between each pair of agents. In general it should be a probability distribution, with the probability generated at each time step. We use s_2 to represent this strategy; thus s_2 is a probability matrix over the link set R, which generates the link activation probabilities at each time step according to the probability distribution. To simplify the simulation, it is used as a given value in this method.

4.3. Communication Intervention. In contrast to the interaction probability strategy, the communication intervention strategy deletes interaction pathways between agents, with the idea that fewer links will mitigate the information diffusion and emotion contagion process [31]. Communication here refers to the ways agents keep in touch with each other, such as telephone, email, instant messengers, and websites. The motivation of this strategy is to reduce the communication channels between individuals, so as to intervene in the emotion evolution process and even the collective behavior. This strategy differs from the two previous strategies from a structural perspective. Firstly, the communication strategy interdicts links between agents that remain in the network, whereas the speakers controlling strategy removes the agents together with their links; secondly, all agents' links may be affected by the communication strategy, but only the speakers' links by the speakers strategy; thirdly, the interaction probability strategy reduces the frequency of interaction while the links still exist in the network, whereas they are removed in the communication strategy. This method is not usually taken in practice, because it requires interrupting daily communication devices and networks, which may involve the security department. This strategy is denoted s_3. Consequently, removing the links can be seen as a transformation R → R_3. We assume the obstructed link set is R_d3, so R_3 = R − R_d3. The key problem is how to obtain the link set R_d3, that is, how to determine the links to be removed. As the communication strategy is not a target-focused method, we use a probability p_re to represent the probability of each link being removed.
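The three strategies map onto simple graph operations; the following sketch applies them to the networkx graph from the earlier snippet. The function names and the independent per-link sampling scheme are illustrative assumptions.

```python
import random
import networkx as nx

def speakers_controlling(G, speakers):
    """s_1: remove speakers and, implicitly, all their links (A_1 = A - D_1)."""
    H = G.copy()
    H.remove_nodes_from(speakers)
    return H

def interaction_mask(G, p_in, rng):
    """s_2: per time step, each link is active with probability p_in."""
    return {e for e in G.edges if rng.random() < p_in}

def communication_intervention(G, p_re, rng):
    """s_3: remove each link independently with probability p_re (R_3 = R - R_d3)."""
    H = G.copy()
    H.remove_edges_from([e for e in G.edges if rng.random() < p_re])
    return H

rng = random.Random(42)
G = nx.barabasi_albert_graph(200, 3, seed=1)
G1 = speakers_controlling(G, speakers=[0])
active_links = interaction_mask(G, p_in=0.5, rng=rng)
G3 = communication_intervention(G, p_re=0.2, rng=rng)
```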
4.4. Strategy Effectiveness Evaluation. To evaluate the effectiveness of the strategies, criteria must be proposed. The basic principle of artificial society simulation is to observe the complex emergent results of simple agent interactions [32], and one effective approach is to evaluate the statistical characteristics of the group attributes. As mentioned above, the effectiveness evaluation should be a series of observations over time, using the notation (c_1(t), ..., c_K(t)). Besides the statistical variables, the characteristics of the resulting curves are also important for evaluating the effectiveness of the strategies; for distinction in the analysis, the characteristics of the dynamic result curves are noted (d_1, ..., d_L). In this study, the following nine criteria are used.

(i) The maximum number of extreme group agents during the event (c_1). To specify the agents' emotion level, a distinction threshold is used to determine whether an agent belongs to the group; specifically, in our model, the extreme emotion must reach a threshold θ_1 for the agent to be counted as having "extreme emotion".

(ii) The maximum number of agitated group agents during the event (c_2). The distinction method for agitated agents is similar, with threshold θ_2. An agent in this group is between the states "extreme" and "gentle", and may be incited to the state "extreme" or calm down to the state "gentle".

(iii) The maximum value of the average emotion during the event (c_3). The average emotion measures the total group emotion state and is also a symbol of the collective behavior.

(iv) The duration of the event, from the event start to its end (d_1).

(v) The duration for which the average emotion value is above a threshold θ_3; this can be seen as the "risk period" of the incident (d_2).

(vi) The length of time between the event start and the moment when the number of extreme group agents reaches its peak (d_3).

(vii) The length of time between the extreme group peak and the event end (d_4).

(viii) The length of time between the event start and the moment when the number of agitated group agents reaches its peak (d_5).

(ix) The length of time between the agitated group peak and the event end (d_6).

Simulation Experiments and Results Analysis

In the previous sections, we specified the mathematical description of the problem, the research framework of this study, the emotional dynamics of the agents, the models used for simulation, and the intervention strategies with nine criteria for effectiveness evaluation. To numerically investigate the proposed method, we need to specify the setup in terms of the network, the initial conditions of the agents, and the experiment parameters, which are all described in this section. The simulation experiments are presented together with results analysis and validation, including a real case study.

5.1. Simulation Scenarios Setting. The procedures of the simulation experiments are as follows.

(1) The initial social network was generated by the BA network model as the local relation structure of the agents.

(2) The simulation ran for 200 steps for a single event and 250 steps for two correlated events. The time was advanced step by step, and at each time step every agent selected one of its connected agents to interact with.

(3) At the time an event occurred, the event information was given to a designated start node (the speaker). The default was the node with maximum degree in the network.

(4) At each time step after the incident, the agent's self-generated emotion for the event decayed at the decay rate, and the environmental emotion was updated based on the other agents' emotions.
(5) When an agent received new information, it was only affected by the agents who had also received the latest information and was no longer affected by past events.

(6) After an agent had received information for the time threshold T, the agent was no longer affected by the emotions of other agents about that event.

Before starting the simulation experiments, the initial parameters were fixed at default values (see Table 1).

5.2. Collective Emotion Evolution of Single Events. We first investigated the main effect of the different strategies on a single event. In each run, the strategy was applied once at the same simulation time step (step 25, five steps after the event start). Figure 3 shows the group emotion evolution process under the different strategies, and the numerical results are presented in Table 2.

As shown in the figure, the curves of the speakers controlling strategy and the curves without strategy almost overlap. The results indicate that if the speakers controlling strategy is taken after the event start time, it is almost invalid: the event information has already been propagated, and the extreme emotion of the agents is generated even without the speakers. This is an interesting result, because removing key nodes is usually considered a very useful counterterrorism strategy, yet here it has little effect on the collective emotion. As expected, reducing the interaction probability between agents not only effectively reduces the number of agents in each group but also slows the growth of emotion, as well as the information diffusion process; this may be because the information transfer between agents is delayed (Figure 3(d)). The communication strategy (removing part of the links) reduces the maxima of both groups and the average emotion, but it only slows the information diffusion to some extent, and both the group time periods and the average emotion show little difference. Table 2 presents the numerical results of strategy effectiveness for the single event; the values in parentheses are the difference rates. As expected, the interaction probability strategy is the most effective method for reducing the number of agents in each group and the average emotion. The speakers controlling strategy makes nearly no difference, as the event information has already been propagated. The communication intervention strategy reduces the number of agents and the average emotion to some extent. From the time aspect, the speakers controlling strategy shows no obvious effect, and neither does the communication strategy. The interaction probability significantly affects the "rise phase" of the collective emotion (d_3 and d_5) and the "fall phase" (by about 20%). None of the strategies has an obvious effect on the length of the "risk period" (d_2). Furthermore, Figure 4 shows the collective emotion distribution of the different strategies at the peak points. As in Figure 4(a), the interaction probability strategy reduces the mean emotion value and the number of agents with high emotion values, while the speakers controlling and communication strategies again have little effect. In Figures 4(c) and 4(d), the agents with emotion zero are those who have not received the event information. The self-generated emotion is approximately uniformly distributed (Figure 4(c)), given that it is determined only by the autonomy of the agents, the decay rate, and the event criticality. By contrast, the environmental influence emotion approximately follows a normal distribution (Figure 4(d)). This is an
interesting result: it indicates that, at the collective emotion peak point, the environmental influence emotion plays the dominant role in making an agent extreme. Moreover, in the scatter diagram of the two-dimensional distribution (Figure 4(b)), more points lie above the symmetry line (red line), which also indicates that the environmental emotion is higher than the self-generated emotion for most agents. This interesting result means that the environmental emotion is the main cause of an agent becoming extreme.

5.3. Group Emotion Evolution of the Events-Chain Model. We then discuss the results of experiments under the different strategies with the events-chain model. Because of the accumulation effect, the emotion evolution differs from that in a single event. Figure 5 shows the results of two correlated events under the different strategies. The two events start at time steps 20 and 60, respectively, both with a criticality of 5; all other parameters are set to the default values in Table 1.

The curves of the two events are obviously different from the single event, especially those of the second event. The number of extreme group agents is much higher in the second event than in the first (Figure 5(a)). This is due to the emotion accumulation influence, as the start time of the second event lies within the influence time of the first event. In contrast, the number of agitated group agents decreases because the total number of agents is constant (Figure 5(b)).

According to the results, the speakers controlling strategy is the most effective method for reducing the collective extreme value, because the information of the second event cannot be transferred (Figure 5(d)) once the speakers have been removed from the network.

As in the single event, the same nine criteria are used for strategy effectiveness analysis. Unlike the single event, we are more interested in the accumulation influence on the second event. Thus, in this experiment all time indicators describe the characteristics of the second event (except d_2), and the numerical results are shown in Table 3. The percentages in the "without strategy" column are calculated relative to the single event (column 2), and the others relative to the accumulated events (column 3, without strategy).
Firstly, the difference between the single event and the accumulated event is obvious, especially in the number of extreme group agents (+1081.90%). On the contrary, the number of agitated group agents decreases because the total number of agents is constant. Another difference is that the time indicators all increase, especially the "fall phase" of the two groups. It is interesting that although the maximum number of the agitated group decreases (c_2, −24.76%), the time length still increases (d_6, +48.65%). Secondly, speakers controlling is the most effective strategy for correlated events, because removing the speakers from the network directly prevents the second event from taking place (as in the analysis of Figure 5). Thirdly, the effectiveness of the interaction probability strategy on the collective emotion shows no obvious difference from the single event, but the time indicators change considerably: the duration of the event shortens slightly (d_1, −1.57%), as does the "fall phase" of the two groups (−30.65% for the extreme group and −9.70% for the agitated group, respectively). Lastly, the effectiveness of the communication strategy becomes weaker, for example in the event duration (d_1) and the "fall phase" of the extreme and agitated groups (d_4 and d_6). In sum, the accumulation influence makes the collective emotion stronger, and it weakens the effectiveness of all strategies except speakers controlling.

5.4. Model Validation. Validation is a critical issue for any modeling approach applied to any system, especially when using ABS to model complex adaptive systems (CAS) [33,34]. We validate the proposed simulation with the method mentioned in [35]. This part presents the operational validation of the simulation; the empirical validation is performed in the real case study.

Statistical Tests of Randomness Effects. In order to examine the extent to which randomness affects the simulation results, the experiment was repeated ten times. The results are shown in Figure 6, and the statistical significance results are shown in Table 4 (p < 0.01). From Figure 6, we can see that the extent of the randomness changes with the rate of change of the variables, owing to the speed of information spreading; over the entire period, the randomness has no significant effect on the evolution process. From the results in Table 4, we find that the interaction probability strategy significantly affects all outcomes, while the speakers controlling strategy has little effect on the single event. The communication strategy works well on the statistical group emotion but not on the event times, except for the rise period of the agitated group.

Sensitivity Analysis. We then examined the effectiveness under different parameter values for sensitivity analysis; the results are shown in Figure 7. As described in Sections 5.2 and 5.3, the effectiveness of the speakers controlling strategy depends only on the execution time, while the other two depend on their probabilities. Thus, this examination focuses on the intervention probabilities, that is, the interaction probability and the link removal probability corresponding to the two strategies. From Figure 7, we find that the probabilities have a significant effect on the effectiveness of the strategies; moreover, the closer the probability is to its critical value (p_in → 0, p_re → 1), the more significant the effect.
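A sketch of such a randomness check, repeating the simulation under different seeds and testing whether a strategy shifts an outcome criterion, could use SciPy's independent-samples t-test. The run_simulation function here is a hypothetical stand-in for the model described above, stubbed with noise so the snippet runs on its own.

```python
import numpy as np
from scipy import stats

def run_simulation(strategy, seed):
    """Hypothetical driver returning one outcome criterion, e.g. c_1
    (max number of extreme agents); stubbed with noise for illustration."""
    rng = np.random.default_rng(seed)
    base = 120.0 if strategy is None else 95.0
    return base + rng.normal(0.0, 8.0)

baseline = [run_simulation(None, s) for s in range(10)]
treated = [run_simulation("interaction_probability", s) for s in range(10)]

t_stat, p_value = stats.ttest_ind(baseline, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significance of the strategy effect
```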
5.5. A Case Study. The computational model described above was applied to the Urumqi incident in Xinjiang province (China). The incident took place on July 5, 2009; approximately 20,000 people had received the violence activity information through various channels, and almost 5,000 people attended the activities. The incident started with the information about the Shaoguan (Guangdong, China) incident spreading among the population of Xinjiang by the terrorist organization Hizb ut-Tahrir [36], with instigation of extreme violent emotion. More details of the incidents can be found in [4]. Based on the proposed research method, the effectiveness of the intervention strategies was tested through simulation experiments.

To simulate the incident, the events-chain model was constructed based on the event times during the incident; the meta-events are presented in Table 5. According to the incident evolution information, the meta-events were modeled as related information diffusion events during the process. The times of the events were translated into simulation steps as event attributes. The criticalities of the events were assigned manually based on the instigation degree of the information, as also discussed in [4]. The agent number was 20,000, and the other parameters were set to the default values in Table 1.

Figure 8 shows the average emotion evolution during the incident, with the derivative events marked on the figure according to their times. The blue events are the meta-events used in the events-chain model, and the red events are the derivative violence activities during the incident. It is easy to see that the derivative events usually occurred at the peak points of the collective emotion and had the highest participation at these points. Another interesting result is that the meta-events usually occurred at the concave points of the curve, which means that the violence activity information was propagated at the low points of the collective emotion curves.

The emotion evolution results and the collective derivative events (events 6, 7, 8) show good consistency in Figure 8. This indicates the validity of the proposed method from the macro-tendency perspective. As detailed individual data are impossible to obtain, the macro emergent results demonstrate that ABS provides a feasible way to study collective emotion evolution in this type of incident. Furthermore, the consistency between the events and the group emotion can be used for event forecasting, which is also an important issue in emergency management.

To compare the simulation results with the real events for model validation, the derivative events are listed in Table 6, with their times translated into simulation steps. The start time of each derivative violence event is taken as the time of a peak point, and the number of extreme people is the size of the extreme group at that peak. For a complex system simulation, the macro evolution tendency is the most important aspect for understanding the emergent characteristics of the system, and the consistency between the derivative events and the simulation results again shows the validity of the proposed method.

Figure 9 shows the influences of the different strategies on the collective emotion evolution. The tendency of the curves is similar to the results in Section 5.3, which indicates that the strategies' effectiveness carries over to the real case and provides a convenient tool for formulating strategies to intervene in the incidents.
Conclusion

This paper studied the dynamics of group emotion evolution and the effectiveness of intervention strategies in extreme events with the ABS method. Modeling and simulating the process of information spreading and emotion evolution provides insight into potentially useful strategies. Through the use of ABS, the complexities of collective emotion evolution and group behavior can be better understood, and strategies can be tested in a simulation environment that behaves similarly to real-world situations. To further understand the accumulated influence of successive events, the events-chain model is proposed to model continuous relevant events in simulation experiments. In this model, the continuous events are expressed as a series of meta-events and information spreading processes, and the information diffusion between individuals is the driving power of emotion and behavior change. In order to evaluate strategy effectiveness, three strategies are modeled with mathematical definitions, and nine criteria are defined from the statistical and temporal perspectives, measuring the collective emotion dynamics through statistical properties and evolution-curve characteristics. Based on the constructed simulation system, three experiments are performed, along with model validation.

The simulation results allow us to draw several conclusions, on both the collective emotion evolution and the strategies' effectiveness. Our main conclusions are summarized as follows.

(i) The group extreme emotion evolution process is the collective response dynamics of events, accompanied by information diffusion. The dynamics generate a wave pattern over time, which can be regarded as pulses of events.

(ii) The environmental influence emotion is the main cause that makes an individual's emotion become extreme. The extreme emotion infection between agents generates a centralized emotion distribution, which shows convergence in the collective emotion dynamics.

(iii) For a single event, interaction probability is the most effective strategy, both in reducing the number of group agents and in slowing the evolutionary process. Communication intervention can only reduce the peak value of the group agents, and speakers controlling has little effect if it is taken after the event starts.

(iv) For successive events, because of the accumulation influence, speakers controlling is the most useful strategy for preventing the derivative events, as it removes the information source of the events. The interaction probability strategy displays results similar to the single event, as does the communication intervention strategy.

(v) The sensitivity analysis indicates that the probabilities are the deterministic factor in the effectiveness of the interaction probability and communication intervention strategies. Meanwhile, the statistical test of randomness effects shows that the model is robust and valid.

Moreover, the real case study shows consistency between the group emotion dynamics and the collective behavior (violence activity events). This makes it possible to predict the group emotion evolution tendency and to test the effectiveness of various strategies. The predictive value would be significant for governments in managing and preventing extreme incidents.
In conclusion, the presented agent-based simulations give new insight into group extreme emotion dynamics in extreme events. The framework can be used to run a variety of simulation experiments for understanding the emotion dynamics associated with other models. Meanwhile, based on the simulation framework, other intervention strategies can be evaluated through corresponding models. Hence, despite its mathematical complexity, our method provides a laboratory for further experiments on the group emotional behavior of agents (e.g., under different driving conditions, varied agent behaviors, and parameter values) and for comparative analysis of intervention strategies. A future development of the research would be to apply more sophisticated event models, including event classification (distinguishing different types of events and agents' responses to them) and an event impact model (how the events influence the agents' behavior). Furthermore, the heterogeneity of agents should also be considered in the simulation.

Figure 1: The research framework of collective emotion simulation and strategy evaluation.
Figure 2: The events-chain model of temporal successive events.
Figure 3: Collective emotion evolution curves of a single event: (a) the number curves of extreme group agents, (b) the number curves of agitated group agents, (c) average emotion value evolution curves, and (d) the information spreading process curves.
Figure 4: Emotion component distributions at the peak points: (a) emotion distribution of agents under different strategies, (b) scatter diagram of the 2-dimensional emotion distribution (without strategy), (c) self-generated emotion distribution (without strategy), and (d) environmental emotion distribution (without strategy).
Figure 5: Group emotion evolution curves of successive events: (a) the number of extreme group agents; (b) the number of agitated group agents; (c) average emotion value evolution curves; (d) the information diffusion curves (the second event).
Figure 6: Randomness influence on the results (without strategy, single event).
Figure 9: Group emotion evolution of the case incident: (a) the number curves of extreme group agents, (b) the number curves of agitated group agents, and (c) average emotion value evolution curves. (The strategies were taken at simulation step 200.)
Table 1: The initial parameter settings.
Table 2: Strategy effectiveness results of a single event.
Table 3: Strategy effectiveness results of successive events.
Table 4: Statistical significance for the three strategies.
Table 5: Meta-events of the case incident.
Outdoor particulate matter exposure affects metabolome in chronic obstructive pulmonary disease: Preliminary study

Introduction: The metabolomic changes caused by airborne fine particulate matter (PM2.5) exposure in patients with chronic obstructive pulmonary disease (COPD) remain unclear. The aim of this study was to determine whether it is possible to predict PM2.5-induced acute exacerbation of COPD (AECOPD) using metabolic markers.

Methods: Thirty-eight patients with COPD, diagnosed according to the 2018 Global Initiative for Chronic Obstructive Lung Disease criteria, were selected and divided into high exposure and low exposure groups. Questionnaire data, clinical data, and peripheral blood data were collected from the patients. Targeted metabolomics using liquid chromatography-tandem mass spectrometry was performed on the plasma samples to investigate the metabolic differences between the two groups and their correlation with the risk of acute exacerbation.

Results: Metabolomic analysis identified 311 metabolites in the plasma of patients with COPD, among which 21 metabolites showed significant changes between the two groups, involving seven pathways, including glycerophospholipid metabolism and alanine, aspartate, and glutamate metabolism. Among the 21 metabolites, arginine and glycochenodeoxycholic acid were positively associated with AECOPD during the three months of follow-up, with areas under the curve of 72.50% and 67.14%, respectively.

Discussion: PM2.5 exposure can lead to changes in multiple metabolic pathways that contribute to the development of AECOPD, and arginine is a bridge between PM2.5 exposure and AECOPD.

Introduction

Chronic obstructive pulmonary disease (COPD) is a common chronic disease, mainly characterized by persistent respiratory symptoms and airflow restriction resulting from airway and/or alveolar abnormalities. Its pathogenesis is not clear and is generally considered to be related to exposure to toxic air pollutants, especially particulate matter (1). The 2017 Global Burden of Disease Study showed that approximately 300 million individuals worldwide suffer from COPD, and the number of deaths due to COPD is as high as 3.2 million every year (2, 3), ranking fifth in the world's burden of disease. It is expected that COPD will become the third leading cause of death in the world by 2022 (4). The pathogenesis of COPD is very complex, and its underlying cellular and molecular mechanisms remain unclear. COPD is a heterogeneous disease, which brings many challenges in its diagnosis, prognosis, and management. Therefore, it is imperative to identify reliable, easily measurable, and clinically relevant biomarkers of COPD to improve the prognosis of patients.

Among the pathogenic factors of COPD, particulate matter (PM) is closely related to the occurrence and development of COPD, especially PM with a particle diameter <2.5 µm (PM2.5), the main pathogenic component of air pollutants. Because of its small particle diameter, PM2.5 can be deposited in the alveoli, which leads to disorders of the airway immune system. PM2.5 enhances lung inflammation and enters the blood circulation through the blood-gas barrier, causing serious health hazards to the body (5). Exposure to PM2.5 in the air is significantly associated with an increase in COPD prevalence and an increase in the acute exacerbation risk, hospitalization rate, and mortality of patients with COPD (6, 7). Concurrently, exposure to PM2.5 for a short time can lead to a systemic inflammatory response in patients with COPD (8).
Therefore, it is important to explore the mechanism by which PM2.5 exposure leads to COPD.

Metabolomics is the quantitative description of metabolites in organisms and their responses to changes in physiology, disease, and environment. It can provide mechanistic clues regarding biological processes and functions, which helps deepen the understanding of the occurrence and development of diseases. It has been shown that disorders of phospholipid metabolism are associated with the development of emphysema and acute exacerbation of COPD (AECOPD) (9). Meanwhile, air pollutants are known to have an effect on metabolic disorders (10). Sun et al. showed that air pollution exposure can affect lipid metabolism disorders and ultimately lead to the development of diabetes mellitus (11). Air pollutants can affect arginine and proline metabolism, which can affect the cardiopulmonary system (12). Air pollutants can also promote atherosclerotic lesions by affecting cholesterol dysregulation (13). However, limited studies have examined the effects of air pollution on the development of COPD. Therefore, the present study aims to identify which metabolites and metabolic pathways are influenced by outdoor air pollutants and the predictive value of these metabolites for AECOPD. We divided the patients with COPD into high and low pollutant exposure groups and explored the effect of PM2.5 on metabolism by performing targeted metabolomic assays on plasma to identify relevant markers that can predict AECOPD due to air pollution.

Methods

Study design and participants

This was an observational clinical study conducted from January 2018 to March 2019. The participants were patients with COPD in the Beijing-Tianjin-Hebei region, China. The inclusion criteria were as follows: (1) age 45-75 years; (2) met the diagnostic criteria of the Global Initiative for Chronic Obstructive Lung Disease (GOLD-2018) and were in a stable stage of disease within 1 month at the time of enrollment; (3) resided in the local area continuously for more than 2 years, with no long-term plan to leave the area during the survey period; and (4) had no smoking history or had stopped smoking for more than half a year, to rule out the effect of smoking on AECOPD. The exclusion criteria were as follows: (1) had complications such as malignant tumor, severe cardiovascular and cerebrovascular disease, liver and kidney insufficiency, or active pulmonary tuberculosis; (2) were being treated for epilepsy or mental illness; (3) underwent chest, abdominal, or ophthalmic surgery in the past 3 months; and (4) were pregnant or lactating women. All patients were surveyed using a questionnaire that included basic information, the COPD assessment test (CAT), the modified Medical Research Council (mMRC) dyspnea scale, and St. George's respiratory questionnaire, together with a pulmonary function examination, a physical examination (height, weight, and blood pressure), and peripheral blood collection. After 3 months, telephone follow-up was conducted to record any acute exacerbations during this period, including those requiring treatment with corticosteroids and/or antibiotics or hospitalization (9). The research protocol was approved by the ethics committee of the China-Japan Friendship Hospital, and all participants signed informed consent (IRB Approval Number: 2017-19). All methods were carried out in accordance with the Declaration of Helsinki.
Outdoor PM2.5 exposure assessment

The outdoor PM2.5 exposure concentrations in this study were obtained from the China National Environmental Monitoring Center (http://www.cnemc.cn/), and the daily estimates of pollutants at each monitoring station were calculated as the 24-hour average concentrations at the corresponding monitoring station. After matching the PM2.5 data with the subjects' addresses from the questionnaires, the average concentration of PM2.5 for the month before the survey date in each participant's city was obtained. The participants were divided into a high exposure group (>75 µg/m³) and a low exposure group (<35 µg/m³) based on the PM2.5 concentrations according to the National Ambient Air Quality Standards (GB 3095-2012).

Plasma-targeted metabolomic assays

Blood samples were collected from all patients in a fasting state, transferred to the Department of Laboratory Medicine of the China-Japan Friendship Hospital, and stored at −80 °C for metabolomic analysis. Targeted quantitative metabolomics analysis was performed using the Biocrates P500 platform (Biocrates Life Sciences AG, Innsbruck, Austria). Using a previously described method, 10 µL of each plasma sample was pretreated in 96-well plates (14) and then analyzed using liquid chromatography-tandem mass spectrometry (LC-MS) and flow injection analysis (FIA). LC-MS used two liquid chromatography separation methods, and mass spectrometry used the multiple reaction monitoring mode, collecting positive and negative ion modes, respectively. FIA used the same liquid chromatography separation method and collected data by two mass spectrometry methods in the positive ion mode. The raw mass spectrometry data were analyzed using the MetLIMS software (Biocrates Life Sciences AG), and the concentrations of the metabolites in the samples were calculated using the 7-point calibration standard curve method for the LC-MS data and the single-point method for the FIA data. All samples were randomly assigned, and quality control (QC) samples with known concentrations were inserted into the sample cohort to assess the reproducibility of the data. Metabolites were filtered out if their QC samples had coefficients of variation ≥25% or missing values ≥30%. The metabolites were annotated via the Human Metabolome Database (HMDB).

Statistical analysis

The differences in the demographic characteristics between the groups were analyzed using t-tests for continuous variables and chi-square tests for categorical variables. For the metabolomic analysis, the Mann-Whitney non-parametric test and orthogonal partial least squares discriminant analysis (OPLS-DA) were used to analyze between-group differences in metabolites after the metabolite concentrations were log-transformed. A P-value < 0.01 and a variable importance in the projection (VIP) score > 1.5 were used as the thresholds for differential metabolite screening. Pathway analysis was performed for significantly changed metabolites with HMDB numbers using the Kyoto Encyclopedia of Genes and Genomes (KEGG; Kanehisa Laboratories, Kyoto, Japan) database, and enrichment analysis was performed using the Small Molecule Pathway Database. Logistic regression was used to assess the association of variables with the risk of acute exacerbation during the 3 months of follow-up, with sex, age, smoking history, and BMI as covariates. The predictive power of important metabolites was assessed using receiver operating characteristic (ROC) curves and the area under the curve (AUC).
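To make the screening pipeline above concrete, the following Python sketch reproduces the filtering and univariate testing steps (QC-based filters, log-transform, Mann-Whitney test). It is a minimal illustration under stated assumptions: the file names and table layout (one column per metabolite plus a `group` column) are hypothetical, and the OPLS-DA/VIP step is omitted, so this is not the authors' actual code.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical input: rows = patients, columns = metabolite concentrations,
# plus a 'group' column with values 'high' or 'low' (PM2.5 exposure).
df = pd.read_csv("metabolites.csv")
groups = df.pop("group")
qc = pd.read_csv("qc_samples.csv")  # repeated QC injections, same metabolite columns

kept = []
for m in df.columns:
    cv = qc[m].std() / qc[m].mean()      # coefficient of variation in QC samples
    missing = df[m].isna().mean()        # fraction of missing values
    if cv < 0.25 and missing < 0.30:     # filter thresholds from the text
        kept.append(m)

results = []
for m in kept:
    x = np.log(df.loc[groups == "high", m].dropna())
    y = np.log(df.loc[groups == "low", m].dropna())
    stat, p = mannwhitneyu(x, y, alternative="two-sided")
    results.append((m, p))

# P < 0.01 is the univariate threshold; the paper additionally requires
# OPLS-DA VIP > 1.5, which would come from a separate multivariate model.
significant = [m for m, p in results if p < 0.01]
print(f"{len(significant)} candidate differential metabolites")
```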
Data and statistical analyses were performed using SPSS 26.0 (IBM, Armonk, NY, USA). OPLS-DA was performed using SIMCA 14.0 (Umetrics, Umeå, Sweden). Pathway analyses were performed on the MetaboAnalyst 5.0 platform (https://www.metaboanalyst.ca). Logistic regression was calculated using SPSS 26.0, and ROC curves were drawn with GraphPad Prism 9.0. A P-value < 0.05 was considered statistically significant.

Results

Characteristics of the participants

Thirty-eight participants with complete data were included in the analysis, with 19 in the high exposure group (mean age 62.8 ± 7.8 years, 84.2% male) and 19 in the low exposure group (mean age 62.7 ± 4.7 years, 63.2% male). The demographic characteristics of the study population are shown in Table 1. The average concentrations of outdoor PM2.5 exposure were 19.0 ± 1.02 µg/m³ and 106.0 ± 15.3 µg/m³ in the low and high exposure groups, respectively. The presence or absence of AECOPD during the 3 months of follow-up and the CAT questionnaire scores were statistically different between the two groups (P = 0.02, 0.01, and 0.008, respectively).

Effects of PM2.5 exposure on metabolites

Plasma metabolomics data from the 38 participants were collected. The LC-MS and FIA methods yielded 106 and 524 metabolites, respectively, and 311 metabolites were included in the analysis after data QC. OPLS-DA showed a separation trend between the low and high exposure groups. Twenty-one metabolites were statistically different between the two groups based on a P-value < 0.01 and VIP > 1.5 (Figure 1A and Table 2; see also Supplementary material 1). These metabolites included cholesterol esters, triglycerides, sphingolipids, phospholipids, lyso-phospholipids, amino acids, choline, and bile acids. The volcano plot and heat map showed that six metabolites showed elevated expression and 15 metabolites showed decreased expression with exposure to high PM2.5 levels compared with exposure to low PM2.5 levels (Figure 1; see also Supplementary material 2).

Although the inclusion criteria required patients with no smoking history or who had stopped smoking for more than half a year, in order to rule out the influence of smoking, we also divided the patients into two groups according to smoking status (ever smoker vs. never smoker) and compared the differential metabolites after exposure to high and low PM2.5 levels (Supplementary material 3). The results showed that, in both groups, most metabolites retained a certain statistical significance. This means that air pollution exposure had an impact on both ever smokers and never smokers.

To further investigate the molecular pathways underlying PM2.5 exposure and AECOPD, we performed pathway analysis based on the KEGG database (Figure 2). It was observed that seven metabolic pathways were correlated with PM2.5 exposure, including glycerophospholipid metabolism; alanine, aspartate, and glutamate metabolism; arginine and proline metabolism; and arginine biosynthesis. These results suggest that metabolic disturbances occur after PM2.5 exposure, including disturbances in amino acid, bile acid, and lipid metabolism (Figure 3).

Figure 2: Biological pathways associated with PM2.5 exposure. The color gradient indicates the significance of the pathway ranked by P-value (blue: higher P-values; red: lower P-values), and circle size indicates the pathway impact score (the larger the circle, the higher the impact score). PM2.5, PM with particle diameter <2.5 µm.
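The pathway analysis above relies on over-representation statistics; the short sketch below shows the hypergeometric test that typically underlies this kind of enrichment analysis. The pathway sizes and hit counts in the example are made up for illustration; only the totals (311 metabolites passing QC, 21 significant) come from the text.

```python
from scipy.stats import hypergeom

# Hedged sketch of pathway over-representation analysis: given N annotated
# metabolites, K of which belong to a pathway, and n significant hits with
# k in the pathway, the enrichment P-value is a hypergeometric tail probability.
def enrichment_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeom(population N, K successes, n draws)."""
    return hypergeom.sf(k - 1, N, K, n)

# Illustrative numbers only: 311 metabolites passed QC, 21 were significant.
N, n = 311, 21
for pathway, K, k in [("glycerophospholipid metabolism", 30, 6),
                      ("arginine biosynthesis", 8, 2)]:
    print(f"{pathway}: P = {enrichment_pvalue(N, K, n, k):.4f}")
```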
Predicting the risk of AECOPD based on metabolic changes

The associations of the 21 metabolites with the risk of AECOPD in the next 3 months are shown in Table 3. Arginine and glycochenodeoxycholic acid (GCDCA) were positively associated with AECOPD when corrected for sex, age, BMI, and smoking history (OR = 1.028, 95% CI 1.001-1.056, P = 0.045 and OR = 3.974, 95% CI 1.017-15.537, P = 0.047, respectively). Subsequently, the AUC obtained from the ROC analysis was used to assess the predictive performance of these two metabolites for AECOPD (Figure 4). Arginine had better predictive performance (AUC = 72.50%, 95% CI 0.53-0.92) than GCDCA (AUC = 67.14%, 95% CI 0.45-0.90).

Discussion

PM2.5 exposure can reportedly lead to acute exacerbation in patients with COPD, but the exact mechanism of action is unclear. In this study, we divided the patients into high and low exposure groups according to the level of outdoor PM2.5 exposure and analyzed their plasma metabolites. We found that 21 metabolites showed significant changes between the two groups. Meanwhile, by performing targeted metabolomic assays on the plasma of patients with COPD with different PM2.5 exposure levels, we found that PM2.5 exposure could affect lipid-, amino acid-, and bile acid-related metabolism (Figure 3). Correlation analysis of the metabolites with the occurrence of acute exacerbations within 3 months revealed that arginine and GCDCA were positively correlated with the occurrence of AECOPD, while arginine better predicted AECOPD. These findings suggest that arginine is a potential marker of PM2.5-induced AECOPD.

Several studies have confirmed the correlation between PM2.5 and the development of COPD. Yang et al. found that long-term PM2.5 exposure can reduce airway and small airway function and impair lung function (15). Liu et al. conducted a 16-year cohort study and found that for every 5 µg/m³ increase in PM2.5 concentration, the adjusted hazard ratio for COPD incidence was 1.17 (16). The hospitalization and mortality rates for COPD increase during periods of high air pollution. Each 10 µg/m³ increase in PM2.5 is associated with a 1.61% increase in the risk of hospitalization for patients with COPD in the United States (17) and a 0.82% increase in Beijing (18). In a meta-analysis, Chen et al. summarized the effect of air pollution exposure on COPD mortality and showed that a 10 µg/m³ increase in PM2.5 was associated with a 1.11% increase in the risk of COPD-related death (19).

Chronic exposure to air pollution can lead to oxidative stress and inflammatory responses in the body. Air pollutants may act directly as oxidants or generate free radicals, causing oxidative stress (20). The glycerophospholipid metabolism disturbance found in this study may be associated with this oxidative stress response, as air pollutants activate phospholipase A2, which hydrolyzes membrane glycerophospholipids. A previous study showed elevated expression of amino acid metabolites in the blood of participants from high pollution exposure areas close to roadways, including arginine, histidine, γ-linolenic acid, and hypoxanthine (24). Meanwhile, a 2021 analysis of the effects of air pollution exposure on plasma amino acid metabolites using a lagged effects model showed that exposure to PM2.5 was significantly and positively associated with elevated asparagine and glutamine over a 24-h period (25). These results suggest that amino acid metabolism plays an important role in PM2.5 pathogenesis.
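As a concrete illustration of the covariate-adjusted association and ROC analysis described above, the short Python sketch below fits a logistic regression for AECOPD with sex, age, BMI, and smoking history as covariates and computes an AUC for a single metabolite. The variable names and file layout are hypothetical, and the paper used SPSS and GraphPad Prism rather than this code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical data: one row per patient with outcome and covariates.
df = pd.read_csv("patients.csv")  # columns: aecopd, arginine, sex, age, bmi, smoking

# Covariate-adjusted logistic regression: odds ratio per unit of arginine.
X = sm.add_constant(df[["arginine", "sex", "age", "bmi", "smoking"]])
fit = sm.Logit(df["aecopd"], X).fit(disp=0)
or_arg = np.exp(fit.params["arginine"])
ci_low, ci_high = np.exp(fit.conf_int().loc["arginine"])
print(f"OR = {or_arg:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f}), "
      f"P = {fit.pvalues['arginine']:.3f}")

# Unadjusted predictive performance of the metabolite alone.
auc = roc_auc_score(df["aecopd"], df["arginine"])
print(f"AUC = {auc:.4f}")
```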
On performing logistic regression of the metabolites screened for association with PM2.5 exposure against AECOPD in the next 3 months, we found that arginine and GCDCA were positively associated with AECOPD when controlling for sex, age, BMI, and smoking history, while the ROC curves indicated that arginine better predicted AECOPD. This is similar to the findings of a previous study, which found that arginine expression was elevated in the plasma of patients with COPD compared with that in the healthy population and was further elevated in AECOPD (26). Meanwhile, Grasemann et al. showed that the arginine content in sputum was negatively correlated with FEV1 (27). Arginine is a semi-essential amino acid that is an important substrate for enzymes such as nitric oxide synthase (NOS) and arginase (28). Arginase converts arginine to ornithine and urea, while NOS converts arginine to citrulline with the production of NO. Exhaled NO is associated with airway inflammation, and ornithine is a precursor of proline and polyamines (e.g., putrescine, spermidine, and spermine), which promote collagen production and cell proliferation (29, 30). In addition, asymmetric dimethylarginine (ADMA), produced from arginine, is an endogenous competitive inhibitor of NOS (31). ADMA is emerging as a marker of cardiovascular risk and has been studied in allergic asthma models (32), obesity-induced asthma studies (33, 34), obstructive sleep apnea syndrome studies, and COPD studies (35). Therefore, an increase in arginine concentration will lead to an increase in the expression of NOS and ADMA, which has a potential effect on airway inflammation and airway remodeling.

The present study has some limitations. First, the PM2.5 exposure data were obtained from atmospheric monitoring stations, and indoor air pollution levels were not measured, so the data may not fully represent the true PM2.5 exposure of the patients. In addition, this study only explored the effects of PM2.5 on patients with COPD and did not investigate the effects of other pollutants on metabolites. Second, we did not consider indicators related to the patients' diet, exercise, sleep, and region that may affect metabolism. Third, the information about AECOPD was obtained from the patients' recall, and the questionnaire and blood samples were not collected when the patients were in acute exacerbation. In future studies, patients will be evaluated during acute exacerbation to better determine the impact of PM on AECOPD. Moreover, this study had a small sample size, restricting the generalizability of our findings. Further large-sample studies are needed to detect metabolites in sputum or bronchoalveolar lavage fluid and to conduct related mechanistic studies to further explore the effect of air pollution on metabolites and the mechanism of AECOPD.

Conclusions

In summary, using metabolomic assays of plasma from patients with COPD subjected to high and low exposure levels, this study showed that PM2.5 exposure can lead to changes in multiple metabolic pathways that contribute to the development of AECOPD. Our results suggest that arginine is a bridge between PM2.5 exposure and AECOPD; future studies can further explore the mechanism of the elevated arginine expression caused by PM2.5 exposure and the correlation between arginine and AECOPD. Meanwhile, reducing PM2.5 concentrations and avoiding going out in polluted weather are essential measures to prevent AECOPD.
Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the China-Japan Friendship Hospital. The patients/participants provided their written informed consent to participate in this study.

Author contributions

The study was proposed and the manuscript was revised by TiY. Material preparation and data collection were performed by TaY, HN, XL, and RD. Data analysis was performed by TaY, HW, FD, and QH. The first draft of the manuscript was written by TaY and HW. All authors commented on subsequent versions of the manuscript, read and approved the final manuscript, and contributed to the study conception and design.

Funding

This work was supported by grants 81970043 and 82270038. The sponsors had no role in the design of the study, the collection and analysis of the data, or the preparation of the manuscript.
Searching for $\Xi_{cc}^+$ in Relativistic Heavy Ion Collisions

We study the doubly charmed baryon Ξ_cc^+ in high energy nuclear collisions. We solve the three-body Schrödinger equation with relativistic correction and calculate the Ξ_cc^+ yield and transverse momentum distribution via the coalescence mechanism. For Ξ_cc^+ production in central Pb+Pb collisions at LHC energy, the yield is extremely enhanced, and the production cross section per binary collision is one order of magnitude larger than that in p+p collisions. This indicates that it is most probable to discover Ξ_cc^+ in heavy ion collisions, and its discovery can be considered as a probe of quark-gluon plasma formation.

The flavor SU(4) quark model predicts 22 charmed baryons [1]. Searching for them experimentally has been an active direction in the field of high energy physics, but most of them are not yet discovered. The main reason why charmed baryons, especially multicharmed baryons, are hardly observed in elementary collisions like p+p and e^+ + e^- is the rare production of charm quarks in these collisions. For instance, the production of the triply charmed baryon Ω_ccc requires at least three pairs of charm quarks with small relative momenta in one event. This is very difficult even at LHC energy. In high energy nuclear collisions, however, there are plenty of off-diagonal charm quarks in the created fireball. For instance, the number of cc̄ pairs can reach 10 in Au+Au collisions at RHIC energy and 100 in Pb+Pb collisions at LHC energy [2]. These uncorrelated charm quarks can be combined to form charmed hadrons via statistics. Obviously, the combination will largely enhance the yield of multicharmed baryons.

The production of charmed hadrons in high energy nuclear collisions is closely related to quark-gluon plasma (QGP) formation in the early stage of the collisions. Due to the rapid expansion of the colliding system, the temperature of the formed fireball drops fast, and one cannot directly see the QGP in the final state of the collisions. In 1986, Matsui and Satz pointed out J/ψ suppression as a signature of QGP formation [3]. Considering the combination of c and c̄, the J/ψ regeneration [4] in the QGP and its competition [5-7] with the initial production explain well the experimental data on charmonium yields and transverse momentum distributions at RHIC and LHC energies. Extending to the production of multicharmed baryons, the largely enhanced production cross section via the combination of charm quarks in the QGP makes them a unique probe of the new state of matter [8]. In this Letter we investigate the production of the doubly charmed baryon Ξ_cc^+ in high energy nuclear collisions. The experimental search for Ξ_cc^+ has lasted for decades.
The SELEX collaboration [9] claimed the observation of Ξ_cc^+ in 2003, but the FOCUS [10], Belle [11], BaBar [12] and LHCb [13] collaborations failed to reproduce the result. In comparison with the triply charmed Ω_ccc [8], the production probability of doubly charmed baryons should be much larger, and the decay modes of Ξ_cc^+ are already widely discussed theoretically and experimentally [9-14]. We will first solve the three-body Schrödinger equation, including the relativistic correction, to obtain the wave function and Wigner function of the ground bound state of the three quarks ccq, and then calculate the Ξ_cc^+ yield and transverse momentum distribution in heavy ion collisions through the coalescence mechanism on the hypersurface of the deconfinement phase transition determined by hydrodynamics.

We employ the Schrödinger equation to describe the bound state of the three quarks ccq, with total energy E_T and wave function Ψ. To be specific, we choose the index i = 1, 2 for the two charm quarks and i = 3 for the light quark. As a usually used approximation [15], we neglect the three-body interaction and express the potential as a sum of pair interactions, V = Σ_{i<j} v(r_i, r_j). According to leading order QCD, the diquark potential is only one half of the quark-antiquark potential. Assuming such a relation in the case of strong coupling and taking the Cornell potential, we have

v(r_i, r_j) = (−α/|r_i − r_j| + σ|r_i − r_j|)/2,

where α = π/12 and σ = 0.2 GeV² are the coupling parameters of the potential, which together with the quark masses m_c = 1.25 GeV and m_q = 0.3 GeV reproduce well the D, J/ψ and ψ′ masses in vacuum. In a hot and dense medium, the interaction among quarks should be weakened. From lattice calculations [16], however, the J/ψ spectral function is clearly broadened only when the temperature is much higher than the critical temperature T_c. Therefore, we can still take the Cornell potential at the coalescence, which happens at T_c.

We take the hyperspherical method [15] to solve the three-body Schrödinger equation, introducing the global coordinate R and relative coordinates r_x and r_y through the transformation R = (m_c r_1 + m_c r_2 + m_q r_3)/M, r_x = √(m_c/(2µ)) (r_1 − r_2) and r_y = √(2m_c m_q/(µM)) (r_3 − (r_1 + r_2)/2), with the total mass M = 2m_c + m_q and an arbitrary parameter µ with mass dimension, which drops out in the end so that its value does not affect the result. With the new coordinates, the motion of the three-quark bound state can be factorized into a global motion and a relative motion, Ψ(R, r_x, r_y) = Θ(R)Φ(r_x, r_y). Rewriting the amplitudes of r_x and r_y in terms of the hyperradius r = √(r_x² + r_y²) and the hyperpolar angle α = arctan(r_x/r_y), and constructing the 6-dimensional relative motion space (r, Ω) = (r, α, θ_x, ϕ_x, θ_y, ϕ_y), where θ_x, ϕ_x and θ_y, ϕ_y are the azimuthal angles of r_x and r_y, the kinetic energy in the center of mass frame can be separated into a radial part and an angular part with corresponding eigenstates R(r) and Y_κ(Ω) [17], where κ stands for the 5 quantum numbers (k, l, m, l_x, l_y) in the triplet-singlet representation. However, since the potential V(r, Ω) depends on both the hyperradius r and the 5 angles, the relative motion cannot be factorized into a radial part and an angular part.
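As a quick numerical illustration of the potential model just described, the Python sketch below evaluates the pairwise Cornell interaction V = Σ_{i<j} v(r_i, r_j) for a given spatial configuration of the three quarks, using the halved quark-quark potential and the parameters quoted in the text (α = π/12, σ = 0.2 GeV²). The quark configuration itself is made up; unit conversion uses ħc ≈ 0.1973 GeV·fm.

```python
import numpy as np

ALPHA = np.pi / 12         # Coulomb coupling from the text
SIGMA = 0.2                # string tension, GeV^2
HBARC = 0.1973             # GeV*fm, converts distances in fm to GeV^-1

def v_pair(ri, rj):
    """Halved Cornell potential between two quarks at positions ri, rj (fm)."""
    d = np.linalg.norm(ri - rj) / HBARC          # separation in GeV^-1
    return 0.5 * (-ALPHA / d + SIGMA * d)        # GeV

def total_potential(positions):
    """Sum of pair interactions V = sum over i < j of v(r_i, r_j)."""
    n = len(positions)
    return sum(v_pair(positions[i], positions[j])
               for i in range(n) for j in range(i + 1, n))

# Made-up configuration: two charm quarks 0.3 fm apart, light quark ~0.5 fm away.
quarks = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.15, 0.5, 0.0]])
print(f"V = {total_potential(quarks):.3f} GeV")
```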
Considering that the eigenstates Y_κ(Ω) constitute a complete set, we express the relative wave function as a linear combination of them, Φ(r, Ω) = Σ_κ φ_κ(r) Y_κ(Ω). Substituting the expansion into the Schrödinger equation and using the orthonormal relations for Y_κ(Ω) [17], we obtain coupled ordinary differential equations for the hyperradial wave functions φ_κ(r), with dΩ = sin²α cos²α sinθ_x sinθ_y dα dθ_x dθ_y dϕ_x dϕ_y and binding energy ε. For the ground bound state Ξ_cc^+, we take only the first two hyperspherical harmonic functions, Y_0 = 1/π^{3/2} and Y_1 = 2cos(2α)/π^{3/2}. This leads to two coupled equations for the first two hyperradial components φ_0 and φ_1. The two ordinary differential equations can be numerically solved by using the inverse power method [18], and the radial components φ_0(r) and φ_1(r) for the ground state Ξ_cc^+ are shown in Fig. 1. The second component φ_1 is much smaller than the first component φ_0 at any r, indicating that the truncation to two harmonics is good enough for Ξ_cc^+.

For Ξ_cc^+, it is necessary to consider the relativistic correction to the light quark motion. Taking the correction ΔĤ = −Σ_i p_i⁴/(8m_i³) to the Hamiltonian, the resulting corrections to the binding energy and relative wave function of Ξ_cc^+ follow the perturbation formula, where the sum is over the excited states determined by the radial equations. With the correction, we obtain the Ξ_cc^+ mass m_{Ξ_cc^+} = 2m_c + m_q + ε_0 + Δε_0 = 3.584 GeV and the average radius. The mass value agrees well with previous calculations in the MIT bag model [19], QCD sum rules [20, 21], potential models [22-24] and lattice QCD [25, 26], where the Ξ_cc^+ mass lies between 3.5 GeV and 3.7 GeV. If Ξ_cc^+ is instead considered as the ground bound state of a quark(q)-diquark(cc) system, its motion is described in two steps, the diquark motion and the quark-diquark motion, both controlled by a two-body Schrödinger equation. We recalculated the Ξ_cc^+ wave function in this case; it is similar to the one obtained with the hyperspherical method, and the Ξ_cc^+ mass and average radius are respectively 3.62 GeV and 0.41 fm. The similarity between the two methods is due to the large mass difference between the light and heavy quarks.

Using the relativistically corrected wave function, we now construct the Ξ_cc^+ Wigner function W in the center of mass frame, where p is the 6-dimensional momentum corresponding to the relative coordinate r. Taking the first axis of the integrated vector y in the direction of p and the second axis in the plane constructed by p and r, the Wigner function is largely simplified and depends only on r, p, α and the angle θ between r and p. By integrating out the angles, we obtain the probability of finding the three quarks in the ground bound state Ξ_cc^+ with relative distance r and relative momentum p,

P(r, p) = (r⁵ p⁵)/(24π) ∫ W(r, p, α, θ) sin²α cos²α sin⁴θ dα dθ,   (10)

which is shown in Fig. 2. It is close to a double Gaussian distribution e^{−(r−⟨r⟩)²/σ_r²} e^{−(p−⟨p⟩)²/σ_p²} with the most probable position at (⟨r⟩, ⟨p⟩) = (0.41 fm, 1.4 GeV) and the standard deviations (σ_r², σ_p²) = (⟨(r − ⟨r⟩)²⟩, ⟨(p − ⟨p⟩)²⟩) = ((0.16 fm)², (0.39 GeV)²).

We now calculate the Ξ_cc^+ production via the coalescence mechanism in high energy nuclear collisions. The coalescence mechanism [27] has been successfully used to describe light hadron production, especially the quark number scaling of the elliptic flow [28] and the enhancement of the baryon to meson ratio [29, 30].
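Since the computed coalescence probability P(r, p) is reported to be close to a double Gaussian with (⟨r⟩, ⟨p⟩) = (0.41 fm, 1.4 GeV) and (σ_r, σ_p) = (0.16 fm, 0.39 GeV), a numerical sketch of that parameterization is straightforward. The Python snippet below only evaluates the quoted Gaussian fit, not the full Wigner-function integral of Eq. (10).

```python
import numpy as np

# Double-Gaussian fit to the coalescence probability P(r, p) quoted in the text.
R_MEAN, P_MEAN = 0.41, 1.4        # fm, GeV: most probable relative distance/momentum
SIGMA_R, SIGMA_P = 0.16, 0.39     # fm, GeV: standard deviations

def coalescence_probability(r, p):
    """Gaussian approximation to P(r, p); unnormalized, as in the fit."""
    return np.exp(-((r - R_MEAN) / SIGMA_R) ** 2) * np.exp(-((p - P_MEAN) / SIGMA_P) ** 2)

# The probability is maximal at the most probable phase-space point
# and falls off quickly away from it.
for r, p in [(0.41, 1.4), (0.6, 1.4), (0.41, 2.0)]:
    print(f"r = {r} fm, p = {p} GeV -> P ~ {coalescence_probability(r, p):.3f}")
```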
In coalescence models, the change in the constituent distribution before and after the coalescence process is required to be small, namely the number of constituents involved in the coalescence must be small compared with the total particle number of the system. In this sense, the coalescence mechanism is particularly suitable for the production of rare particles like Ξ_cc^+. The coalescence probability, namely the Wigner function, is usually parameterized as a Gaussian distribution [27, 31], with the width fixed by fitting the data. For multicharmed baryons, however, there are currently no data, and an adjustable coalescence probability would remove the predictive power of the calculation. For Ξ_cc^+ we therefore use the Wigner function W calculated above as the coalescence probability.

The QGP created in the early stage of a heavy ion collision is very close to an ideal fluid, and its spacetime evolution is controlled by the hydrodynamic equations ∂_µ T^{µν} = 0, with T^{µν} the energy-momentum tensor. The initial condition at time τ_0 = 0.6 fm/c is determined by the colliding energy and nuclear geometry, which leads to a maximum initial temperature T_0 = 484 MeV in central Pb+Pb collisions at √s_NN = 2.76 TeV [32]. To close the evolution equations, we take the equation of state of the hot medium with a first order phase transition between the ideal QGP and the hadron gas at critical temperature T_c = 165 MeV. By solving the hydrodynamic equations, one obtains the local temperature T(x) and fluid velocity u^µ(x), which are used in the coalescence. The coalescence happens on the hadronization hypersurface σ_µ(R), and the 4D coordinates R^µ = (t, R) on the hypersurface are constrained by the critical temperature T(R^µ) = T_c, which leads to the coalescence time t = t(T_c, R).

The observed momentum distribution of Ξ_cc^+ via the coalescence mechanism can be calculated from the Wigner function [27], where P^µ = (P^0, P) is the 4D Ξ_cc^+ momentum with energy P^0 = √(m²_{Ξ_cc^+} + P²) and 3D momentum P = (P_T, P_z = P^0 sinh η) corresponding to the coordinate R, the constant C = 1/18 comes from the intrinsic symmetry, and F is the distribution function of the three quarks in phase space. Since the Wigner function obtained above is derived in the center of mass frame of the Ξ_cc^+, which moves with 4-velocity v^µ = P^µ/m_{Ξ_cc^+} in the laboratory frame, the coordinates r̃_i and momenta p̃_i in the quark distribution function F and the r_i and p_i in the Wigner function W are related to each other via a Lorentz transformation [8]. The three-quark distribution F can be factorized as F(r̃_1, r̃_2, r̃_3, p̃_1, p̃_2, p̃_3) = S f_c(r̃_1, p̃_1) f_c(r̃_2, p̃_2) f_q(r̃_3, p̃_3), where the constant S = 1/2 counts the symmetry of the two charm quarks. The light quark motion in the QGP is controlled by the Fermi distribution f_q(r̃_3, p̃_3) = N_q f(r̃_3, p̃_3) = N_q/(e^{u_µ p̃_3^µ/T} + 1) with degeneracy factor N_q = 6 and the local velocity u^µ(r̃_3) and temperature T(r̃_3) of the fluid. The charm quarks are produced through initial hard processes (their regeneration in the QGP at √s_NN = 2.76 TeV is very small and can safely be neglected [33]) and then interact with the hot medium. The single charm quark distribution f_c lies in principle between the pQCD distribution with weak interaction and the equilibrium distribution with strong interaction. From the experimental data at the LHC [34], the observed large quench factor and elliptic flow for charmed mesons indicate that charm quarks are almost thermalized with the medium.
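To illustrate the thermal inputs that enter the coalescence integrand, the sketch below evaluates the Fermi-Dirac phase-space factor for a light quark on the T_c = 165 MeV hypersurface in the local rest frame (where u·p reduces to the quark energy). It is a simplified, hypothetical illustration: the full calculation boosts with the local fluid velocity u^µ(x) from hydrodynamics, which is omitted here.

```python
import numpy as np

T_C = 0.165   # GeV, critical temperature of the hadronization hypersurface
M_Q = 0.3     # GeV, light quark mass
N_Q = 6       # light quark degeneracy factor

def fermi_factor(p, T=T_C, m=M_Q):
    """Fermi-Dirac occupation N_q / (exp(E/T) + 1) in the local fluid rest frame."""
    energy = np.sqrt(p**2 + m**2)          # u.p -> E when the fluid is at rest
    return N_Q / (np.exp(energy / T) + 1.0)

# The occupation drops steeply with momentum, which is one reason coalescence
# populates mainly low relative momenta.
for p in (0.1, 0.5, 1.0, 2.0):             # GeV
    print(f"p = {p:.1f} GeV -> f_q = {fermi_factor(p):.4f}")
```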
Given this degree of thermalization, one can take, as a good approximation, a kinetically equilibrated charm quark distribution, normalized by the charm quark number density ρ_c through the normalization factor of the Fermi distribution. Different from light quarks, charm quarks are not chemically equilibrated in the QGP [35], and the spacetime evolution of the number density ρ_c is governed by the conservation law during the expansion [33], with the initial number density ρ_c(τ_0, r̃) determined by the colliding nuclear geometry and the cross section σ^{cc̄}_{pp} of charm quark pair production in p+p collisions.

We can now calculate the Ξ_cc^+ yield and transverse momentum distribution through the coalescence approach in heavy ion collisions. The yield as a function of the number of binary collisions N_coll at middle rapidity |y| < 1 in Pb+Pb collisions at √s_NN = 2.76 TeV is shown in Fig. 3. In the calculation we have taken the charm quark cross section dσ^{cc̄}_{pp}/dy = 0.7 mb [36]. If we consider a homogeneous fireball with volume V and a momentum-independent coalescence probability, the Ξ_cc^+ yield scales as N_c²/V, where we have assumed that both the charm quark number N_c and the fireball volume V are proportional to N_coll. This explains the approximately linear increase in Fig. 3. To compare with the Ξ_cc^+ yield in p+p collisions, we introduce the effective total production cross section per binary collision, σ̄.

In heavy ion collisions, transverse motion develops during the dynamical evolution of the system and is sensitive to the properties of the hot medium. In order to understand the Ξ_cc^+ production mechanism and extract the properties of the medium, we calculated the Ξ_cc^+ transverse momentum distribution shown in Fig. 4. Due to the statistical law, the Ξ_cc^+ baryons produced via the coalescence mechanism are mainly distributed at low momentum with average momentum ⟨P_T⟩ ≃ 2 GeV, which should be much smaller than that from hard processes in p+p collisions. As a consequence of the coalescence mechanism, the feature of the increasing baryon to meson ratio at intermediate and high transverse momentum in heavy ion collisions [29, 30] carries over to the Ξ_cc^+ to J/ψ ratio; see the dashed line in Fig. 4.

We briefly discuss the decay modes of Ξ_cc^+, which are closely related to its experimental discovery. Since Ξ_cc^+ is the ground state of the baryons with the three quarks ccq, it decays via the weak interaction. Possible decay modes include [9-14] Ξ_cc^+ → (Λ_c^+ → p K^- π^+) K^- π^+, Ξ_cc^+ → D^0 p K^- π^+, Ξ_cc^+ → D^+ p K^-, Ξ_cc^+ → Ξ_c^+ π^+ π^- and Ξ_cc^+ → Ξ_c^0 π^+. The lifetime calculated via the optical theorem lies between 110 and 250 fs [38, 39]. The short lifetime is probably a challenge for the experimental measurement of Ξ_cc^+.

In summary, we studied the doubly charmed baryon Ξ_cc^+ in high energy nuclear collisions. We solved the Schrödinger equation for the bound state of the three quarks ccq with relativistic correction, and calculated the corresponding Wigner function, which serves as the coalescence probability for the three quarks to combine into a Ξ_cc^+ in phase space. For Pb+Pb collisions at LHC energy, we computed the Ξ_cc^+ yield and transverse momentum distribution through coalescence. We found that the Ξ_cc^+ production is extremely enhanced, and the effective production cross section per binary collision is one order of magnitude larger than that in p+p collisions. This indicates that it is most probable to find Ξ_cc^+ in heavy ion collisions at LHC energy, and its discovery can be taken as a signal of QGP formation.
Quantification of the Coordination Degree between Dianchi Lake Protection and Watershed Social-Economic Development: A Scenario-Based Analysis

Dianchi Lake is the largest freshwater lake on the Yunnan-Guizhou Plateau, near Kunming City, China. It is one of the most polluted lakes in China; although billions of U.S. dollars have been spent trying to clean it up, water pollution and eutrophication are still a bottleneck for regional sustainable development. This research established an integrated approach for the evaluation of the coupling coordination degree to support future planning of the Dianchi Lake basin. Ten future scenarios for possible development directions of the Dianchi Lake basin were designed to find the best balance between development and protection. Among these scenarios, a high protection-medium development scenario is the most suitable scenario for future development planning. To further improve the coordination degree, economic growth control and non-point source governance are the most effective and feasible approaches. Furthermore, a water quality model was used to verify the coordination degree; it was found that the high protection-medium development scenario can reach the water quality target in 2025. The coordination degree evaluation could be a practical link to help balance the socio-economic development and environmental protection of the Dianchi Lake basin.

Introduction

Since the mid-20th century, many countries and regions have witnessed rapid socio-economic development, but this remarkable success has also brought about a disaster for the environment. In order to better solve these problems, the concept of sustainable development has received growing attention. Coordination is originally a physics concept. In recent years, it has been adopted much more in the environmental science field, representing a balance between environmental benefit and socio-economic benefit, which is at the core of sustainable development. There are many studies on coordination: Tomal et al. analyzed the coordination degree of socio-economic-infrastructural development in Poland [1], and Ariken et al. performed a coordination analysis between urbanization and the eco-environment in the Yanqi Basin [2]. The degree of coordination can be calculated with a coordination degree model. Multiple coordination models have been created to solve different coordination problems, such as the membership function coordination degree model, the coordination development dynamic change model, the coupling coordination degree (CCD) model, and the dynamic CCD model [3-6]. Among all these models, the coupling coordination degree (CCD) model is the most mature and widely used, as it can reflect the interaction of coordinated development among subsystems [7-10]. This model cannot directly simulate a future coordination degree, because it must be evaluated based on completed events; however, combined with the scenario analysis method, future coordination trends of the system can still be calculated [11,12]. Moreover, this model can also reveal which factors in the subsystems have the greatest impact on subsystem development [13,14].

In this research, the Dianchi Lake basin, located in Kunming City, China, is taken as a case study. Dianchi Lake is one of the most seriously polluted lakes in China, and its water pollution and eutrophication have become a bottleneck for regional sustainable development.
In the past, most studies about Dianchi Lake focused on reducing pollutant concentrations or cutting down pollutant discharge from a technical perspective [15-17]. This study pays more attention to policy recommendations, combining a coordination model with a scenario analysis to evaluate the degree of coordination between development and environmental protection. On this basis, the environmental fluid dynamics code (EFDC) model is adopted to verify whether the simulated future scenarios can meet the water quality target of Dianchi Lake. The objective of this study is to find the most suitable development scenarios and use them as a reference for the preparation of future development plans of the Dianchi Lake basin.

Study Area

This study uses Dianchi Lake (102°30′-103°00′ E, 24°28′-25°28′ N) as a case study. Figure 1 shows its location. Dianchi Lake is the largest freshwater lake in Yunnan Province, China; it covers 330 km² and has an average depth of 5 m. Waihai is its main part, accounting for 96.4% of the total water area [18,19]. In terms of meteorology, the average annual temperature in the Dianchi Lake basin is 15.1 degrees Celsius and the average annual rainfall is 1075 mm [20]. The Dianchi Lake basin is a highly developed region: it covers only 0.75% of the province's land, but nurtures 8% of the province's population and contributes 23% of the province's GDP (gross domestic product) [21]. By the end of 2018, the population of the Dianchi Lake basin had exceeded 4 million with a GDP of 62 billion U.S. dollars, and it still has a great chance to grow continuously in the next 10 to 20 years according to the development plans. It can be said that this area is the most densely populated, economically developed, and urbanized area in Yunnan Province. However, at the same time, serious water pollution of the lake is still a huge concern for this area, although billions of U.S. dollars have been spent trying to clean it up. According to the latest data in 2018, the water quality rank of Dianchi Lake was still class IV with mild eutrophication, while chemical oxygen demand (COD) and ammonia nitrogen were the main pollutants [22]. Unfortunately, this situation may continue in the foreseeable future.

The Coupling Coordination Degree Model

The coupling coordination degree (CCD) model is the most mature and widely used coordination model, which can reflect the interaction of coordinated development among subsystems. A series of indicators forms the basis of it. These indicators are the keys to calculating the development levels of the subsystems, which are the basic components of the whole system. Table 1 shows all indicators used in this study to evaluate the development levels of the socio-economic subsystem and the water environment subsystem. These indicators are all directly derived from the relevant plans of the Dianchi Lake basin and can best represent the actual development levels of the subsystems, ensuring scientific soundness. Further, the criterion for selecting them is that they should fully reflect the main characteristics of the basin with a minimum number of indicators. Specifically, under the socio-economic subsystem, nine indicators are set to describe three important aspects of the subsystem: social development, economic development, and industrial development; the water environment subsystem focuses more on indicators related to the current water environment quality in Dianchi Lake.
Six indicators are set for it based on three aspects: urban sewage, water environment pressure, and water environment carrying capacity. In terms of the selection of pollutant indicators, there are three main reasons for choosing ammonia nitrogen and COD. First, the concentration of phosphorus in Dianchi Lake reached the Chinese water quality standards in 2011, but the concentrations of COD and ammonia nitrogen still exceed permitted levels. Thus, as a study intended to provide policy advice, it is necessary to pay more attention to COD and nitrogen, which will remain the key pollutants of concern in the future. Second, the Dianchi Lake basin is surrounded by a large area of farmland with serious non-point source pollution, and the ammonia nitrogen produced by poultry farming is more or less discharged into the lake without treatment; therefore, from the perspective of pollutant sources, ammonia nitrogen accounts for the largest proportion of total nitrogen. Third, under the current water environment management system of China, COD and ammonia nitrogen are the most important and most widely monitored pollution indicators, so their emission data are easier to obtain. Conversely, it is much harder to obtain complete phosphorus emission data.

Determination of Indicators' Weights

The indicator system should be normalized to eliminate the dimensions by Equation (1), where X'_ij and X_ij represent the standardized and primitive values of indicator j in year i, and α_ij and β_ij represent the standard values of each indicator. Here, different symbols are used to distinguish whether the indicators are positive or negative: if an indicator has a positive impact on the subsystem, such as the growth rate of GDP (gross domestic product), it is considered a positive indicator; indicators with a negative impact, such as pollutant emissions, are negative indicators.

The weights of the indicators represent their importance in the model. Instead of an average weighting method, the entropy weighting method is used to determine the indicators' weights. This method introduces the concept of entropy, regarding indicators with greater variations and fluctuations as having a greater impact on the subsystem, while those with less fluctuation have less impact; the weights are set accordingly, avoiding the subjectivity of determining them from experience. Table 1 also shows the resulting weights, which are calculated by Equations (2)-(5):

P_ij = X'_ij / Σ_{i=1}^{m} X'_ij,  (2)

e_j = −(1/ln m) Σ_{i=1}^{m} P_ij ln P_ij,  (3)

d_j = 1 − e_j,  (4)

ω_j = d_j / Σ_{j=1}^{n} d_j,  (5)

where P_ij represents the proportion of indicator j in year i; e_j represents the entropy of indicator j; d_j represents the difference coefficient of indicator j; ω_j represents the weight of indicator j; m is the total number of years; and n is the total number of indicators.

Establishment of the CCD Model

Assume that x_1, x_2, x_3, ..., x_p represent the indicators of the socio-economic development subsystem, and y_1, y_2, y_3, ..., y_q represent the indicators of the water environment subsystem. The development index of each subsystem can be calculated by Equations (6) and (7), in which s(x) and e(y) represent the development indexes of the socio-economic development subsystem and the water environment subsystem, and ω_s and ω_e represent the weights of the indicators in the two subsystems:

s(x) = Σ_{j=1}^{p} ω_sj x_j,  (6)

e(y) = Σ_{j=1}^{q} ω_ej y_j.  (7)

The CCD model can then be constructed by Equation (8):

C = 2 √(s(x) e(y)) / (s(x) + e(y)).  (8)

The coordination degree C indicates the difference between the development levels of the two subsystems (0 ≤ C ≤ 1).
Specifically, C = 1 means the coupling between the two subsystems reaches its peak, while C = 0 means they are completely irrelevant and uncoordinated. Comparing s(x) with e(y), s(x) > e(y) means the socio-economic development level exceeds the carrying capacity of the water environment, while s(x) < e(y) means the water environment capacity cannot be fully utilized to support social and economic development. We define T to represent the overall system development index:

T = a s(x) + b e(y).  (9)

Assuming that socio-economic development is as important as water environment protection, a = b = 0.5. D = √(C T) represents the coordinated development degree, which simultaneously reflects the coordination and development levels. It is possible for some situations with a poor coordination degree to have a higher coordinated development degree; in fact, a small sacrifice in coordination can be accepted if the overall development level is very high [23,24].

Scenario Definitions

The simulation interval of the scenario analysis in this study is 2016-2025. City planning is the main reference standard for future development directions and government work. The 13th Five-Year Plan of the Dianchi Lake basin is an important plan composed by the Kunming municipal government in 2015, outlining the expected development direction of the Dianchi Lake basin during 2016 to 2020 [25]. On this basis, a reasonable extrapolation is made and a scenario called the "Planning scenario" is set, which indicates that future development is carried out fully as expected. The values of the other eight scenarios are taken on both sides of the Planning scenario. Each scenario is set by different development speeds in two important aspects: socio-economic development and water environment protection. Specifically, three socio-economic development speeds are set based on different population, economic growth, and industrial structure assumptions, referred to as "High development", "Medium development" and "Low development". Similarly, three types of water environment protection, "High protection", "Medium protection" and "Low protection", are defined based on different levels of water pollutant emissions. The Medium development-Medium protection (M-M) scenario is exactly the Planning scenario mentioned above. In addition, another scenario called the "Historical scenario" is set, which maintains the historical development mode and is independent of the city planning.

Each development scenario represents a possible future development direction of the Dianchi Lake basin. For example, the Low protection-High development (L-H) scenario means the Dianchi Lake basin will devote more resources to socio-economic development in the future, which will bring rapid development at the cost of sacrificing the water environment. The Low protection-Medium development (L-M) scenario indicates that the socio-economic subsystem will grow as expected, but water environment protection fails to reach its goal. Another example is the High protection-Low development (H-L) scenario, which focuses too much on water environment protection but neglects the development of the economy. Because the purpose of this study is to find the most appropriate scenario according to its coordination degree, most indicators used to set the scenarios are the growth rates of the indicators used to build the CCD model (listed in Table 1), so that the coordination degree can be calculated easily.
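Before turning to the scenario values, the following Python sketch ties the model pieces together: the entropy weighting of Equations (2)-(5) and the C, T, D computation of Equations (8)-(9) as reconstructed above. The indicator matrices in the example are made up for illustration.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting, Eqs. (2)-(5): X is an (m years, n indicators)
    matrix of normalized indicator values in [0, 1]."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                          # Eq. (2): proportions
    P = np.where(P == 0, 1e-12, P)                 # guard against log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # Eq. (3): entropy
    d = 1.0 - e                                    # Eq. (4): difference coefficient
    return d / d.sum()                             # Eq. (5): normalized weights

def ccd(s, e, a=0.5, b=0.5):
    """Coupling degree C (Eq. 8), overall development index T (Eq. 9),
    and coordinated development degree D = sqrt(C*T)."""
    C = 2.0 * np.sqrt(s * e) / (s + e)
    T = a * s + b * e
    return C, T, np.sqrt(C * T)

# Made-up normalized indicators: 5 years x 3 indicators per subsystem.
X_soc = np.array([[0.20, 0.90, 0.50], [0.35, 0.85, 0.55], [0.50, 0.80, 0.50],
                  [0.70, 0.82, 0.45], [0.90, 0.88, 0.52]])
X_env = np.array([[0.40, 0.30, 0.60], [0.42, 0.32, 0.58], [0.45, 0.30, 0.55],
                  [0.44, 0.35, 0.50], [0.46, 0.33, 0.52]])

w_s, w_e = entropy_weights(X_soc), entropy_weights(X_env)
s = X_soc[-1] @ w_s        # Eq. (6): latest-year development index s(x)
e = X_env[-1] @ w_e        # Eq. (7): latest-year development index e(y)
C, T, D = ccd(s, e)
print(f"s(x)={s:.3f}, e(y)={e:.3f}, C={C:.3f}, T={T:.3f}, D={D:.3f}")
```

Note that C = 1 whenever s(x) = e(y), so the coupling degree directly penalizes the gap between the two subsystems, while D additionally rewards their overall level.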
In particular, the water use situation in each scenario is further subdivided to distinguish the emissions of different industries. The values of each scenario are shown in Table 2.

EFDC Model

In order to further analyze the attainability of the Dianchi Lake protection targets, the EFDC model is used to simulate the water quality under typical scenarios in 2025. The environmental fluid dynamics code (EFDC) model is a typical three-dimensional water quality model that can simulate the water quality change process under different pollutant inputs [26]. This model has been widely used in various water bodies, including rivers, lakes, reservoirs, and coasts [27-30]. For example, Gong et al. investigated the responses of water quality to different extreme hydrological conditions associated with rainstorms [31], and Arifin et al. used the EFDC model to simulate thermal behavior in Lake Ontario [32]. The Waihai EFDC model used in this paper can simulate the mutual transformation processes and reactions of various substances under different situations by changing parameter values such as the flows of the rivers into the lake and the concentrations of pollutants. This model was calibrated and verified in past research [33]. Among the input parameters, the flow and temperature of each river are derived from historical data, while the inputs of COD and ammonia nitrogen are obtained from the scenario analysis and need to be divided over time and space. The pollution loads of COD and ammonia nitrogen in the Dianchi Lake basin have two parts: point source and non-point source. The point source mainly comes from factories and domestic sewage, which is irrelevant to rainfall intensity, so its discharge intensity can be assumed to be even throughout the year [34]. The non-point source emission is affected by rainfall intensity, so it can be split according to historical monthly rainfall [35]. There are 15 rivers entering the Waihai, with great differences in drainage areas and locations, which provides a basis for dividing the pollution load by space [20,36].

In the simulation results, the development indexes of the socio-economic subsystem in all scenarios improve significantly, among which the Historical scenario has the smallest increase and the H-H scenario has the largest increase. This trend reflects that the Dianchi Lake basin is still in a relatively rapid development stage. In other words, development is still the top priority in the Dianchi Lake basin, and it is not advisable to sacrifice economic development and completely devote resources to water environment protection. The development indexes of the water environment subsystem (e(y)) in almost all the scenarios have a relatively low growth rate during these 10 years. This indicates that insufficient improvement in protection will lead to the deterioration of the water environment system, which cannot be accepted in the future.

Analysis of the Coupling Coordination Degree

The coordination degree C curves are shown in Figure 3, and the coordinated development degree D curves are shown in Figure 4. The coordination degrees of all scenarios increase first, due to the increase of s(x) in the near future, but will eventually show a lagging development of the water environment subsystem in the long term, because s(x) grows more quickly than e(y) and will eventually exceed it. The L-H scenario will reach the turning point first, in 2021.
If it continues to develop in this way, the coordination degree would fall to 0.7 within five years (a generally maladjusted system), and to 0.5 in 2032 (a seriously maladjusted system). Although the L-H scenario will bring certain growth dividends due to its rapid social and economic development in the short term, the huge system imbalance caused by the destruction of the water environment will have an irreparable impact in the long run. Therefore, it is obviously unacceptable to take this scenario as a reference for future planning. Similarly, the L-M and M-H scenarios will reach an inflection point in 2022, and the coordination degree of these scenarios will fall to 0.5 in the decade after 2025, becoming seriously maladjusted systems. Although the gap between the socio-economic subsystem and the water environment subsystem in these scenarios is not as large as that in the L-H scenario, the imbalance will appear only three to five years later than in the L-H scenario and will still cause great damage to system coordination. Therefore, these scenarios are also not suitable for future development decision-making. In addition, it can be found that the COD discharge will exceed the COD capacity of Dianchi Lake after 2024 under the L-H, L-M and L-L scenarios, due to the substantial increase of COD discharge and the decrease of water environment capacity, which further confirms their disadvantage. Apart from the scenarios mentioned above, none of the other scenarios sees a significant decline in coordination degree, which means the benefits of their coordinated growth can be fully enjoyed in the foreseeable future. Therefore, from the perspective of the coordination degree, these scenarios are all reasonable. The coordinated development degrees all increase during the simulation period, at various speeds. The coordinated development degree can further reflect the overall development speed, complementing the coordination degree. Generally speaking, if the coordination degrees of several scenarios were the same, the scenario with the higher coordinated development degree would be the better choice. The coordinated development degree of the Historical scenario grows most quickly at the beginning, but slows down soon afterwards. By 2025, it falls to last place among all the scenarios, owing to the fast decline in the development index of its water environment subsystem. In other words, under this scenario, the deterioration of the water environment will make the Dianchi Lake basin much more uncoordinated and unbalanced. The L-L scenario is similar to the Historical scenario, so it is also not suitable for future planning. The H-H, H-M, M-L and Planning (M-M) scenarios all show a good trend in coordination degree, and their coordinated development degree can basically reach 0.9 in 2025. However, the H-H and M-L scenarios still have some disadvantages. The High protection-High development (H-H) scenario overfulfils both socio-economic development and water environment protection, which is hard to maintain due to limited human and material resources. A future development plan should follow practical principles, so this scenario obviously cannot meet them. The Medium protection-Low development (M-L) scenario means that water environment protection will be carried out successfully, but the socio-economic development speed cannot meet the goals.
This scenario may happen, but it is not welcomed by the government because it implies a recession caused by unforeseen factors. According to the results of the coordination analysis, the H-M scenario and the M-M scenario (Planning scenario) perform best, so these two scenarios are the best reference scenarios for future planning. Identification and Optimization of Key Factors in Coordinated Development It can be found from the scenario analysis that, in order to improve the coordination degree on the premise of keeping the coordinated development degree unchanged, the gap between the development index of the socio-economic subsystem (s(x)) and that of the water environment subsystem (e(y)) needs to be narrowed, which means reducing s(x) or increasing e(y) in the long term. The weights of the indicators in the coordination model reflect their influence on the subsystem development indexes, so the indicators with high weights are the key factors for improving system coordination. In the socio-economic subsystem, economic growth, residents' income and industrial added value are the key factors to consider. Reducing the growth rate of residents' income is contrary to the government's goal of improving people's livelihood, so this method is impractical. Reducing the economic growth rate can also control industrial production, which makes it an effective way to reduce s(x). Generally speaking, the growth rate of GDP (including the growth rate of the added value of each industry) has a marginal effect, that is to say, the growth has a certain limit. At present, Kunming's GDP growth rate is 9%, while the national GDP growth rate is only 6.5% [37]. In the long term, this high-speed development will naturally fade, so it is possible to reduce the economic growth rate by macro-control and to invest more resources in environmental improvement. Specifically, if the expected GDP growth rate could be reduced to 6.5%, s(x) in 2025 would be reduced by 15%, which would effectively improve the coordination degree. A more effective way to improve coordination is to increase e(y), which can be achieved in terms of urban sewage, pollutant discharge and water environmental capacity. As for the water environment capacity, several water diversion projects have already been implemented in the Dianchi Lake basin, and it is not feasible to further improve the water environment capacity through the establishment of new water diversion facilities. Furthermore, upgrading the sewage treatment plants is not cost-effective, as the existing sewage treatment plants in the basin already meet rather stringent discharge standards, which means a rather high cost for further improvement. Therefore, increasing the reduction of urban non-point source pollution is the most feasible way to increase e(y). In addition, reducing the amount of sewage by improving the wastewater reuse rate is also a good choice. Water Quality Simulation under Typical Scenarios This section analyzes the water quality of the Dianchi Waihai in 2025 under three typical scenarios, the H-M scenario, the M-M scenario (Planning scenario) and the Historical scenario, to confirm whether these scenarios can meet the water quality target. There are eight national control points (hereinafter called the "NCPs") in the Dianchi Waihai, and the simulated data at these NCPs are the most direct basis for evaluating the water quality. Figures 5-7 show the COD and ammonia nitrogen concentrations at these NCPs under the three typical scenarios.
Under the Historical scenario, the COD concentration curve can be divided into two stages. The first stage is from the first day to the 200th day, in which the COD concentration continues to rise and reaches a peak with a maximum value of 130 mg/L; this can be explained by the rainfall intensity. Non-point source pollution, which is affected by rainfall intensity, accounts for a large proportion of COD emissions; therefore, the trend of the COD concentration is similar to that of the rainfall intensity, which is lowest in January and highest in July. After the 200th day, the rainfall intensity decreases and the amount of COD entering the lake falls, which allows the biochemical reactions to balance the input and the COD concentration to stabilize. As for the spatial distribution, the four NCPs located in the middle of the lake have the lowest concentration (about 65 mg/L), while the concentration at the other four points is about 90 mg/L. It can be found that a point far away from the emission source shows a lower concentration due to the dilution and degradation of pollutants during the migration process. According to the monitoring data in 2015, the annual average COD concentration in the Waihai was 48 mg/L, and the maximum value was 62 mg/L [37]. The simulated data of the Historical scenario in 2025 are even worse by comparison. Under the Historical scenario, the concentration of ammonia nitrogen in the middle of the lake is stable throughout the year, which may be because the ammonia nitrogen converts rapidly into other forms of nitrogen as the migration distance increases. In comparison, the ammonia nitrogen concentrations at the NCPs near the main urban area are relatively high, which may be explained by their shorter migration distance. In addition, the highest ammonia nitrogen concentration will exceed the Class V water standard of China, which would also be a huge step back compared with the average value in 2015 (0.2 mg/L). This consequence further confirms that the Historical scenario is not conducive to the development of the Dianchi Lake basin. The concentration curves of the H-M and M-M scenarios are similar to those of the Historical scenario, though with different values. Under the H-M scenario, the peak COD concentration appears at the Luojiaying point (78 mg/L), and it also briefly exceeds 70 mg/L at the Guanyinshan East point. At the end of the simulation period, the COD concentrations at the north and south points are stable at around 50 mg/L, and those at the middle points are stable at about 40 mg/L, which would be a great improvement compared with the monitoring data in 2015. As for the ammonia nitrogen concentration, except for the Luojiaying and Haikou West points, all points can achieve the Class III water standard of China all year round, and some can even reach the Class II water standard. The simulation result of the M-M scenario is slightly inferior to that of the H-M scenario, but still much better than that of the Historical scenario. Through the analysis of the pollutant concentrations at the national control points, it can be found that both the H-M and M-M scenarios can improve the water quality of Dianchi Lake, and the H-M scenario performs better.
Comparing this conclusion with the previous coordination analysis results, it can be found that the H-M scenario is the most suitable pattern as a reference for future planning; that is, at least a 30% improvement in water use efficiency and a 20% reduction in pollutant emissions are necessary to achieve the water quality target, and moderate socio-economic development is conducive to regional sustainability. To further discuss the annual change of water quality under the H-M scenario, Figure 8 shows the spatial distribution of the COD and ammonia nitrogen concentrations on the 30th, 120th, 210th and 300th days of the simulation period. It can be seen that the change in the COD concentration basically starts from the northwest of the lake, that is, the part near the main urban area of Kunming City. From January to April, due to the increase of rainfall intensity, the increasing COD emission entering the lake diffuses from all estuaries to the whole lake, especially the west and north sides, showing high concentrations in the estuary areas and low concentrations in the central area. In July, the COD concentration in most areas of the lake is stable at its highest value. By October, due to the reduction of the COD emission into the lake, the COD concentration in the central area becomes higher than that in the estuary areas, especially in the northwest. As for the ammonia nitrogen, it can be seen that in all four seasons of the year its spatial distribution shows high concentrations in the estuary areas and low concentrations in the central area. As the transformation rate of ammonia nitrogen in the lake is much faster than that of COD, the concentration of ammonia nitrogen decreases rapidly with increasing diffusion distance, which makes it lower in most areas of the lake. The northwest area of the lake is close to the city, with a high pollution load, so the concentration in this area is the highest. Overall, the year-round COD and ammonia nitrogen concentrations are satisfactory, which further confirms the feasibility of the H-M scenario. Conclusions In this study, an integrated approach for the evaluation of the coupling coordination degree (CCD) between socio-economic development and environmental protection in the Dianchi Lake basin was established. This model can not only propose a development pattern as a reference for future planning, which would be helpful for continuously improving regional sustainability, but can also be applied in other watersheds, helping to calculate their coordination degree. The analysis shows that the High protection-Medium development (H-M) scenario performed best in coordination, and the EFDC analysis further confirmed that the water quality under this scenario had the most significant improvement. Therefore, the H-M scenario is the most suitable pattern as a reference for future planning; that is, at least a 30% improvement in water use efficiency and a 20% reduction in pollutant emissions are necessary to achieve the water quality target, and moderate socio-economic development is conducive to regional sustainability. Policy makers can achieve the key indicators set by this scenario through the following methods.
In terms of the macro-economy, the expected development speed should be reduced to 9.5% and the population growth rate should be controlled at approximately 1.9%, which can be achieved by industry structure regulation, the construction of satellite towns and population redistribution in the main urban area. The growth speed of industrial development should be maintained at approximately 10.7%, by eliminating enterprises with backward technology, accelerating technological upgrading and establishing industry admittance standards. Since most of the pollution sources in the agriculture and service sectors are not equipped with collection and treatment facilities, their scale must be strictly controlled, by adjusting the layout of agricultural production or reducing the planting area of crops and flowers. As for pollutant reduction, COD emissions should be reduced by 18% and ammonia nitrogen emissions should be decreased by 22%. The specific implementation methods include: reducing rural non-point source pollution by using environmentally friendly fertilizers, continuous planting and other emission reduction measures; reducing urban sewage discharge by improving facilities and eliminating enterprises with high water consumption; improving the sewage treatment standard; and increasing the sewage treatment rate to approximately 98% to reduce the output of the pollution load. In addition, further improvement of system coordination can be achieved according to the weights of the indicators. On the one hand, it is possible to reduce the development level of the socio-economic subsystem by reducing the economic growth rate in the long run; on the other hand, a more effective measure is to increase the development level of the water environment subsystem by improving water supply pipelines and increasing wastewater reuse rates to reduce the total amount of urban sewage. There are still some deficiencies in this study, such as the short time range considered. Further research can start from two aspects: expanding the time scale and increasing the number of indicators, for instance including other forms of water pollutants.
7,155.4
2020-12-24T00:00:00.000
[ "Economics", "Environmental Science" ]
A New Block-Predictor Corrector Algorithm for the Solution of y‴ = f(x, y, y′, y″) We consider the direct solution of third order ordinary differential equations in this paper. A method of collocation and interpolation of the power series approximant of a single variable is used to derive a linear multistep method (LMM) with continuous coefficients. A block method is then adopted to generate independent solutions at selected grid points. The properties of the block, viz. order, zero stability and stability region, are investigated. Our method was tested on third order ordinary differential equations and found to give better results when compared with existing methods. Introduction This paper considers the general third order initial value problem of the form y‴ = f(x, y, y′, y″), y(x₀) = y₀, y′(x₀) = y₀′, y″(x₀) = y₀″. Conventionally, higher order ordinary differential equations are solved by the predictor-corrector method, where separate predictors are developed to implement the corrector and a Taylor series expansion is adopted to provide the starting values. Predictor-corrector methods are extensively studied in [1][2][3][4][5]. These authors proposed linear multistep methods with continuous coefficients, which have the advantage of evaluation at all points within the grid over the method proposed in [6]. The major setbacks of the predictor-corrector method are extensively discussed in [7]. Lately, many authors have adopted block methods to solve ordinary differential equations because they address some of the setbacks of the predictor-corrector method discussed in [6]. Among these authors are [8][9][10]. According to [6], the general block formula is written in terms of an s-vector of values at the interpolation points and an r-vector of values at the collocation points (the full notation is given in [6]). Given a predictor equation in a suitable form, Equation (4) is called a self-starting block-predictor-corrector method, because the prediction equation is obtained directly from the block formula, as claimed in [11,12]. In this paper, we propose an order six block method with a step length of four, using the method proposed by [11], for the solution of third order ordinary differential equations. Derivation of the Continuous Coefficients We consider a monomial power series as our basis function in the form y(x) = Σ_{j=0}^{r+s−1} a_j x^j. (5) The third derivative of (5) gives y‴(x) = Σ_{j=3}^{r+s−1} j(j−1)(j−2) a_j x^{j−3}. (6) The solution to (1) is sought on the partition π_N: a = x₀ < x₁ < … < x_N = b of the integration interval [a,b] with constant step length h = x_{n+1} − x_n. Interpolating (5) at the interpolation points x_{n+j} and collocating (6) at the collocation points, so that y‴(x_{n+j}) = f(x_{n+j}, y_{n+j}, y′_{n+j}, y″_{n+j}), (7) gives a system of equations. Solving (8) and (9) for the a_j's and substituting back into (5) gives a LMM with continuous coefficients of the form (10), where the α_j's and β_j's are the continuous coefficients. Derivation of the Block Method The general block formula proposed by [6] in normalized form is given by (11). Evaluating (10), together with its first and second derivatives, at the grid points and substituting into (10) gives the coefficients of (11). Order of the Method We define a linear operator on the block (11). Expanding in a Taylor series gives the order conditions (12): the block (11) and the associated linear operator are said to have order p if the first p + 3 expansion coefficients vanish and c_{p+3} ≠ 0. The term c_{p+3} is called the error constant and implies that the local truncation error for the block is of order O(h^{p+3}). Hence the block (11) has order six. Zero Stability of the Block The block (11) is said to be zero stable if the roots z_s of the characteristic polynomial satisfy |z_s| ≤ 1, the roots with |z_s| = 1 having multiplicity not exceeding μ, where μ is the order of the differential equation. Hence our method is zero stable. Convergence A method is said to be convergent if it is zero stable and has order p ≥ 1.
From the theorem above, our method is convergent. Test Problems We test our schemes on third order initial value problems. Problem 1. Consider a special third order initial value problem. This problem was solved by [13] using a self-starting predictor-corrector method for special third order differential equations, where a scheme of order six was proposed. Problem 2. Consider a linear third order initial value problem. This problem was solved by [14], where a method of order six was proposed; they adopted a predictor-corrector method in their implementation. Our result is shown in Table 1. Numerical Results The following notations are used in the tables. XVAL: value of the independent variable where the numerical value is taken; ERC: exact result at XVAL; NRC: numerical result of the new method at XVAL; ERR: magnitude of the error of the new method at XVAL. Discussion We have proposed a new block method for solving third order initial value problems in this paper. Tables 1 and 2 show that our new method is more efficient in terms of accuracy when compared with the self-starting predictor-corrector methods proposed by [11] and [15]. It should be noted that the method performs better when the step size (h) is chosen within the stability interval.
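As a baseline illustration of how such test problems are integrated numerically (this is not the block-predictor-corrector method derived above, whose coefficients are not reproduced here), the following sketch reduces y‴ = f(x, y, y′, y″) to a first-order system and advances it with the classical fourth-order Runge-Kutta scheme; the test problem is chosen only because its exact solution e^(−x) is known.

```python
import numpy as np

def solve_third_order(f, x0, xend, y0, dy0, d2y0, h=0.01):
    """Integrate y''' = f(x, y, y', y'') by reduction to a first-order
    system u = (y, y', y'') and classical RK4 stepping."""
    def rhs(x, u):
        y, dy, d2y = u
        return np.array([dy, d2y, f(x, y, dy, d2y)])

    xs = np.arange(x0, xend + h, h)
    u = np.array([y0, dy0, d2y0], float)
    out = [u.copy()]
    for x in xs[:-1]:
        k1 = rhs(x, u)
        k2 = rhs(x + h / 2, u + h / 2 * k1)
        k3 = rhs(x + h / 2, u + h / 2 * k2)
        k4 = rhs(x + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(u.copy())
    return xs, np.array(out)

# Example: y''' = -y with y(0)=1, y'(0)=-1, y''(0)=1; exact solution e^(-x)
xs, U = solve_third_order(lambda x, y, dy, d2y: -y, 0.0, 1.0, 1.0, -1.0, 1.0)
print(abs(U[-1, 0] - np.exp(-1.0)))  # small global error at x = 1
```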
1,150.8
2012-12-18T00:00:00.000
[ "Mathematics" ]
Dynamical Features of the Seismicity in Mexico by Means of the Visual Recurrence Analysis Recurrence, rooted in the Poincaré recurrence theorem, is a fundamental property of dynamical systems that has been exploited to characterize their behavior in phase space. Recurrence occurs when an orbit visits a region of phase space that was previously visited [1]. In this context, the so-called recurrence plot (RP), introduced by Eckmann et al in [2], is a powerful tool for the visualization and analysis of the underlying dynamics of systems in phase space, such as determinism, divergence, periodicity and stationarity; for instance, the lengths of the diagonal line structures in the RP are related to the positive Lyapunov exponent [3]. Methods based on RP have been successfully applied to a wide class of data obtained in physiology, geology, physics, finance and other fields. RPs are especially suitable for the investigation of rather short and non-stationary data [4] and complex systems [5]. For deterministic systems the analysis in phase space is relatively direct because their solutions are represented as time series; nevertheless, for real natural systems like climate, the atmosphere or dimethylsulphide production [6], some of the dynamical variables are gathered as point processes. In particular, the representation of a seismic sequence as a time series is one of the most debated questions in geophysics; nevertheless, natural time analysis, introduced in [7] by Varotsos et al, considers sequences of events, and with this methodology it has been possible to distinguish signals from noise [8] and to characterize seismic processes from a properly defined order parameter [9]. Since faults and tectonic plates can be considered dynamical systems, techniques derived from dynamical systems theory and nonlinear analysis, such as recurrence plots, clustering, entropy, fractality, correlation and memory, can be applied to them. In this work we studied properties of the seismicity that occurred in Mexico in the period 2006-2014, reported by the Servicio Sismológico Nacional (www.ssn.unam.mx), by considering the occurrence of events as sequences of magnitudes (time series) and determining the recurrence plots. Based on the Visual Recurrence Analysis (VRA), it is possible to extract dynamical features of the seismicity. The most important seismic region in Mexico is located along the Mexican Pacific coast. A division of the seismotectonic zone into 19 regions was proposed in [10], taking into account seismic characteristics of the existing catalogs for the seismicity occurring in Mexico from 1899 to 2007; for details see [9] and references therein. In order to characterize the Mexican seismicity as a dynamical system driven by the tectonic movements at the Pacific edge, we investigated the qualitative dynamical features of the dispersion zone, Baja California, and of the subduction zone formed by the Rivera and Cocos plates subducting beneath the North America plate. Taking into account the geophysical structure, the seismic activity and the b-values of the Gutenberg-Richter law, six selected regions have been considered in this work. These six regions are: Baja California (Z1), Jalisco-Colima (Z2), Michoacán (Z3), Guerrero (Z4), Oaxaca (Z5) and Chiapas (Z6); the first one corresponds to a dispersion zone and the others are subduction regions [11].
The data set analyzed in this work corresponds to the Mexican catalogue, which is complete for magnitudes greater than 3 within the mentioned period. From a geophysical point of view, the seismic activity should be characterized individually in each region, because the underlying dynamics shows particular features, as described in the next section. First, the seismic data are represented as a temporal sequence of magnitudes; then the phase space is reconstructed by estimating the time delay and the embedding dimension. In order to distinguish some features of the underlying dynamics of each Mexican seismic region, the aim of this work is to study the recurrence plot behavior based on the visual recurrence analysis, taking into account the sequence of events (magnitudes) in time on the one hand, and analyzing the inter-event time series on the other. Our analysis shows important differences in the recurrence maps of each region. Our findings suggest that the patterns obtained could be associated with the local geophysical structures of the subduction and dispersion zones, driven by the characteristic nonlinear dynamical features of each region. Phase space reconstruction To reconstruct the dynamics of the system it is necessary to convert the time series into state vectors, the principal step being the phase space reconstruction. Takens [12] showed that it is possible to recreate a topologically equivalent picture of the original multidimensional system behavior by using the time series of a single observable variable. The idea is to estimate a time delay τ and an embedding dimension m. The parameters m and τ must be properly chosen; although there exist several algorithms to do so, appropriate and tested methods are the Mutual Information function to obtain the time delay and the False Nearest Neighbors to get the embedding dimension. Once the time delay and the embedding dimension have been estimated, the phase space can be reconstructed. For a time series x(t_i) of a scalar variable, the state vectors are constructed in phase space as Y(i) = (x(t_i), x(t_i + τ), …, x(t_i + (m − 1)τ)), where i = 1 to N − (m − 1)τ, τ is the time delay, m is the embedding dimension and N − (m − 1)τ is the number of states in the phase space. According to the embedding theorem [12], the dynamics reconstructed using this formula is equivalent to the dynamics in the original phase space in the sense that the characteristic invariants of the system are conserved. Mutual information function Formally, the Mutual Information is defined for two stochastic variables X and Y as I(X;Y) = H(Y,X) − H(X|Y) − H(Y|X), where H(Y,X) is the joint entropy and H(X|Y) and H(Y|X) are the conditional entropies. If X represents the sequence x(t_i) and Y the respective sequence x(t_i + τ), the Mutual Information Function (MIF) is able to calculate the global correlation in a time series taking into account both linear and nonlinear contributions, MIF being the tool most frequently used to calculate the time delay, described as follows: the mutual information I(τ) is a measure of the relative entropy between the joint distribution P(x(t_i), x(t_i + τ)) and the product of the distributions P(x(t_i)) and P(x(t_i + τ)), where P(x(t_i), x(t_i + τ)) is the joint probability of the signal measured between the times t_i and t_i + τ. The expressions P(x(t_i)) and P(x(t_i + τ)) denote the marginal probabilities. The MIF is a nonlinear generalization of the linear autocorrelation function. According to [13], a suitable value of τ is attained at the first local minimum of I(τ) (a sketch of both steps is given below).
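A minimal sketch of the two steps just described, delay embedding and a histogram-based estimate of I(τ) with τ chosen at its first local minimum, is given below; the bin count, test signal and helper names are illustrative choices, not those of the cited references.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens delay embedding: Y(i) = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def mutual_information(x, tau, bins=16):
    """Histogram estimate of I(tau) between x(t) and x(t + tau) in nats."""
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# Pick tau at the first local minimum of I(tau), then embed
x = np.sin(0.1 * np.arange(5000)) + 0.1 * np.random.randn(5000)
mif = [mutual_information(x, t) for t in range(1, 60)]
t_min = next(t for t in range(1, len(mif) - 1)
             if mif[t] < mif[t - 1] and mif[t] < mif[t + 1])
tau = t_min + 1          # mif[0] corresponds to tau = 1
Y = delay_embed(x, m=3, tau=tau)
print(tau, Y.shape)
```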
When the time series is uncorrelated or random, like white noise, the joint probability factorizes and I(τ) = 0 [14]. Typical cases are white noise, periodic, periodic + noise and Rössler time series. In Figure 1 the MIF behavior for these typical cases is depicted. False nearest neighbors The False Nearest Neighbors method (FNN) is an iterative algorithm, proposed by [15], which estimates the minimal embedding dimension of the system. The idea is based on the uniqueness of a trajectory in phase space. A nearest neighbor P of a point Q in a phase space of d-dimensional embedding is labeled false if these points are close only due to the projection from a higher, (d+1)-dimensional phase space. Thus, the FNNs will separate if the data are embedded in the (d+1)-dimensional space, while the true neighbors will remain close. When all the FNNs have been detected, the minimal sufficient embedding dimension can be identified as the minimal dimension needed to achieve a zero fraction of FNNs. For each vector Y(i), its nearest neighbor Y(j) is sought in the m-dimensional space and the distance d(Y(i), Y(j)) is calculated; iterating over the points, the growth of this distance when the dimension is increased is monitored through the ratio M_i = ‖Y_{m+1}(i) − Y_{m+1}(j)‖ / ‖Y_m(i) − Y_m(j)‖. If M_i exceeds a threshold M_th, this vector is marked as a false nearest neighbor. When the fraction of vectors that satisfy the condition M_i > M_th tends to zero, the embedding dimension is attained. The FNN method is exemplified with a simple case in Figure 2. Recurrence Plot (RP) In [2] Eckmann et al introduced the recurrence plot, on which the so-called Visual Recurrence Analysis (VRA) is based: a graphical method designed to locate hidden recurrent patterns, nonstationarity and structural changes observed in the phase space of a dynamical system. The aim of the Recurrence Plot (RP) method is to visualize the recurrences of dynamical systems as a function of time. A brief description is as follows: assume an orbit {Y(i), i = 1, …, N} of the system in phase space. Each vector of this trajectory corresponds to a state of the system, whose components can represent physical quantities, for example position and velocity for mechanical systems like a pendulum, or pressure, volume and temperature for thermodynamical states. Then, in order to get the RP, the recurrence matrix must be constructed, which is defined as [5]: R_ij = Θ(ε − ‖Y(i) − Y(j)‖), i, j = 1, …, N, where ε > 0 is a threshold and Θ is the Heaviside step function. Roughly speaking, the matrix elements indicate whether the distance between the state vectors at the times i and j is smaller than ε; in the unthresholded variant used below, the matrix elements are simply the distances D_ij between these states.
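The following sketch builds the pairwise distance matrix D_ij and the thresholded recurrence matrix from a set of embedded state vectors; the 10%-of-maximum default for ε is a common rule of thumb, assumed here only for illustration.

```python
import numpy as np

def recurrence_matrix(Y, eps=None):
    """Binary recurrence matrix R_ij = 1 if ||Y(i) - Y(j)|| <= eps, else 0.

    Y is the (N_states x m) array of reconstructed phase-space vectors.
    If eps is None, 10% of the maximum state-to-state distance is used.
    Returns both R and the raw distance matrix D (for color-coded plots).
    """
    diff = Y[:, None, :] - Y[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))  # Euclidean distances D_ij
    if eps is None:
        eps = 0.1 * D.max()
    return (D <= eps).astype(int), D

# Usage with the embedded series from the previous sketch; subsample the
# states for long series, since D is an N x N array:
# R, D = recurrence_matrix(Y[::10])
# import matplotlib.pyplot as plt; plt.imshow(R, cmap="binary", origin="lower")
```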
Once the RP has been obtained from the recurrence matrix, a quantification of its features can be done, for example from periodic patterns to chaotic behavior [5,16]. As is well known, in an experiment only a sequence of scalar values can be measured, and it is assumed that the information is available as a univariate time series which is part of a larger n-dimensional (maybe deterministic) model. Figure 4 shows examples of RPs for the four cases described above: white noise, periodic, periodic with noise and chaotic systems. Seismic regions and data set We analyzed the whole seismic catalog of the Mexican SSN (www.ssn.unam.mx) from 2006 to 2014. Due to the geophysical features of the Mexican subduction zone, described in [9] and references [20,21,24] therein, it can be studied in segments, and the six regions are: Baja California (Z1), Jalisco-Colima (Z2), Michoacán (Z3), Guerrero (Z4), Oaxaca (Z5) and Chiapas (Z6); in Figure 5 the six regions are shown. As shown in the panel of Figure 6, the number of earthquakes reported by the SSN in each region is different. Firstly, the seismicity in region Z1, the peninsula of Baja California, shows two periods of time, the first one from 2006 to 2010 and the second one from 2011 to 2014; in the first period little seismic activity is observed compared to the second, a sudden change in activity that can also be observed before and after 2010 in the regions Z2 and Z3, corresponding to Jalisco-Colima and Michoacán respectively. This situation may be due to upgrades of the monitoring stations. As mentioned above, the Z1 region evolves through a process in which the peninsula of Baja California separates from the continental plate, while in the other regions the dynamics is driven by the subduction of the Rivera and Cocos plates beneath the continental plate. Specifically, in the Jalisco-Colima region (Z2) the subduction occurs between the Rivera and North America plates, where the stress and strain fields determine a direction of movement of the subducting Rivera plate different from the case of the Cocos plate. According to the catalogue, the b-value of each region can be estimated through the maximum-likelihood relation b = log₁₀e / (⟨M⟩ − M_c) [18], where ⟨M⟩ is the average magnitude and M_c is the completeness magnitude of the seismic sequence, which represents the minimum magnitude above which the frequency-magnitude distribution behaves as the power law N ~ 10^(−bM). The b-values calculated for each region are depicted in Figure 7 and summarized in Table 1. Because of the geophysics shaping the seismic zone, as well as the various mechanisms and processes that take place in each of the regions, the statistical properties associated with the seismicity of each of them are reflected in different values of the parameters of scaling laws, as in the case of the Gutenberg-Richter law. This situation indicates that the local stress and strain fields must drive the interactions between the different parts of a complex system such as the Earth's crust. Thus, through a study in the context of dynamical systems, where the seismic activity is considered as a response to the underlying dynamics, it is possible to observe characteristics of the system that are not observed directly from a statistical point of view.
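A small sketch of this maximum-likelihood b-value estimate is given below; the synthetic catalogue is an illustrative assumption, not the SSN data, and the binning correction term applies only to catalogues with discretized magnitudes.

```python
import numpy as np

def b_value(magnitudes, m_c, dm=0.0):
    """Maximum-likelihood (Aki-type) b-value for events with M >= m_c.

    dm is the magnitude binning of the catalogue; the m_c - dm/2 term is
    the standard correction for binned magnitudes (dm = 0 for continuous).
    """
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2))

# Hypothetical Gutenberg-Richter catalogue with b = 1 above M_c = 3:
# magnitudes above M_c are exponential with rate b * ln(10)
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(1 / (1.0 * np.log(10)), 20000)
print(round(b_value(mags, m_c=3.0), 2))  # ~1.0
```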
Although seismicity is considered as a sequence of events whose measurable variable is the magnitude, the temporal component cannot be set aside, that is, the distribution of interevent times, defined as the time elapsed between two earthquakes within a region. It is noteworthy that interevent time series have been studied by means of multifractal analysis for the seismicity in Italy [19]. Figure 8 shows the interevent time series of the seismicity studied; for these time series the behavior of the MIF and the FNN fraction is calculated. Results In this work six seismic sequences and their respective interevent time series were analyzed by means of the RP method. First, the phase space is reconstructed for each sequence. The MIF and FNN fraction for the magnitude sequences are depicted in Figures 9 and 10. In order to reconstruct the phase space, the time delay τ and the embedding dimension m were estimated by taking the first minimum of the mutual information function and by the False Nearest Neighbors algorithm, respectively. Then the recurrence matrix is obtained, whose matrix elements R_ij are the distances D_ij between the states Y(t_i) and Y(t_j) in the reconstructed phase space; to calculate D_ij, the Euclidean norm was chosen. The τ-values and m-values are contained in Table 2:

Baja California (Z1): τ = 5, m = 13
Jalisco-Colima (Z2): τ = 2, m = 12
Michoacán (Z3): τ = 3, m = 7
Guerrero (Z4): τ = 3, m = 9
Oaxaca (Z5): τ = 9, m = 12
Chiapas (Z6): τ = 4, m = 11

Once the recurrence matrix is obtained, the distribution of distances between states is computed; in Figure 12 the probability distribution function (pdf) of the distances is shown. The r_max-values of the pdf for Z2, Z4, Z5 and Z6 are located around r_max ≈ 1.2, and r_max < 1 for Z1 and Z3. As has been mentioned, Z1 is a region where the peninsula is separating from the continental plate, and Z3 is located where the border between the Rivera and Cocos plates is in contact and subducts into the continental plate. Regarding the shape of the pdf, in all cases an exponential tail is observed. From the behavior of the MIF it is possible to identify the correlation in the seismic sequences. According to this behavior, the region with the lowest correlation is Z2 (Jalisco-Colima) and that with the highest correlation is Z5 (Oaxaca), as shown in Figure 9. However, the behavior of the FNN fraction as a function of the embedding dimension of the seismic sequences indicates that all of them live in phase spaces of high dimension, the smallest being Z3 with m = 7. To calculate the recurrence matrix, the distances between pairs of vectors were computed in phase space. By definition this matrix is symmetric, because D_ij = D_ji, and the principal diagonal satisfies D_ii = 0. The Recurrence Plot is the graphical representation of the recurrence matrix. The examples of RP depicted in Figure 4 were drawn in black and white because the recurrence matrix was built according to the thresholded definition given above. In order to take into account all the distances computed, and perhaps their distribution (Figure 12), a color code allows a qualitative characterization of some features of the dynamics; the panel of Figure 11 displays the RPs of the six seismic sequences.
According to [5,20,21], RPs of different topologies can be distinguished; for instance: stationary systems display homogeneous RPs, like white noise; for periodic or quasi-periodic systems, recurrent structures appear as diagonal lines and checkerboard patterns; for non-stationary systems, drifts are present; abrupt changes in the system indicate extreme events; and vertical (horizontal) lines represent time intervals in which a state remains constant or changes very slowly. In general, the RPs of the seismicity that occurred from 2006 to 2014 are displayed in Figure 11 where, at first glance, typical patterns like periodicity, quasi-periodicity or white noise are not observed. Nevertheless, clusters bordered by vertical and horizontal lines are present, suggesting slow changes in the system. The color distribution and the clusters in the RPs of BC (Z1), JC (Z2) and Ch (Z6) suggest drifts indicating non-stationary dynamics, possibly associated with their geophysical features: in BC, the Pacific-North America plate boundary runs through southern California and the north of the Baja California peninsula, where many faults are connected in a complex geometrical pattern continuing into a divergent plate boundary in the Gulf of California. In the Jalisco-Colima region the Rivera plate subducts at a steep angle, and in Ch the Cocos plate subducts beneath the coast but two perpendicular faults, Clipperton and Tehuantepec, contribute with their local dynamical evolution. More similar RP structures are displayed by M, G and O, which seem to show more stability because their respective RPs are more uniform; this could indicate that the dynamics is driven by the interaction with the Cocos plate, which subducts in the same direction beneath the North America plate while its dip angle decreases gradually. Our findings are consistent with the results reported in [11], where a non-extensive model analysis of similar regions indicated that the JC region is the most unstable seismic zone in Mexico. The distribution function associated with the distances between states in phase space allows us to observe the most likely value, and suggests another possible criterion for determining characteristic distances among all states in the phase space and for choosing the ε-value in the definition of the recurrence matrix given above. Figure 12 shows the pdfs of the distances for the six sequences of seismic magnitudes. It is observed that the maximum values of the pdfs are located at different positions, and possibly this behavior could be associated with the geophysical features of the regions: Z2, Z4, Z5 and Z6 are subduction zones, where Z2 is determined by the relative motion between the Rivera plate and the continental plate, while the other three are determined by the interaction of the Cocos plate subducting under the continental plate. For Z1 the seismic activity is produced by the movement of separation between the peninsula and the continental shelf of North America, and for Z3 the seismic activity is determined by the interaction of the Rivera plate, in contact with the subducting Cocos plate, with both subducting beneath the continental plate. Concluding remarks It is well known that the Mexican Pacific is an important seismic region where large earthquakes have occurred with devastating consequences, producing significant economic downturns and especially many human losses.
Due to the interactions between subduction zones and the slow separation of the Baja California peninsula, this region is considered a complex system that evolves as a consequence of many processes occurring in the interior of the Earth as well as in the areas of contact between the surfaces involved. In this context, [10] proposed a division of the seismotectonic zone into 19 regions, taking into account seismic characteristics of the existing catalogs for the seismicity occurring in Mexico from 1899 to 2007, together with a historic seismic compilation from previous publications and some catalogs. In a similar way, by considering the seismicity monitored by the SSN within the period 2006-2014 and identifying clusters of earthquakes that can be associated with the geophysical features of the Mexican Pacific, six regions were selected for study (Figure 5). In this paper we studied the dynamical features of these six seismic regions located along the Mexican Pacific coast; the analyzed data set corresponds to the Mexican seismic catalogue reported by the SSN. The sequences of earthquake magnitudes and the interevent time series were studied by means of phase space reconstruction and the RP of each region. Our findings indicate a possible correlation between the calculated RPs and the characteristic geophysical features of each zone (panel of Figure 11). The RPs of BC, JC and Ch show no periodicities, some correlation (not a white-noise structure) and non-stationarity. For M, G and O, the RPs are more similar to one another and stability is observed. The results for the interevent time series, short correlation and large embedding dimension, suggest the possibility of describing them by a combination of a deterministic model plus stochastic noise. Our findings suggest that the patterns obtained could be associated with the local geophysical structures of the subduction and dispersion zones, driven by the characteristic nonlinear dynamical features of each region.
5,149.2
2015-05-20T00:00:00.000
[ "Geology", "Physics" ]
On Street-Canyon Flow Dynamics: Advanced Validation of LES by Time-Resolved PIV Advanced statistical techniques for the qualitative and quantitative validation of a Large Eddy Simulation (LES) of turbulent flow within and above a two-dimensional street canyon are presented. Time-resolved data from a 3D LES are compared with those obtained from time-resolved 2D Particle Image Velocimetry (PIV) measurements. We have extended a standard validation approach, based solely on time-mean statistics, by a novel approach based on analyses of the intermittent flow dynamics. While the standard Hit rate validation metric indicates not so good an agreement between the compared values of both the streamwise and vertical velocity within the canyon canopy, the Fourier, quadrant and Proper Orthogonal Decomposition (POD) analyses demonstrate a very good LES prediction of the highly energetic and characteristic features of the flow. Using the quadrant analysis, we demonstrate similarity between the model and the experiment with respect to the typical shape of intensive sweep and ejection events and their frequency of appearance. These findings indicate that although the mean values predicted by the LES do not meet the criteria of all the standard validation metrics, the dominant coherent structures are simulated well. Introduction Numerical codes for computational fluid dynamics (CFD) have been developed and used in industrial CFD applications where a variety of practical problems are predicted and tested (e.g., [1]). Owing to the enormous complexity of turbulence and extremely variable boundary conditions, the modelling of the micro-meteorological scale has been delayed with respect to these practical implementations. After years of intensive development, the numerical codes for near-surface atmospheric flow have attained a level of sufficient precision in mathematical description and spatial resolution. Time-resolving frameworks, such as Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), now have the potential to become truly credible calculation tools for solving air quality issues, since these models are capable of capturing the time-dependent behaviour of turbulence. A CFD model needs to undergo a thorough validation procedure before its practical implementation. The validation determines to what extent the model is in agreement with real physics. An extensive review of the evaluation methodologies, as well as a definition of the nomenclature used, can be found in Oberkampf and Trucano [2]. For turbulent boundary-layer flow problems, the best data sources are field measurements. Unfortunately, field measurements are very rarely available and, as Schatzmann and Leitl [3] pointed out, the micro-meteorological flow found in field measurements exhibits much larger scatter than the data from a closely-controlled wind-tunnel experiment. Therefore, field data represent a greater challenge in terms of post-processing and preparation for a validation. Thus, CFD results are often compared with those from wind-tunnel experiments [4][5][6][7][8][9]. That said, CFD validations against data from street-canyon field experiments have been performed as well (e.g., [10][11][12]). A validation procedure for atmospheric boundary layer dispersion or velocity distribution predictions by CFD was compiled within the frame of COST 732 [13][14][15], COST C14 [16] and the AIJ guidelines of Tominaga et al.
[17]. These guidelines addressed various types of models, including Gaussian models, and RANS and LES models. The latter, the LES model, represents an affordable combination of the direct simulation of large turbulent structures with the modelling of small unresolved scales by means of an embedded sub-grid model [18]. Since common validation techniques target just the variables that are available from all the discussed models, only temporally-averaged values are usually retrieved for comparison. Illustrative applications of the validation of temporally-averaged values from LES against various experiments can be found in Jimenez and Moser [18]. The most suitable experimental data currently available are those obtained from Particle Image Velocimetry (PIV) measurement techniques, as they can provide multi-point, time-resolved, synchronised data. The use of PIV for validations is, however, still rare [4,19,20]. Since there may be a lack of a sufficiently long time period from LES or from PIV to achieve legitimate statistical averages, a feasible strategy for the validation of LES is, according to Hertwig et al. [20], not only the comparison of mean values but also a multi-point time-series analysis by means of advanced statistical tools, which allows us to detect transient structures, their shape and also their frequency distribution. In this paper, we present an extension of the validation procedure introduced by the research of [20][21][22] and Hertwig [23], in which a three-level validation hierarchy consisting of global statistics, eddy statistics and flow structure statistics was employed. The extension comprises time-series analyses, such as the spectral analysis and a spatial modification of the quadrant analysis, and temporal-spatial analyses, namely the spatial correlation and the Proper Orthogonal Decomposition (POD). By this extension, we demonstrate that spatial data from time-resolved particle image velocimetry (TR-PIV) improve the validation procedure. As suggested in Jimenez and Moser [18], the Reynolds stress is also a very sensitive quantity for testing the LES sub-grid model. Hence we present an investigation of the momentum flux by an additional spatial quadrant analysis as well. This strategy respects the time-dependent nature of the coherent features and reveals the similarity between the predicted and measured turbulent flow in a more detailed manner. Our goal is to validate the capability of the LES to simulate both the larger-scale coherent features above the canyon, known to be crucial for street canyon ventilation (e.g., [7,[24][25][26]]), and the small-scale swirls responsible for the intense mixing process inside the street canyon itself (e.g., [6]). Strengths and shortcomings of the well-established validation metrics are thoroughly discussed and the benefits of the innovative methods are introduced. Experimental Method The experiment was performed in a pressure-driven open-circuit wind tunnel with a test section of cross-sectional dimensions 0.25 × 0.25 m and a length of 3 m. A fan, followed by a 3 m long tunnel duct and a contraction pipe, was installed upstream of the test section. The street-canyon model covered the entire test section. The model was built from 30 identical parallel street canyons, spanned laterally across the width of the tunnel (0.25 m). The building height and the roof height were 0.03 m and 0.02 m, respectively. The aspect ratio of the street canyon (street-canyon height H = 0.05 m to street-canyon width W) was equal to one, H/W = 1 (Figure 1).
The triangular shape of the street-building roofs was chosen according to the roofs typical of European city centres. Kellnerova et al. [24] pointed out that triangular roofs generate more turbulent dynamics than flat ones, and successfully validating the flow behind triangular roofs is therefore more challenging. The presented experimental set-up was part of a measurement campaign comparing a skimming flow and its dynamics between street canyons with flat and pitched roofs in a neutrally stratified boundary layer flow. A detailed description of the campaign can be found in Kellnerova et al. [24] or Kellnerova [27]. The PIV measurements were taken downstream behind the 20th street canyon, and only at the tunnel axis, in order to guarantee a fully developed flow and to avoid wall effects. Based on an empty wind-tunnel measurement, the walls affect the mean velocity at distances up to 40-60 mm from the walls. Although we do not consider the boundary layer above the street canyons as fully representative of the atmospheric flow, due to the excessive aerodynamical blocking caused by the buildings (reaching a value of 20%), the lowest part of the boundary layer shows significant similarity with a true atmospheric layer in terms of the normalised lower and higher moments and cross-moments of the velocity [27]. The numerical large-eddy simulation copies the cross-sectional test section dimensions and simulates the aforementioned effects of the walls. The wind-tunnel coordinate system (x, y, z) corresponds to the streamwise (u), lateral (v) and vertical (w) instantaneous velocities, where the mean velocity components are labelled U, V and W, and their fluctuations, according to the Reynolds decomposition, as u', v' and w'. The mean wind speed measured above the street-canyon centre (x/H = 0, z/H = 2) was U_2H = 5.7 ms⁻¹. Thus, the Reynolds building number became Re_2H = U_2H H/ν = 19 000, where ν is the kinematic viscosity of air. The velocity measurements were conducted with the TR-PIV system, providing 2-D snapshots of the instantaneous velocity vectors in a single streamwise-vertical plane (xz) placed 2000 mm (40 H) downstream from the test section entrance (see Figure 1). One run of PIV measurements consisted of 1634 image-pairs. An overview of the PIV parameters is listed in Table A1 in Appendix A. Additional measurements above the height of z/H = 1 were performed by means of CTA hot-wire anemometry (HWA), with a single-wire probe DANTEC 55P01 (tungsten wire with a diameter of 0.005 mm and a length of 1.25 mm). The HWA provided a sampling frequency of 25 kHz for the 1-dimensional streamwise velocity component u. Prior to the validation procedure, the HWA velocity data were compared with the PIV data in terms of the mean velocity, turbulence intensity, skewness, flatness, histogram and spectra of the streamwise velocity component. Good agreement was found in all cases [27]. The spanwise homogeneity of the mean velocity was 2.3%. The HWA system reached a very good accuracy of 1% for the streamwise velocity component. Numerical Method For this study, we used the LES model called the Charles University Large-eddy Microscale Model (CLMM), developed by Fuka [28]. CLMM is an in-house finite difference solver of the incompressible Navier-Stokes equations that uses LES to model the smallest scales in the flow. The code uses uniform staggered Cartesian grids. Solid obstacles are treated by using the immersed boundary method [29,30].
CLMM uses implicit filtering, i.e., it is assumed that the variables are filtered by using approximate numerical schemes on a finite grid. The solution of the discrete Poisson system is performed using the open source library PoisFFT [28]. The spatial derivatives are computed using second-order central differences, with the exception of the momentum advection, which is computed using fourth-order central differences [31] that reduce to second-order differences in the cells closest to the wall. The discrete system is integrated using the 3rd order Runge-Kutta method with a variable time step keeping the Courant number below 0.9 (a sketch of this constraint is given after this section). The subgrid scale eddy viscosity is computed by the σ-model of Nicoud et al. [32], and the shear stresses on solid walls are computed using a wall model with a logarithmic wall function when the wall coordinate is y+ > 11.2. The wall model uses the instantaneous velocity in the closest gridpoint to compute the shear stress. The wall coordinate y+ was lower than 30 in this simulation. For this study the model was run on a uniform grid with a constant resolution of 2 mm in each direction. The boundary conditions of the model follow the boundary conditions of the wind-tunnel test section as closely as possible. The vertical and lateral dimensions of the wind tunnel are fully resolved by the LES over their whole extent (5 H in both the vertical and lateral directions), with no-slip boundary conditions on the tunnel walls. The streamwise dimension covers a length of 16 H (0.26 of the full test-section length). Cyclic boundary conditions were specified in the x-direction for the upstream and downstream boundaries, and therefore the model represents an infinite number of idealised street canyons along the streamwise direction. A detailed description and the principal equations can be found in Appendix B. Data Pre-Processing For the LES validation procedure, we performed two levels of data analyses. The first level comprises a qualitative and quantitative comparison between the temporally and spatially averaged quantities of the wind-tunnel experiment and the LES model by means of profiles and standard validation metrics. The second level represents comparisons of the intermittent flow dynamics between the experiment and the LES model by means of well-established statistical methods for the detection of turbulent coherent structures (spectral and quadrant analysis, correlation and POD). For each analytical approach, we performed a specific data pre-processing procedure.
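As a side note on the time-stepping constraint mentioned in the numerical method above, the following lines compute the largest advective time step that keeps the Courant number below 0.9 on a uniform grid; the velocity fields and grid size are illustrative assumptions, not the CLMM implementation itself.

```python
import numpy as np

def courant_dt(u, v, w, dx, c_max=0.9):
    """Largest time step keeping the 3D advective Courant number
    C = dt * (|u| + |v| + |w|) / dx below c_max on a uniform grid."""
    vel_scale = np.max((np.abs(u) + np.abs(v) + np.abs(w)) / dx)
    return c_max / vel_scale

# Hypothetical instantaneous fields on a 2 mm grid
rng = np.random.default_rng(3)
u = rng.uniform(-6, 6, (80, 125, 125))
v = rng.uniform(-2, 2, u.shape)
w = rng.uniform(-3, 3, u.shape)
print(courant_dt(u, v, w, dx=0.002))  # admissible dt in seconds
```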
Analyses of Time-Averaged Flow All of the velocity data were normalized by the reference velocity, U_ref, which was temporally and horizontally averaged along the streamwise direction at the reference height (x/H ∈ [−0.5, 0.5], H_ref = z/H = 1.5), where the LES data as well as the PIV data were available. The HWA measurement was conducted at the canyon centre, and its reference velocity was taken at a single point (x/H = 0, H_ref = z/H = 1.5). The temporal scale was converted into the dimensionless time, t*, based on the formula given in [33], where t is the real or simulated time during the experiment or the LES simulation, respectively. For the time-mean analysis, all the experimental and numerical data available were gathered and averaged. The data from three PIV runs, each containing 1634 snapshots, were combined into one ensemble with 4902 snapshots in total, labelled PIVa. The total dimensionless time corresponds to t*_PIVa = 1295. All LES computational periods were equal to 18 s, and steady-state results were obtained from the last 10 s of the computations. Since cyclic boundary conditions were implemented at the inlet and outlet domain boundaries, the streamwise position of the individual canyons did not matter. The overall statistics of the LES data were derived from the temporal and spatial averaging across all eight canyons and are labelled LESa. By assembling this large database, the effective simulation time increased to a value of t_a = 80 s and the number of snapshots reached 80,000. The dimensionless total time became t*_LESa = 1528. Analyses of Intermittent Flow For the validation of the simulated time-dependent flow dynamics, we used the continuous time-resolved measurements from the PIV run (labelled PIVc) with the following parameters: 1634 snapshots, t_c = 3.2 s, t*_PIVc = 181. The continuous HWA time-series (the streamwise velocity only, labelled HWAc) were recorded over t_c = 30 s, corresponding to t*_HWAc = 1704. For the continuous time-resolved LES data (labelled LESc), time-series lasting t_c = 10 s and t*_LESc = 191 from one canyon were employed in the validation procedures concerning the flow dynamics. The grid resolution of the PIV data is finer (1.2 mm) than the LES resolution (2 mm). For the purpose of the validation, we started to prepare a systematic comparison between different grid resolutions of the LES. Based on very preliminary, not-yet-published results from an LES with half the grid spacing (1 mm), the number of grid points, P, is considered by the authors of this paper to have a crucial impact on the LES performance. An overview of the methodology regarding the set-ups for PIV and LES is presented in Table 1. Measurement Uncertainty It is common practice to determine the uncertainty (also referred to as the measurement error) by means of the standard deviation (STD) of many repeated measurements at reference points. Unfortunately, obtaining a high number of repetitions with PIV requires large data storage; therefore, only a few runs are usually executed with a PIV system. We repeated the PIV measurement three times. Two measurement runs lasted 3.2 s with a sampling frequency of 500 Hz in order to capture the transient flow dynamics. A third run was performed in order to acquire turbulence statistics and lasted 16.3 s with a low sampling frequency of 100 Hz. A detailed specification of the variation observed between the individual runs is given in Appendix C.
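A minimal sketch of this STD-based uncertainty estimate, as described here and detailed in the next paragraph, is given below: the point-wise standard deviation across the repeated PIV runs is spatially averaged over the restricted area inside the canyon. The array shapes and noise level are illustrative assumptions.

```python
import numpy as np

# Hypothetical time-mean fields U/U_ref from three repeated PIV runs,
# already restricted to the rectangular area inside the canyon
# (outermost points with spurious vectors removed).
rng = np.random.default_rng(2)
base = rng.uniform(-0.2, 0.8, (40, 40))
runs = np.stack([base + rng.normal(0.0, 0.01, base.shape) for _ in range(3)])

# Point-wise STD over the repeated runs, then spatial average:
std_field = runs.std(axis=0, ddof=1)
measurement_error = std_field.mean()
print(measurement_error)  # serves as the absolute deviation for the Hit rate
```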
Due to the occurrence of spurious vectors immediately next to the solid walls and edges of the area measured by PIV, the outermost grid points were omitted from the validation-metric procedures, as discussed later in Section 4.3. We determined the measurement error from the PIV measurements as the spatial average of the STD within the remaining restricted rectangular area inside the street canyon. The spatially averaged STDs of the given dimensionless quantities from the three PIV samples (the streamwise and vertical velocity components, U/U_ref and W/U_ref; the momentum flux, <u'w'>/U_ref^2; and the reduced turbulent kinetic energy, TKE/U_ref^2 = 0.5(<u'^2> + <w'^2>)/U_ref^2) are listed in Table 2. These values also serve as the absolute deviation criterion for the Hit rate validation procedure discussed in Section 4.3.1.

Profiles

The vertical profiles of the dimensionless mean streamwise velocity, U/U_ref, at the centre of the street canyon are compared in Figure 2a, where the PIVa data are displayed with horizontal bars representing the measurement error derived from the STD at each elevation. The PIVa (black squares) and HWA (grey squares) data match the LESa output remarkably well in the upper part of the streamwise velocity profile, i.e., within the interval 0.9 < z/H < 1.5. The lower part is predicted less successfully, since the LES model fails to capture the exact shape of the recirculation zone inside the canyon, owing to an underestimation of the negative streamwise velocities near the canyon bottom. These discrepancies in the velocity, and the corresponding deviation in the momentum flux, play a significant role in the recirculation vortex pattern, as will be shown later.

The simulated vertical velocity, W/U_ref, deviates from the measured one inside the street canyon (0 < z/H < 0.8) but agrees well at the roof level (Figure 2b). Likewise, the simulated dimensionless reduced turbulent kinetic energy, TKE/U_ref^2 = 0.5(<u'^2> + <w'^2>)/U_ref^2, and the vertical momentum flux, <u'w'>/U_ref^2, of LESa exhibit very good agreement with those from PIV (Figure 2c,d) within the upper canyon (0.5 < z/H < 1.5). The peak of reduced TKE at z/H = 1.1 in Figure 2d is predicted extremely well. The peak of the momentum flux is slightly overestimated by LESa in magnitude, but mostly within the range of the PIV measurement error. A noticeable discrepancy appears in the lower part of the momentum flux profile (0.3 < z/H < 0.4 in Figure 2c). As will be explained later, this positive increase in the total momentum flux in Figure 2c relates to the formation of secondary vortices inside the street.

Velocity and Momentum Flux Fields

The mean xz fields of the streamwise and vertical velocities, and of the corresponding momentum flux, are depicted in Figure 3a,b. The first line shows the results from PIVa, and the second line represents the LESa performance. All quantities are normalised by the reference velocity U_ref.

Generally, the streamline pattern exhibits a single vortex with its core slightly shifted upward and downstream compared to the flat-roof case [27,34]. This matches previous wind-tunnel studies (e.g., [35,36]) and CFD studies (e.g., [34]), since the exact position depends strongly on either the specific roof geometry [37] or the roof aspect ratio (the roof height to the building width) (e.g., [34]).
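The error estimate and the second-moment quantities can be written compactly. The sketch below assumes hypothetical array shapes and run counts; only the definitions (run-to-run STD averaged over the interior mask, reduced TKE, and momentum flux) are taken from the text.

```python
import numpy as np

def measurement_error(run_means, interior):
    """Spatial average of the point-wise STD across repeated runs.

    run_means : (n_runs, nx, nz) time-mean, normalised fields (e.g. U/U_ref)
    interior  : (nx, nz) boolean mask of the restricted rectangular area,
                outermost points next to walls and edges excluded
    Returns the absolute deviation criterion A used by the Hit rate.
    """
    std_field = run_means.std(axis=0, ddof=1)   # scatter of the run means
    return float(std_field[interior].mean())

def second_moments(u, w, u_ref):
    """Dimensionless <u'w'>/U_ref^2 and TKE/U_ref^2 = 0.5(<u'^2>+<w'^2>)/U_ref^2.

    u, w : (n_time, nx, nz) velocity snapshots
    """
    up, wp = u - u.mean(0), w - w.mean(0)       # fluctuations per grid point
    uw = (up * wp).mean(0) / u_ref**2
    tke = 0.5 * ((up**2).mean(0) + (wp**2).mean(0)) / u_ref**2
    return uw, tke

rng = np.random.default_rng(1)
runs = 1.0 + 0.02 * rng.standard_normal((3, 20, 15))   # three synthetic runs
mask = np.zeros((20, 15), bool); mask[1:-1, 1:-1] = True
print(measurement_error(runs, mask))
```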
A comparison between the dimensionless fields of PIVa (Figure 3a) and LESa (Figure 3b) shows very good agreement in the region above the street canyon, but less successful agreement in the region inside the canyon. The streamlines presented in the first column of Figure 3a,b show that LESa predicts a single primary vortex of similar shape to the observed one. The position of the primary vortex centre is predicted by LESa slightly higher than in PIVa. Further, the secondary counter-rotating vortex at the windward bottom corner of the street canyon is over-predicted by the LES model, whereas the leeward bottom-corner vortex is completely missing in the simulation (see the right and left bottom of the street canyon in Figure 3aI,bI). This corresponds with the notably streamwise-tilted pattern of the simulated vertical velocity field in the second column of Figure 3b, compared to the experiment. Another important observation can be drawn from the fields of the momentum flux (third column in Figure 3a,b): the LES predicts the vortex shedding in the roof-top region very well, but it over-predicts the positive streamwise momentum flux within the canyon. As will be shown later in Section 5.3, the vortex in the roof-top region is part of a rather large-scale and smooth up-roof flow, which is generally well simulated by the LES. The zone inside the canyon, on the other hand, consists of many small vortices and shear patterns below the spatial resolution of the LES grid.

Validation Metrics

A set of general performance metrics to quantitatively assess the agreement between a model and an observation was established by the U.S. Environmental Protection Agency [38] to provide a common framework for comparing various air dispersion models. The metrics were designed to compare concentration values by means of their mean or maximum values; they specifically target statistical quantiles and percentiles in order to assess model performance. Only some of the metrics were selected for this paper. Since we work solely with velocity-based data, we applied the following formulae to the quantities X (X = U, W, <u'w'>, TKE):

Fractional bias (FB):
FB = (X̄_e − X̄_m) / [0.5 (X̄_e + X̄_m)], (2)

Root normalised mean square error (RNMSE):
RNMSE = sqrt( mean[(X_e − X_m)^2] / (X̄_e X̄_m) ), (3)

Geometric mean bias (MG) and geometric variance (VG):
MG = exp( mean[ln X_e] − mean[ln X_m] ), VG = exp( mean[(ln X_e − ln X_m)^2] ), (4)

Pearson correlation (r):
r = mean[(X_e − X̄_e)(X_m − X̄_m)] / (σ_e σ_m), (5)

where the subscripts e and m refer to the experimental and modelled data, respectively, the overbar (equivalently mean[·]) denotes the space average of the time-mean quantity over the restricted rectangular area, and the angle brackets < · > denote the time-averaged value at a point. The application of FB, RNMSE, MG and VG to quantities attaining both positive and negative values can lead to a non-physically-justified experiment-simulation agreement, since those values can compensate each other in Equations (2)-(5), as pointed out by Schatzmann et al. [15]. The visual comparison of the data (Figure 3) shows that there is no area where the experimental and modelled data differ in sign and have a dimensionless magnitude higher than 0.1. The authors therefore applied these metrics to the absolute values of the investigated quantities, aware that the metric results are affected by this treatment, but not significantly. An overview of these general metrics is listed in Table 3. The bold values denote the satisfaction of the particular criterion listed in the third column.
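A compact implementation of these metrics might look as follows. The formulas follow the standard definitions used in air-dispersion model evaluation, which we assume match Equations (2)-(5); the absolute-value treatment and the small guard against log(0) mirror the paper's handling of signed velocity data.

```python
import numpy as np

def validation_metrics(xe, xm):
    """FB, RNMSE, MG, VG and Pearson r between experiment (xe) and model (xm).

    Absolute values are used, as in the paper, so sign changes in
    velocity-based quantities cannot cancel inside the metrics.
    """
    xe = np.abs(np.asarray(xe, float)); xm = np.abs(np.asarray(xm, float))
    fb = (xe.mean() - xm.mean()) / (0.5 * (xe.mean() + xm.mean()))
    rnmse = np.sqrt(((xe - xm)**2).mean() / (xe.mean() * xm.mean()))
    le, lm = np.log(xe + 1e-12), np.log(xm + 1e-12)   # guard against log(0)
    mg = np.exp(le.mean() - lm.mean())
    vg = np.exp(((le - lm)**2).mean())
    r = np.corrcoef(xe, xm)[0, 1]
    return dict(FB=fb, RNMSE=rnmse, MG=mg, VG=vg, r=r)

rng = np.random.default_rng(2)
xe = rng.uniform(0.05, 1.0, 1120)
xm = xe * rng.uniform(0.9, 1.1, 1120)   # synthetic "model" with 10% scatter
print(validation_metrics(xe, xm))
```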
The general validation metrics are successfully satisfied in all cases. In comparison with the qualitative validation previously performed on the velocity fields, the quantitative validation by means of the general metrics shows a surprisingly good agreement between PIVa and LESa. The main explanation could be that the general metrics work with fixed threshold values, which are not based on the uncertainty of the experimental data and which were originally proposed for concentration values. The velocity data comply more easily with these not-so-stringent criterion thresholds. We propose, at least for the Pearson correlation coefficient, that the acceptance criterion should be 0.9 instead of 0.8. The uncertainty itself can be introduced via the Hit rate metric, specifically designed in the VDI guidelines for the validation of predicted mean velocity fields.

Hit Rate

The Hit rate metric applied in the VDI guideline [39] for the validation of prognostic micro-scale wind-field models was used in order to quantitatively compare the experimental and modelled results. Using the normalised numerical model data M_i (modelled) and the normalised observed data E_i (experiment), the Hit rate metric, q, is calculated as

q = (1/P) Σ_{i=1}^{P} q_i, with q_i = 1 if |(M_i − E_i)/E_i| ≤ D or |M_i − E_i| ≤ A, and q_i = 0 otherwise, (6)

where D and A represent the relative and absolute deviation, respectively, according to the guidelines [15,39], and P is the number of grid points within the restricted rectangular area. The Hit rate absolute deviation value, A, was set to the normalised measurement error obtained for the given quantity (see Table 2). The relative deviation limit D = 0.25 was adopted from Schatzmann et al. [15]. Both acceptance levels A and D are shown in the scatter plots in Figure 4 along with each quantity. The algorithm (6) is applied to a limited region (−0.5 < x/H < 0.5; 0 < z/H < 1.5). The LES computational grid (1120 spatial points) was fitted to the PIV measurement grid (2400 points) by bilinear interpolation in order to provide one unified spatial domain. The interpolation error was evaluated from a mutual comparison of the linear, cubic and spline versions of the interpolating MATLAB procedure applied to the typical comparative quantities U, W, <u'w'> and reduced TKE. During the interpolation, the values at points located within the buildings were artificially set to zero. This resulted in lower interpolated values immediately next to the walls compared with those measured by PIV. The points next to the wall were, however, excluded from the Hit rate and the other metrics. The difference between the types of interpolation reached a normalised value of 0.0001; the interpolation error is therefore not included in the absolute value of A.

The final Hit rate values between the LES simulation of the inter-canyon averaged data (labelled LESa) and the averaged values of the repeated experiments (PIVa) are listed in Table 4. Again, the bold values meet the acceptance criterion of 0.66 as defined in VDI [39] for each velocity component. We adopted this criterion, in agreement with other studies (e.g., [15,20]), for all the investigated quantities.
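A direct implementation of Equation (6) is short. In the sketch below, the division-free relative test and the default value of A are illustrative (the actual A comes from Table 2); passing an array for A gives the spatially sensitive variant introduced later in Equation (7).

```python
import numpy as np

def hit_rate(model, obs, D=0.25, A=0.017):
    """Hit rate q of Eq. (6): a point scores when it meets either the
    relative deviation D or the absolute deviation A. A may be a scalar
    or an array A(x) for the spatially sensitive variant."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = np.abs(model - obs)
    rel_ok = diff <= D * np.abs(obs)   # relative criterion, division-free form
    abs_ok = diff <= A                 # measurement-error criterion
    return float(np.mean(rel_ok | abs_ok))

# hypothetical example: q must reach 0.66 for acceptance
rng = np.random.default_rng(3)
obs = rng.uniform(-0.2, 1.0, 1120)
mod = obs + 0.05 * rng.standard_normal(1120)
print(hit_rate(mod, obs, A=0.03))
```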
Based on the numbers in Table 4, our LES simulation fails to reproduce the mean velocity fields of U and W according to the established Hit rate metric. On the other hand, the higher moments are seemingly calculated with sufficiently high precision. However, without a proper simulation of the velocity gradient, the moments cannot be captured well by the LES. To understand the nature of the problem, the scatter plots of measured and simulated values from LESa are shown in Figure 4.

The scatter plot between the PIVa and LESa values in Figure 4a is located along the 1:1 line, though the absolute deviation A (dashed-and-dotted lines) for the streamwise velocity component is so small that most of the points do not fall within the prescribed region. Most of them fulfil only the relative deviation criterion D (dashed lines) in Figure 4a. This leads to the preliminary conclusion that the LES model fails in the prediction of the very low velocities inside the canyon, where small-scale vortices and intense shear exist, but succeeds in the prediction of the fast flow above the canyon.

The Hit rate scatter plots illustrate that the streamwise and vertical velocity components are subtly and moderately underestimated by LESa, according to Figure 4a,b, respectively. The momentum flux (Figure 4c) and the reduced TKE (Figure 4d) are predicted very well and reach a high Hit rate score. Additional discussion of the Hit rate results can be found in Section 6.

Spatial Hit Rate

The STD of each quantity differs significantly from position to position in the street canyon due to the influence of long-term, large-scale flow structures on the stationarity of the data. For example, in our case, the values of the absolute deviation A, as derived from the STD of the streamwise velocity, exhibit a spatial scatter of A(x) ∈ (0.001, 0.040) inside the rectangular area. The area where the long-term structures occur (e.g., the up-roof region) supposedly has a high STD, while the area of intense small-scale mixing (e.g., the canyon centre) exhibits a low STD. If the measurement error is determined from a one-point measurement only, this reference point should therefore be chosen with the utmost care.
To test the role of space-dependent differences in the deviation criterion A(x) = A(x_i, z_i) for the validation, we included the local measurement error in the Hit rate formula at each location separately as

q = (1/P) Σ_{i=1}^{P} q_i, with q_i = 1 if |(M_i − E_i)/E_i| ≤ D or |M_i − E_i| ≤ A(x_i), and q_i = 0 otherwise. (7)

The final spatially sensitive Hit rate agreement is shown in Table 5. The spatially sensitive Hit rate is expected to be more precise, since it reflects the spatial inhomogeneity of the flow. In our case, the spatial Hit rate on LESa is consistent with the standard Hit rate metric output. The larger A(x) occurs mostly around the roof level, where the velocities were predicted sufficiently well, and A(x) remains small inside the canyon, where the velocities were predicted poorly. Considering that the rounding-up effect of the spatially averaged error A (the spatial mean of A(x)) imposes less stringency on the Hit rate validation in the inside-canyon area than a local A(x), we found that the standard Hit rate metric gave slightly more optimistic results than the spatially sensitive Hit rate. Although the authors consider the spatially sensitive Hit rate more precise, the standard single Hit rate validation metric does not deviate significantly in its results and is much more easily applied. Hence, the single Hit rate criterion is regarded as a suitable tool for comparing a simulation with an experiment.

Spectral Analysis

Both the TR-PIV experiment and the LES simulation have an advantage over RANS and Gaussian models in providing time-resolved data with a satisfactorily high temporal resolution, to which spectral analysis is worth applying. The power spectral densities were calculated as the square of the fast Fourier transform of the streamwise velocity fluctuation time series. The resulting spectra were smoothed over non-overlapping rectangular blocks. The length of the blocks increased exponentially with increasing frequency to obtain equidistant points on the logarithmic axis (about eight estimates per frequency decade). The comparison of the power spectral density obtained from the HWAc and LESc time series at the height z/H = 2 is shown in Figure 5. Contrary to the locations inside the street canyon, at this elevation Taylor's hypothesis of frozen turbulence can be applied. Above the reduced frequency of n = fz/U = 0.1, where f is the frequency, z is the elevation and U is the mean streamwise velocity, the numerical and experimental data agree.

Since the LES is inherently limited in simulating the whole turbulent spectral range, it is, by definition, capable of reproducing only the low-frequency ranges and the inertial subrange, provided the grid resolution is high enough. The sub-grid vortices are only modelled, leading to an energy deficiency in the high-frequency tail of the spectral density plot (n > 10). Still, the LESc spectra follow the well-known Kolmogorov −2/3 law (−5/3 in the non-weighted representation), as depicted by the solid line in Figure 5. On the low-frequency tail of the LESc spectra, a sudden drop in spectral energy at approximately n = fz/U = 0.1 is clearly notable in comparison with the dashed theoretical von Kármán curve. The drop was detectable in all the LES canyons. With an LES periodic domain length of 16 H, only structures up to 8 H can be safely computed; however, structures of 16 H can exist and can in fact be very strong.
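The block-smoothing of the spectra can be sketched as follows. The periodogram normalisation and the synthetic red-noise record are our assumptions; the exponentially growing, non-overlapping blocks (about eight estimates per frequency decade) follow the text.

```python
import numpy as np

def smoothed_psd(u, fs, per_decade=8):
    """Periodogram of velocity fluctuations smoothed over non-overlapping
    blocks whose width grows exponentially with frequency, giving roughly
    equidistant estimates on a logarithmic axis (~8 per decade)."""
    u = np.asarray(u, float)
    u = u - u.mean()                              # fluctuations only
    n = u.size
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(u))**2 / (fs * n)    # one-sided estimate
    f_lo, f_hi = freq[1], freq[-1]
    n_blocks = max(int(per_decade * np.log10(f_hi / f_lo)), 1)
    edges = np.geomspace(f_lo, f_hi, n_blocks + 1)
    f_s, p_s = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (freq >= lo) & (freq < hi)
        if sel.any():                             # skip empty low-f blocks
            f_s.append(freq[sel].mean())
            p_s.append(psd[sel].mean())
    return np.array(f_s), np.array(p_s)

# synthetic 500 Hz record, matching the PIVc sampling frequency
rng = np.random.default_rng(4)
u = np.cumsum(rng.standard_normal(1600)) * 0.01   # red-noise-like series
f, p = smoothed_psd(u, fs=500.0)
print(f[:4], p[:4])
```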
Quadrant Analysis

The quadrant analysis allows a deeper insight into the dynamical behaviour of the flow, since it groups events according to their specific contribution to the total momentum flux and provides their statistical overview. More specifically, the quadrant analysis yields the contribution to the momentum flux <u'w'> from each particular flux direction (i.e., quadrant). The definition of the quadrants, derived from the scatter plot of the streamwise and vertical velocity fluctuations, was inspired by Willmarth and Lu [40]: the four quadrants distinguish outward interactions (u' > 0, w' > 0), ejections (u' < 0, w' > 0), inward interactions (u' < 0, w' < 0) and sweeps (u' > 0, w' < 0). The particular contribution of the i-th quadrant, τ_i, to the total momentum flux τ_xz = <u'w'> is obtained from a weighted average:

τ_i = <u'w'>_i (n_i / N_total),

where <u'w'>_i is the time-averaged momentum flux within the i-th quadrant, n_i is the number of events belonging to the i-th quadrant, and N_total is the global number of events recorded during the time periods t*_PIVa = 1295 and t*_LESa = 1528.

The vertical profiles of the total momentum flux, <u'w'>, together with each of the event contributions, are plotted in Figure 6a. The total momentum flux is a simple sum of all the contributions. First, it is worth pointing out that the peak shape (at z/H = 1.1) is quite different from that of street canyons with the same aspect ratio but with flat roofs (e.g., [7,25,41]). While the peak of the total momentum flux profile is narrow and located just above the roof-top in street canyons with flat roofs, the pitched roofs generate a vertically more extensive shear layer and consequently produce a stronger exchange between the canyon cavity and the boundary layer aloft (for more details see [24]).

The quadrant analysis confirms that the sweep events (orange triangles in Figure 6a) and ejection events (green triangles in Figure 6a) clearly contribute the most to the total momentum flux, while the inward (light blue triangles) and outward (dark blue triangles) interactions are negligible. While the contribution of the sweep events is forecast by LESa (solid lines) extraordinarily well in both peak elevation and magnitude above the roof top, a moderate over-estimation of the ejections leads to a slight over-prediction of the total momentum flux at the peak level. Again, the general tendencies of the strong sweep and ejection events are predicted extremely well.

The momentum flux exhibits a significant deviation from the measured data in the lower part of the canyon (z/H < 0.5). The over-prediction by LESa is caused by a strong contribution from the inward-interaction quadrant, compared to the completely missing inward interaction in the case of PIVa. The LES with a grid resolution of 2 mm is probably unable to resolve the small-scale shear motion properly. Preliminary results from a finer LES grid resolution (1 mm) indicate much better agreement (not shown here).

Considering the hypothesis that a model primarily has to simulate the momentum flux correctly in order to predict the velocity properly [18], the over-prediction of the inward interaction provides a possible explanation for why LESa fails to correctly predict the mean velocity profiles and 2-D fields presented in the previous sections in Figures 2 and 3. Figure 6b reveals that the increase in the inward interaction correlates significantly with the flow pattern of the second POD mode calculated from the LESa data, indicated by the red streamlines (the details are explained later in Section 5.3).
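A minimal implementation of the quadrant decomposition is given below. Since the per-quadrant weighted average <u'w'>_i n_i/N_total reduces to a plain sum over the quadrant divided by N_total, the code uses that equivalent form; the synthetic data are illustrative only.

```python
import numpy as np

def quadrant_contributions(u_fluc, w_fluc):
    """tau_i = <u'w'>_i * n_i / N_total for the four quadrants of (u', w').

    Outward (u'>0, w'>0), ejection (u'<0, w'>0), inward (u'<0, w'<0) and
    sweep (u'>0, w'<0), following the Willmarth-Lu convention.
    """
    u_fluc, w_fluc = np.ravel(u_fluc), np.ravel(w_fluc)
    uw = u_fluc * w_fluc
    n_total = uw.size
    quads = {
        "outward":  (u_fluc > 0) & (w_fluc > 0),
        "ejection": (u_fluc < 0) & (w_fluc > 0),
        "inward":   (u_fluc < 0) & (w_fluc < 0),
        "sweep":    (u_fluc > 0) & (w_fluc < 0),
    }
    # <u'w'>_i * n_i / N_total equals the sum over the quadrant / N_total
    return {name: uw[m].sum() / n_total for name, m in quads.items()}

rng = np.random.default_rng(5)
u = rng.standard_normal(10000)
w = rng.standard_normal(10000) - 0.3 * u          # correlated, <u'w'> < 0
tau = quadrant_contributions(u, w)
print(tau, sum(tau.values()))                     # the sum is the total flux
```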
The sweep and ejection events often travel in a compact shape across the street canyon. To better demonstrate these phenomena, it is convenient to plot their conditionally averaged shape as derived from several strong events. An example of a typical sweep from PIVc and a typical sweep from LESc within the canyon is presented in Figure 7a,b, respectively. The negative (orange) values denote the region where a strong sweep takes place, whereas the positive (green) values denote the existence of a dominant ejection. Further, we evaluated the spatial distribution of each quadrant in the planar xz vector field by means of the spatially relative cumulative contribution ε_i(t*) of the particular event τ_i(t*) to the total momentum flux τ_total(t*):

ε_i(t*) = (1/τ_total(t*)) Σ_{k,l ∈ i} |u'w'(k, l, t*)|,

where k = 1, 2, ..., K and l = 1, 2, ..., L are the row and column numbers in the vector field and |·| represents the absolute value of the inner product. The total momentum flux τ_total(t*) in this definition is the sum of the absolute values from all quadrants at each time. The definition of the intensive events is explained in the following procedure and in Figure 8.

The calculation of the contribution ε_i(t*) of a particular event revealed that the sweep and ejection events occasionally represent up to ε_i(t*) > 80% of the total momentum flux (see the red and green lines in Figure 8). Such a high contribution agrees with the many experimental and numerical studies dealing with the role of sweeps and ejections in turbulent boundary layers (e.g., [40,42-44]). The specific time instants at which the sweep-quadrant contribution exceeds 80% serve as the data input for the conditional averaging of the momentum flux field in order to obtain the typical intensive sweep event in the canyon shown in Figure 7.

The clear tendency of both events to pass through the street canyon in an alternating fashion can also be seen in Figure 8a,b. This is supported by the large negative value of the correlation coefficient between the relative contributions of sweeps and ejections, R_sw,ej = −0.90. When a sweep enters the canyon, the ejection is suppressed, and vice versa. The frequency of occurrence of such events was analysed from the time series of their relative contribution ε_i(t*). The mean value derived from all instantaneous values ε_i(t*) was subtracted from the time series, and the spectral function was obtained with the same algorithm used for the velocity fluctuations, described in Section 4.4. The spectra revealed that the characteristic frequency of this pseudo-wavy pattern occurs at approximately n = fz/U = 0.2, corresponding to a wavelength of about 6H when an estimated convective velocity of the quadrant events is taken into account [27], for both PIVc and LESc (Figure 9). The sweep and ejection events pass the canyon, or are induced by the canyon geometry, with the same pseudo-frequency and the same relative intensity in both the experiment and the simulation. It is necessary to note that the sweep and ejection events are not part of any vortex, since their wavelength is larger than either the vertical or the lateral dimension of the wind tunnel. In conclusion, the quadrant analysis proves that the LES is capable of reliably modelling large intermittent and organised structures in the flow above the obstacles.
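For a single snapshot, the relative contribution ε_i can be computed as below. The small regularising constant and the array shapes are our assumptions; the 80% threshold mentioned in the text would be applied to the returned shares.

```python
import numpy as np

def relative_contribution(u_fluc, w_fluc):
    """eps_i(t*) for one snapshot: each quadrant's share of the field-summed
    |u'w'|, so that the four values add up to one.

    u_fluc, w_fluc : (K, L) fluctuation fields of a single vector field."""
    uw = u_fluc * w_fluc
    quads = {
        "outward":  (u_fluc > 0) & (w_fluc > 0),
        "ejection": (u_fluc < 0) & (w_fluc > 0),
        "inward":   (u_fluc < 0) & (w_fluc < 0),
        "sweep":    (u_fluc > 0) & (w_fluc < 0),
    }
    tau_total = np.abs(uw).sum() + 1e-30   # sum of |u'w'| over all quadrants
    return {name: np.abs(uw[m]).sum() / tau_total for name, m in quads.items()}

rng = np.random.default_rng(6)
eps = relative_contribution(rng.standard_normal((40, 60)),
                            rng.standard_normal((40, 60)))
print(eps, sum(eps.values()))   # flag an intensive sweep when its share > 0.8
```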
Spatial Correlation

Spatial correlation helps to explore the link between the transient flow dynamics across the field in a statistical sense. The normalised correlation coefficient between the reference point x and the point x + dx was calculated according to

r(x, x + dx) = (1/N) Σ_t q'(x, t) q'(x + dx, t) / (σ_q(x) σ_q(x + dx)),

where N is the number of time steps (snapshots) obtained from the continuous PIVc measurement and the LESc simulation, q' is the fluctuation of the investigated quantity, and σ_q is its standard deviation. For both the PIVc and LESc vector fields, we calculated the correlation between the time series at the chosen reference point x = (x/H = 0, z/H = 1.25) and all other spatial locations. The correlation coefficients were calculated for the velocity fluctuations u' and w' and for the momentum flux fluctuation (u'w')'. Since the spatial quadrant analysis (Section 5.1) revealed that appreciable regions of high momentum flux (i.e., the sweep and ejection events) pass through the canyon, we investigated the correlation between the time series of these momentum fluxes u'w' rather than only a simple correlation of the quantities u' and w'. The contour plots in Figure 10 display typical patterns of the spatial correlation above the street canyon, similar to those obtained in, e.g., Michioka et al. [7]. Regarding the qualitative comparison, the plots confirm good agreement between the experimental (Figure 10a) and numerical (Figure 10b) results. The contour plots allow us to directly compare the correlation-coefficient fields from PIVc and LESc. To express the degree of similarity between the plots in Figure 10a,b, we calculated the final correlation coefficient R_jj as the correlation between the experimental and modelled correlation-coefficient fields, taken over the P = 4977 locations in the measurement area, for j = u, w or u'w'. The results of R_uu, R_ww and R_u'w'u'w' obtained for the shared grid points of both PIVc and LESc (132 points in total) are plotted in Figure 11. The appropriate final correlation coefficient R_jj is presented by means of the coloured scale and is assigned to the specific location (square) within the investigated area that served as the reference point. Figure 11 suggests that the greatest deviations of the LES predictions from the experimental results are located mainly inside the street canyon, close to the walls and near the canyon bottom, especially in the case of the momentum flux, R_u'w'u'w'. Again, LESc apparently fails to predict the recirculation zone together with the small-scale structures within the canyon cavity, probably due to the coarseness of the grid resolution. Other lower correlations can be seen in the vicinity of the upstream building roof, where small-vortex shedding occurs. In accordance with the previous results of the Hit rate validation metric, this confirms that the transient dynamics, and consequently the ventilation processes, in these critical areas will not be predicted properly by the presented LES model. The ranges of the mean values of the correlation coefficient corresponding to the individual canyons, spatially averaged over the entire investigated area within each canyon (always the 132 points shared between the PIV and LES grids) for u, w and u'w', are listed in Table 6.
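The two-point correlation and the final coefficient R_jj can be sketched as follows. The array layouts and synthetic series are hypothetical, and final_correlation simply takes the Pearson correlation between the two correlation fields, which is how we read the text.

```python
import numpy as np

def spatial_correlation(q, i_ref):
    """Normalised two-point correlation r(x_ref, x) of a fluctuating quantity.

    q     : (n_time, n_points) array, one time series per grid point
    i_ref : index of the reference point
    """
    qf = q - q.mean(axis=0)                        # remove the temporal mean
    cov = (qf[:, i_ref, None] * qf).mean(axis=0)   # <q'(x_ref) q'(x)>
    sig = qf.std(axis=0) + 1e-30
    return cov / (sig[i_ref] * sig)

def final_correlation(r_exp, r_mod):
    """Pearson correlation R between the experimental and modelled
    correlation-coefficient fields (flattened over the shared grid)."""
    return float(np.corrcoef(np.ravel(r_exp), np.ravel(r_mod))[0, 1])

rng = np.random.default_rng(7)
series = rng.standard_normal((1634, 300))          # synthetic PIVc-like data
r_piv = spatial_correlation(series, i_ref=150)
r_les = r_piv + 0.05 * rng.standard_normal(300)    # synthetic "LES" field
print(final_correlation(r_piv, r_les))
```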
The relatively high, uniform values in Table 6 indicate that the correlation method for a single velocity component u or w is rather tolerant in measuring the degree of similarity between the predicted and observed short-time intermittent motions within the shear layer at the canyon bottom. In other words, this spatially averaged single number from each canyon will not reveal the subtle differences between the measured and forecast transient dynamics and will yield a rather high percentage of agreement. We presume that validations based solely on the spatially averaged correlations of u and w would lead to an overestimation of the LES model's ability to predict intermittent motions. However, the detailed picture of locally dependent final correlations, as depicted in Figure 11, is a suitable tool for the detection of problematic areas in the LES prediction.

Proper Orthogonal Decomposition

In contrast to the one-point spatial correlation, which simply describes the statistical behaviour of the fluctuations with respect to one arbitrarily chosen reference point, the POD groups the correlated motions into contextual merit-based ensembles and thus provides a certain insight into the coherent intermittent dynamics [45]. Simply stated, for fluid-mechanics applications, POD assembles the intermittent events together by finding their most appropriate representation (basal vectors). This representation expresses the highest TKE content in each vector from a statistical point of view. The POD may be applied both to experimental data (e.g., [20,24,25,46]) and to data obtained from CFD simulations (e.g., [47]). Hence, it provides a valuable tool for comparing the results of numerical modelling with those of a physical experiment.

The POD was proposed by Lumley [48] as a tool for the detection of large coherent structures in the flow. The method is based on the assumption that we can extract from the chaotic turbulent flow special functions (the basal vectors {ϕ_p}) that provide a mathematical description of the coherent structures as the most probable flow features. It is thus possible to describe the instantaneous velocity fluctuation at every instant of time and space, u'(x, t), by a set of new basal vectors ϕ_p and their corresponding expansion coefficients a_p, where p = 1, 2, ..., 2P equals double the number of grid points P:

u'(x, t) = Σ_{p=1}^{2P} a_p(t) ϕ_p(x), (15)

corr(a_p, a_q) = δ_pq, (16)

where corr represents the correlation coefficient and δ_pq the Kronecker delta, equal to 1 for p = q and 0 otherwise. The basis {ϕ_p} meets the orthogonality and normality criteria, so the vectors ϕ_p (i.e., the POD modes) are perpendicular to each other and properly normalised. In the case of POD, this basis is not chosen a priori, as in Fourier or wavelet analysis, but according to the input data. It is important to note that the POD modes are functions of space, not of time. The p-th eigenvalue λ_p corresponds to the p-th POD mode and quantifies the contribution of that mode to the total turbulent kinetic energy (TKE). Reordering the eigenvalues by descending TKE (and ascending p) reveals the most dominant modes in the flow according to their relative contribution Π_p to the TKE [49]:

Π_p = λ_p / Σ_{q=1}^{2P} λ_q. (17)
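A standard snapshot-POD sketch via the SVD is shown below. It is not the authors' implementation, but it reproduces the objects in Equations (15)-(17): orthonormal spatial modes, uncorrelated temporal coefficients, and eigenvalues whose normalised values give Π_p. The array sizes in the demo are arbitrary.

```python
import numpy as np

def snapshot_pod(u_fluc, w_fluc):
    """Snapshot POD of a two-component velocity-fluctuation field.

    u_fluc, w_fluc : (n_time, P) arrays; the state vector stacks both
    components, giving 2P degrees of freedom as in the text.
    Returns spatial modes phi (rows of Vt), temporal coefficients a(t)
    and eigenvalues lam sorted by descending energy content.
    """
    X = np.hstack([u_fluc, w_fluc])   # (n_time, 2P) data matrix
    X = X - X.mean(axis=0)            # fluctuations about the ensemble mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    lam = S**2 / X.shape[0]           # modal (kinetic) energies
    a = U * S                         # a_p(t); columns are uncorrelated
    return Vt, a, lam

def tke_contribution(lam):
    """Relative contribution Pi_p = lam_p / sum(lam) of each POD mode."""
    return lam / lam.sum()

rng = np.random.default_rng(8)
u = rng.standard_normal((500, 200)); w = rng.standard_normal((500, 200))
phi, a, lam = snapshot_pod(u, w)
print(tke_contribution(lam)[:4], tke_contribution(lam).cumsum()[3])
```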
We applied the POD to the horizontal and vertical velocity components inside the limited rectangular area of the street canyon. Data from both the LES and the PIV were normalised by the reference velocity. Since the number of grid points obtained from the PIV post-processing (2400 points) differs from the number of grid points in the LES simulation (1120 points), and since the slightly different dimensions of the investigated areas (due to the different outer boundaries of the grids) might cause certain differences in the POD results, we first tested the sensitivity of the POD to these aspects. We observed that it is better to keep the dimensions of the region identical, irrespective of the differences in spatial resolution. It is also better to interpolate only the POD output rather than the input data. Thus, every POD mode was calculated on its original grid, and each LES mode ϕ_p was then interpolated onto the PIV grid.

In order to illustrate the spatial distribution of the modes ϕ_p, the four most dominant POD modes, in terms of TKE, for PIVa and LESa are depicted in Figure 12a-d. It has to be emphasised that the black lines with arrows, the so-called streamlines, were drawn manually; their density and precise locations serve for display purposes only. The first mode in Figure 12a displays the vortex behind the roof, which is well pronounced and dominant in the case of the intermittent flow. Kellnerova et al. [24] proved that this vortex accurately captures the dominant street-canyon flow dynamics. The LESa version, involving the data from all of the simulated canyons, shows good agreement with the PIV results with respect to the core of the vortex. This is a very important finding, since the first mode contributes the most (32%) to the total turbulent energy budget. The deviation of the roof-vortex core of PIVa from that of LESa (∆x/H = 0.1) can be considered small enough, resulting in an overall consistency of the modal shape and leading to a satisfactory validation.

Regarding the second mode (Figure 12b), the inflection point in the area above the roof is predicted precisely. The LES, however, seems to push the centre of the vortex between the walls. This recirculation vortex, containing 7% of the total TKE, is the main dynamical pattern responsible for the acceleration and deceleration of the mean recirculation vortex. The shape of the recirculation zone also deviates moderately from the experimental results, since the LES enhances the backward flow near the downstream edge of the vortex. This backward flow contributes to the inward momentum flux (see Figure 6b), leading to an elevation and an inclination of the primary recirculation zone (see Figure 3a,b).

The third mode (Figure 12c) shows a definite deviation from the experimental results concerning the curved trajectory of the wind in the whole lower canyon. The LES's third mode further contributes to the inward-interaction momentum flux in this area. Meanwhile, the other flow patterns, such as the vortices above the windward and leeward roofs, are appropriately predicted. The fourth mode (Figure 12d) is, again, forecast very well. It represents the late phase of the recirculation vortex, when the vortex is moving toward the ground. Although the latter two modes contribute only 6% and 4%, respectively, to the total TKE, Kellnerova et al. [24] showed that they occasionally play a significant role in the street-canyon flow dynamics.
To evaluate more clearly the agreement between the simulated POD modes and the measured ones, scatter plots for the first four dominant modes are displayed in Figure 13. The excellent similarity between the experiment and the model for the streamwise and vertical velocity fluctuations, u'/U_ref and w'/U_ref, respectively, is further confirmed quantitatively by the Hit rate metric results listed in Table 7. The Hit rate score is evaluated in a similar way as in Section 4.3.1, i.e., the absolute deviation boundary A is taken as A = 0.01 and A = 0.005 for the streamwise and vertical fluctuations, respectively. These values of A are calculated as the mean standard deviation of the corresponding velocity fluctuation. The only difference is that here, the member q_i in Equation (6) takes the value of one only when the streamwise and vertical fluctuations satisfy the criteria for the streamwise and vertical deviation boundaries simultaneously.

Figure 14 displays the relative and cumulative contributions of each POD mode to the TKE. The relative contribution of the first mode calculated from the all-canyons data (LESa) matches the experiment well. Generally, for the higher modes, which mostly represent the vortices inside the canyon, the LES predicts systematically higher percentage contributions to the turbulent kinetic energy. The LES forecast therefore exhibits a higher degree of coherency, in correspondence with the faster convergence of the cumulative contribution. This indicates that the LES generates either a higher number of vortices or a higher rotational speed of the vortices within the recirculation zone compared to the experiment, especially in connection with the vortices associated with the second and third POD modes. Incidentally, Figure 6b illustrates the close relationship of the second POD mode with the inward interaction. The stronger second and third modes in the LES consequently enhance the inward interaction of the momentum flux in the lower part of the canyon, thereby leading to a higher positive total momentum flux (the third column in Figure 3) and a streamwise-inclined recirculation pattern (the first and second columns in Figure 3) that modifies the velocity profiles in the lower canyon (Figure 2a).

Discussion on Validation Metrics

Since the general validation metrics of Section 4.3 were originally proposed for concentration data, velocity data comply easily with their criterion thresholds. As mentioned previously, we therefore propose increasing the acceptance criterion for the Pearson correlation coefficient to 0.9, instead of 0.8.
The Hit rate metric points out issues encountered when using a PIV measurement technique for a street-canyon model topology. Firstly, the benefit of the Hit rate lies in the simple implementation of the measurement error, obtained from repeated measurements at a well-chosen reference position or in the investigated region. With PIV and other multi-point techniques, however, it might be difficult to achieve a sufficiently high number of repeated runs while maintaining a sufficiently long acquisition time. Hence, the results of the Hit rate method should not be regarded as the "benchmark" validation metric until the PIV runs provide enough repeated data. Near-wall areas, where the PIV post-processing is likely to substitute a large portion of spurious vectors, have to be dealt with cautiously.

Secondly, the Hit rate is extremely sensitive to the magnitudes of the preselected data. In our case, the relative deviation boundary, D = 0.25, is a much less stringent criterion than the absolute deviation boundary determined from the very low experimental uncertainty. If the data are selected from locations containing only large values (above the roof area, for example), the majority of the data fit within the relative deviation limits. This factor can be effectively demonstrated for the above-roof region (−0.5 < x/H < 0.5, 1 < z/H < 1.5), with large values of the streamwise velocity U, the momentum flux and the reduced turbulent kinetic energy, and with almost negligible vertical velocity W. The final Hit rate score reaches a value of 1 for U/U_ref, TKE/U_ref^2 and <u'w'>/U_ref^2, seemingly suggesting an ideal model situation. On the other hand, the vertical velocity component, W/U_ref, reaches an extremely low value of 0.20, indicating rather poor LES performance (see the left column in Table 8).

In a region containing a small streamwise velocity component U and large W, TKE and <u'w'>, such as the windward region of the canyon (0.25 < x/H < 0.5, 0.25 < z/H < 1), the results are different. The Hit rate score reaches high values for W/U_ref, TKE/U_ref^2 and <u'w'>/U_ref^2, while the streamwise component U/U_ref does not satisfy the threshold criterion (see the right column in Table 8). This result suggests that the validation should employ data from all the locations available from both the experimental and numerical output.

Hypothetically, when using precise measurement techniques in a well-controlled experiment (ensuring a low uncertainty), a fast-motion flow region would always comfortably satisfy the relative Hit rate criterion, whereas a slow-motion flow region would hardly meet the low experimental uncertainty. An absolute deviation criterion that depends on the precision of the measurement technique, and thus varies from study to study, may make the inter-study comparison of Hit rate scores problematic. It might be more practical to pose one unified absolute deviation criterion A in the form of a percentage (e.g., 2% of the nominal velocity, A = 0.02 U_ref).

Conclusions

Advanced validation procedures were performed, and new ones introduced, in order to compare the time-resolved LES simulations with the time-resolved PIV measurements of the street-canyon topology. The validation procedure followed the general procedures recommended by Schatzmann et al.
[15] and Cox and Tikvart [38], which employ time-space-averaged statistics. Further, an innovative approach based on coherent-structure detection methods was implemented to compare the predicted and observed values of both the intermittent large- and small-scale features in the flow.

The general metrics, encompassing the time-space-averaged data, were fulfilled for all the tested quantities. The LES failed to satisfy the minimum Hit rate criterion (0.66) for both the mean streamwise and vertical velocity components, while the quantities based on the fluctuations showed satisfactory agreement. The strengths and weaknesses of the Hit rate metric were thoroughly discussed.

The large-scale structures in the up-roof area were predicted accurately, in contrast to the small-scale vortices near the street-canyon bottom. The discrepancies between the modelled and measured mean velocities in the bottom area, where strong shear takes place, are apparently associated with the deviation in the momentum flux.

The advanced validation techniques, the quadrant analysis together with the POD, revealed that the enhanced inward interaction in the LES simulation is associated with more intensive vortex formation inside the canyon. Further, the quadrant analysis shows that the dominant flow structures in the up-roof region, the strong sweep and ejection events, have similar compact shapes and pass through the street canyon with identical frequency and intensity in both the simulation and the experiment. The detailed Fourier spectral analysis revealed an unexplained drop of energy in the LES in the energy-containing range across the entire time domain. Nevertheless, the inertial subrange was modelled extraordinarily well. Finally, the POD demonstrated that the shapes of the coherent structures from the LES are strikingly similar to the shapes of the observed structures. The Hit rate metric applied to the POD modes showed excellent agreement. The turbulent kinetic energy contained in the most dominant LES structure was identical to the measured one, while the less dominant LES structures exhibited slightly more turbulent kinetic energy. This suggests that the simulated flow is more organised and less noisy than the flow observed in the experiment.

In an ideal case, the tested model should pass all the available validation metrics. In a real situation, however, we might need to prioritise some metrics over others to prove satisfactory model performance for a given application. In our study we focused on the flow within the urban roughness layer, where the dynamical exchange of pollution and heat is a crucial factor. The dominant momentum events, which are correlated with heat and emission transport [50,51], can be quite easily described by the quadrant analysis in terms of their contribution to the TKE and their spectral characteristics, and we recommend its use. The quadrant analysis can be executed either as a one-point method or as a spatial multi-point analysis, as drafted in this paper.

If synchronised multi-point data are available, the POD is a quick and useful tool for comparing the large structures, both visually and numerically. On the other hand, using the spatial correlation alone might overestimate the LES performance with respect to the simulation of intermittent motions.
The validation methods presented, including the coherent-structure detection methods, proved to be an indispensable tool for the proper validation of time-resolved numerical codes. The analysis of the coherent features helped us to validate, qualitatively and quantitatively, the prediction against the observation, as well as to explain their possible contradictions. We demonstrated that although the modelling of sub-grid, small-scale structures inside the street canyon did not meet the Hit rate validation criteria, the dominant dynamics of the larger scales can be modelled in a satisfactory manner.

Appendix C

... <u'w'>/U_ref^2, and 7% for the reduced turbulent kinetic energy, TKE/U_ref^2. Thus, we can conclude that the PIVc runs lasting 3.2 s contain a sufficiently high number of representative flow events for the intermittent flow analysis. The same holds true for the LES performance, as the maximum deviation from the mean velocity magnitude of both velocity components quickly reaches a low error percentage of 2%, the momentum flux of 13% and the TKE of 8% after reaching 10,000 samples. The performance of the LES with 10,000 snapshots per canyon is therefore considered adequate for the validation of the flow dynamics.

We also performed a sensitivity test on the acquisition time by comparing the mean values point-by-point between the individual runs. Using the Frobenius norm, we compared the mean velocities, the standard deviations and the reduced TKE. The averaged deviation of the mean velocity between all three runs was 2.6% for the streamwise component and 5.5% for the vertical component. The standard deviation had a scatter of 11% and 6% for the U- and W-components, respectively. The difference for the reduced TKE was only 3%. All the results can be found in [27].

Further, the POD modes from the individual runs were compared by means of the Frobenius norm. The first POD modes derived from the individual runs did not deviate by more than 4.2% from each other. The results for the other modes are published in [27].

These values were considered sufficiently low to assume that even a short measurement, lasting 3.2 s and assembling 1634 snapshots, captures a representative number of the dominant transient features in the flow. Therefore, we collected all three runs together and performed the statistics introduced in this paper. Some advanced statistics, for example the POD, are not sensitive to the sampling frequency as long as the essential dynamics of the flow are captured by the snapshots. For the spectral characteristics, where either the sampling frequency or the continuity of the data is crucial, only the continuous data from the 500 Hz runs were used.

Generally, the repetition of the measurements involves all the possible random variations in the experimental performance: the switching off and on of the facility (e.g., the wind tunnel) and the devices (e.g., the laser and the particle-seeding generator). The measurement also involves a bias error caused by both the PIV system and the post-processing procedure (the presence of spurious vectors, strong velocity gradients, sub-pixel interpolation, the pixel-locking effect). The authors considered the bias error to be notably greater than the random error. Since we were unable to estimate the type of hypothesised distribution of the velocity mean values, we simply calculated the standard deviation as the true error uncertainty, which presumably involves both types of error.
Figure 1. Isometric view of the model positioned in the tunnel test section. The area measured by the TR-PIV system is indicated by the red rectangle within the green laser sheet. The dimensions are in mm.

Figure 2. Comparison of the profiles of the mean dimensionless (a) streamwise and (b) vertical velocities, (c) momentum flux and (d) reduced turbulent kinetic energy at the centre of the street canyon (x/H = 0). The solid lines represent the LES simulation; the symbols denote the wind-tunnel experiment.

Figure 3. The mean dimensionless streamwise velocity with streamlines (first column), vertical velocity (second column) and momentum flux (third column) for (a) PIVa and (b) LESa.

Figure 4. Scatter plots of the measured (PIVa) and simulated (LESa) values obtained within the street canyon for the dimensionless (a) streamwise velocity; (b) vertical velocity; (c) momentum flux and (d) turbulent kinetic energy. Dashed lines denote the relative deviation D; dashed-and-dotted lines denote the absolute deviation A. 2400 points are plotted for each run.

Figure 5. Comparison of the energy spectral density of the streamwise velocity at the position x/H = 0, z/H = 2 for HWA (black triangles) and LESc (red squares). The solid black line represents the slope of the Kolmogorov −2/3 law; the dashed grey line represents the theoretical von Kármán curve. The solid vertical red line indicates the smallest scale not strongly affected by the sub-grid model and discretisation errors, with Λ_LES ≈ 0.24H = 12 mm.

Figure 6. (a) The vertical profiles of the total vertical momentum flux and the particular event contributions to the total momentum flux from LESa (lines) and PIVa (triangles) at the position x/H = 0. The colours correspond to the schematic diagram of the quadrant analysis in the bottom left corner of Figure 7; (b) 2-D plane of the inward interaction from LESa (light blue contours) with the red streamlines of the second POD mode from LESa.

Figure 7. A typical conditionally averaged sweep event in the canyon from (a) PIVc and (b) LESc.

Figure 8. The segment with the time evolution of the relative contributions of the sweep (red lines) and ejection (green line) events to the absolute total momentum flux for (a) PIVc and (b) LESc.

Figure 9. The spectral density plot of the relative contribution of the sweep and ejection events to the total momentum flux for PIVc (triangles) and LESc (lines).

Figure 10. Spatial correlation of the normalised streamwise velocity fluctuation r_uu (first column), of the vertical velocity r_ww (second column) and of the fluctuation of the momentum flux r_u'w'u'w' (third column) for the chosen point at x/H = 0, z/H = 1.25 for (a) PIVc and (b) LESc. The LES plot is interpolated onto the PIV grid (4977 points).

Figure 11. The final correlation coefficients between the PIVc and LESc spatial correlation fields of the streamwise velocity R_uu (first column), of the vertical velocity R_ww (second column) and of the momentum flux R_u'w'u'w' (third column). Each point shows the comparison based on the spatial correlation of the correlation-coefficient fields between PIVc and LESc at one of the 132 shared points of the PIV and LES grids.

Figure 12. The POD modes for the restricted rectangular area within the street canyon for PIVa and LESa. (a-d) correspond to modes 1-4. The black streamlines denote PIVa; the red streamlines denote LESa. The LESa modes are interpolated onto the PIV grid.
Figure 13. Scatter plots for modes 1-4 between the PIVa (experiment) and LESa (model) for (a) u'/U_ref and (b) w'/U_ref.

Figure 14. The comparison of the relative (squares) and cumulative (triangles) contributions of the first 100 individual POD modes of PIVa and LESa to the TKE.

Table 1. Parameters of the HWA, PIV and LES methodologies.

Table 2. Measurement error expressed as the spatially averaged standard deviation of the given dimensionless quantity (used as the Hit rate absolute deviation criterion A in Section 4.3.1).

Table 3. General metrics applied to PIVa and LESa.

Table 4. Hit rate metric q.

Table 5. Hit rate metric q without and with spatial sensitivity.

Table 6. Final correlation coefficient spatially averaged over each canyon.

Table 7. Hit rate metric q for the POD modes.

Table 8. Hit rate metric q.
14,687.4
2018-04-25T00:00:00.000
[ "Physics" ]
The structural basis for RNA slicing by human Argonaute2

SUMMARY

Argonaute (AGO) proteins associate with guide RNAs to form complexes that slice transcripts that pair to the guide. This slicing drives post-transcriptional gene-silencing pathways that are essential for many eukaryotes and the basis for new clinical therapies. Despite this importance, structural information on eukaryotic AGOs in a fully paired, slicing-competent conformation, hypothesized to be intrinsically unstable, has been lacking. Here we present the cryogenic-electron microscopy structure of a human AGO-guide complex bound to a fully paired target, revealing structural rearrangements that enable this conformation. Critically, the N domain of AGO rotates to allow the RNA full access to the central channel and forms contacts that license rapid slicing. Moreover, a conserved loop in the PIWI domain secures the RNA near the active site to enhance slicing rate and specificity. These results explain how AGO accommodates targets possessing the pairing specificity typically observed in biological and clinical slicing substrates.

Figure S1. Structural comparisons and purification of the HsAGO2-miR-7-target ternary complex. (A) Crystal-packing contacts for HsAGO2 RISC in the two-helix conformation (PDB: 6N4O [31]). The N and MID domains form packing contacts between two copies of HsAGO2 RISC in the two-helix conformation (colored red and yellow). The fully paired HsAGO2 structure (cyan) is overlaid, showing that it is in a conformation that would disrupt these packing interactions. (B) Crystal contacts for the TtAGO complex with a complementary target. At the top is the asymmetric unit of TtAGO, which contains two copies of TtAGO (colored yellow and purple) (PDB: 4NCB [35]), which form contacts between the N and PAZ domains. At the bottom, the fully paired HsAGO2 structure (cyan) is overlaid, showing that it would not accommodate the packing contacts. (C) Clash observed when the N domain from the HsAGO2 two-helix structure (PDB: 6N4O [31]) is overlaid with a 21-bp RNA duplex model generated in ChimeraX bound within the central channel. (D) Clash observed with the N domain when the RNA duplex from the AtAGO10 2-16-paired structure (PDB: 7SWF [34]) is extended to position 22. (E) MpAGO bound to an RNA-DNA hybrid fully paired to position 20 (PDB: 5UXO [38]) (red). The fully paired HsAGO2 structure (cyan) is overlaid. (F) Purification scheme for the HsAGO2(D669A)-miR-7-target ternary complex. (G) Size-exclusion chromatography of the ternary complex on a Superdex 200 3.1/200 column, followed by analysis on an SDS-polyacrylamide gel, visualizing protein with Imperial (Coomassie R-250) staining.

Figure S5. Analyses of the central-channel expansion and the EI loop.
(A) Frames 0-39 of the 3DFlex movie (Movie S3). The gradient depicts contracted (red) to expanded (cyan) states. (B) A ~15 Å distance between the EI loop modeled in our HsAGO2 structure and the target RNA from the HsAGO2 two-helix structure (PDB: 6N4O [31]). The EI loop is shown as a cartoon from residues 820-850, and the rest of HsAGO2 is shown as a surface. (C) Schematic of the miRNA guide-target duplexes examined in target-dissociation assays. The 32P radiolabel at the 5′ end of the target is indicated by an orange star. Otherwise, this panel is as in Figure 1B. (D) In vitro dissociation rate constant (koff) values for either wild-type (WT) HsAGO2 (dark grey) or HsAGO2 with phosphomimetic substitutions in the EI loop (red), for the two miRNA-target sets tested. Otherwise, this panel is as in Figure 2E. (E) Dissociation of target RNA from either wild-type (WT) HsAGO2 or HsAGO2 with phosphomimetic substitutions in the EI loop. Curves represent nonlinear best fits to an exponential decay equation. The number of data points for each set is indicated as n. Time points beyond the limits of the x axes are not shown.

Figure S2. Cryo-EM data collection and processing. (A) Representative micrograph low-pass filtered to 10 Å. This micrograph is representative of 12,106 micrographs. The scale bar is shown at the bottom left. (B) Representative 2D classes. The scale bar is shown. (C) Classification tree for determining the structure of HsAGO2 in a slicing-competent conformation.

Figure S3. Quality and resolution of cryo-EM data. (A) Estimate of average resolution. Dotted lines indicate Fourier shell correlations (FSC) of 0.5 and 0.143. Solid lines indicate the FSC between half-maps of the reconstruction. (B) Angular distribution plot for particles used to reconstruct Map 1. Shading from blue to yellow indicates the number of particles at a given orientation. (C) Reconstruction of the fully paired HsAGO2 complex, colored by local resolution (Map 1). (D) Model of the RNA duplex in density from Map 2. (E) Model of the N domain in density from Map 2. (F) Model of the L1 domain in density from Map 2. (G) Model of the PAZ domain in density from Map 2. (H) Model of the L2 domain in density from Map 2. (I) Model of the MID domain in density from Map 2. (J) Model of the PIWI domain in density from Map 2. (K) Model of PIWI-domain residues 672-695 in density from Map 2. (L) Model of the active site in density from Map 2.
Figure S4. Residues in the N and PIWI domains regulate HsAGO2 slicing activity. (A) On the left is a multiple sequence alignment of select AGO and PIWI homologs at the proposed N-domain contacts. Residues identical to HsAGO2 are shaded in teal. Residues that changed from HsAGO2 but remained basic are shaded in yellow. An H56L substitution in HsAGO3 is shaded in pink. Species codes are as follows: Hs, Homo sapiens (human); Mm, Mus musculus (mouse); Gg, Gallus gallus (chicken); Dm, Drosophila melanogaster (fruit fly); Bm, Bombyx mori (silkmoth); Ce, Caenorhabditis elegans (nematode); Sm, Schmidtea mediterranea (flatworm); Nv, Nematostella vectensis (starlet sea anemone); Ef, Ephydatia fluviatilis (river sponge); Ol, Oscarella lobularis (sea sponge); Dr, Danio rerio (zebrafish); Ca, Candida albicans; Kp, Kluyveromyces polysporus; Nc, Naumovozyma castellii; Sp, Schizosaccharomyces pombe (fission yeast); At, Arabidopsis thaliana; Os, Oryza sativa (rice); Bo, Brassica oleracea (wild cabbage); Nt, Nicotiana tabacum (tobacco); Aa, Aquifex aeolicus; Mj, Methanocaldococcus jannaschii; Pf, Pyrococcus furiosus; Tt, Thermus thermophilus. On the right is a multiple sequence alignment of select AGO and PIWI homologs at the central loop. Residues identical to HsAGO2 are shaded in green; otherwise, as in the left panel. (B) Fraction of target RNA sliced over time by either wild-type (WT) or N-domain mutants of HsAGO2-miR-7, across different RISC concentrations (gray gradient). Solid lines represent best fits from fitting to the ordinary-differential-equation system. The extrapolated reaction curve at infinite RISC concentration, which represents a reaction rate determined by only kslice and not kon, is plotted in black. Magenta dashed lines indicate simulated results that would have occurred if there were a substantial (10-fold) defect in the elementary rate constant for target association (kon). The number of data points for each set is indicated as n. Time points beyond the limits of the x axes are not shown. (C) Analysis of the major-groove width deviation from A-form of the RNA duplex in HsAGO2 in the slicing-competent conformation and in the AtAGO10 2-16-paired conformation (PDB: 7SWF [34]). (D) Central loop in the AtAGO10 2-16-paired conformation (PDB: 7SWF [34]). Colors are as in Figure 4D. (E) Glutamate finger, central loop, and active site of HsAGO2 in the fully paired conformation. Otherwise, as in D. (F) Fraction of target RNA sliced over time by either wild-type (WT) or central-loop mutants, across different RISC concentrations. Dashed lines indicate simulated results that would have occurred if there were a substantial (10-fold) defect in the elementary rate constant for target association (kon). Colors are as in Figure 4F; otherwise, as in B. Values with WT HsAGO2 are replotted from B for reference. (G) Fraction of centrally mismatched target RNAs sliced over time by either wild-type (WT) or central-loop mutants, across different RISC concentrations; otherwise, as in F.
1,713.6
2024-08-20T00:00:00.000
[ "Biology" ]
The Antioxidant and Safety Properties of Spent Coffee Ground Extracts Impacted by the Combined Hot Pressurized Liquid Extraction–Resin Purification Process Hot pressurized liquid extraction has been used to obtain polyphenols; however, its operating conditions can generate hydroxymethylfurfural, a potential human carcinogen. The addition of ethanol can reduce process temperatures and retain extraction efficiencies, but the ethanol may reduce the recovery of polyphenols in the subsequent purification stage, affecting the antioxidant properties of the extracts. This study evaluates a combined hot pressurized liquid extraction–resin purification process to obtain polyphenol extracts from spent coffee grounds with reduced hydroxymethylfurfural content. A multifactorial design was developed to determine the combined effect of the extraction (ethanol content: 0–16% and temperature: 60–90 °C) and purification (ethanol: 60–80%) conditions on selected chemical properties of the extracts. The highest recovery of polyphenols (~8 mg GAE/g dry coffee solids) and reduction of hydroxymethylfurfural (95%) were obtained at 90 °C and 16% ethanol during extraction and 80% ethanol during purification. These operating conditions retained between 60% and 88% of the antioxidant capacity of the crude extract, depending on the determination method, and recovered 90%, 98%, and 100% of 4-feruloylquinic acid, epicatechin, and 5-feruloylquinic acid, respectively, after purification. The combined process allows differential recovery of polyphenols and enhances the safety of the extracts. Our computational chemistry results ruled out a correlation between the overall selectivity of the integrated process and the size of the polyphenols.

Introduction Coffee is one of the major contributors to the dietary intake of polyphenols worldwide [1,2]; however, its high worldwide consumption (~9.6 million metric tons/year) generates large amounts of residues, which represent serious environmental problems [3]. The solid residue obtained after processing roasted coffee with hot water is known as spent coffee grounds (SCG). A better understanding of the physical and chemical interactions that govern the extraction and purification of polyphenols can contribute to optimizing this integrated process. In this sense, the use of computational chemistry tools is an attractive option to determine features of polyphenols that could help elucidate the main causes governing their differential separation [28–31]. This research assesses the effect of the operating conditions of a combined hot pressurized liquid extraction–resin purification process (HPLE-RP) on the differential recovery of polyphenols and HMF from SCG.

Materials and Methods Following a full factorial experimental design, SCGs were extracted by HPLE with the addition of small amounts of ethanol at different extraction temperatures. The extracts were subsequently purified (RP) using ethanol/water solutions as a desorption eluent. Their total polyphenol content (TPC) and HMF concentration were quantified to select the best operating conditions (within the range assessed in this study), defined as those at which the highest recovery of polyphenol compounds and the lowest HMF content are reached. Then, the antioxidant capacity (AOC) of the crude and purified extracts obtained at the best operating conditions was determined using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and oxygen radical absorbance capacity (ORAC) methods and compared with those obtained using a hot pressurized water extraction (HPWE)-RP process.
Additionally, the polyphenol profile of the best purified extract was determined and analyzed using computational chemistry.

Spent Coffee Grounds (SCG) SCGs (Coffea arabica L.) were obtained after preparing coffee with water at 80 °C and 4 min of repose in an Inox Oster filter coffeemaker (model OEMP50, Sunbeam Products, Inc., Boca Raton, FL, USA). Before the extraction experiments, SCGs were dried at 40 °C for approximately 15 h, until reaching a moisture content of 36.8 ± 0.1% (w/w). Then, the dried SCG samples were reduced to a particle size below 1 mm diameter in an Oster blender (Sunbeam Products, Inc., Boca Raton, FL, USA) and immediately frozen (−20 °C) until extraction (less than two months).

Hot Pressurized Liquid Extraction Homogenized SCG samples (~5 g) were subjected to HPLE in an Accelerated Solvent Extractor (ASE 150, Dionex, Sunnyvale, CA, USA) according to Vergara-Salinas et al. [32]. Polyphenol extraction was carried out at different extraction temperatures and ethanol contents, following the multifactorial experimental design described below. An additional SCG extraction was carried out using pure water (HPWE) at 200 °C. After extraction, the crude extracts were collected and stored in amber vials at −20 °C prior to the subsequent RP process and chemical analysis.

Macroporous Resin Purification of SCG Extracts SCG extracts were purified using a column (Ø: 25 mm; h: 100 mm) packed with ~10 g of HP-20 polystyrene resin (Diaion, Tokyo, Japan). This resin was selected in preliminary tests performed at our laboratory, due to its high adsorption capacity for SCG polyphenols compared to other resins (Sepabeads SP850, Amberlite XAD7). For polyphenol adsorption, 100 mL of SCG extract were passed through the resin at a flow rate of 5 mL/min. Then, polyphenol desorption was carried out using different ethanol/water solutions as eluents at a rate of 5 mL/min. The adsorption/desorption experiments were performed at 30 °C. Finally, the purified extracts were stored at −20 °C until chemical analysis.

Determination of the Total Polyphenol Content (TPC) The TPC of the SCG extracts was determined by the Folin-Ciocalteu assay [33] (Spectrometer UV 1240, Shimadzu, Kyoto, Japan). Results were expressed as mg of gallic acid equivalents (GAE) per g of dry spent coffee.

Determination of Antioxidant Capacity The antioxidant capacity of the SCG extracts was determined by DPPH and an improved version of the ORAC assay developed by Ou et al. [34]. The DPPH assay was performed by spectrophotometry (Spectrometer UV 1240, Shimadzu, Kyoto, Japan) according to the free-radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) method [35]. The efficient concentration (EC50; mg/mL), i.e., the extract concentration necessary to inhibit 50% of the DPPH absorption, was determined in triplicate, and results were expressed as mg of Trolox equivalents per g of dry SCG (mg TE/g). The ORAC assay was carried out in a PerkinElmer 2030 Multilabel Reader with 96-well black plates [34]. The Trolox-equivalent molar concentrations of the samples were calculated using a linear regression between the Trolox concentration and the corresponding net AUC [34]. To compare the antioxidant activity of the extracts, relative ORAC values were calculated as mg of Trolox equivalents per g of dry SCG.
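As a numerical illustration of the ORAC quantification step just described, the sketch below converts a net area under the fluorescence-decay curve (net AUC) into Trolox equivalents via a linear calibration. All time points, AUC values, and Trolox concentrations are hypothetical, not data from this study:

```python
import numpy as np

def net_auc(time_min, fluorescence, blank_auc):
    """Net area under the normalized fluorescence-decay curve (sample - blank)."""
    f_norm = fluorescence / fluorescence[0]            # normalize to initial reading
    auc = np.sum((f_norm[1:] + f_norm[:-1]) / 2 * np.diff(time_min))
    return auc - blank_auc

# Hypothetical Trolox calibration: concentration (µM) vs. net AUC
trolox_um = np.array([6.25, 12.5, 25.0, 50.0])
net_auc_std = np.array([4.1, 8.3, 16.2, 31.9])
slope, intercept = np.polyfit(net_auc_std, trolox_um, 1)   # linear calibration

# Hypothetical sample trace: fluorescence read every minute for 60 min
t = np.arange(0, 61, dtype=float)
f_sample = 40_000 * np.exp(-0.05 * t)                  # arbitrary decay shape
sample_auc = net_auc(t, f_sample, blank_auc=2.0)       # blank AUC assumed

trolox_equiv_um = slope * sample_auc + intercept
print(f"Trolox equivalents: {trolox_equiv_um:.1f} µM")
```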
Quantification of HMF The HMF concentrations of the extracts were measured by HPLC-DAD (Thermo Scientific Dionex Ultimate 3000, Waltham, MA, USA) equipped with a reverse-phase Acclaim 120 C18 column (5 µm, 120 Å, 4.6 × 150 mm), according to the methodology of Toker et al. [36]. Analyses were performed in triplicate and results were expressed as mg of HMF per g of dry spent coffee grounds.

Determination of the Polyphenol Profile The polyphenol profile of the extracts obtained at the optimal conditions was determined by solid-phase extraction on polymeric cartridges and liquid chromatography with diode array detection (SPE/HPLC-DAD), according to the methodology of Del Alamo et al. [37].

Experimental Design and Statistical Analyses A full factorial experimental design (3 × 4 × 3) with three replicates was developed to investigate the effect of extraction temperature (three levels: 60, 75, and 90 °C), co-solvent content (four levels: 0%, 5.3%, 10.5%, and 16.0% ethanol), and eluent concentration (three levels: 60%, 70%, and 80% ethanol) on the total polyphenol and HMF content of the SCG extracts obtained using the integrated HPLE-RP process. The experimental design consisted of 36 combinations of the independent variables (extraction temperature, co-solvent content, and eluent ethanol concentration) performed in random order. The HPLE-RP process and chemical analyses were performed in triplicate, with the data presented as mean and coefficient of variation (CV). To study the effects of the studied factors and their interactions on purification performance, analysis of variance (ANOVA) and least significant difference tests were applied to the response variables at a significance level of p ≤ 0.05. Statgraphics Plus for Windows 4.0 (Statpoint Technologies, Inc., Warrenton, VA, USA) was used for the statistical analysis.

Computational Chemistry Calculations were carried out using the Gaussian 09 program package [38] (Gaussian, Inc., Wallingford, CT, USA) running on a Sun Microsystems workstation (Sun Microsystems, Menlo Park, CA, USA). Geometries were optimized at the density functional theory (DFT) M062X/6-31+G(d,p) level. No imaginary vibrational frequencies were found at the optimized geometries, indicating that they are true minima of the potential energy surface.
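For readers who want to reproduce the structure of the experimental design described above, the sketch below enumerates the 3 × 4 × 3 factorial in random order and runs a three-way ANOVA using statsmodels rather than the Statgraphics package used in the study; the linear response model and noise level are invented purely to make the example runnable and carry no relation to the study's data:

```python
import itertools, random
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

temperatures = [60, 75, 90]             # °C, extraction
cosolvent = [0.0, 5.3, 10.5, 16.0]      # % ethanol, extraction
eluent = [60, 70, 80]                   # % ethanol, purification (RP)

# 3 x 4 x 3 full factorial = 36 treatment combinations, run in random order
runs = list(itertools.product(temperatures, cosolvent, eluent))
random.seed(42)
random.shuffle(runs)

# Hypothetical triplicate TPC responses (mg GAE/g dry SCG)
np.random.seed(0)
rows = [(t, c, e, 5 + 0.03 * t + 0.10 * c + 0.02 * e + np.random.normal(0, 0.3))
        for (t, c, e) in runs for _ in range(3)]
df = pd.DataFrame(rows, columns=["temp", "cosolvent", "eluent", "tpc"])

# Three-way ANOVA with all interactions on the simulated responses
model = ols("tpc ~ C(temp) * C(cosolvent) * C(eluent)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```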
Impact of the Operating Conditions of a HPLE-RP Combined Process on the Chemical Composition and Antioxidant Properties of SCG Extracts The impact of the operating conditions on the TPC and HMF contents of the purified extracts obtained by the HPLE-RP combined process is shown in Table 1. The best operating conditions found were 90 °C and 16% ethanol during extraction and 80% ethanol during purification, at which the TPC and HMF contents of the SCG extracts were 8.46 mg/g dry coffee solids and 1.82 µg/g dry coffee solids, respectively (Table 1). An increase in the operating conditions during HPLE (temperature from 60 °C to 90 °C and co-solvent concentration from 0% to 16%) increased the extraction of total polyphenols by almost 70% (from 8.21 to 13.87 mg GAE/g dry SCG), as can be observed in Figure 1a. A higher extraction temperature favors the mass transfer of polyphenols from the raw material to the extraction solvent. Additionally, because of their polarity, the higher the co-solvent content, the higher the total polyphenol content of the extracts [22]. In our study, we found that the SCG crude extracts contain considerable amounts of HMF (Table 1).

Roasting of coffee beans is commonly performed at extremely high temperatures (~220 °C), triggering the generation of Maillard compounds such as HMF [20,21]. However, we observed that the addition of ethanol, up to 16% during HPLE, decreased the HMF content of the SCG extracts by ~50% (Figure 1b). The addition of a less polar co-solvent in the HPLE process reduced the polarity of the medium, affecting the solubility of HMF and disfavoring its extraction [39].

Successful formulations of nutraceutical and functional ingredients using SCG extracts will depend not only on their antioxidant capacity, but also on their purity and safety. Consequently, the application of a subsequent RP stage was assessed. Increasing the ethanol content of the eluent (from 60% to 80%) during RP improved the recovery of polyphenols by 20% (Figure 2a). At the same time, higher ethanol contents reduced the HMF in the purified extract by up to 95% (Figure 2b). These results indicate that, at the best operating conditions found, the combined HPLE-RP process improves the selective recovery of polyphenols, since HMF is practically eliminated.

On the other hand, the integration of the RP stage after the extraction process decreased the TPC and the AOC, both in the HPLE (TPC: 34%, DPPH: 12%, and ORAC: 40%) and the HPWE processes (TPC: 46%, DPPH: 34%, and ORAC: 43%), as seen in Figure 3a,b. Different AOC responses were observed for DPPH and ORAC (Figure 3a,b), which can be attributed to intrinsic characteristics of the assays as well as to variations in the polyphenol profiles of the crude and purified extracts. ORAC analysis evaluates the AOC against small radicals, while DPPH is a bulky radical that suffers steric inaccessibility, specifically for polyphenols with strong antioxidant activities. This size difference increases the probability of reaction when the ORAC method is applied; in contrast, when the AOC of extracts is determined by DPPH, the system may react slowly or may even be inert to DPPH [40]. Additionally, some polyphenols present different ORAC and DPPH values. Therefore, if the polyphenol profile of the extracts changes, their ORAC and DPPH values do not necessarily change by the same magnitude [41,42]. Although HPWE-RP could produce an extract with higher AOC and TPC than the best HPLE-RP operating conditions found in this study, the HPWE-RP extract contained considerable amounts of HMF (~7.6 mg/g dry SCG), due to the high extraction temperature (200 °C) applied [43].

The interactions between the polyphenols and the resin, as well as the effect of the polarity of the solvent on them, can help explain the differential recovery obtained. HP-20 is a polyaromatic adsorbent resin without polar groups. It is reasonable to assume that, in addition to van der Waals forces, π-stacking can occur between the aromatic rings of the polyphenols and the aromatic rings on the surface of the resin [44,45]. However, all these forces can be altered by the properties of the solvent beyond a given polarity threshold [46,47]. To assess a possible association between molecular size and the differential recovery of polyphenols, the molecular dimensions of all compounds identified in the profile were determined by quantum chemical calculation (DFT M062X/6-31+G(d,p) level). From the optimized geometry of each compound, the largest diameter and the molecular volume were determined. We found no correlation between these molecular dimensions and the observed differential recovery of polyphenols (Table 2). Additionally, Table 2 shows no correlation between the number of aromatic rings in the molecules and their overall recovery in the integrated process. The analyzed SCG polyphenols either present specific interactions with the resin or fulfil specific topological requirements that explain the high selectivity of the optimized process. We hypothesize that, even under the best operating conditions, the adsorption polarity threshold of some SCG polyphenols was not reached. Therefore, even though the co-solvent addition level was increased, the hydrophobic interactions were not affected, and the polarity of the HPLE extracts was still appropriate for efficiently recovering these compounds. It seems, however, that for the SCG polyphenols not found in the purified extract, either the polarity of the optimum extraction solvent altered their interaction with the resin, or they were irreversibly adsorbed in the pores of the resin. Elucidating the true mechanism requires further experiments that are beyond the scope of this study.

Conclusions The use of ethanol as co-solvent during HPLE improves the extraction efficiency of polyphenols at moderate temperatures and disfavors the recovery of HMF from SCG. Ethanol addition up to 16% in the extraction stage yields overall recoveries over 90% for 4-feruloylquinic acid, 5-feruloylquinic acid, and epicatechin, although some polyphenols, such as gallic acid, caffeic acid, 5-p-coumaroylquinic acid, 3,5-dicaffeoylquinic acid, and 3-feruloyl-4-caffeoylquinic acid, were not recovered in the purified extract. It was ruled out that the high polyphenol selectivity observed in the RP process was related to the size of the molecules. Additionally, the best HPLE-RP operating conditions eliminated 95% of the HMF from the purified extract (1.82 µg HMF/g dry SCG), retaining the antioxidant capacity of the crude extract between 60% and 88%, depending on the determination method (DPPH and ORAC, respectively).
Extractions with pure water at high temperatures (200 °C) produce purified extracts with undesirably high contents of HMF (8.69 mg HMF/g dry SCG).
4,485.6
2017-12-22T00:00:00.000
[ "Chemistry", "Environmental Science" ]
A Model-Based Flood Hazard Mapping on the Southern Slope of Himalaya: Originating from the southern slope of Himalaya, the Karnali River poses a high flood risk to downstream regions during the monsoon season (June to September). This paper presents comprehensive hazard mapping and risk assessments in the downstream region of the Karnali River basin for different return-period floods, with the aid of HEC-RAS (Hydrologic Engineering Center's River Analysis System). The assessment was conducted on a ~38 km segment of the Karnali River from Chisapani to the Nepal–India border. To perform the hydrodynamic simulations, a long-term time series of instantaneous peak discharge records from the Chisapani gauging station was collected. Flooding conditions representing 2-, 5-, 10-, 50-, 100-, 200-, and 1000-year return periods (YRPs) were determined using Gumbel's distribution. With an estimated peak discharge of up to 29,910 m³/s and flood depths of up to 23 m in the 1000-YRP, the area vulnerable to flooding in the study domain extends into regions on both the east and west banks of the Karnali River. Such flooding of agricultural land poses a high risk to food security, which directly impacts residents' livelihoods. Furthermore, the simulated flood of 2014 (equivalent to a 100-YRP) showed a high level of impact on physical infrastructure, affecting 51 schools, 14 health facilities, 2 bus stops, and an airport. A total of 132 km of rural–urban roads and 22 km of highways were inundated during the flood. In summary, this study can support future planning and decision-making for improved water resources management and the development of flood control plans on the southern slope of Himalaya.

Introduction Extreme weather conditions (e.g., intense precipitation events) increase the probability of disasters that may cause unusual and unexpected events such as flooding and flood-related hazards [1–6]. Continuous but varying precipitation ultimately causes the flow of a river to exceed a threshold, such that it breaches the river bank or previous flood restoration work, resulting in flooding [7]. Furthermore, the lack of a proper development plan, land-use changes [8], the haphazard building of infrastructure in floodplains, and the blocking of rivers all tend to increase the likelihood of flooding. Therefore, floods are considered one of the most severe and most frequent water-induced natural disasters, causing major damage to habitat, infrastructure, and property worldwide, regardless of geographical or hydrological location, and having direct economic impacts [9–13]. Recent floods on the southern slope of Himalaya (e.g., in Pakistan, Bangladesh, India, and other South Asian nations) have caused thousands of fatalities, the displacement of millions of people, and billions of dollars of damage [14,15]. Other transboundary flood events from 1985 to 2018 have caused the deaths of more than 6000 people and displaced millions in China, Nepal, and India (Table 1). Nepal experiences devastating floods each year that cause 29% of its annual deaths and 43% of its total loss of property [16–18]. Developing countries such as Nepal, Bangladesh, and Myanmar are still struggling to minimize the adverse effects of flooding, while developed countries manage flood risk with increasingly sophisticated flood forecasting models and methods to protect the larger floodplains [19,20].
Therefore, long-term planning and flood control mechanisms are needed in Nepal to reduce the impact of flooding. Many studies have focused on identifying floodplains and flood hazards since 1990 in different parts of the world [21–25]. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) model [26], developed by the U.S. Army Corps of Engineers (USACE), is widely used for investigating flooding and flood-related hazards and for identifying floodplains globally [21,23,27–30]. For instance, [31] integrated this model with ArcGIS, in 2D and 3D, in the South Nation River system of Ottawa, Canada. Similarly, [32] applied this model in conjunction with an ArcGIS extension (HEC-GeoRAS) in Los Alamos, New Mexico, USA, for floodplain delineation. [33] identified risk areas for constructing flood control structures on the Barsa River, Bhutan, using a hybrid ArcGIS and HEC-RAS approach. Mapping and risk classification were carried out to determine the location, velocity, and depth of floods in Morocco [34]. This model was also used to simulate flood flows and inundation levels for the downstream floodplain of the Huong River basin, Vietnam [35]. In Nepal, based on HEC-RAS and its ArcGIS extension (HEC-GeoRAS), previous studies have mapped the flood hazard in the Bishnumati and Balkhu rivers of Kathmandu [36,37]. Moreover, other freely accessible tools (e.g., the RiverGIS plug-in for QGIS, the commercial HEC-GeoRAS, and RAS Mapper in HEC-RAS) can be used in model preparation, because HEC is no longer developing GeoRAS. These studies in Nepal have also emphasized the importance of geo-informatics in urban river management during flooding. Several flood modeling programs and software packages have benefited the scientific community, regardless of the nature of the flooding. To alleviate the negative consequences of flooding, both structural and non-structural measures are key factors in minimizing the flood hazard. Structural measures such as culverts, dams, and dykes are considered hard measures, which can sometimes be harmful to both the environment and people. Hard measures can be costly and time-consuming, yet soft measures are equally important for saving lives and property. Studies addressing these measures should therefore be organized and categorized to distinguish hard from soft approaches to flood control. Flood risk can thus be minimized through soft techniques such as hazard mapping, risk zonation, and the enhancement of flood forecasting and early warning systems. In the Karnali River basin (KRB), only limited studies of soft measures have been conducted, as initial steps towards establishing warning systems, danger levels, and rough inundation mapping [38,39]. However, these studies remain far from sufficient to assess inundation extent and water depths for flood control measures in the KRB. The KRB in western Nepal lacks a network of climatic stations with which to assess changes in climatic conditions throughout the region [40] and is one of the most topographically challenging river basins. The two major rivers (Karnali and Bheri) converge in the KRB before passing through several gorges lying across the Nepal–India border; flooding in this region is a major problem for the downstream population. Floods occurred on the Karnali River in 1963, 1983, 2008, 2013, and 2014; these caused numerous fatalities and widespread damage to infrastructure.
The 2014 flood was one of the most severe in the history of the Karnali River, even reaching relatively safe areas; the water level on 15 August 2014 crossed the danger-level mark around midnight, and floodwaters inundated all the villages downstream, killing 220 people and severely affecting 120,000 others [41]. Despite global efforts to improve technology and flood mapping, developing countries like Nepal still struggle each year to cope with flooding. However, we note that the Karnali Chisapani Multipurpose Project, in the Chisapani area, is planned with the aim of storing about 16.2 billion cubic meters of water; this will have benefits for irrigation, flood control, and navigation, as well as producing 10,800 MW of hydropower [42]. The objective of this study is to prepare flood inundation maps and delineate flood hazard zones in a typical region on the southern slope of Himalaya. This can be a preliminary step in determining risk zonation for different spatial inundation scenarios, which can aid the preparation of early flood warning systems and proper communication mechanisms in downstream communities. The paper first discusses the study domain and the flow modeling. Next, model calibration and validation, with the help of observed water levels, are presented. After generating the inundation map, different land-use types, settlements, and infrastructure, with their associated vulnerabilities, are identified; this is the initial step towards comprehensive risk mapping. Evacuation maps and safe shelters can thus be planned with reference to such inundation/hazard and risk maps. Finally, the paper suggests effective land-use planning strategies that will ultimately benefit 175,782 people along the Karnali River.

Study Area The three trans-Himalayan rivers in Nepal are the Karnali, Narayani, and Koshi (Figure 1). Each has numerous tributaries and flows towards the Indo-Gangetic Plain (IGP) [43]. The geomorphological configuration of mountains, hills, and plains in Nepal promotes the development of severe weather conditions such as localized precipitation and thunderstorms with hail [44,45]. Due to the very steep topographic gradient between the lowlands and high mountains of the Himalaya, downstream regions commonly suffer from severe flooding throughout the monsoon season. Owing to the high variability in the distribution of rainfall [46,47] and its topographic controls, about 80% of the total rainfall falls in the monsoon season (June to September) [48], with the remaining 20% in other seasons [47]. The average annual precipitation in this region reaches 1479 mm, and the average maximum and minimum temperatures are 25 °C and 13 °C, respectively [49]. The Karnali River lies between the mountain ranges of Dhaulagiri and Nanda Devi, in the western part of Nepal. The basin extends from 28.2°–30.4° N and 80.6°–83.7° E, covering a total area of 45,269 km² [50] and yielding an average annual discharge of 1441 m³/s [51]. The average observed instantaneous peak discharge over the years 1962 to 2015 was 9672 m³/s, with strong fluctuations: the maximum was 21,700 m³/s and the minimum 4300 m³/s. The river is fed by 57 tributaries from six major watersheds (West Seti, Karnali, Humla-Karnali, Mugu Karnali, Tila, and Bheri), all of which originate in Nepal except for the Humla-Karnali, which originates on the Tibetan Plateau in China.
The Karnali and Bheri are the two major rivers in the KRB; they converge and flow through several gorges across the Nepal–India border and finally join the Ganges in northern India. The study domain selected here is a reach approximately 38 km downstream from Chisapani (Figure 1), in a region suffering from destructive flooding that causes significant loss of human life and extensive damage to agriculture, human settlements, and other physical infrastructure. Figure 1 shows the study area and its elevation in Triangulated Irregular Network (TIN) form; the location of the study area in Nepal and the three major river basins is also provided for reference.

Data Sets In this study, the water level in the 38 km reach downstream of Chisapani station in the Karnali River Basin (KRB) was simulated to determine the floodplain extent for different year return periods (YRPs) using the HEC-RAS 1D model. The location was chosen based on its vulnerability. The data sets used in the study are summarized as follows. The Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), with a ground resolution of 30 m, is available from the United States Geological Survey (USGS) web portal [52]. The land-use map (Figure 2) was obtained from the International Centre for Integrated Mountain Development (ICIMOD) website [53]. Instantaneous peak discharge records (Figure 3) were obtained from [54]. Demographic data (Figure 4) were obtained from the Central Bureau of Statistics, Nepal [55] and were supplemented with field work to analyze the social impacts. Field work was carried out to collect water surface elevations, GPS locations, and land-use characteristics of the floodplain.

Methodology We used the HEC-RAS hydraulic model [26], developed by USACE, to simulate 1D steady flow based on calculated river hydraulics. Floods are generally unsteady; therefore, as an initial step for risk zonation of the inundation area, steady-state conditions were used in this study for the different YRP floods. The steady-state approach calculates water levels at discrete cross-sections using the flows assigned in the model. Routing is based on the dynamic wave theory of the Saint-Venant equations, as well as the continuity and momentum equations [26]. Similarly, the energy balance equation (Equation (1)) is the basis for the calculation of the water surface profile:

$Z_2 + Y_2 + \frac{a_2 V_2^2}{2g} = Z_1 + Y_1 + \frac{a_1 V_1^2}{2g} + h_e$  (1)

where $Z_1$, $Z_2$ are the elevations of the main channel inverts; $Y_1$, $Y_2$ are the depths of water at the cross-sections; $V_1$, $V_2$ are the average velocities; $a_1$, $a_2$ are the velocity weighting coefficients; $g$ is the gravitational acceleration; and $h_e$ is the energy head loss [26]. Here, HEC-RAS uses a semi-implicit solution algorithm, which is a combination of implicit and explicit finite-difference schemes [56]. Figure 5 shows the overall framework used in this study. Firstly, flood frequency analysis was performed for the different return periods (2-, 5-, 10-, 50-, 100-, 200-, and 1000-YRPs) using Gumbel's distribution [57–59]. To assess the goodness of fit, confidence limits at the 95% level were computed using the standard normal variate ($\alpha = 1.96$) (Equation (2)):

$x_T \pm 1.96\,S_e, \qquad S_e = b\,\frac{S_x}{\sqrt{n}}, \qquad b = \sqrt{1 + 1.3K_T + 1.1K_T^2}$  (2)

where $S_e$ is the probable error, $S_x$ is the standard deviation, $n$ is the sample size, and $K_T$ is the frequency factor, which depends upon the type of distribution and the return period. The frequencies of both the maximum and minimum discharges were determined, and the calculated flow-rate frequencies lying within the outlier bounds were applied in the model (Table 2).
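As an illustration of this frequency analysis, the sketch below estimates return-period discharges with Gumbel's (Extreme Value Type I) distribution in the frequency-factor form $x_T = \bar{x} + K_T S_x$. The annual-peak series is synthetic and the reduced-variate constants are the large-sample asymptotic values, so the numbers do not reproduce Table 2:

```python
import numpy as np

def gumbel_quantile(peaks, T, yn=0.5772, sn=1.2826):
    """Discharge for return period T using the Gumbel frequency factor.

    yn, sn default to the asymptotic (large-sample) reduced-variate mean
    and standard deviation; finite-sample tables refine them for short records.
    """
    xbar, sx = np.mean(peaks), np.std(peaks, ddof=1)
    yt = -np.log(-np.log(1 - 1 / T))      # reduced variate for return period T
    kt = (yt - yn) / sn                   # frequency factor K_T
    return xbar + kt * sx

# Synthetic annual instantaneous peak discharges (m^3/s), not the Chisapani record
rng = np.random.default_rng(0)
peaks = rng.gumbel(loc=8000, scale=3000, size=54)

for T in (2, 5, 10, 50, 100, 200, 1000):
    print(f"{T:>5}-YRP discharge: {gumbel_quantile(peaks, T):,.0f} m^3/s")
```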
The extreme values calculated using Gumbel's distribution were applied, together with the maximum instantaneous peak discharge recorded during the historical peak flood, as flow inputs to the model. In the second step, an SRTM DEM [60] was used to create a Triangulated Irregular Network (TIN) file containing geometric data to verify the height approximations in ArcGIS. The HEC-GeoRAS preprocessing was then completed, and the final output from ArcGIS and HEC-GeoRAS was processed in the HEC-RAS 1D steady flow simulation. Manning's roughness coefficients (n = 0.04 or 0.02) were selected based on visual inspection of the river channel both upstream and downstream, as suggested by [58,61], and were adjusted slightly to calibrate the model [56,62]. Similarly, Manning's roughness coefficients were specified for forest (0.08), agriculture (0.035), bare ground (0.03), shrubland (0.035), and grassland (0.032) according to visual inspection of the floodplain characteristics. The model selected here is considered viable for flood inundation mapping because the geographical and streamflow conditions in the present application are similar to those described in previous studies [63]. The river's transition to the lower plains, with a significant slope change, can be a determinant factor in the HEC-RAS model. Since the cross-sections are extracted from the DEM, the terrain slopes are determined as 0.091 (left) and 0.678 (right), with a bed slope of 0.641 in the investigated region. Floodplain maps represent key planning documents by providing a visual interpretation of the spatial variability of flood extents and future flood hazards [64,65].

Model Configuration Hydrodynamic modeling in HEC-RAS involved several steps in the pre- and post-processing stages. In pre-processing, the geometric data and other required themes were prepared in an ArcGIS environment with HEC-GeoRAS and exported to .sdf format for HEC-RAS. The themes included generation of the river centerline, bank lines, and flow paths from the DEM and World Imagery in ArcGIS; these data were imported into HEC-GeoRAS layers. The TIN and cross-section cut lines in the study domain were constructed for the river using the DEM alone. Geometric data prepared in the pre-processing phase of HEC-GeoRAS were imported into HEC-RAS for the hydrodynamic modeling. Hydraulic data, including flow data and associated boundary conditions, were used in the HEC-RAS flow plan, where the calculated flood frequencies for the different YRPs (including the maximum peak flood recorded at the Chisapani gauging station) were applied to the river cross-sections. The steady-state flow simulation was then carried out to calculate the water surface profile under a mixed flow regime. Calculated water surface profiles were exported to GIS format for post-processing. In the post-processing phase, results from HEC-RAS were imported into the HEC-GeoRAS platform after layer configuration with terrain input. The water surfaces for the different flow plans were generated for each return period, along with the corresponding flood frequency analysis of the maximum peak discharge. The flood inundation map and floodplain boundary were then generated for each water surface profile, using the TIN.

Model Calibration and Validation Model calibration and validation are essential aspects of hydraulic simulation. Two different stations were used to verify the model's satisfactory performance before further simulations.
The model was calibrated using observed instantaneous peak discharge gauge heights recorded over a long period (1962 to 2014) at the Chisapani hydrological station; the model was then validated with field observations at Satighat. For calibration, the peak flood events of selected years (1962, 1963, 1968, 1970, 1971, 1973, 1975, and 1983) were taken, whereas validation was based on the major floods of 2009, 2013, and 2014. After performing repeated simulations and readjusting the river channel water level to the datum height, the difference between observed and simulated water levels was found to be negligible. The Hicks and Peacock equation [66] (Equation (3)) was adopted to assess simulation performance based on the percentage difference between simulated and observed water levels during historical peak flood events in the downstream region of the Karnali River:

$\mathrm{Error}\,(\%) = \frac{\left|S_{wl} - O_{wl}\right|}{O_{wl}} \times 100$  (3)

A lower percentage error between the simulated ($S_{wl}$) and observed ($O_{wl}$) water levels indicates a better performance of the model. Thus, the simulated 2014 flood water level, with a difference of only 3.61% between the observed (15.2 m) and simulated (15.75 m) water levels, together with the R² values obtained in calibration (0.95) and validation (0.98), demonstrates good model performance. A detailed comparison of the calibration and validation processes is illustrated in Figure 6.

Results The simulated floods for all the YRPs showed significant flooding conditions on both banks of the Karnali River; accordingly, the HEC-RAS simulation model was used to prepare flood inundation maps with changes in depth for the different YRP flood frequencies. The inundation maps show the variation in flood depth over the floodplain with increasing discharge and provide a qualitative picture of the depth and extent of inundation.

Flood Frequency Analysis Flood frequencies obtained using Gumbel's distribution for the different YRPs are presented in Table 2 and, on a log chart, in Figure 7, along with the recorded historical peak discharges for different years. Of the three types of extreme value distribution, the Type I (Gumbel) straight line was determined to be the most appropriate. The discharges predicted by Gumbel's distribution were 9088, 12,696, 15,085, 20,343, 22,565, 24,780, and 29,910 m³/s for the 2-, 5-, 10-, 50-, 100-, 200-, and 1000-YRPs, respectively (Table 2; Figure 7). In Figure 7, the YRP is plotted on a logarithmic scale for clarity, and the historically observed discharge is given for reference.

Comparison and Analysis of Water Surface Profile and Water Surface Elevation We used two river stations for comparison of water levels and water surface elevations in the model simulation: an upstream station at Chisapani bridge and a downstream station at the Satighat observation station. The upstream station at Chisapani bridge was used for calibration, whereas Satighat was used for validation. Modeled water surface elevations at Chisapani station under the 10-, 100-, and 1000-YRP floods and the 2014 peak flood reached 212 m above sea level (a.s.l.) (Figure 8). Under these same YRPs, the river level downstream at Satighat reached 145 m a.s.l. Due to the strongly heterogeneous river bed and slope, it is very difficult to estimate water flow rates during flood and non-flood periods, as illustrated by Figure 8. The river flow varies according to the weather conditions in the KRB and the smaller rivers joining the Karnali River.
Thus, the water surface profile obtained by the model reproduces the natural characteristics of the river as observed during the site visit. Simulated water levels in the two modeled cross-sections at Chisapani and Satighat showed higher overflows from the riverbank during the 10-, 100-, and 1000-YRPs and the 2014 flood, as illustrated in Figure 9. The output cross-sectional plots generated by the model (Figure 9) showed that water levels exceeded river bank heights even during the 10- and 100-YRPs, indicating a situation of critical concern in the downstream area. As expected, in all cross-sections of the upper and lower reaches, the water surface elevation under the 2014 flood was higher than that under the 100-YRP flood. Since we assume the magnitude of the 2014 flood was approximately that of the 100-YRP, then according to the calculated and observed discharge, the water levels of the 2014 flood in the upper reach (Chisapani) and lower reach (Satighat) would have resembled those illustrated in Figure 9. In addition, the maximum instantaneous discharges in 1983 and 2014 at Chisapani station were both around 21,700 m³/s, whereas the peak discharge was 16,000 m³/s in 1975 and about 17,000 m³/s in 2009. Such variable discharge of the Karnali River indicates that future floods could occur at any time.

Simulation of the 2014 Flood Event The 2014 flood simulation for the Karnali River was executed in HEC-RAS using steady flow methods. Boundary conditions were established for all river nodes with normal depths after visual inspection of the study domain. Similarly, an initial condition was imposed by populating the model with the observed water discharge during the flood. The simulated and observed water surface elevation and inundation area corresponded to a 100-YRP; however, [41] reported that this flood event was equivalent to a 1000-YRP and argued that the water level was as high as 16.1 m. In the absence of verified data for different flood events, the 2014 Karnali River flood can be considered a historical flood, but its exact magnitude cannot be constrained as either a 100- or a 1000-YRP. The observed discharge can be compared with the discharge calculated using Gumbel's distribution at both river stations (Figure 9). Moreover, the estimated discharge during flooding is greater than that of normal flow, as shown in the simulation of the 2014 Karnali floods. The Karnali River has many tributaries fed by snow melt, drops in altitude by approximately 7000 m, and passes through the deep gorges of the Himalayas; therefore, its discharge varies considerably, leading to the high flood risk simulated by the model. The left bank of the river was found to be more vulnerable than the right, as the surrounding terrain is slightly lower near the left bank. The simulated water surface elevations for the 2014 flood showed that the greatest inundation of the study domain was due to overflow of the river in the lower reach. Therefore, further riverbank restoration is required in this part of the river. The Department of Hydrology and Meteorology (DHM) has established threshold gauge heights of 10 m and 10.8 m for the warning and danger levels, corresponding to 201.64 m and 202.44 m a.s.l., respectively [67] (Table 3). The discharge recorded on 15 August 2014 was 21,700 m³/s, with an observed water level of 15.2 m at the Chisapani gauging station; this illustrates the severity of flooding in the downstream reach of the river, which leads to overflow from both banks.
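To make the validation metric of Equation (3) and the DHM thresholds concrete, the short sketch below computes the percentage error for the 2014 event and flags gauge heights against the warning and danger levels. The datum offset is inferred from the reported gauge-height/elevation pairs (10 m corresponds to 201.64 m a.s.l.), so treat it as illustrative:

```python
def percent_error(simulated_wl, observed_wl):
    """Hicks and Peacock style percentage difference (Equation (3))."""
    return abs(simulated_wl - observed_wl) / observed_wl * 100

# 2014 flood at Chisapani: observed 15.2 m vs. simulated 15.75 m
print(f"2014 error: {percent_error(15.75, 15.2):.2f}%")   # ~3.62%

DATUM = 191.64                 # m a.s.l.; inferred from 10 m -> 201.64 m a.s.l.
WARNING, DANGER = 10.0, 10.8   # DHM gauge-height thresholds (m)

def flood_status(gauge_height_m):
    """Classify a Chisapani gauge reading against the DHM thresholds."""
    elevation = DATUM + gauge_height_m
    if gauge_height_m >= DANGER:
        level = "DANGER"
    elif gauge_height_m >= WARNING:
        level = "WARNING"
    else:
        level = "normal"
    return level, elevation

for h in (9.5, 10.3, 15.2):
    level, z = flood_status(h)
    print(f"gauge {h:>5.1f} m -> {z:.2f} m a.s.l. [{level}]")
```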
Flood Hazard Mapping The flood hazard is directly linked to the hydraulic and hydrological parameters. Here, a hazard can be defined as a threatening natural event with a specified probability of occurrence [15]. Quantifying the flood hazard level and mapping its potential damage can be achieved using the water depths reached during flooding. The flood hazard was therefore mapped after reclassifying the flood depths into five classes: <1 m, 1–2 m, 2–4 m, 4–6 m, and >6 m. This classification was a crucial step in quantifying the flood hazard and its potential future damage. The areas bounded by each flood polygon were calculated to assess the hazard level. Therefore, effective land-use planning and its recommendations can be implemented in the downstream region of the KRB with the help of the soft measures used in this study. The flood inundation and hazard mapping in this study show that the existing warning and danger levels correspond well with observations, as demonstrated by the simulated water surface elevations at Chisapani and Satighat. The flood depth classification shows that most of the inundated area has a water depth of less than 1 m (Figure 10), but the area with water depths greater than 2 m increases considerably as flood intensity increases; correspondingly, the area with lower depths decreases (Figure 10). Some areas become inundated even in a "normal" flood scenario of the 2-YRP, demonstrating a critical situation in the downstream reaches of these rivers. Flood depths exceeding 6 m in the 1000-YRP flood affect a slightly greater area than in the 100-YRP flood. There is a relationship between intensity and loss. During the early 1990s, a smaller number of flood events were relatively more dangerous to people downstream than at present, due to the lack of a flood forecasting system. As technology has developed, the number of flooding casualties has decreased, but flooding still displaces millions of people. Classifying the data recorded up to 2018 (Table 1) [68] shows that transboundary floods are more severe, causing many fatalities and displacing inhabitants, leaving them homeless. Hence, this study reveals the link between flood depth and flood hazard. During extreme flooding events in the monsoon season, the discharge of snow-fed rivers becomes high due to the combination of snow melt and rain, leading to flooding and landslides. The downstream region of the study domain comprises the flat plains of Terai and is more vulnerable to flood water, which can easily inundate settlements. Figure 11 shows that the flood water level is significantly higher than normal in the river channel and that depths of 4–6 m can also be reached at the river banks. Similarly, the simulated 2014 flood also yielded high water levels. Although the depth of the Karnali River can be very high due to its stream characteristics, the area of inundation becomes much greater once the banks overspill. Therefore, it is important to deliver early warning messages via mass communication systems and to coordinate with regional and local government offices to minimize losses. To assist with this objective, we classified the flood depths at different locations based on the occurrence and severity of the flood hazard faced by local communities; a sketch of this reclassification step is given below. This complements the flood mapping used by the DHM, in that we investigated the flooding extent in detail for different locations and flood depths.
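The depth-reclassification sketch bins a gridded flood-depth array into the five hazard classes and totals the area per class. A hypothetical 30 m raster stands in for the study's actual HEC-GeoRAS depth grid:

```python
import numpy as np

# Hypothetical 30 m flood-depth raster (m); a real input would be the
# exported HEC-RAS/HEC-GeoRAS water-depth grid for a given YRP
rng = np.random.default_rng(1)
depth = np.clip(rng.exponential(scale=1.5, size=(500, 500)), 0, 12)

CELL_AREA_HA = 30 * 30 / 10_000          # one 30 m cell in hectares
bounds = [1, 2, 4, 6]                    # class edges: <1, 1-2, 2-4, 4-6, >6 m
labels = ["<1 m", "1-2 m", "2-4 m", "4-6 m", ">6 m"]

wet = depth[depth > 0]                   # consider only inundated cells
classes = np.digitize(wet, bounds)       # indices 0..4 map to the five classes

for i, label in enumerate(labels):
    area_ha = np.count_nonzero(classes == i) * CELL_AREA_HA
    print(f"{label:>6}: {area_ha:,.1f} ha")
```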
Flood Vulnerability Assessment Nepal is a mountainous country with a high risk of flooding, landslides, and soil erosion, especially during the monsoon season [69]. The flat plains of Nepal promote monsoon-season flooding and inundation across a large area, mostly on the IGP along the southern flank of the Siwalik Zone [67,70]. The main causes of inundation are ineffective land-use planning, negligent floodplain management, and insufficient research prior to the implementation of further development programs. Another major problem that aggravates flooding downstream of the Siwalik Zone near the Nepal–India border is the illegal construction of infrastructure without proper research into land settings, together with a lack of cross-drainage passages and embankments on the IGP [71]. The different land-cover types and their vulnerabilities are illustrated in Figure 12. The floodplain assessment demonstrated the highest percentage of vulnerability in the downstream region of the KRB (up to 75%) in the cultivated area (53,338 ha), followed by 14,658 ha of barren land. Although only a negligible area of built-up land was inundated, this small area comprises a densely populated county situated close to the river. Counties located along the bank of the Karnali River (Janaki, Tikapur, Geruwa, Madhuwan, Thakurbaba, Rajapur) in the study area are at high risk of flooding during the monsoon. Following the February–March 2018 survey, this study suggests that the population in the KRB is more vulnerable than those near the edges of the floodplain. Tikapur, Rajapur, and Geruwa are densely populated counties where 68,917 residents live in 13,214 households designated as high flood risk. The simulated water level at the Chisapani gauging station can indicate a high level of inundation in downstream regions, depending on the YRP (Figure 13). As the flood discharge increases in the upstream region, the water level reaches around 16 m in the river channel and the inundation affects more than 10,000 households. Moreover, our study suggests that key facilities could be affected during future floods in the Karnali River corridor, including floods such as that of 2014. The major transportation routes (one airport and two major bus terminals), as well as local trade routes and link roads (including 164 km of highways) (Figure 14), are at risk of inundation and the resulting collapse of infrastructure. As seen in the inundation scenario of the 2014 flood, many schools, hospitals, and settlements were heavily inundated and remain at high risk of future inundation. The extent of major inundation events and their effect on settlements is presented in Figure 15. The simulated flood of 2014 roughly corresponded to a 100-YRP flood, inundating 8784 ha of agricultural land and 2267 ha of barren ground. During the post-monsoon season, central Nepal and some western parts of Nepal received record-breaking rainfall, reaching 493.8 mm/24 h at the Karnali Chisapani station, upstream of the study area. This rainfall was one of the key factors contributing to the 2014 flood, with impacts as described above [41,72]. Modeled peak discharge levels observed during the historical floods of 1970, 1971, 1975, 2000, 2009, and 2013 were used for the calibration, yielding gauge height discrepancies below 23%; the historical flood of 2014 was used for the validation and yielded an error of 3.61%.
Therefore, the model results agree closely with the observed water surface elevations and demonstrate the model's suitability for mapping the floodplains of other rivers with similar characteristics.

Discussion Disasters such as floods and landslides are very common in the southern part of Nepal bordering India, with several transboundary floods causing high numbers of fatalities (Table 1). They affect people's livelihoods and cause enormous damage to physical property, e.g., destroying houses, infrastructure, agricultural land, and crops. Therefore, it is very important to identify probable future floods and their inundation extent with the available tools. In this study, as per the recommendation of the World Meteorological Organization (WMO), the Type I (Gumbel) distribution was used to determine the flood frequency [73]. After the calibration and validation of the model (Figure 6), further simulations were carried out to observe probable flood scenarios in the KRB. The calculated Type I flood frequency appeared to be the most applicable and appropriate, as summarized in Table 2 and illustrated in Figure 7. The water surface profile obtained from the model simulation reproduces the natural characteristics of the river as observed during the site visit. Also, due to the high variability in the velocity of the river from Chisapani to Satighat, greater overflow of the river occurs along the lower reach than along the upper reach, causing extensive inundation of the region around Satighat (Figure 8). After the simulation, the flood extent in all the YRPs showed significant flooding conditions on both banks of the Karnali River (Figure 9). Based on the flood frequency analysis, inundation areas were determined and compared under different YRP floods. The scenarios showed that high-elevation settlements in the upper reach were not greatly affected by flooding, but the river in the lower reach destroyed settlements during excessive flooding. While it is normal for the river to affect trade routes when it is in flood during the monsoon season, the river still presents the risk of dangerous conditions during high floods. A recent study analyzing two decades of maximum instantaneous discharge of the Karnali River showed very high discharges during the summer monsoon season, reaching 21,700 m³/s, thus presenting a serious threat to the KRB [41]. Moreover, the observed peak discharge of the 2014 flood was broadly consistent with that of the 100-YRP (22,565 m³/s). Thus, we could evaluate the capacity of the HEC-RAS model to generate water surface profiles and elevations. This will help in the further planning of river restoration to mitigate loss and damage during flooding. Overall, the simulated water surface profile in the HEC-RAS model showed good performance in the KRB. A similar study in Pakistan [74] compared the water surface elevations and profiles of different YRPs for the Kabul River, further illustrating the need for policies to mitigate likely future inundation events. Since the model applied to the Karnali River was calibrated with observed historical gauge heights, we can assume that the calculated flood levels in the vicinity of both stations are reasonably accurate. Similarly, in this study, the flood depths obtained after simulation support the mapping of the flood hazard.
Most of the flooded area lies within or below the 2–4 m depth class, as evident in Figure 10; however, even floods with depths of 1 m can cause severe damage and are therefore considered high risk in the downstream region (Figure 11). Higher flood depths generally cause more fatalities and pose serious threats to the settlements along the river and to coastal cities [75]. [37] studied the Balkhu River in Nepal to assess the hazard level in illegal squatter settlements along the riverbank and suggested their relocation. They also discussed the details of a plan implemented by the local government whereby construction activities were restricted by regulations requiring a 20 m setback between development and the river bank. A similar approach could be very fruitful in the downstream region of the KRB and would help reduce the future impact of flooding. Although a flood projection and water level threshold were established by the DHM in the KRB [67] (Table 3), uncertainty remains regarding the timing of flooding, as it is sometimes caused by anomalous flooding events. The vulnerability assessment was conducted by identifying the key facilities with the help of field surveys and open street maps (Figures 12–15). Urban, suburban, and rural areas generally demand greater use of natural resources such as rivers and watershed areas, and greater construction of river embankments, levees, and dikes, resulting in the dumping of sediments. These factors cause high vulnerability of the human settlements and aquatic habitat in the study area, which must be properly managed to minimize loss. Various land cover types (e.g., bare area, cultivable land, built-up area, grassland, and forest) in the downstream region are at high risk (Figure 12). More than 4000 hectares (ha) are inundated under normal floods, representing a large area around the river banks and urban settlements. Despite severe flooding in almost every year since 1970, Nepal still lacks the proper measures needed to tackle regular flooding of the IGP during the monsoon season [71]. [41] showed that weather conditions were responsible for the 2014 Karnali River flood, which caused over 220 fatalities and was assumed to be a 100-YRP event. Likewise, in Figure 14, impact assessments for different types of infrastructure were carried out to identify likely future inundation scenarios. Here, settlements, hospitals, schools, roadways, airports, and bus stops were identified as crucial elements of public infrastructure or facilities [4]. Generally, the potential impact of a disaster on a territory is estimated from the potential vulnerability of its critical facilities in a hazardous situation. The potential impact on these key facilities plays an important role in the operation of the governing body and the proper functionality of the region. Therefore, effective land-use planning and its recommendations can be implemented in the downstream region of the KRB with the help of the soft measures used in this study. More flood forecasting units, the latest early warning technology, and door-to-door awareness campaigns in riverbank settlements and nearby communities are all urgently needed and can help reduce the flood-related loss of life and property [76]. In the context of Nepal, the application of a numerical model and ArcGIS for floodplain analysis was limited by the availability of river geometry, topographical data, and hydrological data.
Here, the adequacy of the available topographical information was a major concern; the data used in this study were based on a TIN with high accuracy. In particular, the model was able to accurately reproduce the severity of the 2014 flood. We can therefore conclude that the model is valid for flood hazard mapping in the study region. The accuracy of topographical data plays an important role in the delineation of likely inundation. In our study we limited the topographical data to space-borne DEMs, which is not normally recommended. Better topographical data provide clearer information on the areas with highly susceptible flow velocities and directions. We therefore recommend carrying out river bathymetry, LiDAR surveys, and Differential Global Positioning System (DGPS) surveys across the study area. For the spatial extent of inundation, our study provides broadly acceptable scenarios. The likely inundation across the study area and the simulated results of this study are largely congruent, as confirmed by our field-level observations and the published literature. We believe this study can provide a baseline for flood risk management in the downstream region of the KRB, as well as in similar catchments. Conclusions In this study, we presented a systematic approach to identifying flood frequency and its future risk based on the spatial extent of inundation in the downstream region of the Karnali River in the southern Himalaya. Although a major objective was to focus on flood inundation scenarios, this study had the limitation of not precisely addressing flow velocity, direction, timing, propagation, and hydraulics during peak flood. The YRP hazard maps were established based on a hybrid approach using ArcGIS and the HEC-RAS model. By analyzing past hydrological events on the Karnali River and assessing its present condition, a number of conclusions were drawn. Natural disasters such as floods, landslides, riverbank erosion, debris flow, and soil erosion have posed socio-economic risks leading to considerable loss of life as well as of agricultural land, food production, and private and public property. Hence, reducing the impact of natural disasters aggravated by human activities is a critical task in preparing the river management plan for the Karnali River. Executing this plan is essential to reducing the threat of flooding. The extent of the flood hazard in the downstream region of the KRB for the various return periods shows that the total hazard area increases as the YRP increases. High-intensity rainfall across the basin leads to flooding associated with high discharge of the KRB. Hydraulic simulations performed for different YRPs show that a rapidly increasing area of fertile land will be inundated as the YRP increases, thereby threatening the food security of the river basin. The downstream community can prepare well by implementing a timely and well-communicated plan with the help of the spatial inundation scenarios simulated in this study. Regarding the methodology, HEC-GeoRAS proved capable of flood inundation mapping and was very effective for assessing the magnitude of future flood risk in river basins, as demonstrated in the case of the Karnali River in Nepal. After generating the inundation map, the vulnerabilities of different land-use types, settlements, and infrastructure were identified, representing the first step towards comprehensive risk mapping.
Evacuation routes and safe shelters can be identified and planned using such inundation/hazard and risk maps. This work has attempted to classify flood depths at different locations and the severity of the flood hazard they present to the local communities. This research complements the flood mapping approach used by the DHM, and we have investigated in detail the flooding extent at different locations for varying river depths. This study will also help in the resettlement of communities near the river, since the modeled floodplains and vulnerabilities identified here can be disseminated to the local authorities for effective land-use planning.
Solid-phase sintering and vapor-liquid-solid growth of BP@MgO quantum dot crystals with a high piezoelectric response Low-dimensional piezoelectrics and quantum piezotronics are two important branches of low-dimensional materials, playing a significant role in the advancement of low-dimensional devices, circuits, and systems. Here, we first propose a solid-phase sintering and vapor-liquid-solid growth (SS-VLS-like) method of preparing a quantum-sized oxide material, i.e., a black phosphorus (BP)@MgO quantum dot (QD) crystal with a strong piezoelectric response. Quantum-sized MgO was obtained from Mg slowly released from MgB₂ within the confinement of a nanoflake BP matrix. Since the slow release of Mg alone only grows nanometer-sized MgO, we added a heterostructure matrix constraint, nanoflake BP, to hinder the further growth of MgO. With the BP as the matrix confinement, MgO QDs embedded in BP@MgO QD crystals were formed. These crystals have a layered two-dimensional (2D) structure with a thickness of 11 nm and are stable in air. In addition, piezoresponse force microscopy (PFM) images show that they have extremely strong polarity. The strong polarity is further demonstrated by polarization reversal and a simple pressure sensor. Introduction Quantum dots (QDs) are the basis of quantum technology [1][2][3]. QDs are nanoparticles that exhibit three-dimensional quantum confinement with sizes below tens of nanometers; they are regarded as zero-dimensional artificial atoms [4][5][6][7][8][9][10]. To date, semiconducting QDs [1], including CdS, ZnSe, CuInS, AgS, and InP, have been the most studied. Due to their unique optical and electrical properties, they are mainly used in optoelectronic devices, such as light-emitting diodes, transistors, and solar cells, as well as in bioimaging and biosensing applications [11,12]. Zhou et al. [13] reported that resistive switching (RS) states were discovered in an Ag|TiOx nanobelt|Ti device under different moisture levels; strong interplay with ions, electron transfer, and migration of OH⁻ ions push the device into the RS memory state. Zhou et al. [14] also prepared a memristor with the structure Ag|graphene QDs|TiOx|F-doped SnO₂ that exhibited typical bipolar RS memory behavior. A negative photoconductivity (NPC) effect (an increase in resistance under illumination) has been observed in this RS memory, which has great potential for application in photoelectric devices. Semiconducting QDs can be made by molecular-beam epitaxy (MBE) or chemical vapor deposition (CVD), both of which are epitaxy methods [1,2]. More specifically, during the epitaxial growth of one semiconducting material from a substrate crystal of another semiconducting material (these two methods can also grow non-semiconducting materials), crystals deposit as nano-islands near the heterojunction interface, which become QDs under the appropriate conditions. This type of QD is formed by self-assembly and is thus called a self-assembled QD [11]. For two-dimensional (2D) materials that are susceptible to exfoliation [15], such as graphene [16] and black phosphorus (BP) [17], QDs are often obtained with mechanical or chemical exfoliation methods. Metal oxide QDs also show wide applications such as photocatalysis (TiO₂), ultraviolet protection films (ZnO), gas sensing (SnO₂), and high-temperature superconductors (CuO) [18]. However, due to their high surface energy, they exhibit heightened mechanical, thermal, and chemical instability.
It is therefore relatively difficult to achieve viable large-scale production. Embedding them in amorphous matrices [18] has been reported to be an effective way to obtain stable metal oxide QDs. Analogously, embedding them in a crystalline matrix is worth trying. Novel low-dimensional piezoelectric materials [19] and quantum piezotronics are important branches of low-dimensional materials research, as they play a significant role in the advance of low-dimensional devices, circuits, and systems [20,21]. Piezoelectric materials, which convert between mechanical and electrical energy, offer a wide range of applications and have been utilized in a variety of smart devices [22] with both active functions (emitting sound waves, moving objects, etc.) and passive functions (such as sensing); they are key materials in the age of intelligence. The most widely known nano-piezoelectric material is ZnO [23][24][25]. Zinc oxides with a wurtzite crystal structure or a cubic β-ZnS structure exhibit piezoelectric effects in single layers, which preserve the bulk material's piezoelectricity in 2D. In contrast, although bulk MgO is not piezoelectric, we observed strong piezoelectricity in our synthesized BP@MgO QD crystal. This phenomenon has been described by Lin et al. [20], who demonstrated that, from the perspective of crystallography, 2D morphologies represent spontaneous breakdowns of three-dimensional symmetries; therefore, nonpiezoelectric bulk materials can exhibit intrinsic piezoelectricity in 2D. It may therefore be possible to combine the piezoelectric nature of 2D materials with other intriguing properties (e.g., quantum tunneling, semiconducting, or spintronic properties) in QD crystals. Microscopically, the piezoelectric effect occurs when dipoles are aligned asymmetrically. Our MgO QD dipoles embedded in the 2D BP layers, similar to the tips of needles, stand up on a flat surface without symmetry, but they are all aligned; they have the smallest size but the strongest piezoelectric effect. MgO QDs can control the piezoelectric effect on the nanoscale and improve the compatibility between piezoelectric materials and nanodevices. The coupling between piezoelectricity and the semiconducting properties in the BP@MgO QD crystals may be useful for powering nanodevices, even optoelectronic devices only a few atomic layers thick. To obtain small crystals, the reaction must be segmented and confined to a small area so that the product is unable to grow further. Mg slowly released by MgB₂ during heating can only produce nanometer-sized MgO, as shown in the Electronic Supplementary Material (ESM). Thus, confined conditions must be considered, and we added a heterojunction matrix: nanoflake BP. The vaporization temperature of P is lower than the liquefaction temperature of Mg, so liquid Mg can preserve gaseous P very well. More importantly, the gaseous P provides good confinement for the growth of MgO. As the temperature decreases, P crystallizes to become BP [26][27][28][29][30], while MgB₂ decomposes completely and reacts with oxygen to form MgO and amorphous B₂O₃. With the BP as the vapor core, MgO QD-embedded BP@MgO QD crystals form. These crystals are stable in air; in other words, the MgO QDs and nano-BP are air-stable. BP normally decomposes easily in air [31], and most efforts to date have been devoted to making it stable [32][33][34][35].
With a stable crystal structure and a high piezoelectric response, these BP@MgO QD crystals are self-supporting, freestanding, and compatible with any device. We also believe that, by carefully choosing the segmentation and confinement conditions, the solid-phase sintering and vapor-liquid-solid growth (SS-VLS-like) method [36] introduced in this work can be used to prepare other metal oxide QDs. Materials and methods High-purity MgB₂ and red phosphorus (RP) powders (>99.9%) as starting materials, with a ratio between 4:1 and 5:1, were put into a vacuum ball mill and milled under N₂ protection for 48 h. The same results were obtained with both dry and wet ball milling. The mixed powders were placed in a sealed crucible without vacuum for heat treatment at 670 ℃ for 12 h. The BP@MgO QD crystals were thoroughly ground and dispersed in ethanol, and then dropped onto the sticky side of a copper foil strip, where the dispersion dried naturally in about 2 min. Using a small iron ring with an inner diameter of 9 mm and an outer diameter of 12 mm as a mask, the upper electrode was sputtered onto the side coated with the BP@MgO QD crystal film by a magnetron sputtering apparatus. Finally, the device was wrapped with polyimide tape. The morphology of the as-prepared samples was characterized by a scanning electron microscope (SEM; S4800, Hitachi, Japan; 5 kV). Transmission electron microscopy (TEM) observations were conducted with a JEM-2011 microscope (JEOL, Japan) operated at 200 kV and a JEM-ARM200F microscope (JEOL, Japan) equipped with an energy-dispersive X-ray spectrometer. X-ray diffraction (XRD) patterns were obtained by an X-ray diffractometer (D8 Advance, Bruker, Germany) with a Cu Kα radiation source (λ = 0.15418 nm). Raman spectra were collected with a spectrophotometer (InVia, Renishaw, Germany) using a 514 nm laser. Fourier transform infrared (FTIR) spectra were recorded on a spectrometric analyzer (6700 FTIR, Nicolet, USA). Thermal gravimetric (TG) analysis was carried out with a Q600 SDT instrument (TA Instruments, USA) under an N₂ atmosphere. The N₂ adsorption-desorption isotherms were obtained using a 3Flex analyzer (Micromeritics, USA) at a testing temperature of 77 K; before the measurements, each sample was degassed under vacuum at 200 ℃ for at least 8 h. Piezoresponse force microscopy (PFM) measurements were conducted with a piezoresponse force microscope (NT-MDT Spectrum Instruments, Ntegra Prima, Russia). The output voltages of the BP@MgO QD crystal pressure sensor were tested with a high-precision digital multimeter (DMM7510, Keithley, USA). Results and discussion A schematic illustration of the formation mechanism of the BP@MgO QD crystals is shown in Fig. 1. MgB₂, with its slow Mg release and low melting point (650 ℃), was the source of quantum-sized Mg. RP, with its low vaporization temperature (580 ℃), and the 2D crystal phase BP were the source of the vapor core (later the nano-BP). How can the vapor be the core? Because only the vapor core, which is preserved by liquid Mg, remains; therefore, the amount of the Mg source is four times that of the P source. The mixture of MgB₂ powders and RP was ball milled for 12 h under nitrogen gas inside an uninsulated crucible. The temperature of this setup was gradually increased to 670 ℃ and then maintained for 12 h, followed by cooling to room temperature. The resulting powders were a mixture of the BP@MgO QD crystals and amorphous B₂O₃. The micromorphology of the BP@MgO QD crystals was that of 2D layers, and the matrix was 2D nano-BP crystals.
More details are shown in the ESM. The slowly released Mg, which initially existed in the solid state, was the precursor of the MgO QDs. When the temperature increased to 580 ℃, phosphorus became a gas. When the temperature reached 650 ℃, elemental Mg turned into a liquid. After all of the MgB₂ had decomposed into Mg and B, the whole system consisted of gaseous phosphorus, liquid Mg, and amorphous B₂O₃, with the B₂O₃ preserving P and Mg much like paraffin. Once the temperature decreased, the BP@MgO QD crystals grew and deposited analogously to the VLS model in CVD [36][37][38]. The whole process can be summarized as SS-VLS-like growth, which comprises two stages. In the first stage, solid-phase sintering, as the temperature increased, Mg was slowly released from MgB₂ and became liquid, while P turned into its vapor phase; B reacted with O₂ and produced amorphous solid-phase B₂O₃, which preserved the liquid Mg and vapor P like paraffin wax. The second stage is the VLS-like growth. At this stage, the temperature decreased and the volume of the system shrank. With O₂ becoming involved in the reaction, P crystallized into BP; simultaneously, QD-sized liquid Mg reacted with O₂ to become MgO QDs, and many MgO QDs formed at the surface of the layered BP. As shown in Table 1, according to the state of the raw materials in the preparation process, the preparation methods of QDs are divided into three categories: solid phase, solution, and gas phase methods. The SS-VLS-like method uses solid-phase sintering and undergoes a vapor-liquid-solid state transition during the reaction; it therefore combines the advantages of the solid phase, solution, and gas phase methods. By carefully choosing the segmentation and confinement conditions, the application scope of the SS-VLS-like method can be expanded. In particular, the obtained QD crystals are self-supporting, freestanding, and compatible with any device. The TG-differential thermal analysis (TG-DTA) (Fig. 2) shows that the system was in an exothermic state beginning at a temperature of 700 ℃, indicating that none of the chemicals underwent a phase change beyond this temperature. At approximately 345 ℃, the mass of the system started to increase, which was likely related to the burning of elemental Mg; additionally, the oxides in the system, including B₂O₃, MgO, and Mg₂P₂O₇, began to form. The period after the system reached 650 ℃ was where the vapor, liquid, and solid phases coexisted. Figure 2(b) shows the XRD pattern of the BP@MgO QD crystals. These crystals are composed of BP (PDF Card No. 76-1957), MgO (PDF Card No. 78-0430), and the heterojunction oxide Mg₂P₂O₇ (PDF Card No. 08-0038). Throughout the sintering process, as shown in Fig. 2(c), the solid B₂O₃ preserved the liquid QD Mg and P vapor much like paraffin. When the temperature decreased, the P vapor crystallized into layered BP, the volume of the vapor-liquid-solid system shrank, and Mg reacted with O₂ to produce MgO QDs on the surface of the layered BP. MgO and BP grew together through the atomic connection of Mg₂P₂O₇; Fig. 2(c) also shows the detailed atomic connection of the heterojunction. In Fig. 2(d), the amorphous B₂O₃ looks like paraffin wax, with the BP@MgO QD crystals rising from a sea of B₂O₃. Zooming in on the BP@MgO QD crystals (Fig. 2(e)), it can be seen that their micromorphology was that of 2D layers; this was because their vapor core was layered BP crystals.
From the TEM images of the (200) plane of the BP@MgO QD crystals (Figs. 3(a) and 3(b)), the size of the MgO QDs was determined to be approximately 2 nm. Note that the (200), (020), and (002) crystal planes are equivalent, as MgO is cubic. Figure 3(c) is a TEM image of the heterojunction between BP and Mg₂P₂O₇ at the (040) plane of BP and the (020) plane of Mg₂P₂O₇. Figures 3(d) and 3(e) illustrate the connection between the three phases in the BP@MgO QD crystals, where Fig. 3(e) shows the atomic construction in the plane shown in Fig. 3(c). The P atoms circled with dashed lines show the connection relationship between the P atoms in the heterojunction on the same plane. In the BP crystal structure in Fig. 3(d), the three neighboring P atoms, circled with solid lines, were replaced by Mg and O, thereby forming the heterojunction in Mg₂P₂O₇. These three positions form a triangle with side lengths of 4.380, 5.434, and 6.036 Å. It can be seen from the relative positions of the three P atoms circled with solid lines (Fig. 3(e)) that the relative positions of the P atoms in Mg₂P₂O₇ depended on their positions in the BP, as the distances between them were 4.552, 5.381, and 6.630 Å. Figures 3(f)-3(i) display energy-dispersive X-ray spectroscopy (EDS) analysis of the mixture of the BP@MgO QD crystals and B₂O₃. The images show that the P and Mg atoms mainly existed in the crystal area circled with the white dashed lines, while the B atoms were present across the whole field of view. This further suggests that the BP and MgO form the BP@MgO QD crystals, and that the B exists as an amorphous phase. The piezoelectric properties of the BP@MgO QD crystals were characterized by PFM, Raman spectroscopy, and infrared spectroscopy. Compared with traditional piezoelectric materials such as BaTiO₃, zero-dimensional MgO displayed a simpler piezoresponse. In theory, because zero-dimensional MgO is a uniaxial crystal (Fig. 4(o)), its vibration is primarily along its axial direction, arising from the stretching and bending vibrations of Mg-O bonds, while the vibration perpendicular to the axial direction is negligible; in addition, the vibration amplitudes under the same voltage should be the same. The PFM results showed that the signals from the out-of-plane amplitude and out-of-plane phase were fairly strong, revealing significant contrast; in comparison, the signals from the in-plane amplitude and in-plane phase were weaker (Figs. 4(e) and 4(f)). This difference between the amplitudes mostly depended on the distribution of the MgO QDs; in other words, the areas with amplitude signals appeared where the MgO QDs were located. There were bands of equal width in both Figs. 4(b) and 4(d) (with magnification in Fig. 4(c)) that were measured to be approximately 11 nm. These bands were caused by contrast, as the lamellar BP@MgO QD crystals vibrated along with changes in thickness; therefore, the width of the bands represents the thickness of the crystals. When a forward or a reverse bias was applied, substantial phase changes in the vibrations of the MgO QDs were measured, while the amplitudes became more similar. The phase-voltage loop of the BP@MgO QD crystals was a typical electric hysteresis loop, and the amplitude-voltage loop showed the characteristic butterfly shape. The voltages at the two lowest points in the butterfly loop (i.e., the coercive voltages) were −1.2 and 0.55 V.
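The amplitude-voltage response just described is also what underlies the d₃₃ value quoted below: the effective piezoelectric coefficient is, to first order, the slope of the vibration amplitude versus the drive voltage. As a minimal sketch of that conversion, the snippet below fits a line to hypothetical amplitude readings; the numbers are illustrative, not the measured data of Fig. S10, and a single-slope fit ignores the instrument calibration factors a real PFM analysis must include.

```python
import numpy as np

# Hypothetical PFM readings: AC drive voltage (V) vs vibration amplitude (pm)
drive_v = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
amp_pm = np.array([330, 655, 990, 1300, 1640, 1960])

# Effective d33 (pm/V) is the slope of amplitude vs voltage
slope, intercept = np.polyfit(drive_v, amp_pm, 1)
print(f"effective d33 ~ {slope:.0f} pm/V")  # roughly 650 pm/V for these numbers
```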
Interestingly, there were no signals in the Raman spectrum, even after multiple measurements (Fig. 4(m)). Theoretically, this is because Raman spectroscopy detects vibrations from nonpolar symmetric molecules [40,41], whereas the BP@MgO QD crystals are strongly polarized crystals with weak vibrations from symmetric molecules. In other words, unlike MgO crystals [42,43], the lack of a Raman signal indicates that the symmetry of the MgO QDs was broken, or even absent altogether. In contrast, the FTIR spectrum showed ample information from polar molecules (Fig. 4(n)). The intense absorption band in the region of approximately 900-1250 cm⁻¹ was due to the stretching mode of the P-O bond of the pyrophosphate group P₂O₇⁴⁻. The peak at 749 cm⁻¹ was attributed to the stretching mode of the P-O-P bond. The IR band in the region of 500-600 cm⁻¹ was attributed to the bending mode of the O-P-O bond [44]. Finally, the peak at 597.8 cm⁻¹ was caused by the stretching and bending vibrations of Mg-O [45,46]. The value of d₃₃ was calculated as 654 pm/V (Fig. S10 in the ESM). Although this absolute value from the PFM test is not highly accurate, it is clearly very high (for comparison, the d₃₃ of popular lead zirconate titanate (PZT) piezoelectric ceramics is about 300-500 pm/V). To further demonstrate the strong piezoelectric effect of the BP@MgO QD crystals, we fabricated an extremely simple pressure sensor. As shown in Fig. 5(a), taking advantage of the adhesion of a copper foil bonding surface, we dispersed the finely ground BP@MgO QD crystals in ethanol, dropped the dispersion onto the copper foil bonding surface, and applied it evenly; it dried naturally in about 2 min. Using a small iron ring with an inner diameter of 9 mm and an outer diameter of 12 mm as a mask, the upper electrode was sputtered onto the side coated with the BP@MgO QD crystal film by a magnetron sputtering apparatus. The setup was secured by wrapping it with polyimide tape. The thickness of the BP@MgO QD crystal film is about 7.5 μm (Fig. 5(d)). When this BP@MgO QD crystal-based pressure sensor is subjected to a dynamic finger pressure of 9-11 N, its output voltage is as shown in Fig. 5(f); the pressure response of the sensor has a high signal-to-noise ratio. With this sensor, we demonstrated the piezoelectric effect of the BP@MgO QD crystals and showcased their potential in the sensor field. Conclusions The BP@MgO QD crystals prepared in this work are self-supporting, freestanding, and compatible with any device. Their high piezoelectric response and zero dimensionality promise broad application prospects in the field of low-dimensional devices. The SS-VLS-like method proposed in this work can also be used to prepare other metal oxide QDs, given careful choice of the segmentation and confinement conditions.
Universal Topological Quantum Computation from a Superconductor-Abelian Quantum Hall Heterostructure Non-Abelian anyons promise to reveal spectacular features of quantum mechanics that could ultimately provide the foundation for a decoherence-free quantum computer. A key breakthrough in the pursuit of these exotic particles originated from Read and Green's observation that the Moore-Read quantum Hall state and a (relatively simple) two-dimensional p + ip superconductor both support so-called Ising non-Abelian anyons. Here we establish a similar correspondence between the Z3 Read-Rezayi quantum Hall state and a novel two-dimensional superconductor in which charge-2e Cooper pairs are built from fractionalized quasiparticles. In particular, both phases harbor Fibonacci anyons that, unlike Ising anyons, allow for universal topological quantum computation solely through braiding. Using a variant of Teo and Kane's construction of non-Abelian phases from weakly coupled chains, we provide a blueprint for such a superconductor using Abelian quantum Hall states interlaced with an array of superconducting islands. Fibonacci anyons appear as neutral deconfined particles that lead to a two-fold ground-state degeneracy on a torus. In contrast to a p + ip superconductor, vortices do not yield additional particle types, yet depending on non-universal energetics they can serve as a trap for Fibonacci anyons. These results imply that one can, in principle, combine well-understood and widely available phases of matter to realize non-Abelian anyons with universal braid statistics. Numerous future directions are discussed, including speculations on alternative realizations with fewer experimental requirements. I. INTRODUCTION The emergence of anyons that exhibit richer exchange statistics than the constituent electrons and ions in a material is among the most remarkable illustrations of 'more is different'. Such particles fall into two broad categories: Abelian and non-Abelian. Interchanging Abelian anyons alters the system's wavefunction by a phase e^{iθ} intermediate between that acquired for bosons and fermions [1,2]. Richer still are non-Abelian anyons, whose exchange rotates the system's quantum state amongst a degenerate set of locally indistinguishable ground states produced by the anyons [3][4][5][6][7][8][9][10][11][12][13]. The latter variety realize the most exotic form of exchange statistics that nature in principle permits, which by itself strongly motivates their pursuit. Non-Abelian anyons are further coveted, however, because they provide a route to fault-tolerant topological quantum computation [14][15][16][17][18]. Here, qubits are embedded in the system's ground states and, by virtue of non-Abelian statistics, manipulated through anyon exchanges. The non-locality with which the information is stored and processed elegantly produces immunity against decoherence stemming from local environmental perturbations. One thereby sidesteps the principal bottleneck facing most quantum computing approaches, but at the expense of introducing a rather different challenge: identifying suitable platforms for non-Abelian excitations.
Quantum Hall systems can, in principle, host non-Abelian anyons with universal braid statistics (i.e., that allow one to approximate an arbitrary unitary gate with braiding alone). In this context the Z3 Read-Rezayi state [48], which generalizes the pairing inherent in the Moore-Read phase to clustering of triplets of electrons [49], constitutes the 'holy grail'. Chiral edge states with a very interesting structure appear here: a charged boson sector that transports electrical current (as in all quantum Hall states) coexists in this case with a neutral sector that carries only energy and is described by the chiral part of a Z3 parafermion conformal field theory. As a byproduct of this neutral sector the bulk admits the vaunted 'Fibonacci' anyons, denoted ε, which obey the fusion rule ε × ε ∼ 1 + ε. This fusion rule implies that the low-energy Hilbert space for n ε particles with trivial total topological charge has a dimension given by the (n − 1)th Fibonacci number. Consequently, the asymptotic dimension per particle, usually called the quantum dimension, is the golden ratio φ ≡ (1 + √5)/2. Perhaps the most remarkable feature of Fibonacci anyons is that they allow for universal topological quantum computation, in which a single gate (a counterclockwise exchange of two Fibonacci anyons) is sufficient to approximate any unitary transformation to within desired accuracy (up to an inconsequential overall phase). Such particles remain elusive, though the Z3 Read-Rezayi state and its particle-hole conjugate [50] do provide plausible candidate ground states for fillings ν = 13/5 and 12/5. Intriguingly, a plateau at the latter fraction has indeed been measured in GaAs, though little is presently known about the underlying phase; at ν = 13/5 a well-formed plateau has so far eluded observation [51][52][53].
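The counting stated above is easy to verify numerically: the dimension of the trivial-total-charge sector of n Fibonacci anyons follows the Fibonacci recursion, and the growth ratio between successive dimensions approaches the golden ratio. A minimal sketch (our own illustration, not code from the paper):

```python
def fib(k):
    """k-th Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# dim(n) = F(n-1): Hilbert-space dimension of n Fibonacci anyons
# fusing to the identity channel
for n in [3, 4, 5, 10, 20, 30]:
    dim, prev = fib(n - 1), fib(n - 2)
    print(f"n={n:>2}: dim={dim:>7}, growth ratio={dim / prev:.6f}")
print("golden ratio:", (1 + 5 ** 0.5) / 2)  # ~1.618034
```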
Read and Green [54] laid the groundwork for the pursuit of non-Abelian anyons outside of the quantum Hall effect by demonstrating a profound correspondence between the Moore-Read state and a spinless 2D p + ip superconductor [55]. Many properties that stem from composite-fermion pairing indeed survive in the vastly different case where physical electrons form Cooper pairs. In particular, both systems exhibit a chiral Majorana edge mode at their boundary and support Ising non-Abelian anyons in the bulk. Several important distinctions between these phases do, nevertheless, persist: (i) Their edge structures are not identical; a p + ip superconductor lacks the chiral bosonic charge mode found in Moore-Read. (ii) Different classes of topological phenomena arise in each case. On one hand a p + ip superconductor realizes a topological superconducting phase with short-range entanglement; the Moore-Read state, on the other, exhibits true topological order, long-range entanglement, and hence nontrivial ground-state degeneracy on a torus. This important point closely relates to the next two distinctions. (iii) In contrast to the paired state of composite fermions, an electronic p + ip superconductor is characterized by a local order parameter. Defects in that order parameter, i.e., neutral h/2e vortices, bind Majorana zero-modes and accordingly constitute the Ising anyons akin to charge-e/4 quasiparticles in the Moore-Read state [54,56]. (iv) Because of the energy cost associated with local order parameter variations, superconducting vortices are, strictly speaking, confined (unlike e/4 quasiparticles). This does not imply inaccessibility of non-Abelian anyons in this setting, since the 'user' can always supply the energy necessary to separate vortices by arbitrary distances. Non-Abelian braiding statistics is, however, realized only projectively [57,58] as a result, i.e., up to an overall phase that for most purposes is fortunately inessential. The existence of an order parameter may actually prove advantageous, as experimental techniques for coupling to order parameters can provide practical means of manipulating non-Abelian anyons in the laboratory. Shortly after Read and Green's work, Kitaev showed that a 1D spinless p-wave superconductor forms a closely related topological superconducting phase [59] (which one can view as a 2D p + ip superconductor squashed along one dimension). Here domain walls in the superconductor bind Majorana zero-modes and realize confined Ising anyons whose exotic statistics can be meaningfully harvested in wire networks [60][61][62][63]. Although such nontrivial one-dimensional (1D) and two-dimensional (2D) superconductors are unlikely to emerge from a material's intrinsic dynamics, numerous blueprints now exist for engineering these phases in heterostructures fashioned from ingredients such as topological insulators, semiconductors, and s-wave superconductors [64][65][66][67][68][69][70] (see Refs. 71 and 72 for recent reviews). These proposals highlight the vast potential that 'ordinary' systems possess for designing novel phases of matter and have already inspired a flurry of experiments. Studies of semiconducting wires interfaced with s-wave superconductors have proven particularly fruitful, delivering numerous possible Majorana signatures [73][74][75][76][77][78].
These preliminary successes motivate the question of whether one can, even in principle, design blueprints for non-Abelian anyons with richer braid statistics than the Ising case. Several recent works demonstrated that this is indeed possible using, somewhat counterintuitively, Abelian quantum Hall states as a canvas for more exotic non-Abelian anyons [58,[79][80][81][82][83]] (see also Refs. 84 and 85). Most schemes involve forming a fractionalized 'wire' out of counterpropagating Abelian quantum Hall edge states. This 'wire' can acquire a gap via competing mechanisms, e.g., proximity-induced superconductivity or electronic backscattering. Domain walls separating physically distinct gapped regions bind Zn generalizations of Majorana zero-modes [86][87] and consequently realize non-Abelian anyons of a more interesting variety than those in a 1D p-wave superconductor. Unfortunately, however, they too admit non-universal braid statistics, though achieving universal quantum computation requires fewer unprotected operations [79,88]. In this paper we advance this program one step further and pursue a similar strategy towards non-Abelian anyons with universal braid statistics. More precisely, our goal is to construct a new 2D superconductor that bears the same relation to the Z3 Read-Rezayi state as a spinless p + ip superconductor bears to Moore-Read. With this analogy in mind, it seems reasonable to demand that such a phase satisfy the following basic properties. First, the boundary should host a chiral Z3 parafermion edge mode, but lack the Read-Rezayi state's bosonic charge sector. And second, the bulk should exhibit essentially the same non-Abelian content as the Read-Rezayi phase, particularly Fibonacci anyons. We show that one can nucleate a phase with precisely these properties, not in free space but rather in the interior of a fractionalized medium. Our approach resembles that of Refs. 89 and 90, which demonstrated that hybridizing a finite density of non-Abelian anyons produces new descendant phases in the bulk of a parent non-Abelian liquid. In the most experimentally relevant cases of the Moore-Read state and a 2D spinless p + ip superconductor these descendants were found to be Abelian. We describe what amounts, in a sense, to an inverse of this result. The specific construction we follow relies on embedding an array of superconducting islands in an Abelian quantum Hall system to proximity-induce Cooper pairing in the fluid. When the islands remain well-separated, each one binds localized zero-modes that collectively encode a macroscopic ground-state degeneracy spanned by different charge states on the superconductors. Hybridizing these zero-modes can then lift this degeneracy in favor of novel non-Abelian 2D superconducting phases, including the Read-Rezayi analogue that we seek. As an illustrative warm-up, Sec. II explores the simplest trial application, corresponding to an integer quantum Hall system at filling ν = 1. Here the superconducting islands trap Majorana modes that, owing to broken time-reversal symmetry, rather naturally couple to form a 2D spinless p + ip superconducting phase within the fluid. In other words, imposing Cooper pairing provides a constructive means of generating the non-Abelian physics of the Moore-Read state starting from the comparatively trivial integer quantum Hall effect. This result is fully consistent with earlier studies, Refs. 91 and 92, that explored similar physics from a complementary perspective.
One can intuitively anticipate richer behavior for a superconducting array embedded in an Abelian fractional quantum Hall state. In particular, since here charge-2e Cooper pairs derive from conglomerates of multiple fractionally charged quasiparticles, such a setup appears natural for building in the clustering properties of Read-Rezayi states. This more interesting case is addressed in the remainder of the paper. We focus specifically on the experimentally observed spin-unpolarized ν = 2/3 state [93], also known as the (112) state, for which superconducting islands bind Z3 generalizations of Majorana modes. [Note that various other quantum Hall phases, e.g., the bosonic (221) state, yield the same physics.] Hybridization of these modes is substantially more difficult to analyze, since the problem cannot, in contrast to the integer case, be mapped to free fermions. Burrello et al. recently addressed a related setup consisting of generalized Majorana modes coupled on a 2D lattice, capturing Abelian phases including a generalization of the toric code [94]. We follow a different approach inspired by Teo and Kane's method of obtaining non-Abelian quantum Hall phases from stacks of weakly coupled Luttinger liquids [95]. Though their specific coset construction is not applicable to our setup, a variant of their scheme allows us to leverage theoretical technology for 1D systems, i.e., bosonization and conformal field theory, to controllably access the 2D phase diagram. With the goal of bootstrapping off of 1D physics, Secs. III and IV develop the theory for a single chain of superconducting islands in a ν = 2/3 state. There we show, by relating the setup to a three-state quantum clock model, that this chain can be tuned to a critical point described by a non-chiral Z3 parafermion conformal field theory. Section V then attacks the 2D limit coming from stacks of critical chains. (A related approach in which the islands are 'smeared out' is discussed in Sec. VII.) Most importantly, we construct an interchain coupling that generates a gap in the bulk but leaves behind a gapless chiral Z3 parafermion sector at the boundary, thereby driving the system into a superconducting cousin of the Z3 Read-Rezayi state that we dub the 'Fibonacci phase'.
The type of topological phenomena present here raises an intriguing question. That is, should one view this state as analogous to a spinless p + ip superconductor (which realizes short-ranged entanglement) or rather as an intrinsic non-Abelian quantum Hall system (which exhibits true topological order)? Interestingly, although superconductivity plays a key role microscopically in our construction, we argue that the Fibonacci phase is actually topologically ordered, with somewhat 'incidental' order parameter physics. Indeed, we show that Fibonacci anyons appear as deconfined quantum particles, just as in the Z3 Read-Rezayi state, leading to a two-fold ground-state degeneracy on a torus that is the hallmark of true topological order. Moreover, superconducting vortices do not actually lead to new quasiparticle types, in sharp contrast to a p + ip superconductor where vortices provide the source of Ising anyons. In this sense the fact that the Fibonacci phase exhibits an order parameter is unimportant for universal topological physics. Vortices can, however, serve as one mechanism for trapping Fibonacci anyons, depending on non-universal energetics, and thus might provide a route to manipulating the anyons in practice. Section VI provides a topological quantum field theory interpretation of the Fibonacci phase that sheds light on the topological order present, and establishes a connection between our construction and that of Refs. 89 and 90. Figure 1 summarizes our main results for the ν = 1 and ν = 2/3 architectures as well as their relation to 'intrinsic' non-Abelian quantum Hall states. (For a more complete technical summary see the beginning of Sec. VIII.) On a conceptual level, it is quite remarkable that a phase with Fibonacci anyons can emerge in simple Abelian quantum Hall states upon breaking charge conservation by judiciously coupling to ordinary superconductors. Of course, experimentally realizing the setup considered here will be very challenging, certainly more so than stabilizing Ising anyons. It is worth, however, providing an example that puts this challenge into proper perspective. As shown in Ref. 96, a 128-bit number can be factored in a fully fault-tolerant manner using Shor's algorithm with ≈ 10³ Fibonacci anyons. In contrast, performing the same computation with Ising anyons would entail much greater overhead, since the algorithm requires π/8 phase gates that would need to be performed non-topologically and then distilled, e.g., according to Bravyi's protocol [97]. For a π/8 phase gate with 99% fidelity, factoring a 128-bit number would consequently require ≈ 10⁹ Ising anyons in the scheme analyzed in Ref. 96 [98]. Thus overcoming the nontrivial fabrication challenges involved could prove enormously beneficial for quantum information applications. In this regard, inspired by recent progress in Majorana-based systems, we are optimistic that it should similarly be possible to distill the architecture we propose to alleviate many of the practical difficulties towards realizing Fibonacci anyons. Section VIII proposes several possible simplifications, including alternate setups that do not require superconductivity, along with numerous other future directions that would be interesting to explore. The abundance of systems known to host Abelian fractional quantum Hall phases and the large potential payoff together provide strong motivation for further pursuit of this avenue towards universal topological quantum computation.
II. TRIAL APPLICATION: p + ip SUPERCONDUCTIVITY FROM THE INTEGER QUANTUM HALL EFFECT The first proposal for germinating Ising anyons in an integer quantum Hall system was introduced by Qi, Hughes, and Zhang [91]; these authors showed that in the vicinity of a plateau transition, proximity-induced Cooper pairing effectively generates spinless p + ip superconductivity in the fluid. In this section we will establish a similar link between these very different phases from a viewpoint that illustrates, in a simplified setting, the basic philosophy espoused later in our pursuit of a Read-Rezayi-like superconductor that supports Fibonacci anyons. Specifically, here we investigate weakly coupled critical 1D superconducting regions embedded in a ν = 1 quantum Hall system, following the spirit of Ref. 95 (see also Ref. 99). This quasi-1D approach gives one a convenient window from which to access various states present in the phase diagram, including a spinless 2D p + ip superconductor analogous to the Moore-Read state [54]. There are, of course, experimentally simpler ways of designing superconductors supporting Ising anyons, but we hope that this discussion is instructive and interesting nonetheless. Two complementary approaches will be pursued as preliminaries for our later treatment of the fractional quantum Hall case. A. Uniform trench construction Consider first the setup in Fig. 2(a), wherein a ν = 1 quantum Hall system contains a series of trenches (labeled by y = 1, . . ., N) filled with some long-range-ordered superconducting material. As the figure indicates, the boundary of each trench supports spatially separated right/left-moving integer quantum Hall edge states described by operators fR/L(y). We assume that adjacent counterpropagating edge modes hybridize and are therefore generically unstable, due either to ordinary electron backscattering or to Cooper pairing mediated by the superconductors [100]. Let the Hamiltonian governing these edge modes be H = HKE + δH + H⊥. Here HKE [Eq. (1)] captures the kinetic energy for right- and left-movers, with x a coordinate along the trenches (which we usually leave implicit in operators throughout this section). The second term, δH [Eq. (2)], includes electron tunneling and Cooper pairing perturbations acting separately within each trench, where t > 0 and ∆ > 0 denote the tunneling and pairing strengths. Finally, H⊥ [Eq. (3)] incorporates electron tunneling between neighboring trenches with amplitude t⊥. Figure 2(a) illustrates all of the above processes.
Hereafter we assume |t⊥| ≪ t, ∆, corresponding to the limit of weakly coupled trenches. It is then legitimate to first treat HKE + δH, which is equivalent to the Hamiltonian for N independent copies of quantum spin Hall edge states with backscattering generated by a magnetic field and proximity-induced pairing [65]. As in the quantum spin Hall problem, the t and ∆ perturbations favor physically distinct gapped phases that cannot be smoothly connected without crossing a phase transition. For ∆ > t each trench realizes a 1D topological superconductor with Majorana zero-modes bound to its endpoints, while for ∆ < t trivial superconductivity appears. Deep in either gapped phase, small hopping t⊥ between trenches clearly yields only minor quantitative effects on the bulk. We therefore focus on the critical point t = ∆ at which these opposing processes balance. Here arbitrarily weak t⊥ can play an important role, as each trench remains gapless. In this limit one can factorize δH in a revealing way [Eq. (4)]: at the transition the 'real part' of fR(y) and the 'imaginary part' of fL(y) are unaffected by the perturbations in δH, while the other components hybridize and gap out. Hence the important low-energy operators at the critical point correspond to right- and left-moving gapless Majorana fields γR/L(y), defined in Eq. (5). Notice that, like the original quantum Hall edge states, the chiral Majorana modes emerging at criticality are spatially separated across each trench. Using Eq. (5) one can straightforwardly derive an effective low-energy Hamiltonian [Eq. (6)] that incorporates small deviations away from criticality as well as weak inter-trench coupling t⊥, with mass m = 2(∆ − t). [To obtain this result one can simply replace fR(y) → γR(y) and fL(y) → iγL(y) in H, since the imaginary part of the former and the real part of the latter are gapped; note the consistency with Eq. (5).] The structure of the phase diagram for Heff, which appears in Fig. 2(b), can be deduced by examining limiting cases. First, in the limit |m| ≫ t⊥, perturbations within each trench dominate and drive gapped phases determined by the sign of m. With m < 0, tunneling t yields a trivially gapped superconducting state within the quantum Hall system. Conversely, for m > 0, Cooper pairing ∆ produces a chain of Majorana modes at the left and right ends of the trenches that form a dispersing band due to small t⊥. We also refer to the resulting 2D superconductor as trivial, since it smoothly connects to the decoupled-chain limit. (This phase nevertheless retains some novel features and is characterized by nontrivial 'weak topological indices' [99]. For instance, lattice defects can bind Majorana zero-modes [99], and the dispersing 1D band of hybridized Majorana modes can be stable if certain symmetries are present on average [101][102][103][104]. Hence we denote this trivial state with a star in the phase diagram [105].) More interesting for our purposes is the opposite limit where t⊥ dominates, so that genuinely 2D phases can arise. Upon inspecting the last term in Eq.
(6), one sees that when m = 0 inter-trench hopping gaps out all Majorana fields in the bulk, but leaves behind gapless chiral Majorana edge states described by γR(y = 1) on the top edge and γL(y = N) on the bottom. This edge structure signifies the onset of spinless p + ip superconductivity with vortices that realize Ising anyons. By passing to momentum space and identifying where the bulk gap closes, one can show that the transitions separating the states above occur at |∆ − t| = |t⊥|, yielding the phase boundaries of Fig. 2(b). We have thereby established the correspondence illustrated in Fig. 1(a) between an integer quantum Hall system with (long) superconducting islands and the Moore-Read state. Towards the end of this paper, Sec. VII will discuss a similar uniform-trench setup in the fractional quantum Hall case. For technical reasons, however, it will prove simpler to analyze a fractional quantum Hall system with superconductivity introduced non-uniformly within each trench. In fact most of our treatment will be devoted to such an architecture. As a preliminary, the next subsection analyzes spatially modulated trenches in an integer quantum Hall system, once again recovering spinless p + ip superconductivity from weakly coupled chains. B. Spatially modulated trenches We now explore the modified setup of Fig. 3(a), in which the ν = 1 edge states within each trench are sequentially gapped by pairing ∆ and electron tunneling t, creating an infinite, periodic array of domain walls labeled according to the figure. This setup can again be described by a Hamiltonian H = HKE + δH + H⊥ as defined in Eqs. (1) through (3), but now with t and ∆ varying in space. For simplicity we will assume t = 0 in the pairing-gapped regions and ∆ = 0 in the tunneling-gapped regions (one can easily relax this assumption if desired). Suppose for the moment that each domain is long compared to the respective coherence length, and that the trenches are sufficiently far apart that they decouple. In this case the Cooper-paired regions constitute 1D topological superconductors that produce a Majorana zero-mode exponentially bound to each domain wall [65]. An explicit calculation reveals that the Majorana operator for domain wall j at position xj in trench y takes the form given in Eq. (7) (up to normalization), where ξ(x − xj) denotes the decay length for the Majorana mode and is given either by v/t or v/∆ depending on the sign of x − xj. The 2D array of zero-modes present in this limit underlies a macroscopic ground-state degeneracy, since one can combine each pair of Majoranas into an ordinary fermion that can be vacated or filled at no energy cost. Next, imagine shrinking the width of the tunneling- and pairing-gapped regions, as well as the spacing between trenches, such that domain walls couple appreciably. Our objective here is to investigate how the resulting hybridization amongst nearby Majorana modes resolves the massive degeneracy present in our starting configuration. [Figure 3. (a) Electron hopping across the domains hybridizes the chain of Majorana modes in each trench through couplings λ∆ and λt. These couplings favor competing gapped phases, and when λ∆ = λt each chain realizes a critical point with counterpropagating gapless Majorana modes in the bulk, similar to the uniform trench setup of Fig. 2(a). Turning on weak coupling t⊥(j − j′) between domain walls j and j′ in adjacent trenches then generically drives the system into a p + ip phase (or a p − ip state with opposite chirality). (b) Phase diagram for the 2D array of coupled Majorana modes near criticality; here λ⊥ and λ̃⊥ represent interchain couplings between gapless Majorana fermions at the critical point, which follow from t⊥(j − j′) according to Eq. (15).] Focusing again on the weakly coupled chain limit, we first incorporate hybridization within each trench. The simplest intra-chain perturbation consistent with the symmetries of the problem tunnels right- and left-moving electrons between neighboring domain walls and reads as in Eq. (8) [106]. [This is just a discrete version of the kinetic energy in Eq. (1).] The x coordinate in the argument of fR/L, usually left implicit, has been explicitly displayed since it is now crucial. We define the real couplings appearing above as λj ≡ λ∆ for j even and λj ≡ λt for j odd. Physically, λ∆ and λt respectively arise from coupling adjacent pairing- and tunneling-gapped regions [see Fig. 3(a)], and thus clearly need not be identical. We assume, however, that λ∆, λt ≥ 0. According to Eq. (7), projection of Hintra into the low-energy manifold spanned by the Majorana operators is achieved (up to an unimportant overall constant that we will neglect) by replacing fR(xj, y) → γj(y), fL(xj, y) → i(−1)^j γj(y) (9). This projection yields the effective Hamiltonian for the decoupled trenches given in Eq. (10), which is equivalent to N independent Kitaev chains [59]. The transition separating these 1D phases arises when λ∆ = λt. Viewed in terms of superconductors, this limit corresponds to the situation where the chemical potential for the cj fermions is fine-tuned to the bottom of the band, so that gapless bulk excitations remain at zero momentum despite the p-wave pairing. As in the preceding subsection we will concentrate on this transition point, since here even weak inter-trench coupling (to which we turn shortly) can qualitatively affect the physics. When λ∆ = λt one can solve either Eq. (10) directly, or the equivalent superconducting problem, by diagonalizing the Hamiltonian in momentum space. This exercise shows that at criticality right- and left-moving Majorana fields γR/L(y) form the relevant low-energy degrees of freedom, precisely as in the uniform-trench construction examined earlier. Moreover, these continuum fields relate to the lattice Majorana operators via Eq. (11). Using Eq. (11) to rewrite Eq. (10) and taking the continuum limit yields Eq. (12), where the velocity ṽ follows from the tunnelings in Eq. (10) and m̃ ∝ λ∆ − λt reflects small deviations away from criticality. Note that Eq. (12) exhibits a structure identical to the intra-chain terms in Eq. (6), which were derived for spatially uniform trenches. The appearance of common physics near criticality in the two setups is quite natural; indeed, in a coarse-grained picture appropriate for the critical point, the spatial modulations in the trenches are effectively blurred away. One can now readily restore weak coupling between neighboring trenches. Consider the inter-trench Hamiltonian of Eq. (13), which encodes generic electron hoppings from the bottom of domain wall j in one trench to the top of domain wall j′ in the trench just below. We have assumed that the tunneling strengths t⊥(j − j′) are real and depend only on the spacing j − j′ between domain walls. These hoppings should be reasonably short-ranged as well; see Fig. 3(a) for examples of significant processes. Since we are interested in weak interchain coupling near criticality, it is useful to filter out high-energy physics, employing Eqs.
(9) and (11) to project H⊥ onto the low-energy manifold [Eq. (14)]. The coupling constants here, defined in Eq. (15), importantly differ in magnitude unless fine-tuned. The full low-energy theory describing our weakly coupled, spatially modulated trenches is Heff = Hintra + H⊥, with the terms on the right side given in Eqs. (12) and (14). When λ̃⊥ = 0 this effective Hamiltonian is essentially identical to Eq. (6) [107]. The phase diagram thus mimics that of the uniform-trench case, and can again be inferred by considering extreme cases. When the mass term m̃ ∝ λ∆ − λt dominates over all other couplings, we obtain superconducting states that smoothly connect to the decoupled-chain limit; the cases λ∆ < λt and λ∆ > λt respectively correspond to the trivial and 'trivial*' phases discussed in the previous subsection. If instead λ⊥ dominates, then the interchain coupling gaps out all Majorana fields in the bulk, but leaves a gapless right-mover at the top edge and a gapless left-mover at the bottom edge. This is the spinless p + ip superconducting phase that supports Ising anyons. Finally, by examining Eq. (14) we see that when λ̃⊥ provides the leading term we simply obtain a spinless p − ip superconductor with gapless edge states moving in the opposite direction. All of these phases exhibit a bulk gap; the transition between them occurs when |m̃| = |λ⊥ − λ̃⊥|, at which point this gap closes. Figure 3(b) illustrates the corresponding phase diagram. It is worth stressing that when the trenches are each tuned to criticality (so that m̃ = 0), interchain coupling generically drives the system to either the p + ip or p − ip phase, since λ⊥ − λ̃⊥ vanishes only with fine-tuning. To summarize, in this section we have shown that depositing superconducting islands (either uniformly or nonuniformly) within integer quantum Hall trenches allows one to access nontrivial 2D superconducting states supporting Ising anyons. This outcome emerges quite naturally from weak interchain perturbations when the individual trenches are tuned to criticality, which can be traced to the fact that time-reversal symmetry is absent and the carriers in the quantum Hall fluid derive from a single fermionic species. So far the weakly coupled chain approach was convenient but by no means necessary, since this section dealt only with free fermions. One can readily verify, for instance, that the Ising anyon phases we captured survive well away from this regime and persist even in an isotropic system. The remainder of this paper treats analogous setups where the ν = 1 state is replaced by a strongly correlated fractional quantum Hall fluid. Throughout, numerous parallels will arise with the simpler treatment described here. We should point out that in the fractional case the weakly coupled chain approach provides the only analytically tractable window at our disposal, though we similarly expect isotropic relatives of the physics we capture to exist there as well.
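To make the Kitaev-chain mapping of Eq. (10) concrete, the sketch below (our own illustration, not code from the paper) diagonalizes a finite open chain of Majorana modes with couplings alternating between λ∆ and λt. Whichever coupling is weaker on the terminating bonds leaves a pair of exponentially split near-zero end modes, and the bulk gap closes at the critical point λ∆ = λt; the overall normalization and the convention that the chain terminates on λ∆ bonds are our assumptions.

```python
import numpy as np

def majorana_spectrum(lam_delta, lam_t, n_pairs=40):
    """Energies of H = i * sum_j lam_j gamma_j gamma_{j+1} on an open chain,
    with lam_j alternating: lam_delta on bonds (1,2), (3,4), ... and
    lam_t on bonds (2,3), (4,5), ..."""
    n = 2 * n_pairs                          # number of Majorana operators
    A = np.zeros((n, n))
    for j in range(n - 1):
        lam = lam_delta if j % 2 == 0 else lam_t
        A[j, j + 1], A[j + 1, j] = lam, -lam  # skew-symmetric coupling matrix
    # iA is Hermitian; its eigenvalues come in +/- pairs
    return np.sort(np.abs(np.linalg.eigvalsh(1j * A)))

for ld, lt in [(1.0, 0.4), (1.0, 1.0), (0.4, 1.0)]:
    e = majorana_spectrum(ld, lt)
    # near-zero end modes appear when the terminating (lam_delta) bonds are weak
    print(f"lam_delta={ld}, lam_t={lt}: lowest energies {e[0]:.2e}, {e[1]:.2e}")
```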
III. OVERVIEW OF Z3 PARAFERMION CRITICALITY

One useful way of viewing Sec. II B is that we dissected a ν = 1 quantum Hall system to construct a non-local representation of the transverse-field Ising model, i.e., a Majorana chain. In preparation for treating the more theoretically challenging ν = 2/3 fractionalized setup, here we review an analogous Z3-invariant chain corresponding to the three-state quantum clock model. This clock model realizes a critical point described by a Z3 parafermion conformal field theory (CFT), which provides the building blocks for the Read-Rezayi wavefunction and plays a central role in describing the edge modes of this state. Studying the chain will enhance our understanding of the symmetries, phase structure, and perturbations of this CFT. Furthermore, much of the groundwork necessary for our subsequent quantum Hall analysis will be developed here.

The Z3 quantum clock model consists of a chain of three-component 'spins'. Here we assume an infinite number of sites (to avoid subtleties with boundary conditions) and define operators σ_j and τ_j that act nontrivially on the three-dimensional Hilbert space capturing the spin at site j. These operators satisfy a generalization of the Pauli matrix algebra, Eq. (16), while all other commutators aside from the last line of Eq. (16) are trivial: [σ_j, τ_{j′≠j}] = [σ_j, σ_{j′}] = [τ_j, τ_{j′}] = 0. It follows that σ_j and τ_j can point in three inequivalent directions separated by an angle of 2π/3, similar to a clock hand that takes on only three symmetric orientations. Noncommutation of these operators implies that τ_j 'winds' σ_j and vice versa. In other words, each operator can be represented by a matrix with eigenvalues 1, e^{i2π/3}, e^{−i2π/3}, but one cannot simultaneously diagonalize σ_j and τ_j. The simplest quantum clock Hamiltonian, Eq. (17), bears a structure similar to the transverse-field Ising model; we assume couplings J, h ≥ 0. This 1D Hamiltonian can be obtained by taking an anisotropic limit of the 2D classical three-state Potts model, and so the two share essentially identical physical properties.

The quantum clock model in Eq. (17) exhibits the useful property of non-local duality symmetry. Indeed, upon introducing dual operators μ_j and ν_j that satisfy the same relations as in Eq. (16) with σ_j → μ_j and τ_j → ν_j, the Hamiltonian takes on an identical form with h and J interchanged.
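The algebra and Hamiltonian referenced above do not survive in the extracted text; they presumably take the standard three-state clock form (a hedged reconstruction consistent with the surrounding discussion, not a verbatim quote of the source's equations):

```latex
% Hedged reconstruction of Eqs. (16)-(17), standard clock conventions
\sigma_j^3=\tau_j^3=1,\qquad \sigma_j^\dagger=\sigma_j^2,\qquad
\tau_j^\dagger=\tau_j^2,\qquad
\sigma_j\tau_j = e^{i2\pi/3}\,\tau_j\sigma_j, \tag{16}
H = -J\sum_j\bigl(\sigma_j^\dagger\sigma_{j+1} + \mathrm{H.c.}\bigr)
  - h\sum_j\bigl(\tau_j + \tau_j^\dagger\bigr). \tag{17}
```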
Equation (17) additionally exhibits a number of other symmetries that play an important role in our analysis. Spatial symmetries include simple lattice translations T_x and parity P (which sends σ_j → σ_{−j} and τ_j → τ_{−j}). The model also preserves a Z3 transformation (σ_j → e^{i2π/3} σ_j) and a corresponding dual operation Z3^dual (μ_j → e^{i2π/3} μ_j). Finally, there exists a time-reversal symmetry T that squares to unity (σ_j → σ_j, τ_j → τ_j†) and a charge-conjugation symmetry C that flips the sign of the Z3 charge carried by the clock model operators (σ_j → σ_j†, τ_j → τ_j†).

Like the closely related transverse-field Ising model, the clock Hamiltonian supports two symmetry-distinct phases. When J dominates, a ferromagnetic phase emerges with ⟨σ_j⟩ ≠ 0, thus spontaneously breaking Z3; increasing h drives a transition to a paramagnetic state that in dual language yields ⟨μ_j⟩ ≠ 0 and a broken Z3^dual. Hence one can view ⟨σ_j⟩ as an order parameter and ⟨μ_j⟩ as a 'disorder parameter'. Duality implies that the phase transition occurs at the self-dual point J = h, and indeed the exact solution shows that this point is critical [108]. The scaling limit of the self-dual clock Hamiltonian is described by a Z3 parafermion (or, equivalently, three-state Potts) CFT [109], whose content we discuss further below.

We will describe in the next section a new physical route to this CFT. In particular, our approach uses ν = 2/3 quantum Hall states to construct a chain of Z3 generalized Majorana operators that arise from the clock model via a 'Fradkin-Kadanoff' transformation [110]. This transformation (which is analogous to the more familiar Jordan-Wigner mapping in the transverse-field Ising chain) also lends useful intuition for the physical meaning of parafermion fields, as we will see. The Fradkin-Kadanoff transformation in the clock model allows for two closely related forms of these Z3 generalized Majorana operators, given in Eqs. (20a) and (20b), which differ only in the string of operators encoded in the disorder parameter μ_j. Note that when applying a Jordan-Wigner transformation to the Ising chain there is no such freedom, since there the string is Hermitian. These operators satisfy Eq. (21) for A = R/L, similar to the clock operators from which they derive. Because of the strings, however, they exhibit the non-local commutation relations of Eq. (22). Equations (21) and (22) constitute the defining properties of the Z3 generalized Majorana operators that will appear frequently in this paper. By using the labels L and R we have anticipated the identification of these operators with left- and right-moving fields in the CFT. On the lattice, however, α_{R,j} and α_{L,j} are not independent, as one can readily verify from Eq. (23). Despite this redundancy, it is nevertheless very useful to consider both representations, since α_{R,j} and α_{L,j} transform into one another under parity P and time-reversal T.
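To make the Fradkin-Kadanoff construction concrete, here is a small self-contained check (our illustration; the particular string convention and overall phases are assumptions, since the source's Eqs. (20a)-(20b) are not reproduced above) that operators of the form α_{2j−1} = (∏_{k<j} τ_k) σ_j and α_{2j} = (∏_{k<j} τ_k) σ_j τ_j obey the Z3 generalized Majorana algebra referred to in Eqs. (21) and (22):

```python
import numpy as np
from functools import reduce

w = np.exp(2j * np.pi / 3)
sig = np.diag([1, w, w**2])                   # clock operator
tau = np.roll(np.eye(3), 1, axis=0)           # shift: tau|n> = |n+1 mod 3>
assert np.allclose(sig @ tau, w * tau @ sig)  # clock algebra of Eq. (16)

L = 3  # sites; Hilbert space dimension 3**L = 27
def at(op, j):
    return reduce(np.kron, [op if k == j else np.eye(3) for k in range(L)])

alpha = []
for j in range(L):
    string = reduce(lambda a, b: a @ b,
                    [at(tau, k) for k in range(j)], np.eye(3**L))
    alpha += [string @ at(sig, j), string @ at(sig, j) @ at(tau, j)]

for a in alpha:                               # Eq. (21)-type relations
    assert np.allclose(np.linalg.matrix_power(a, 3), np.eye(3**L))
    assert np.allclose(a.conj().T, a @ a)     # a^dagger = a^2
for i in range(len(alpha)):                   # Eq. (22)-type relations
    for j in range(i + 1, len(alpha)):
        assert np.allclose(alpha[i] @ alpha[j], w * alpha[j] @ alpha[i])
print("Z3 parafermion algebra verified on", len(alpha), "operators")
```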
In terms of α_{R,j}, the clock Hamiltonian of Eq. (17) takes the form of Eq. (24); an equivalent expression in terms of α_{L,j} follows by exploiting Eq. (23). The ferromagnetic and paramagnetic phases of the original clock model correspond here to distinct dimerization patterns for α_{R,j} (or α_{L,j}) favored by the J and h terms. On a finite chain, the symmetry-related degeneracy of the ferromagnetic phase is encoded through Z3 zero-modes bound to the ends of the system [86], similar to the Majorana end-states in a Kitaev chain [59]. The dimerization appropriate for the paramagnetic phase, by contrast, supports no such edge zero-modes, consistent with the onset of a unique ground state. In this representation, Z3 parafermion criticality arising at J = h corresponds to the limit where these competing dimerizations balance, leaving the system gapless. For the remainder of this section we provide an overview of this well-understood critical point.

The Z3 parafermion CFT has central charge c = 4/5 and is rational. One very useful property of a rational CFT is that a finite set of operators, dubbed primary fields, characterizes the entire Hilbert space. That is, all states in the Hilbert space can be found by acting with the primary fields and the (possibly extended) conformal symmetry generators on the ground state. With appropriate boundary conditions, the theory admits independent left- and right-moving conformal symmetries, and so it is useful to consider purely chiral primary fields. These fields exhibit non-local correlations; local operators are found by combining left- and right-movers in a consistent way.

When the conformal symmetry algebra is extended by a spin-3 current into the so-called 'W3 algebra' [109,111], the Z3 parafermion CFT possesses six right-moving primary fields. These consist of the identity field I_R, the chiral parts σ_R and σ_R† of the spin field, parafermion fields ψ_R and ψ_R†, and the chiral part ε_R of the 'thermal' operator ε. The left-moving sector contains an identical set of fields, labeled by replacing R with L. The CFT analysis yields the exact scaling dimensions of these operators: the chiral spin fields each have dimension 1/15, the parafermions each have dimension 2/3, while ε_{R/L} has dimension 2/5.

Perturbing the critical Hamiltonian by the thermal operator (which changes the ratio J/h away from criticality) provides a field-theory description of the clock Hamiltonian's gapped ferromagnetic and paramagnetic phases. Note that in the Ising case the thermal operator is composed of chiral Majorana fields, which also form the analogue of the parafermions ψ_{R/L}. The fact that here the parafermions and the thermal operator constitute independent fields allows for additional relevant perturbations, which in part underlies the interesting behavior we describe in this paper. More precisely, perturbing the critical Hamiltonian instead by ψ_L ψ_R + H.c. violates Z3 symmetry but still results in two degenerate ground states that are not symmetry-related [112,113]; see Sec. V A for further discussion. The analogous property in our quantum Hall setup is intimately related to the appearance of Fibonacci anyons.

All of the symmetries introduced earlier in the lattice model are manifested in the CFT. Particularly noteworthy are the Z3 and Z3^dual symmetries, whose existence is actually more apparent in the CFT due to the independence of the left- and right-moving fields. The former transformation sends ψ_A → e^{i2π/3} ψ_A and σ_A → e^{i2π/3} σ_A, where A = L or R. (As usual the conjugate fields acquire a phase e^{−i2π/3} instead.) The dual transformation Z3^dual similarly takes ψ_R → e^{i2π/3} ψ_R and σ_R → e^{i2π/3} σ_R, but alters left-movers via ψ_L → e^{−i2π/3} ψ_L and σ_L → e^{−i2π/3} σ_L. Under either symmetry the fields ε_L and ε_R remain invariant; this is required in order for the Hamiltonian to preserve both Z3 and Z3^dual for all couplings J and h.
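As a quick sanity check on the lattice side of this correspondence, the following sketch (ours; chain size and couplings are arbitrary) verifies numerically that the clock Hamiltonian of Eq. (17) commutes with the global Z3 generator ∏_j τ_j for generic J and h:

```python
import numpy as np
from functools import reduce

w = np.exp(2j * np.pi / 3)
sig = np.diag([1, w, w**2])
tau = np.roll(np.eye(3), 1, axis=0)

L, J, h = 4, 1.3, 0.7   # small open chain, arbitrary couplings
def at(op, j):
    return reduce(np.kron, [op if k == j else np.eye(3) for k in range(L)])

H = sum(-J * (at(sig, j).conj().T @ at(sig, j + 1) +
              at(sig, j + 1).conj().T @ at(sig, j)) for j in range(L - 1))
H += sum(-h * (at(tau, j) + at(tau, j).conj().T) for j in range(L))

U = reduce(np.kron, [tau] * L)    # global Z3 rotation of all clock spins
print(np.allclose(H @ U, U @ H))  # True: Z3 is a symmetry for all J, h
```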
The relation between the lattice operators and primary fields at the critical point provides valuable insight into the physical content of the CFT. Reference 114 establishes such a correspondence by appropriately matching the spin and symmetry properties carried by a given microscopic operator and the continuum fields. This prescription yields the familiar expansions of Eq. (25) for the lattice order and disorder parameters, where the ellipses denote terms with subleading scaling dimension. One can similarly express the thermal operator as in Eq. (26). Most crucial for us here is the expansion of the Z3 generalized Majorana operators [114], which will form the fundamental low-energy degrees of freedom in our quantum Hall construction; it is given in Eqs. (27a) and (27b), with a, b real constants. [The phases in the definition of α_{R/L} in Eqs. (20a) and (20b) are paramount in this lattice operator/CFT field correspondence.] These equations endow the parafermion fields with clear meaning: they represent long-wavelength fluctuations of the generalized Majorana operators at the critical point. Importantly, however, the lattice operators also admit an oscillating component involving products of σ and ε fields, which in fact yields a slightly smaller scaling dimension than the parafermion fields. In Sec. V we will use the link between ultraviolet and infrared degrees of freedom encapsulated in Eqs. (27a) and (27b) to controllably explore the phase diagram for coupled critical chains.

The physical meaning of the chiral primary fields is further illuminated by their fusion algebra, which describes how the fields behave under operator products. This property is constrained strongly, though not entirely, by commutativity, associativity, and consistency with the Z3 symmetries. Any fusion with the identity is of course trivial. As a more enlightening example, two parafermion fields obey the fusion rule ψ_R × ψ_R ∼ ψ_R† (and similarly for ψ_L). That is, the operator product of two parafermion fields contains something in the sector of the conjugate parafermion (i.e., the conjugate parafermion itself or some descendant field obtained by acting with the symmetry generators on the parafermion). This fusion is natural to expect given the properties in Eq. (21) exhibited by the lattice analogs α_{R/L,j}.
The complete set of fusion rules involving ψ_R or ψ_L reads

ψ × ψ ∼ ψ†, ψ × ψ† ∼ I, ψ × ε ∼ σ, ψ × σ ∼ σ†, ψ × σ† ∼ ε; (28)

here and below the fields in such expressions implicitly all belong to either the L or R sectors. Fusion rules for ψ†_{R/L} follow simply by conjugation or by fusing again with ψ_{R/L}. The remaining rules for fusion with σ_{R/L} are

σ × σ ∼ ψ† + σ†, σ × σ† ∼ I + ε, σ × ε ∼ ψ + σ, (29)

with those for σ†_{R/L} given by conjugation. A sum on the right-hand side indicates that two particular fields can fuse to more than one type of field, signaling degeneracies. Finally, the chiral part of the thermal operator exhibits a 'Fibonacci' fusion rule,

ε × ε ∼ I + ε. (30)

Equation (30) is especially important: it underlies why the 'decorated' fractional quantum Hall setup to which we turn next yields Fibonacci anyons with universal non-Abelian statistics. (To be precise, we reserve ε and I for CFT operators; the related Fibonacci anyon and trivial particle that appear in the forthcoming sections will be respectively denoted ε and 1.)

Our goal now is to illustrate how one can engineer the non-local representation of the clock model in Eq. (24), and with it a critical point described by Z3 parafermion CFT, using edge states of a spin-unpolarized ν = 2/3 system in the so-called (112) state. As a primer, Sec. IV A begins with an overview of the edge theory for this quantum Hall phase (see Ref. 115 for an early analysis). Section IV B then constructs Z3 generalized Majorana zero-modes from counterpropagating sets of ν = 2/3 edge states, while Sec. IV C hybridizes these modes along a 1D chain to generate Z3 parafermion criticality. Results obtained here form the backbone of our coupled-chain analysis carried out in Sec. V. Note that much of the ensuing discussion applies also to the bosonic (221) state with minor modifications; this bosonic setup will be briefly addressed later in Secs. V D and VI.

A. Edge theory

Edge excitations at the boundary between a spin-unpolarized ν = 2/3 droplet and the vacuum can be described with a two-component field φ(x) = (φ↑(x), φ↓(x)), where x is a coordinate along the edge and the subscripts indicate physical electron spin. In our conventions φ_α(x) is compact on the interval [0, 2π); hence physical operators involve either derivatives of φ or take the form e^{i l·φ} for some integer vector l. Commutation relations between these fields follow from an integer-valued K-matrix that encodes the charge and statistics of the allowed quasiparticles in the theory [116]; for the case of interest here it is specified in Eq. (31). The term involving the Pauli matrix σ^y corresponds to a Klein factor, as discussed below. Since det K < 0, the ν = 2/3 edge supports counterpropagating modes; these can be viewed, roughly, as ν = 1 and ν = 1/3 modes running in opposite directions.

In terms of the 'charge vector' q = (1, 1), the total electron density for the edge is q·∂_x φ/(2π). Since we are dealing with an unpolarized state, it is also useful to consider the density of electrons with a definite spin α = ↑, ↓, which is given by Eq. (33). Equations (31) and (33) allow one to identify spin-α electron operators as in Eq. (34). Indeed, these operators add one unit of electric charge and satisfy appropriate anticommutation relations (note that anticommutation between ψ↑ and ψ↓ requires the Klein factor introduced above). One can further, with the aid of Eq.
(33), define a Hamiltonian incorporating explicit density-density interactions, Eq. (35), where V_αβ is a positive-definite matrix describing screened Coulomb interactions and the ellipsis denotes all other allowed quasiparticle processes.

These preliminary definitions allow us to readily treat the following more interesting setup. Suppose that one carves out a long, narrow trench from the system as sketched in Fig. 4, thus generating two identical (but oppositely oriented) sets of ν = 2/3 edge states in close proximity to each other. To describe this 'doubled' edge structure we employ fields φ_1 = (φ_{1↑}, φ_{1↓}) for the top side of the trench and φ_2 = (φ_{2↑}, φ_{2↓}) for the bottom. The corresponding electron densities for spin α are defined as in Eq. (36), while the commutation relations are given in Eq. (37). (The relative minus sign for the density on the bottom side of the trench, along with the commutation relations above, can be understood by viewing φ_1 and φ_2 as essentially the same fields connected at the right end of the trench.) It follows that the electron operators for the top and bottom sides of the trench are given, respectively, by Eq. (38). Similarly to Eq. (35), one can express the Hamiltonian for the edge interface as in Eq. (39). Of crucial importance here are the additional terms present in δH.

Since the interface carries identical sets of counterpropagating modes, it is always possible for perturbations to gap out the edges entirely. Here we will invoke two physically distinct gapping mechanisms, similar to our earlier ν = 1 setup: (i) spin-conserving electron tunneling across the interface and (ii) spin-singlet Cooper pairing of electrons on opposite sides of the trench, mediated by an s-wave superconductor. These processes are schematically illustrated in Fig. 4 and lead to the perturbations of Eq. (40), where t and Δ are the tunneling and pairing amplitudes. It is important to emphasize that in this setup tunneling and pairing of fractional charges across the trench is not possible; such processes are unphysical since the intervening region separating the top and bottom sides by assumption supports only electronic excitations. Later, however, we will encounter edges separated by ν = 2/3 quantum Hall fluid, and in such a geometry inter-edge fractional charge tunneling can arise.

Before discussing the fate of the system in the presence of the couplings in δH, it is useful to introduce a basis change to the charge- and spin-sector fields of Eq. (41). Here ρ_+ = ∂_x θ_ρ/π and S_+ = ∂_x θ_σ/π respectively denote the total edge electron density and spin density, while ρ_- = ∂_x φ_ρ/π and S_- = ∂_x φ_σ/π are respectively the difference in electron density and spin density between the top and bottom sides of the trench. Equations (37) imply that the only nontrivial commutation relations amongst these fields are those of Eq. (42), where Θ is the Heaviside step function. (Contrary to the first two lines, the third is nontrivial only because of Klein factors.) In this basis δH takes the simple form of Eq. (43).

The scaling dimensions of the operators above depend on the matrix V_{aα;bβ} in Eq. (39) specifying the edge density-density interactions. In the simplest case V_{aα;bβ} = vδ_{ab}δ_{αβ}, both the tunneling and pairing terms have scaling dimension 2 and hence are marginal (to leading order). Following Ref.
117, we have verified that upon tuning V_{aα;bβ} away from this limit, t and Δ can be made simultaneously relevant. Hereafter we assume that both terms can drive an instability, either because they are explicitly relevant or because they possess 'order one' bare coupling constants. Suppose first that inter-edge tunneling dominates. In terms of integer-valued operators M, m, this coupling pins θ_σ and θ_ρ so as to minimize the energy, as specified in Eq. (44), thus fully gapping the charge and spin sectors. Note that both fields are simultaneously pinnable since θ_σ and θ_ρ commute with each other. If the pairing term dominates, however, a gap instead arises from the pinning of Eq. (45), where n is another integer operator. Both fields are again simultaneously pinnable, but note that Eqs. (44) and (45) cannot be simultaneously fulfilled in the same region of space, since [θ_ρ(x), φ_ρ(x′)] ≠ 0. Consequently, the tunneling and pairing terms compete with one another [118]. The physics is directly analogous to the competing ferromagnetic and superconducting instabilities in a quantum spin Hall edge; there, domain walls separating regions gapped by these different means bind Majorana zero-modes [65]. Due to the fractionalized nature of the ν = 2/3 host system, in the present context domain walls generate more exotic zero-modes (as in Refs. 58, 79-81, 83-85, 119, and 120) that will eventually serve as our building blocks for a Z3 parafermion CFT.

B. Z3 zero-modes

As an incremental step towards this goal, we would now like to capture these zero-modes by studying an infinite array of long domains alternately gapped by tunneling and pairing, as displayed in Fig. 5 [121]; note the similarity to the integer quantum Hall setup analyzed in Sec. II B. (For illuminating complementary perspectives on this problem see the references cited at the end of the previous paragraph.)

FIG. 5. Schematic of a spin-unpolarized ν = 2/3 system hosting a trench in which the edge modes are alternately gapped by electron backscattering t and Cooper pairing Δ. The integer operators m_i and n_i in each domain characterize the pinning of the charge-sector fields as specified in Eqs. (44) and (45). Physically, m_i − m_{i−1} quantifies the total charge (top plus bottom) Q^+_i on the intervening superconducting-gapped region, while n_{i+1} − n_i quantifies the charge difference (top minus bottom) Q^-_i on the intervening tunneling-gapped segment. The remaining low-energy physics is captured by Z3 generalizations of Majorana operators α_{R/L,j} bound to each domain wall, labeled as above. These operators cycle the values of Q^±_i on the domains by adding charge 2e/3 (mod 2e) to the top and bottom trench edges, as illustrated in the figure. Charge-2e/3 tunneling between neighboring domain walls hybridizes these modes and can be described by a 1D Hamiltonian [Eq. (57)] intimately related to the three-state quantum clock model. The critical point of this Hamiltonian, as in the clock model context, is described by Z3 parafermion conformal field theory.

In each tunneling- and pairing-gapped segment the fields are pinned according to Eqs. (44) and (45), respectively. Since θ_σ is pinned everywhere, in the ground-state sector the integer operator M takes on a common value throughout the trench. (Nonuniformity in M requires energetically costly twists in θ_σ.) Conversely, the pinning of θ_ρ and φ_ρ is described by independent operators m_j and n_j in different domains; see Fig. 5 for our labeling conventions. The commutation relations between the integer operators follow from Eqs. (42), which yield Eq. (46); all other commutators vanish.
The zero-mode operators of interest can be obtained from quasiparticle operators e^{i(l₁·φ₁ + l₂·φ₂)} acting inside of a domain wall, simply by projecting into the ground-state manifold. To project nontrivially, the dependence on the field φ_σ must drop out, since e^{iφ_σ} creates a kink in θ_σ that costs energy. This condition is satisfied provided the integer vectors obey Eq. (47). Projection of the remaining fields is achieved by replacing θ_σ, θ_ρ, and φ_ρ by their pinned values on the adjacent domains. The complete set of projected quasiparticle operators obeying Eq. (47) can be generated by a pair of operators of the form e^{i l₁·φ₁} and e^{i l₂·φ₂}; crucially, these correspond to charge-2e/3 quasiparticle operators acting on the top and bottom edges of the trench, respectively. Suppose that P is the ground-state projector while x_j denotes a coordinate inside of domain wall j. We then explicitly obtain Eq. (48), where on the right side we have inserted phase factors for later convenience and defined the Z3 generalized Majorana zero-mode operators of Eq. (49). There the labels R/L denote whether a given zero-mode operator adds charge 2e/3 (mod 2e) to the top or bottom edge. The importance of the spatial separation between α_{R,j} and α_{L,j} evident here is hard to overstate and will prove exceedingly valuable in the following section. Equation (46) implies that the Z3 zero-mode operators in our quantum Hall setup satisfy precisely the properties in Eqs. (21) through (23) introduced in the quantum clock model context. Once again α_{R,j} and α_{L,j} are not independent, but as we will see, describing physical processes for coupled trenches in a simple way requires retaining both representations because of their spatial separation.

The Z3 zero-modes encode a ground-state degeneracy that admits a simple physical interpretation. First we note that gauge-invariant quantities involve differences of the m_j or n_j operators on different domains. Consider then the quantity A(x − x′) = e^{iπ ∫_{x′}^{x} dx″ ρ_+(x″)} = e^{i[θ_ρ(x) − θ_ρ(x′)]}, where again ρ_+ = ∂_x θ_ρ/π denotes the total density. If x and x′ straddle a pairing-gapped domain in which n_j is pinned, then Eq. (44) yields the ground-state projection of Eq. (51). Hence this quantity specifies the total charge (mod 2e) on the pairing-gapped segment. A comparison with the more familiar case of Majorana zero-modes along a quantum spin Hall edge is useful here. In that context the Majoranas encode a two-fold degeneracy between even- and odd-parity ground states of a superconducting-gapped region of the edge. Here the physics is richer: a superconducting segment of the ν = 2/3 interface supports ground states with charge 0, 2e/3, or 4e/3 (mod 2e). From the density difference ρ_- = ∂_x φ_ρ/π between the top and bottom edges of the trench one can similarly define a second gauge-invariant quantity. With x and x′ now straddling an m_j-pinned tunneling-gapped region, one obtains Eq. (53). We thus see that this object represents the charge difference (again mod 2e) across the trench in a tunneling-gapped region, which can also take on three distinct values. If desired, one can use these definitions to express m_j and n_j in terms of the charges; these forms can then be used to rewrite the Z3 zero-mode operators of Eq. (49) in terms of physical quantities.
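Collecting the statements above and in the caption of Fig. 5, the charges measured by these gauge-invariant quantities presumably take the following form (a hedged reconstruction; the prefactor convention is ours):

```latex
% Charge on a pairing-gapped segment and charge difference on a
% tunneling-gapped segment, each defined mod 2e and taking three values
Q^+_i = \frac{2e}{3}\,(m_i - m_{i-1}) \bmod 2e, \qquad
Q^-_i = \frac{2e}{3}\,(n_{i+1} - n_i) \bmod 2e .
```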
To avoid overcounting the degeneracy, observe that due to the nontrivial commutator in Eq. (46) one can specify either the total charge Q^+_j on each superconducting segment or the charge difference Q^-_j on each tunneling-gapped region, but not both simultaneously. Consequently, there exist three ground states per pair of domain walls (neglecting possible Hilbert space constraints), yielding a quantum dimension of √3 associated with each zero-mode [122]. The action of the zero-mode operators on a given initial state alters Q^±_j by integer multiples of 2e/3, thereby allowing one to cycle through the entire ground-state manifold. More precisely, Q^+_j is modified according to Eq. (54) for k = 2j − 1 or 2j, while Q^-_j changes according to Eq. (55) for k = 2j or 2j + 1. (At other values of k the zero-modes do not affect Q^±_j.) Notice that α_{R,k} and α_{L,k} increment the charge difference Q^-_j in opposing directions, because they add quasiparticles to opposite sides of the trench.

One can now intuitively understand why two nontrivial R/L representations exist for the Z3 zero-modes, whereas the Majorana operators γ_j discussed in Sec. II B are uniquely defined up to a sign. For concreteness, let us work in a basis where the ground states are labeled by the set of charges {Q^+_j} on the superconducting regions. The key point is that in the fractional quantum Hall case there are two physically distinct processes that transform the system from one such ground state to another. Namely, the total charge on a given superconducting segment can be incremented by adding fractional charge either to the upper or to the lower trench edge. This distinction is meaningful since fractional charge injected at one edge cannot pass to the other, because only electrons can tunnel across the trench. These two processes are implemented precisely by α_{R,j} and α_{L,j}, as illustrated in Fig. 5. By contrast, in the integer quantum Hall case no such distinction exists. The Majorana operators add one unit of electric charge (mod 2e), which can readily meander across the trench, so that their representation is essentially unique.

Finally, we note a curious feature implicit in the zero-modes and ground states: although a ν = 2/3 edge supports charge-e/3 excitations, they are evidently frozen out in the low-energy subspace in which we are working. The doubling of the minimal charge arises because the spin sector is uniformly gapped throughout the trench. Charge-e/3 excitations must therefore come in opposite-spin pairs to circumvent the spin gap. As a corollary, one cannot define an electron operator in the projected Hilbert space, since charge-e excitations are absent for the same reason. This explains the Z3 structure arising in the theory, along with the difference from the Z6 structure found in related studies of ν = 1/3 Laughlin states [79-81,85].

C. Z3 parafermion criticality

Imagine now that the size of each domain shrinks so that quasiparticle tunneling between neighboring domain walls becomes appreciable. Such processes lift the ground-state degeneracy described above and can be modeled by the effective Hamiltonian of Eq. (56), with J_Δ, J_t > 0.
The first term reflects a fractional Josephson coupling between adjacent superconducting segments [59,79-81], mediated by charge-2e/3 tunneling across the intervening tunneling-gapped region. This favors pinning m_j to uniform values in all superconducting regions, resulting in Q^+_j = 0 throughout. Similarly, the second (competing) term represents a 'dual fractional Josephson' coupling [123-126] favoring uniform n_j in tunneling-gapped regions and hence Q^-_j = 0. In terms of the generalized Majorana operators defined in Eq. (49), the effective Hamiltonian becomes Eq. (57), which exhibits precisely the same form as the Fradkin-Kadanoff representation of the quantum clock model in Eq. (24). The connection to the quantum clock model can be further solidified by considering how the various symmetries present in the former are manifested in our ν = 2/3 setup. Appendix A discusses this important issue and shows that all of these symmetries in fact have a transparent physical origin (including the time-reversal operation T that squares to unity). To streamline the analysis, we have defined the generalized Majorana operators in Eq. (49) such that under each symmetry they transform identically to those defined in the clock model.

The Z3 and Z3^dual transformations, given in Eqs. (58a) and (58b), warrant special attention. Clearly the Hamiltonian in Eq. (57) preserves both operations. In our quantum Hall problem these symmetries relate to physical electric charges. More precisely, they reflect global conservation of 'triality' operators, which generalize the notion of parity and take on three distinct values. The trialities respectively constitute conserved Z3 and Z3^dual quantities that specify (mod 2e) the sum and difference of the total electric charge on each side of the trench. According to Eqs. (58a) and (58b), α_{R,j} and α_{L,j} carry the same Z3 charge but opposite Z3^dual charge; this is sensible given that these operators increment the charge on opposite trench edges [see also Eqs. (54) and (55)].

The correspondence with the clock model allows us to directly import results from Sec. III to the present setup. Most importantly, we immediately conclude that the limit J_Δ = J_t realizes a self-dual critical point described by a Z3 parafermion CFT. Furthermore, at the critical point the primary fields relate to the lattice operators through Eqs. (27a) and (27b), repeated for clarity as Eqs. (60a) and (60b).

An important piece of physics that is special to our ν = 2/3 setup is worth emphasizing here. First we note that ε_A, with A = R or L, represents an electrically neutral field that modifies neither the total charge nor the charge difference across the trench. This can be understood either from the fusion rule ε × ε ∼ 1 + ε (which implies that ε_A carries the same trivial charge as the identity) or by recalling from Sec. III that ε_{R/L} remains invariant under both Z3 and Z3^dual. It follows that ψ_{R/L} and σ_{R/L} must carry all of the physical charge of the lattice operators α_{R/L,j}. That is, like their lattice counterparts, ψ_R and σ_R add charge 2e/3 to the top edge of the trench, while ψ_L and σ_L add charge 2e/3 to the bottom trench edge. In this sense the ψ and σ fields inherit the spatial separation exhibited by α_{R/L,j}. The next section explores stacks of critical chains, and there this property will severely restrict the perturbations that couple fields from neighboring chains, ultimately enabling us to access a superconducting analogue of the Read-Rezayi state in a rather natural way.
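Since Eq. (57) maps onto the clock chain, its phase structure can be illustrated by directly diagonalizing the clock model itself. The sketch below (our illustration, with arbitrary small sizes and couplings; expected patterns are hedged, since finite-size effects blur them) displays the three nearly degenerate ground states of the ferromagnetic phase on an open chain (split only by an amount exponentially small in system size, reflecting the Z3 end zero-modes), the unique gapped ground state of the paramagnetic phase, and the comparatively small finite-size gap at the self-dual point J = h:

```python
import numpy as np
from functools import reduce

w = np.exp(2j * np.pi / 3)
sig = np.diag([1, w, w**2])
tau = np.roll(np.eye(3), 1, axis=0)

def clock_spectrum(L, J, h, nlev=4):
    def at(op, j):
        return reduce(np.kron, [op if k == j else np.eye(3) for k in range(L)])
    H = sum(-J * (at(sig, j).conj().T @ at(sig, j + 1) +
                  at(sig, j + 1).conj().T @ at(sig, j)) for j in range(L - 1))
    H += sum(-h * (at(tau, j) + at(tau, j).conj().T) for j in range(L))
    E = np.linalg.eigvalsh(H)
    return np.round(E[:nlev] - E[0], 5)   # low-lying levels above ground state

for J, h, label in [(1.0, 0.2, "ferromagnetic"),
                    (1.0, 1.0, "self-dual (critical)"),
                    (0.2, 1.0, "paramagnetic")]:
    print(f"{label:22s}", clock_spectrum(L=6, J=J, h=h))
```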
V. FIBONACCI PHASE: A SUPERCONDUCTING ANALOGUE OF THE Z3 READ-REZAYI STATE

Consider now the geometry of Fig. 6(a), in which a spin-unpolarized ν = 2/3 quantum Hall system hosts an array of N trenches of the type studied in Sec. IV. Edge excitations on the top and bottom of each trench can similarly be described with fields φ_{1α}(x, y) and φ_{2α}(x, y), where α denotes spin, x is a coordinate along the edges, and y = 1, . . ., N labels the trenches. In the charge- and spin-sector basis defined in Eqs. (41), the nontrivial commutation relations now read as in Eq. (61). For y = y′ one simply recovers Eqs. (42). The additional commutators for y ≠ y′ ensure proper anticommutation relations between electron operators acting at different trenches but play no important role in our analysis. We assume that the sets of counterpropagating edge modes opposite each trench are alternately gapped by the Cooper pairing and electron backscattering mechanisms discussed in Sec. IV. At low energies the pinning of the charge- and spin-sector fields in each gapped region is again described by Eqs. (44) and (45). Using the labeling scheme in Fig. 6(a), we respectively denote the integer operators characterizing θ_σ, θ_ρ, and φ_ρ in a given domain by M(y), m_j(y), and n_j(y).

The remaining low-energy degrees of freedom for the system are captured by Z3 generalized Majorana operators α_{R/L,j}(y) bound to the domain walls; these are defined precisely as in Eq. (49) upon appending a trench label y to each operator. In the spirit of Ref. 95, we are interested in the situation where these modes hybridize strongly with their neighbors inside a given trench, and secondarily with neighbors from adjacent trenches. Just as for the Majorana case discussed in Sec. II, this weakly coupled chain approach allows us to utilize the formalism developed for a single trench in Sec. IV to access nontrivial 2D phases.

Let the effective Hamiltonian describing this setup be H = H_intra + H⊥. The first term incorporates interactions between Z3 generalized Majorana operators within each trench and essentially reflects N copies of the Hamiltonian in Eq. (57).

FIG. 6. (a) Array of N trenches in a spin-unpolarized ν = 2/3 fluid, with the fields in each gapped domain pinned as in Eqs. (44) and (45). We assume that the Z3 generalized Majorana operators bound to each domain wall hybridize strongly within a trench and weakly between neighboring trenches. Underlying this hybridization is tunneling of 2e/3 charges, which can take place only through the fractional quantum Hall fluid; examples of allowed and disallowed processes are illustrated above. (b) Phase diagram for this system of weakly coupled chains starting from the limit where each chain is tuned to a critical point described by Z3 parafermion conformal field theory. The couplings λ_{a/b} represent interchain perturbations defined in Eq. (67).

Here J_Δ and J_t denote superconducting and 'dual' fractional Josephson couplings, respectively, mediated by charge-2e/3 tunneling across the domains. Interchain couplings are encoded in H⊥ and similarly arise from tunneling of fractional charges between adjacent trenches. Consider, for example, the perturbations

ζ_{jj′} e^{−i l·φ₂(x_j, y)} e^{i l·φ₁(x_{j′}, y+1)} + H.c. (65)
with l = (1, 1) and x_k corresponding to a coordinate in domain wall k in a given chain. These terms transfer charge 2e/3 between the top edge of domain wall j′ on trench y + 1 and the bottom edge of domain wall j on trench y. Such processes are indeed physical, since the intervening quantum Hall fluid supports fractionalized excitations. As emphasized earlier, 2e/3 tunneling across a trench is, by contrast, not permitted, since the charge would necessarily pass through trivial regions that support only electrons. For instance, hopping of charge-2e/3 quasiparticles from the bottom edge of trench y + 1 to the top edge of trench y is disallowed for this reason. Figure 6(a) schematically illustrates such physical and unphysical processes. Symmetry partially constrains the tunneling coefficients ζ_{jj′} in Eq. (65). Specifically, enforcing charge conjugation C (up to a Z3^dual transformation) allows one to take ζ_{jj′} = e^{i2π/3} ζ*_{jj′}. We will further assume for simplicity that ζ_{jj′} depends only on j − j′, i.e., that the coupling strength between domain walls on adjacent chains depends only on their separation. The explicit dependence of ζ_{jj′} on this separation is set by microscopic details but should of course be appropriately short-ranged.

The action of Eq. (65) in the low-energy manifold can be deduced by projecting onto the Z3 generalized Majorana operators α_{R/L,j}(y), using a trivial extension of Eqs. (48) to the multi-chain case [127]. Using this procedure, one can show that the quasiparticle hoppings in Eq. (65) generate the interchain Hamiltonian of Eq. (66), with t_{j−j′} real. The factor of (−1)^{j+j′} appearing there reflects the alternating sign between even and odd domain walls on the right-hand side of the projection in Eqs. (48). We have chosen to display this factor explicitly to distinguish it from possible sign structure in t_{j−j′}, which encodes phases acquired by quasiparticles upon tunneling from domain wall j in one chain to j′ in another. Note also the conspicuous absence of terms that couple α†_{R,j′}(y) with α_{L,j}(y + 1); importantly, such terms are unphysical. As stressed in Sec. IV B, α_{R,j} and α_{L,j} respectively add fractionalized quasiparticles to the top and bottom edges of a given trench. Consequently, such terms would implement disallowed processes similar to that illustrated in Fig. 6(a).

Suppose that J_t = J_Δ, so that in the decoupled-chain limit each trench resides at a critical point described by a Z3 parafermion CFT. Again, this limit is advantageous since arbitrarily weak inter-trench couplings can dramatically impact the properties of the coupled-chain system. At low energies it is then legitimate to expand the lattice operators α_{R/L,j}(y) in terms of critical fields using Eqs. (60a) and (60b). Inserting this expansion into the interchain Hamiltonian yields Eq. (67), with real couplings λ_a and λ_b. Insight into the phases driven by these interchain perturbations (both of which are relevant at the decoupled-chain fixed point) can be gleaned by examining certain extreme limits. Consider first the case with λ_a = 0, λ_b ≠ 0. Since λ_b hybridizes both the right- and left-moving sectors of a given chain with those of its neighbor, we conjecture that this coupling drives a flow to a fully gapped 2D phase with no low-energy modes 'left behind'. It is unclear, however, whether this putative gapped state smoothly connects to that generated by moving each individual trench off of criticality by turning on the thermal perturbation H_T = Σ_y ∫_x λ_T ε_R(y) ε_L(y), where λ_T ∼ J_t − J_Δ. This intriguing question warrants further investigation but will not be pursued in this paper.

Instead we concentrate on the opposite limit, λ_a ≠ 0, λ_b = 0, where an immediately more interesting scenario arises. Here the parafermion fields hybridize in a nontrivial way: left-movers from chain 1 couple only to right-movers in chain 2, left-movers from chain 2 couple only to right-movers in chain 3, and so on. 'Unpaired' right- and left-moving Z3 parafermion CFT sectors thus remain at the first and last chains, respectively. The structure of this perturbation parallels the coupling that produced spinless p + ip superconductivity from critical chains in the integer quantum Hall case studied in Sec. II, and furthermore closely resembles that arising in Teo and Kane's construction of Read-Rezayi quantum Hall states from coupled Luttinger liquids [95]. In the present context, provided λ_a gaps the bulk (which requires λ_a > 0, as discussed below), the system enters a superconducting analogue of the Z3 Read-Rezayi phase that possesses edge and bulk quasiparticle content similar to its non-Abelian quantum Hall cousin. For brevity, we hereafter refer to this state as the 'Fibonacci phase'; the reason for this nomenclature will become clear later in this section.

One can deduce rough boundaries separating the phases driven by λ_a and λ_b from scaling. To leading order, these couplings flow under renormalization according to dλ_{a/b}/dl = (2 − Δ_{a/b}) λ_{a/b}, where l is a logarithmic rescaling factor and Δ_a = 4/3, Δ_b = 14/15 represent the scaling dimensions of the respective terms. The physics will be dominated by whichever of these relevant couplings first flows to strong coupling (i.e., to values of order some cutoff Λ). Equating the renormalization-group scales at which λ_{a/b} reach strong coupling yields the phase boundary of Eq. (69), with λ*_{a/b} the bare couplings at the transition. Figure 6(b) sketches the resulting phase diagram, which we expound on below.
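The boundary estimate follows from elementary integration of these flow equations; the short sketch below (ours, with an arbitrary cutoff Λ = 1) makes it explicit, giving a boundary of the form λ_b ∝ λ_a^{(2−Δ_b)/(2−Δ_a)} = λ_a^{8/5}:

```python
import numpy as np

delta_a, delta_b = 4/3, 14/15          # scaling dimensions quoted in the text
y_a, y_b = 2 - delta_a, 2 - delta_b    # RG eigenvalues: d(lambda)/dl = y*lambda
cutoff = 1.0                           # strong-coupling scale (arbitrary units)

def l_star(lam0, y):
    """RG 'time' at which lambda(l) = lam0 * exp(y*l) reaches the cutoff."""
    return np.log(cutoff / lam0) / y

print("boundary exponent y_b/y_a =", y_b / y_a)   # (16/15)/(2/3) = 8/5
for lam_a in [1e-6, 1e-4, 1e-2]:
    # bare lambda_b on the boundary: equal RG times to strong coupling
    lam_b = cutoff * (lam_a / cutoff) ** (y_b / y_a)
    assert np.isclose(l_star(lam_a, y_a), l_star(lam_b, y_b))
    print(f"lambda_a* = {lam_a:.0e}  ->  lambda_b* = {lam_b:.2e}")
```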
Naturally we are especially interested in the Fibonacci phase favored by λ_a > 0, and we flesh out its properties in the remainder of this section. We do so in several stages. First, Sec. V A analyzes the properties of a single 'ladder' consisting of left-movers from one trench and right-movers from its neighbor. As we will see, this toy problem is already extremely rich and contains seeds of the physics of the 2D Fibonacci phase. Section V B then bootstraps off of the results there to obtain the Fibonacci phase's ground-state degeneracy and quasiparticle content. The properties of superconducting vortices in this state are addressed in Sec. V C, and finally Sec. V D discusses the edge structure between the Fibonacci phase and the vacuum (as opposed to the interface with ν = 2/3 fluid).

A. Energy spectrum of a single 'ladder'

Until specified otherwise we study the critical trenches perturbed by Eq.
(67), assuming λ_b = 0. This special case allows us to obtain various numerical and exact analytical results that will be used to uncover universal topological properties of the Fibonacci phase that persist much more generally. Tractability here originates from the fact that with λ_b = 0 one can rewrite the coupled-chain Hamiltonian as H = Σ_y H^{y,y+1}_ladder, where the 'ladder' Hamiltonian involves only left-moving fields from trench y and right-movers from trench y + 1. (Non-zero λ_b clearly spoils this decomposition.) More explicitly, H^{y,y+1}_ladder can be written as in Eq. (71), with H_CFT terms describing the dynamics of the unperturbed left- and right-movers from trenches y and y + 1, respectively. Although the ladder Hamiltonians at different values of y act on completely different sectors, the problem does not quite decouple: there remains an important constraint between their Hilbert spaces which will become crucial in Sec. V B. For the rest of this subsection we explore the structure of H^{y,y+1}_ladder for a single ladder. The information gleaned here will then allow us to address the full 2D problem.

Although λ_a as defined earlier is real, it will be useful to now allow for complex values, not all of which yield distinct spectra. Because correlators in the critical theory with λ_a = 0 are non-zero only when each of the total Z3 charges is trivial, perturbing around the critical point shows that the partition function can only depend on the combinations (λ_a)³, (λ*_a)³, and |λ_a|². Thus Hamiltonians related by the mapping λ_a → e^{i2π/3} λ_a are equivalent. The physics does, however, differ dramatically for λ_a positive and negative [112,113]. For λ_a < 0 the model flows to another critical point, which turns out to fall in the universality class of the tricritical Ising model. In CFT language, this is an example of a flow between minimal models via the Φ_{1,3} operator [128]; here the flow is from a central charge c = 4/5 theory to a c = 7/10 theory. The solid lines in Fig. 7(a) correspond to λ_a values for which the ladder remains gapless. These results imply that the full coupled-chain model with λ_a < 0 and λ_b = 0 realizes a critical phase, as denoted in Fig. 6(b). For λ_a non-negative (and not with phase ±π/3) the spectrum of a single ladder is gapped. We focus on this case from now on, especially the limit of λ_a real and positive (modulo a phase of 2π/3), where the field theory is integrable [112]. These special values are indicated by dotted lines in Fig. 7(a).

Integrability provides a valuable tool for understanding the physics, as it allows one to obtain exact results for the ladder spectrum. Namely, the spectrum can be described in terms of quasiparticles with known scattering matrices and degeneracies. References 112 and 129 determined these via the indirect method of finding the simplest solution of the integrability constraints adhering to known properties of a Hamiltonian equivalent to Eq. (71). This analysis is fairly technical, using tools from the representation theory of quantum groups [130]. While this language is probably unfamiliar to most condensed-matter physicists, the results are not: they are the rules for fusing anyons! The connection between the quasiparticle spectrum and scattering matrix of a 1+1D integrable quantum field theory and the fusing and braiding of anyons in a 2+1D topological phase is explained in depth in Ref. 131. For the Z3 parafermion case of interest here, the implications of integrability are striking but quite simple to understand.
To illustrate the results it is useful to first characterize the Hilbert space of the critical clock chain reviewed in Sec. III, and then identify the (related but not identical) Hilbert space of a single ladder. Consider for the moment the familiar three-state quantum clock model. As discussed in Sec. III, the entire spectrum at the critical point can be organized into sectors labeled by the chiral primary fields. With periodic boundary conditions the allowed left- and right-moving Hilbert spaces correspond to conjugate pairs H^L_F ⊗ H^R_{F†}, where F signifies one of the six fields I, ψ, ψ†, ε, σ, σ†. Perturbing the critical clock model with a perturbation δH ∝ ∫_x (ψ†_L ψ_R + H.c.), analogous to the λ_a term in our ladder Hamiltonian, mixes these sectors, but not completely. Two decoupled sectors remain. This follows from the fusion algebra described in Sec. III: the key property here is that fusing with ψ or ψ† does not mix the first three of the six fields above with the last three. Thus when the critical clock Hamiltonian is perturbed by δH, the Hilbert space can still be divided into the 'superselection' sectors of Eq. (72).

Next we return to the ladder Hamiltonian given in Eq. (71). In this case the superselection sectors above still appear, but now the left- and right-moving Hilbert spaces correspond to different trenches. For this reason the constraints between the left- and right-movers are relaxed, resulting in sectors not present in the periodic clock chain. Specifically, there are two additional superselection sectors, given in Eq. (73), where again L and R refer to different trenches. Note that we forbid combinations such as H^L_I ⊗ H^R_ψ, which would require net fractional charge in the ν = 2/3 strip between the trenches; for a more detailed discussion see Sec. V C. The upshot of this perturbed-CFT analysis is that the Hilbert space of a single ladder splits into the four distinct sectors defined in Eqs. (72) and (73).

Exploiting integrability of the Hamiltonian in Eq. (71) at λ_a > 0 both provides an intuitive way of understanding the spectrum and reveals remarkable degeneracies among the sectors that are far from apparent a priori. One important feature is that the integrable model admits two degenerate ground states not related by any local symmetry (in fact this property survives for rather general λ_a; see below). We confirmed the presence of two ground states by analyzing the spectrum numerically in two complementary ways. The first method employed the density matrix renormalization group (DMRG) on an integrable lattice model; this analysis will be detailed elsewhere [114]. The second method utilized the truncated conformal space approach (TCSA), which directly simulates the field theory [132,133]. Here the eigenstates and operator product rules of the CFT are used to characterize the Hilbert space and the action of the perturbation on these states. By truncating the Hilbert space, one obtains a finite-dimensional matrix that can be diagonalized numerically. Results of this analysis appear in Fig.
7(c), which displays the energy E versus momentum k for three of the physical superselection sectors (the spectrum of the fourth, [ε 1], follows from that of [1 ε]). These plots clearly reveal a degeneracy between the ground states in the [1 1] and [ε ε] sectors, and a gap to all excited states. Since there is no symmetry of the fusion algebra between the identity and ε sectors, however, gapped excitations about the two ground states are not degenerate. This too is readily apparent from our TCSA numerics in Fig. 7(c).

To understand the situation more intuitively, it is useful to imagine a Ginzburg-Landau-type effective potential following Refs. 134 and 135, where the same spectrum as the ladder Hamiltonian arises (but starting from a different model). Two non-symmetry-related vacua together with the low-lying excitations can be described by a double-well potential, where the two wells have the same depth but exhibit different curvature, as in Fig. 7(b). From this effective potential, one can understand the four sectors in the ladder spectrum as follows. Two of the sectors, [1 1] and [ε ε], correspond to the degenerate minima and massive fluctuations thereabout. The different curvature of the wells leads to non-degenerate massive modes, similar to our TCSA numerical data, where [ε ε] exhibits the smaller gap. In fact, 'one-particle' states occur there, whereas the gap in the [1 1] sector is about twice as large and appears to consist of a multi-particle continuum. The remaining two sectors correspond to 'kinks' interpolating between the ground states. A kink is a field configuration where the field takes on one minimum to the left of some point in space and a different minimum to the right; the excitation energy is then localized at the region where the field changes. There are two possible configurations, related by parity, and we will label these here as kinks and antikinks. It is natural to expect that these parity conjugates occur in the [1 ε] and [ε 1] sectors. This is indeed consistent with our numerical work displayed in Fig. 7(c).

Aside from the two ground states, there exists another remarkable degeneracy between two very different quasiparticle excitations: the gap in the [ε ε] sector is the same as the minimum kink or antikink energy [112,129]. One can see this either directly from the numerics in Fig. 7(c), or from an analysis exploiting integrability. The latter shows that the kink, antikink, and 'oscillator' excitation in the [ε ε] sector exhibit identical dispersion as well. The entire spectrum is then built up from these fundamental excitations. For instance, the lowest excited states in the [1 1] sector form a two-particle continuum originating from kink/antikink pairs (as opposed to another species of single-particle excitations), consistent with the numerically determined spectrum.

Even though there are three flavors of excitations, the number of states in the spectrum with N quasiparticles actually grows more slowly than 3^N. (By 'quasiparticle' we mean a localized excitation that takes the form of either a kink, antikink, or oscillator mode.) The reason is that the spatial order in which different excitation flavors occur is constrained.
Viewing the problem in terms of the double-well potential described above, the following rules are evident. Going (say) from left to right, a kink can be followed by an antikink or an oscillator excitation; an oscillator can be followed by an antikink or another oscillator; and an antikink can only be followed by a kink. Because of these restrictions the number of states grows asymptotically with N as φ^N, where again φ ≡ (1 + √5)/2 is the golden ratio. We therefore dub the features described here the 'Fibonacci kink' spectrum.
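A quick transfer-matrix count (our illustration) confirms the φ^N growth implied by these adjacency rules:

```python
import numpy as np

# States: 0 = kink, 1 = oscillator, 2 = antikink. A[i, j] = 1 if state j
# may appear immediately to the right of state i, per the rules above.
A = np.array([[0, 1, 1],   # kink -> oscillator or antikink
              [0, 1, 1],   # oscillator -> oscillator or antikink
              [1, 0, 0]])  # antikink -> kink only

for N in [5, 10, 20, 40]:
    count = np.linalg.matrix_power(A, N - 1).sum()  # allowed N-excitation strings
    print(N, count, round(count ** (1 / N), 4))     # N-th root approaches phi

print("dominant eigenvalue:", max(np.linalg.eigvals(A).real))  # (1+sqrt(5))/2
```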
Integrability turns out to provide a sufficient but not necessary condition for these striking degeneracies. We have verified numerically using the TCSA method that the two symmetry-unrelated ground states and the Fibonacci kink spectrum persist even for λ_a lying away from the dashed lines in Fig. 7(a) that mark the integrable points [114]. For instance, with λ_a = e^{iπ/5} the spectra are nearly indistinguishable from those in Fig. 7(c). Hence for almost all λ_a (the exception occurring where the system is critical) the ladder Hamiltonian realizes a gapped phase with the properties noted above. It is useful to comment that one can, in principle, spoil this structure: terms such as σ_L(y)σ_R(y + 1) + H.c. break the degeneracies, but such terms are nonlocal in our setup and thus do not reflect physical perturbations.

We should emphasize that the preceding discussion applies only to the single ladder Hamiltonian defined in Eq. (71). By itself this 1D model does not support Fibonacci anyons as stable excitations in any meaningful sense. Nevertheless, the tantalizing similarities are by no means accidental. In fact, the remarkable Fibonacci kink spectrum should be viewed as a precursor to both the topological order and the Fibonacci anyons that do appear in the full 2D coupled-trench system. This will be elucidated in the next subsection, which uses the results obtained here to deduce the ground-state degeneracy and particle content of the Fibonacci phase.

B. Ground state degeneracy and quasiparticle content

We now show that in the 2D Fibonacci phase the coupled-chain system exhibits a two-fold ground-state degeneracy on a torus. Consider N parallel trenches labeled by y, coupled to their neighbors via λ_a > 0 (we continue to assume λ_b = 0). To form the torus geometry, each chain is itself periodic and the first and last chains at y = 1, N couple as well. The system is therefore described by H = Σ_{y=1}^{N} H^{y,y+1}_ladder with periodic boundary conditions along the x and y directions; the ladder Hamiltonian is defined in Eq. (71) and was studied for a single y in the last subsection.

Given that a single ladder, Eq. (71), already exhibits a two-fold ground-state degeneracy, one might naively expect a 2^N-fold degeneracy for the full N-trench system. This conclusion is incorrect, however, as such naive counting ignores Hilbert space constraints between the left- and right-movers within a given trench. In particular, combinations H^R_F(y) ⊗ H^L_{F′}(y) with F ∈ {I, ψ, ψ†} and F′ ∈ {ε, σ, σ†} (or vice versa) are forbidden for any physical boundary conditions on trench y [136]. Here we have explicitly denoted that H^{R/L} correspond to the same chain y, to avoid possible confusion with the previous subsection (where the right- and left-moving Hilbert spaces correspond to different trenches). Thus the allowed CFT superselection sectors in each chain must have either F, F′ ∈ {I, ψ, ψ†} or F, F′ ∈ {ε, σ, σ†}; in other words,

CFT sector_R(y) ∼ CFT sector_L(y) mod ψ. (74)

Note that this includes sectors such as H^R_I(y) ⊗ H^L_ψ(y), which are physical since fractional charges can hop between trenches. Now recall from Sec. V A that the ground states for a single ladder occur in the sectors [1 1] and [ε ε] defined in Eq. (72), where again H^R and H^L correspond to chains y and y + 1. In order for the 2D coupled-trench system to reside in a ground state, the superselection sectors between adjacent chains must therefore match, i.e.,

CFT sector_L(y) ∼ CFT sector_R(y + 1)†. (75)

Combined with Eq. (74), this locks the Hilbert spaces of all the chains together, yielding two ground states as claimed. We label these ground states |1⟩ and |ε⟩, indicating the corresponding sectors of the chains.

Our aim next is to unambiguously identify the anyon content of our coupled-chain phase. The Fibonacci kink spectrum identified in the ladder problem in Sec. V A already strongly hints that a Fibonacci anyon is present, though we will derive this explicitly in what follows. To do so it will be instructive to review a few facts regarding topological states on a cylinder (instead of a torus). On an infinite cylinder, the ground-state degeneracy equals the number of anyon types. For every anyon α there is an associated ground state |α⟩, the set of which forms an orthogonal basis for the ground-state Hilbert space. Physically, these states are defined with a fixed anyon charge at infinity or, equivalently, as eigenstates of Wilson loop/anyon flux operators around the circumference of the cylinder. (They are also referred to as 'minimum entangled states' [137].) Anyon excitations are trapped at domain walls between ground states in a manner consistent with the fusion rules. More precisely, using y as a coordinate in the infinite direction of the cylinder, let the wavefunction be |α₊⟩ for y > 0 and |α₋⟩ for y < 0. At least one anyon must then be trapped on the circle y = 0, with total topological charge β satisfying the fusion relation α₋ × β ∼ α₊ + ⋯.

Applying this discussion to our setup, we now consider an infinite number of trenches, each forming a ring around the cylinder. This gives an infinite number of chains labeled by y ∈ Z, coupled via Eq. (71). By the same logic as for the torus geometry, there are again two ground states |1⟩, |ε⟩ that arise from different superselection sectors on each chain. Keep in mind that for the time being 1 and ε are merely labels derived from the coupled-chain construction; we have not yet made the association with anyons.

Recall from our argument for the two ground states that Eq. (74) is an unyielding requirement that follows from the boundary conditions, while Eq. (75) follows from energetics. Hence when studying excited states, we can relax the second condition on specific ladders where localized excitations exist. Let us examine the three flavors of fundamental ladder excitations (kink, antikink, and 'oscillator') identified in Sec. V A. Suppose first that there is a single kink between trenches y = 0, 1, i.e., that the corresponding ladder resides in the [1 ε] sector defined in Eq. (73). The chains then lie in the 1 sector for y ≤ 0 and the ε sector for y ≥ 1. For an antikink, the sectors are ε and 1 for y ≤ 0 and y ≥ 1, respectively. Finally, for an oscillator excitation every chain must be in the ε sector (that excitation type exists only in the [ε ε] ladder sector).
Since the three excitations possess the same mass and dispersion, it is natural to identify all of them as the same nontrivial anyon (which we label as • for the time being). The discussion above then implies that a • anyon can occur at a domain wall between |ε⟩ and |1⟩ on the cylinder, or simply between two |ε⟩ regions, but not between two |1⟩ states. Accordingly, the allowed fusion channels follow as 1 × • ∼ ε and ε × • ∼ 1 + ε, whereas 1 × • → 1 is forbidden. We can rewrite these rules as a tensor N^a_{•b} with integer entries, where N^a_{•b} = 1 if b × • → a is admissible and zero otherwise. In the basis of 1 and ε ground states, the fusion matrix for the excitation is

N_• = [ 0  1
        1  1 ],

with dominant eigenvalue, or quantum dimension, equal to the golden ratio, d_• = (1 + √5)/2 = ϕ. Hence, in addition to being associated with CFT sectors, we can identify 1 as the trivial anyon and ε = • as the Fibonacci anyon.

We further corroborate this result through numerical evaluation of the 'topological entanglement entropy'. Suppose that we partition the cylinder between chains y_c and y_c + 1 as illustrated schematically in Fig. 8(a). The entanglement entropy is given by S_E = −Tr_{y>y_c}[ρ log ρ], where ρ = Tr_{y≤y_c} |Ψ⟩⟨Ψ| is the reduced density matrix that comes from a partial trace of the wavefunction |Ψ⟩. For a ground state of any gapped system, this quantity scales linearly with the cylinder circumference L_x: S_E ∼ s L_x − γ + ⋯ (up to terms that decay exponentially with L_x). The slope s is identical for all ground states of the same Hamiltonian but depends on non-universal microscopic details. By contrast, the intercept γ defines the 'topological entanglement entropy' [138, 139], a universal topological invariant of the ground state used in the computation. This invariant can be further decomposed as γ = log(D/d_Ψ), where d_Ψ is the quantum dimension of the quasiparticle corresponding to the state |Ψ⟩, and D is the 'total quantum dimension' of the phase [138-140].

In the geometry illustrated in Fig. 8(a), the only contribution to the entanglement comes from the left-movers of chain y = y_c and the right-movers of chain y = y_c + 1, as all other degrees of freedom decouple at λ_b = 0. Hence the entanglement entropy arising from a bipartition of the cylinder is equivalent to that arising from a bipartition of a single ladder into left- and right-movers. (This setup bears much resemblance to the so-called 'spin-1 AKLT' chain [141]. There each spin fractionalizes into a pair of spin-1/2's, and in the ground state the 'right' spin-1/2 for a given site forms a singlet with the 'left' spin-1/2 at the next site over. An entanglement cut between two adjacent sites thus breaks apart exactly one spin singlet into its left and right spin-1/2's.)
We used our TCSA simulations of Eq. (71) to evaluate S_E for the two ground states |1⟩ and |ε⟩; the data appear in Fig. 8(b). By fitting S_E versus L_x for ground state |1⟩ (which corresponds to d_1 = 1) we extract the total quantum dimension D = 1.9 ± 0.1. One can in principle perform a similar fit for the other ground state |ε⟩ to extract d_ε/D. However, a far more precise value for d_ε follows from the difference δS_E ≡ S_E[|ε⟩] − S_E[|1⟩] of the entanglement entropies for the two ground states; the linear term in L_x cancels here, leaving δS_E = log(d_ε/d_1). In this way we obtain quantum dimension d_ε = 1.619 ± 0.002. These values are in excellent agreement with those of a Fibonacci anyon model with just one nontrivial particle, for which d_ε = ϕ ≈ 1.618 and D = √(1 + ϕ²) ≈ 1.902.

The ground state degeneracy on the torus, fusion rules, and topological entanglement entropy computed above are sufficient in this case to uniquely identify the 2D topological phase that the system enters. Indeed, there are only two topological phases of fermions with two-fold ground state degeneracy on the torus [142]. The nontrivial particle can be either a semion or a Fibonacci anyon. We can distinguish between these possibilities with either the fusion rules or the topological entanglement entropy; both indicate that our coupled-trench system supports the Fibonacci anyon, which justifies the name 'Fibonacci phase' christened here.

Given the particle types and fusion rules, the universal topological properties of this phase can be determined by solving the pentagon and hexagon identities; they may be summarized as follows (for a concise review, see Ref. 143). The Fibonacci phase admits only the two particle types deduced above: the trivial particle, 1, and a Fibonacci anyon, ε. They have topological spins θ_1 = 1, θ_ε = e^{4πi/5} and satisfy the fusion rule ε × ε ∼ 1 + ε [144]. As a result of this fusion rule, the dimension of the low-energy Hilbert space of (n + 1) ε-particles with total topological charge 1 is the n-th Fibonacci number, F_n, which grows asymptotically as ϕ^n/√5; thus its quantum dimension is d_ε = ϕ, as we saw previously. (This is the same quantity that enters the formulas for the entanglement entropy used above.) When two Fibonacci anyons are exchanged, the resulting state acquires a phase of either R^{εε}_1 = e^{−4πi/5} or R^{εε}_ε = e^{3πi/5}, depending on the fusion channel of the two particles denoted in the subscript. The result of an exchange can thereby be deduced if we can bring an arbitrary state into a basis in which the two ε-particles in question have a definite fusion channel. This can be accomplished with the F-symbols, which effect such basis changes. The only nontrivial one is

F = [ ϕ^{-1}    ϕ^{-1/2}
      ϕ^{-1/2}  −ϕ^{-1} ],

written in the basis {1, ε} for the central fusion channel. From these relatively simple rules follows a remarkable fact: these anyons support universal topological quantum computation [145, 146].

While the aforementioned analysis was carried out for λ_b = 0, the gapped topological phase that we have constructed must be stable up to some finite λ_b. Rough phase boundaries for this state were estimated earlier; see Fig. 6(a). However, directly exploring the physics with λ_b ≠ 0, either analytically or numerically, is highly nontrivial, since we then lose integrability and can no longer distill the problem into individual 'ladders' with a Hilbert space constraint. Progress could instead be made by employing DMRG simulations to map out the phase diagram more completely, which would certainly be interesting to pursue in follow-up work.
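As a quick sanity check on the anyon data quoted in this subsection, the following sketch (our own illustration, not part of the original analysis) verifies the F-matrix properties, the golden-ratio pentagon relation, and the quantum dimensions against the TCSA fits:

```python
# Consistency checks on the Fibonacci anyon data quoted above (our own
# illustration): F-matrix unitarity, the pentagon relation tau^2 + tau = 1,
# and the total quantum dimension compared with the TCSA fits.
import numpy as np

phi = (1 + np.sqrt(5)) / 2
F = np.array([[1 / phi,          1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi        ]])

assert np.allclose(F @ F.T, np.eye(2))   # unitary (real, hence orthogonal)
assert np.allclose(F @ F,   np.eye(2))   # and squares to the identity

tau = F[0, 0]                            # = 1/phi
assert abs(tau**2 + tau - 1) < 1e-12     # pentagon, single nontrivial anyon

D = np.sqrt(1 + phi**2)
print("d_eps =", phi, "(TCSA: 1.619 +/- 0.002)")
print("D     =", D,   "(TCSA: 1.9   +/- 0.1)")
```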
C. Superconducting vortices

Since the Fibonacci phase arises in a superconducting system, it is also important to investigate the properties of h/2e vortices, despite the fact that, unlike Fibonacci anyons, they are confined. Before turning to this problem it will be useful to briefly recall the corresponding physics in a spinless 2D p + ip superconductor [54, 147-149]. One way of understanding the nontrivial structure of vortices there is by considering the chiral Majorana edge states of a p + ip superconductor on a cylinder. Finite-size effects quantize their energy spectrum in a manner that depends on the boundary conditions exhibited by the edge Majorana fermions. With anti-periodic boundary conditions the spectrum is gapped, while in the periodic case an isolated zero-mode appears at each cylinder edge. Threading integer multiples of h/2e flux through the cylinder axis toggles between these boundary conditions, thereby creating and removing zero-modes. This reflects the familiar result that h/2e vortices in a planar p + ip superconductor bind Majorana zero-modes and consequently form Ising anyons.

FIG. 9 (caption): […] superconducting vortices in the Fibonacci phase. We initially assume that pure ν = 2/3 quantum Hall states border the Fibonacci phase from above and below. This results in two well-defined boundaries: the Fibonacci phase/quantum Hall edge, and the quantum Hall/vacuum edge. Adiabatically inserting h/2e flux through the cylinder (which is topologically equivalent to an h/2e vortex in the bulk of a planar Fibonacci phase) pumps charge e/3 across each quantum Hall region as shown above. Because the charge difference across the trenches then changes, the upper Fibonacci phase/quantum Hall edge binds either a ψ or σ excitation that carries charge 2e/3 mod 2e. The upper quantum Hall/vacuum edge, however, binds charge e/3, so that in total the vortex carries only fermion parity. If one shrinks the pure quantum Hall regions so that the two boundaries hybridize, ψ and σ lose their meaning since other sectors mix in. The final conclusion is that an h/2e vortex traps either a trivial particle or a Fibonacci anyon depending on non-universal details, but does not lead to new quasiparticle types.
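The boundary-condition argument reviewed above amounts to a simple mode count, sketched below (an illustration of ours, with the edge velocity set to 1):

```python
# Mode count for a chiral Majorana edge on a cylinder of circumference L
# (velocity set to 1; purely illustrative). Periodic boundary conditions
# allow k = 0, i.e., a zero-mode; antiperiodic ones do not.
import numpy as np

L = 10.0
n = np.arange(-3, 4)

E_periodic = np.abs(2 * np.pi * n / L)              # includes E = 0
E_antiperiodic = np.abs(2 * np.pi * (n + 0.5) / L)  # min |E| = pi/L > 0

print("periodic:    ", np.sort(E_periodic))
print("antiperiodic:", np.sort(E_antiperiodic))
# Threading h/2e flux through the cylinder toggles between the two
# quantizations, creating or removing the Majorana zero-mode at each edge.
```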
We will deduce the properties of vortices in the Fibonacci phase by similarly deforming our ν = 2/3 quantum Hall setup into a cylinder, as sketched in Fig. 9. In principle the physics can be analyzed by deriving the influence of flux on the boundary conditions for the Z_3 parafermionic edge modes supported by this state, though such an approach will not be followed here. Instead we develop a related adiabatic flux-insertion argument that allows us to obtain the result with minimal formalism. We proceed by first assuming that the Fibonacci phase is bordered by 'wide' ν = 2/3 regions on the upper and lower parts of the cylinder, as Fig. 9 indicates. This will allow us to separately address the effect of flux on (i) the gapless Z_3 parafermion modes at the interface between the Fibonacci phase and the ν = 2/3 regions, and (ii) the outermost ν = 2/3 edge states that border the vacuum. One can then couple these sectors to determine the final vortex structure. Following this logic we will show that, in contrast to the p + ip case, h/2e flux does not introduce new topological anyons beyond the trivial and Fibonacci particles already discussed. A vortex may, however, provide a local potential that happens to trap a deconfined Fibonacci anyon, though whether or not this transpires is a non-universal question of energetics. (Note that the same could be said for, say, an impurity, so one should not attach any deep meaning to this statement.)

Let us first consider a cylinder with no flux, in the limit where each trench is tuned to Z_3 parafermion criticality and the interchain coupling is temporarily turned off. For concreteness we also assert that each ν = 2/3 edge contains no net electric charge mod 2e. The sum and difference of the total charge on the two sides of each trench, Q^±_tot [see Eq. (59)], must also then vanish mod 2e. This restricts the possible CFT sectors present in the trenches to either I_R × I_L or ε_R × ε_L; all other physical sectors contain the wrong charge. Next we adiabatically increase the flux through the cylinder from 0 to h/2e [150]. Because of the nontrivial Hall conductivity in the ν = 2/3 fluids, charge e/3 pumps from the bottom to the top edge of each quantum Hall region in response to the flux insertion, as Fig. 9 illustrates. The pumping leaves the total charge Q^+_tot on each trench intact but alters the total charge difference for each trench to Q^−_tot = −2/3 mod 2. The only allowed sectors consistent with this charge arrangement are ψ_R × ψ†_L and σ_R × σ†_L. Finally we turn on the interchain perturbation λ_a in Eq. (67) to enter the Fibonacci phase. The CFT sectors in the bulk that are gapped by this coupling will clearly then mix. However, the gapless right-movers from the top trench and the left-movers from the bottom remain unaffected by λ_a; the former necessarily realizes either ψ_R or σ_R, while the latter realizes ψ†_L or σ†_L.

Focusing on the top half of the system, this argument shows that an h/2e superconducting vortex traps an Abelian ψ or non-Abelian σ particle at the interface between the ν = 2/3 fluid and the Fibonacci phase. Importantly, we must additionally account for the quantum Hall edge at the top of the cylinder, which also responds to the flux and influences the structure of a vortex in a crucial way, as we will see. Figure 9 shows that the flux induces charge +e/3 at the uppermost cylinder edge. Together, we see that an h/2e vortex gives rise to edge excitations (ψ, 1/3) or (σ, 1/3) when the Fibonacci phase is bordered by a wide Abelian quantum Hall fluid. Here and below (F, q) indicates that the ν = 2/3 liquid/Fibonacci phase interface traps particle type F, while the quantum Hall edge bordering the vacuum binds charge q mod 1 (in units of e). Recalling the 2e/3 charge associated with ψ and σ, we conclude that the h/2e vortex carries total charge e mod 2e, which is not fractional. Next we discuss the fate of the ψ and σ particles at the Fibonacci phase boundary when we include the coupling to the outer quantum Hall edge.
If one assumes that the Z_3 parafermion edge states and the outer ν = 2/3 edge modes decouple, then the system can in principle reside in six possible edge sectors: (I, 0), (ψ, 1/3), (ψ†, 2/3), (ε, 0), (σ, 1/3), and (σ†, 2/3). (This statement is independent of vorticity, and simply tells one which states have physical charge configurations.) Suppose now that the pure quantum Hall region at the top of Fig. 9 shrinks to allow fractional charge tunneling between the parafermion and ν = 2/3 edge modes. Some of the edge sectors above then mix and hence are no longer distinguishable. For instance, transferring e/3 charge from the vacuum edge to the boundary of the Fibonacci phase can send (σ, 1/3) → (ε, 0). In fact only two inequivalent edge sectors remain: the triplet (I, 0), (ψ, 1/3), (ψ†, 2/3) that is associated with the identity particle, and the remaining set (ε, 0), (σ, 1/3), (σ†, 2/3) associated with the ε non-Abelian anyon.

Applying the above discussion to vortices, we infer that h/2e flux does not generically bind a ψ or σ in any meaningful way once the parafermion and outer ν = 2/3 edge modes hybridize. The vortex can trap a trivial or Fibonacci anyon but exhibits no finer Z_3 structure, which is entirely consistent with the fact that it carries only fermion parity. Which of the two particle types occurs in practice depends on non-universal microscopic details, though both cases are guaranteed to be possible because ε is deconfined. (If a vortex binds a trivial particle one can always bring in a Fibonacci anyon from elsewhere and attach it to the vortex to obtain the ε case, or vice versa.) In fact a similar state of affairs occurs for any phase that supports a Fibonacci anyon, including the Z_3 Read-Rezayi state. Because of the fusion rule ε × ε ∼ 1 + ε, the Fibonacci anyon ε must carry the same local quantum numbers (such as charge and vorticity) as the trivial particle. Thus any Abelian anyon A can fuse with the neutral Fibonacci anyon to form a non-Abelian particle with identical local quantum numbers: A × ε ∼ Aε [151]. For example, in the case of the Z_3 Read-Rezayi state at filling ν = 13/5, there are two anyons with electric charge e/5: one Abelian and the other non-Abelian with quantum dimension ϕ. The latter quasiparticle may be obtained by fusing the former with a neutral Fibonacci anyon. Or, equivalently, the former may be obtained from the latter by fusing two non-Abelian e/5 quasiparticles with a −e/5 quasihole. Which of these e/5 excitations has the lowest energy is a priori non-universal. Details of such energetics issues are interesting but left to future work.

Finally, we remark that the Z_3 structure at the edge between the Fibonacci phase and the ν = 2/3 state arises solely from the fractional quantum Hall side. The corresponding fractionally charged quasiparticles indeed do not exist within the Fibonacci phase, as evidenced by the absence of ψ or σ particles in the bulk. Our coupled-chain construction provides an intuitive way of understanding this: 2e/3 excitations are naturally confined in the Fibonacci phase since the trenches provide a barrier that prevents fractional charge from tunneling between adjacent quantum Hall regions. The Fibonacci anyon is neutral, by contrast, and thus suffers no such obstruction.
D. Excitations of the edge between the Fibonacci phase and the vacuum

Bulk properties strongly constrain the edge excitations of a topological phase. In particular, the edge bordering the vacuum must support as many anyon types as the bulk. This correspondence is simplest when the bulk is fully chiral. Edge excitations are then described by a CFT (possibly deformed by marginal perturbations so that some of the velocities are unequal) that exhibits precisely the same number of primary fields as the bulk has anyon types. These fields possess fractional scaling dimensions, and all other fields have scaling dimensions that differ from these by integers. Therefore, one can view an arbitrary field as creating an anyon (via a primary operator) together with some additional bosonic excitations. It is important to note that the edge may have additional symmetry generators beyond just the Virasoro generators derived from the energy-momentum tensor. These additional symmetry generators have their scaling dimensions fixed to 1 (Kac-Moody algebras) or some other integer (e.g., W-algebras) [152].

Since the Fibonacci phase has only two particle types, 1 and ε, the minimal possible edge theory describing the boundary with the vacuum has two primary fields, which we denote as 1 and ε̃. (The tilde is used to distinguish from the ε field that lives at the boundary between the Fibonacci phase and the parent quantum Hall fluid.) At first glance, however, our quantum Hall/superconductor heterostructure appears to exhibit a much more complicated edge structure than the quasiparticle content suggests. The interface between the Fibonacci phase and the spin-unpolarized ν = 2/3 state is described by a Z_3 parafermion CFT, and the boundary between the ν = 2/3 state and the vacuum is described by a CFT for two bosons with K-matrix

K = [ 1  2
      2  1 ].  (78)

The former CFT has six primary fields while the latter has three. One can obtain a direct interface between the Fibonacci phase and the vacuum by simply shrinking the outer ν = 2/3 fluid until it disappears altogether; the resulting boundary is then naively characterized by a product of these two edge theories. However, in the previous subsection we argued that of the 18 primary fields in the product CFT, only a subset of 6 are physical by charge constraints, and these combine to just two primary fields. Here we explicitly construct a chiral CFT with exactly these two primary fields. Furthermore, we demonstrate that upon edge reconstruction the Fibonacci phase/vacuum interface is described by this CFT combined with unfractionalized fermionic edge modes, in precise correspondence with the bulk quasiparticle types supported by the Fibonacci phase.

It is useful to first examine the simpler case of a ν = 2/3 state built out of underlying charge-e bosons. This allows us to replace the K-matrix of Eq. (78) with

K = [ 2  1
      1  2 ].  (79)

For brevity we refer to this bosonic quantum Hall phase as the (221) state. Most of the preceding analysis, including the appearance of a descendant Fibonacci phase, is unchanged by this modification. However, by working with a bosonic theory we can appeal to modular invariance to connect the bulk quasiparticle structure to the edge chiral central charge c_R − c_L:

e^{2πi(c_R − c_L)/8} = (1/D) Σ_a d_a² θ_a,

where a sums over the two anyon types and c_{R/L} denote the central charges for right/left-movers. Using results from Sec. V B (in particular, D = √(1 + ϕ²), d_1 = 1, d_ε = ϕ and θ_1 = 1, θ_ε = e^{4πi/5}), the chiral central charge follows as c_R − c_L ≡ 14/5 mod 8.
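A quick numerical check of this modular relation, using the anyon data just quoted (an illustration of ours):

```python
# Numerical check of exp(2*pi*i*(c_R - c_L)/8) = (1/D) * sum_a d_a^2 * theta_a
# for the Fibonacci data d_1 = 1, d_eps = phi, theta_1 = 1, theta_eps = e^{4 pi i/5}.
import numpy as np

phi = (1 + np.sqrt(5)) / 2
d = np.array([1.0, phi])                          # d_1, d_eps
theta = np.array([1.0, np.exp(4j * np.pi / 5)])   # theta_1, theta_eps
D = np.sqrt((d**2).sum())

phase = np.angle((d**2 * theta).sum() / D)        # = 2*pi*(c_R - c_L)/8
print("c_R - c_L mod 8 =", (8 * phase / (2 * np.pi)) % 8)   # -> 2.8 = 14/5
```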
Thus, the minimal edge theory describing the boundary with the vacuum is purely chiral with c_R = 14/5 and c_L = 0. We now show that the bosonic Fibonacci phase/vacuum edge is consistent with these scaling dimensions and central charges.

The key physical observation was made in the previous subsection: fractional charge and the resulting Z_3 structure are features of the ν = 2/3 state, not the Fibonacci phase. Equivalently, not all of the excitations of the combined Z_3 parafermion CFT and the (221) edge states are allowed in the Fibonacci phase, because we cannot transfer fractional charge from one edge of the system to the other via the bulk. Fractional charge can pass only between the Fibonacci phase/(221) state and (221) state/vacuum interfaces; together, these two edges form the Fibonacci-to-vacuum edge. As such the total charge of the Fibonacci phase/vacuum edge must be an integer, which dictates the set of physical operators that appear.

In terms of the Z_3 parafermion operators and the edge fields φ_↑, φ_↓ of the (221) state, the most relevant operators that transfer fractional charge within an edge are the six operators collected in Eq. (81a); these all have scaling dimension 1. There are six additional dimension-1 operators, Eq. (81b), that add integer charge to an edge. Finally, the two charge current operators, Eq. (81c), also have scaling dimension 1. The 14 operators in Eqs. (81a) through (81c) satisfy the Kac-Moody algebra for the Lie group G_2 at level 1, with structure constants f_{abc} normalized such that the Killing form f_{acd}(f_{bcd})* = 8δ_{ab}. The two charge currents form the Cartan subalgebra for G_2, while the operators in Eqs. (81a) and (81b) correspond to the non-zero roots of G_2. (In the corresponding root diagram, a vector l represents the argument of a given (221)-state operator written as e^{i l·φ}; e.g., l = (2, 1) for e^{2iφ_↑ + iφ_↓}.) As an extension of the Virasoro algebra, this Kac-Moody algebra has c = 14/5 and only two primary fields, the identity 1 and ε̃ = σ† e^{i(φ_↑ + φ_↓)} [153]. (Indeed, the Sugawara central charge of a level-k theory is c = k dim G/(k + h^∨); for G_2 at k = 1, with dim G_2 = 14 and dual Coxeter number h^∨ = 4, this gives c = 14/5.) All other fields of the CFT can be constructed by combining one of the primaries with the generators in Eqs. (81); for example, a descendant of ε̃ arises from the operator product expansion between ε̃ and ψ e^{−iφ_↑ − iφ_↓}. The identity field has scaling dimension h_1 = 0 and transforms trivially under the G_2 action, while the nontrivial field ε̃ has scaling dimension h_ε̃ = 2/5 and belongs to the 7-dimensional fundamental representation of G_2. Here we can see that the bulk-edge correspondence is consistent with our identification of the bulk as the Fibonacci phase; for example, the topological spins of 1 and ε are related to the scaling dimensions of the fields 1 and ε̃ via θ_{1,ε} = e^{2πi h_{1,ε̃}}.

We now return to the fermionic case, where the ν = 2/3-to-vacuum edge is characterized by the K-matrix in Eq. (78). The allowed operators that transfer charge in the fermionic Fibonacci phase/vacuum edge are once again given by Eqs. (81). Unlike in the bosonic case, however, these operators are nonchiral, because the fermionic ν = 2/3 state supports counterpropagating edge modes at the interface with the vacuum. Nevertheless they remain spin-1 operators as in the bosonic setup. Moreover, the fermionic Fibonacci-to-vacuum edge exhibits a phase that bears a simple relation to the bosonic edge, as we now demonstrate.
This phase occurs when the edge reconstructs such that an additional non-chiral pair of unfractionalized modes comes down in energy and hybridizes with the modes of the ν = 2/3-to-vacuum edge. In the limit where these modes are gapless, the K-matrix becomes

K_e = [ 1  2  0  0
        2  1  0  0
        0  0  1  0
        0  0  0 −1 ],

i.e., the fermionic ν = 2/3 block of Eq. (78) supplemented by one right- and one left-moving unfractionalized mode. The ν = 2/3-to-vacuum edge is then described by an effective field theory of the standard K-matrix form,

L = (1/4π) [ (K_e)_{IJ} ∂_t φ_I ∂_x φ_J − V_{IJ} ∂_x φ_I ∂_x φ_J ] + ⋯ .

Here the ellipsis represents quasiparticle tunneling processes; the indices I, J label the field components such that φ_1, φ_2 denote the original spin-up and spin-down modes while φ_3, φ_4 represent the new counterpropagating modes added to the edge; and V_{IJ} is a symmetric matrix that characterizes density-density interactions amongst all four modes. If V_{IJ} is small for I = 1, 2 and J = 3, 4, then the additional φ_{3,4} fields generically acquire a gap, because one of the tunneling perturbations cos(φ_3 ± φ_4) will be relevant [154]. However, when these off-diagonal entries in V_{IJ} are appreciable, the edge can enter the new phase that we seek.

To describe this phase, it is convenient to invoke a basis change to K̃_e = W K_e Wᵀ and Ṽ = W V Wᵀ, where W is an integer matrix with unit determinant chosen such that

K̃_e = [ 2  1  0  0
        1  2  0  0
        0  0 −1  0
        0  0  0 −1 ].  (87)

Suppose for the moment that Ṽ_{IJ} = 0 for I = 1, 2 and J = 3, 4. By comparing Eqs. (79) and (87) one sees that the fermionic edge is then equivalent to the bosonic case examined earlier, supplemented by two Dirac fermion modes running in the opposite direction relative to the chiral modes of the (221) state. This correspondence allows us to immediately deduce that the fermionic Fibonacci-to-vacuum edge is described by the G_2 Kac-Moody theory at level 1 together with two backwards-propagating Dirac fermions (or, equivalently, four backwards-propagating Majorana fermions). More generally, when Ṽ_{IJ} is small but non-zero for I = 1, 2 and J = 3, 4, the G_2 theory and the backwards-propagating fermions hybridize through the marginal couplings Ṽ_{IJ}. Once again, we find a correspondence between the bulk and the edge with the vacuum: both have Fibonacci anyons as well as fermionic excitations [155].
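This bookkeeping can be checked explicitly. In the sketch below, K_e and Ktilde are the matrices displayed above, while W is one valid unimodular choice that we constructed purely for illustration; the specific W used in the original analysis may differ:

```python
# Explicit check of the basis change discussed above. W below is one valid
# GL(4, Z) matrix we constructed for illustration (the original paper's W
# may differ); it maps K_e to the (221) block plus two backward movers.
import numpy as np

K_e = np.array([[1, 2, 0, 0],
                [2, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, -1]])

W = np.array([[1, 0, 1, 0],
              [0, 1, -1, 0],
              [0, 0, 0, 1],
              [1, -1, 1, 0]])

print(W @ K_e @ W.T)             # -> [[2,1],[1,2]] block (+) diag(-1,-1)
print(round(np.linalg.det(W)))   # -> 1, so W is an allowed basis change
```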
VI. TOPOLOGICAL QUANTUM FIELD THEORY INTERPRETATION

We will now provide an alternative topological quantum field theory (TQFT) interpretation of the Fibonacci phase introduced in the preceding sections. Although less connected to microscopics, the perspective developed here cuts more directly to the elegant topological properties enjoyed by this state. Our discussion will draw significantly on the earlier works of Gils et al. [89] and especially Ludwig et al. [90]. As already mentioned in the introduction, our construction of the Fibonacci phase from superconducting islands embedded in a ν = 2/3 quantum Hall state bears some resemblance to these studies. Starting from parent non-Abelian systems, Refs. 89 and 90 investigated descendant phases emerging in the interior of the fluid due to interactions amongst a macroscopic collection of non-Abelian anyons. We followed a similar approach in that the domain walls in our spatially modulated trenches correspond to extrinsic non-Abelian defects [58, 79-81, 83] by virtue of the Z_3 zero-modes that they bind; moreover, we likewise hybridized these defects to access the (descendant) Fibonacci phase within a (parent) ν = 2/3 state. This common underlying philosophy suggests a deep relationship with Refs. 89 and 90.

Of course the most glaring difference stems from the Abelian nature of our parent state. We will show below that one can blur this (certainly important) distinction, however, by developing a non-standard view of the spin-unpolarized ν = 2/3 quantum Hall state, namely, as emerging from some non-Abelian phase upon condensation of a boson that confines the non-Abelian particles. Such an interpretation might initially seem rather unnatural, but it provides an illuminating perspective in situations where one can externally supply the energy necessary to generate these confined non-Abelian excitations in a meaningful way. This is indeed precisely what we accomplish by forcing superconducting islands into the ν = 2/3 fluid to nucleate the domain walls that trap Z_3 zero-modes. We will employ such a picture to sharpen the connection with earlier work and, in the process, develop a TQFT view of the Fibonacci phase generated within a ν = 2/3 state. In the discussion to follow, we ignore the fermion present in the (112) state, which leads to subtle consequences that we address at the end of this section. [In fact, our conclusions will apply more directly to the analogous bosonic (221) state.]

As a first step we summarize the results from Ref. 90 that will be relevant for our discussion. Consider a parent non-Abelian phase described by an SU(2)_4 TQFT. Table I lists the properties of the gapped topological excitations of this phase, including the SU(2) spin j, conformal spin h, quantum dimension d, and nontrivial fusion rules for each field. Ludwig et al. found that antiferromagnetically coupling a 2D array of non-Abelian anyons in this parent state produces a gapped descendant phase described by an SU(2)_3 ⊗ SU(2)_1 TQFT, as sketched in the left half of Fig. 10. See Table II for the corresponding properties of SU(2)_3 and SU(2)_1. The interface between these parent and descendant phases supports a gapless edge state, which exhibits central charge c = 4/5 and ten fields corresponding exactly to those of the so-called M(6, 5) minimal model. Note that this edge theory is distinct from the Z_3 parafermion CFT arising in our setup, which possesses only six fields. Nevertheless, there are hints of a relation with our work present already here: SU(2)_4 supports non-Abelian anyons with quantum dimension √3 (like the non-Abelian defects in our trenches), and the descendant SU(2)_3 ⊗ SU(2)_1 region supports a Fibonacci anyon (as in our Fibonacci phase).

At this point it is worth speculating on the field content expected in our setup, whose natural ingredients are the Abelian fields 1, Y_1, Y_2 together with the defect field X.

[Tables I and II (captions): fields of the SU(2)_4 theory and of SU(2)_3 and SU(2)_1, along with their corresponding SU(2) label j, conformal spin h, quantum dimension d, and nontrivial fusion rules, as well as the associated chiral central charges. The parent state on the left side of Fig. 10 is described by the SU(2)_4 TQFT.]
FIG. 10 (caption). Boson condensation picture leading to a topological quantum field theory (TQFT) interpretation of the Fibonacci phase. On the left, a parent non-Abelian SU(2)_4 phase hosts a descendant SU(2)_3 ⊗ SU(2)_1 state arising from interacting anyons within the fluid [90]. Condensing a single boson throughout the system produces the setup on the right, in which an Abelian Z_3 parent state gives rise to a descendant phase described by a pure Fibonacci TQFT. The latter system very closely relates to our spin-unpolarized ν = 2/3 state with superconducting islands that generate the Fibonacci phase inside of the quantum Hall medium, in that the quasiparticle content (modulo the electron) is identical. An even more precise analogy occurs in the case where the Fibonacci phase resides in a bosonic (221) quantum Hall state; here the TQFTs from the right side of the figure describe the universal topological physics exactly.

The field X clearly possesses a quantum dimension of d = √3 (consistent with deductions based on ground-state counting), since 1, Y_1, and Y_2 are Abelian fields with d = 1. No other fields are immediately evident. This picture cannot possibly be complete, however, as there is no TQFT with four fields obeying these fusion rules [156].

The difficulty with identifying a TQFT using the preceding logic stems from the fact that X differs fundamentally from the other fields in that it does not represent a point-like excitation. Rather, X occurs only at the end of a 'string' formed by a superconducting region within our trenches; since these strings are physically measurable, X is confined and exhibits only projective non-Abelian statistics. One could, at least in principle, envision quantum mechanically smearing out the locations of the superconductors to elevate X to the status of a deconfined point-like quantum particle belonging to some genuine non-Abelian TQFT. Or, by turning the problem on its head, one can instead view confined excitations like X as remnants of that non-Abelian TQFT after a phase transition. In the latter viewpoint the mechanism leading to the transition, and the accompanying confinement, is boson condensation, which was described in detail by Bais and Slingerland in the context of topologically ordered phases [157].
To be precise, we will define a boson here as a field possessing integer conformal spin and quantum dimension d = 1 [158]. Suppose that a boson B with these properties condenses. When this happens the condensed boson is identified with the vacuum 1, and any fields related to one another by fusion with B are correspondingly identified with each other. For instance, if A × B ∼ C then the fields A and C are equivalent in the condensed theory. The nature of such fields that are related by the boson B depends on their relative conformal spin. If their conformal spins differ by an integer, they braid trivially with the new vacuum and represent deconfined excitations. Otherwise it is no longer possible to define the conformal spin for that type of excitation in a gauge-invariant manner; it braids nontrivially with the new vacuum and therefore must be confined by a physically measurable string.

Let us now apply this discussion to the parent SU(2)_4 TQFT described earlier, assuming the Z field condenses (from Table I we see that this is the only nontrivial boson in the TQFT). The resulting theory was already discussed extensively by Bais and Slingerland and will be briefly summarized here. First of all, the fusion rules tell us that condensation of Z identifies X and X′; anticipating a connection with our ν = 2/3 extrinsic defects, we will label the corresponding excitation by X. Indeed, X is confined (because the conformal spins of X and X′ differ by a non-integer), possesses a quantum dimension of √3, and exhibits the same projective non-Abelian braiding statistics as our quantum Hall domain wall defects [58, 79-81]. As for the Y field, it can fuse into the vacuum in two different ways when Z condenses (since Z → 1), and so must split into two Abelian fields with conformal spin 2/3 mod 1 [157]. We will denote these two fields Y_1 and Y_2, as they exhibit the same characteristics as the charge 2e/3 and 4e/3 excitations in our quantum Hall problem. The properties of this 'broken SU(2)_4' theory [157], including the confined X excitation, appear in Table III. From the table it is apparent that this condensed theory reproduces exactly the structure anticipated from our ν = 2/3 setup decorated with superconducting islands that generate Z_3 zero-modes. Hence the fusion rules and braiding statistics for our parent state can be viewed as inherited (projectively) from SU(2)_4. Note, however, that 'broken SU(2)_4' is not a pure TQFT; focusing only on deconfined excitations, we are left with a simple Z_3 Abelian theory containing only 1, Y_1, and Y_2.
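The SU(2)_k data invoked here follow from textbook formulas; the sketch below (our own check) confirms that the j = 2 field Z of SU(2)_4 indeed has d = 1 and integer conformal spin h = 1, as a condensable boson must:

```python
# Standard SU(2)_k primary-field data: d_j = sin((2j+1)pi/(k+2))/sin(pi/(k+2))
# and h_j = j(j+1)/(k+2). For SU(2)_4 the j = 2 field Z has d = 1 and h = 1.
import numpy as np

def su2k_data(k):
    for twice_j in range(k + 1):
        j = twice_j / 2
        d = np.sin((2 * j + 1) * np.pi / (k + 2)) / np.sin(np.pi / (k + 2))
        h = j * (j + 1) / (k + 2)
        print(f"SU(2)_{k}, j = {j}: d = {d:.4f}, h = {h:.4f}")

su2k_data(4)   # j = 1/2, 3/2: d = sqrt(3) ~ 1.7321; j = 2: d = 1, h = 1
```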
So far we have shown that the parent SU(2)_4 theory discussed by Ludwig et al. recovers the particle content of our parent ν = 2/3 system upon condensing the Z field. Next we explore the fate of their descendant SU(2)_3 ⊗ SU(2)_1 phase when the corresponding boson (ξ, η) condenses there. Aside from the identity we need only consider three fields after condensation, (ε, 1), (ε, η), and (ξ, 1), since all others are related to these by the condensed boson. The latter two are, however, confined, as one can deduce by examining their conformal spins before and after fusing with (ξ, η). The lone deconfined field that remains is (ε, 1), which is described by a pure Fibonacci theory. Table IV summarizes the main features of this TQFT, denoted here by 'Fib'. This theory is analogous to that describing the descendant Fibonacci phase that we obtained by hybridizing arrays of Z_3 zero-modes in our parent ν = 2/3 system.

While it is not yet apparent, the condensation transitions that we discussed separately in the parent and descendant phases are in fact intimately related. This connection becomes evident upon examining (from a particular point of view) the structure of the M(6, 5) minimal model describing the boundary between the pure SU(2)_4 and SU(2)_3 ⊗ SU(2)_1 phases prior to the transitions. Appendix B shows that at that boundary the Z and (ξ, η) bosons are identified, which is reasonable since their SU(2) spin labels, conformal spins, and quantum dimensions all match. Thus one can move the Z boson smoothly from the parent to the descendant region where it 'becomes' (ξ, η), or vice versa. It follows that the transitions in the parent and descendant phases are not independent, but rather can be viewed as arising from the condensation of a single common boson.

Figure 10 summarizes the final physical picture that we obtain. The left-hand side represents the parent SU(2)_4 with descendant SU(2)_3 ⊗ SU(2)_1 setup analyzed by Ludwig et al. [90], which exhibits quite different physics from what we captured in this paper. Condensing a single boson throughout that system leads to the parent Z_3 with descendant Fib configuration illustrated on the right side of the figure. These parent and descendant states do, by contrast, closely relate to our ν = 2/3 quantum Hall setup with superconducting islands that drive the interior into the Fibonacci phase, in the sense that both systems exhibit the same deconfined bulk excitations in each region. There are, however, subtle differences between the system on the right side of Fig. 10 and our specific quantum Hall architecture that deserve mention.

First, the Abelian Z_3 TQFT technically does not quite describe the spin-unpolarized ν = 2/3 state: the theory must be augmented to accommodate the electron in this fermionic quantum Hall phase [156]. Moreover, the edge structure for the Z_3 TQFT admits a chiral central charge c = 2, whereas the ν = 2/3 state has c = 0 (because there are counterpropagating modes). Both of these issues are relatively minor for the purposes of our discussion, however, and in any case can easily be sidestepped by considering a bosonic parent system. In particular, as alluded to earlier, the bosonic (221) state, which provides an equally valid backdrop for the descendant Fibonacci phase, exhibits a chiral central charge of c = 2 and is described by a Z_3 TQFT with no modification.
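As a consistency check on this condensation picture (our own bookkeeping, using D = sqrt(Σ_a d_a²)), the total quantum dimension drops by the same factor of 2 in the parent and descendant theories:

```python
# Condensing the d = 1 boson reduces the total quantum dimension by the same
# factor of 2 in the parent (SU(2)_4 -> Z3) and the descendant
# (SU(2)_3 x SU(2)_1 -> Fib) theories discussed above.
import numpy as np

phi = (1 + np.sqrt(5)) / 2

D_su2_4 = np.sqrt(1 + 3 + 4 + 3 + 1)   # SU(2)_4 dims: 1, sqrt3, 2, sqrt3, 1
D_Z3 = np.sqrt(3)                       # deconfined remnant: 1, Y_1, Y_2
print(D_su2_4 / D_Z3)                   # -> 2.0

D_su2_3xsu2_1 = np.sqrt((1 + phi**2) * 2) * np.sqrt(2)  # SU(2)_3 (x) SU(2)_1
D_Fib = np.sqrt(1 + phi**2)             # Fib: 1, eps
print(D_su2_3xsu2_1 / D_Fib)            # -> 2.0 as well
```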
The Fib TQFT denoted on the right side of Fig. 10 also does not exactly describe our Fibonacci phase, because this state exhibits a local order parameter (and hence is not strictly described by any TQFT). This actually poses a far more minor issue than those noted above. Recall from Sec. V C that superconducting vortices do not generate additional nontrivial quasiparticles in the Fibonacci phase. Consequently the order parameter physics 'factors out' and essentially decouples from the topological sector. More formally, one can envision quantum disordering the superconductor by condensing vortices to eradicate the order parameter altogether without affecting the quasiparticles supported by the Fibonacci phase that we have constructed [159].

The TQFT perspective on our results espoused in this section has a number of virtues. For one, it clearly illustrates the simplicity underlying the end product of our construction, and it also unifies several related works that may at first glance appear somewhat distantly related. Another benefit is that the condensation picture used along the way naturally captures the confined non-Abelian domain wall defects supported by ν = 2/3 trenches with superconductivity. More generally, viewing Abelian phases as remnants of non-Abelian TQFTs, as we have done here, may be useful in various other settings as a way of similarly identifying nontrivial phases accessible from interacting extrinsic defects.

VII. FIBONACCI PHASE FROM UNIFORM TRENCHES

In Sec. II we identified two closely linked routes to spinless p + ip superconductivity from an integer quantum Hall system. The first utilized trenches with spatially uniform Cooper pairing and electron backscattering perturbations present simultaneously; the second considered trenches alternately gapped by pairing and backscattering, yielding chains of hybridized Majorana modes. In either case the trenches could be tuned to an Ising critical point, at which interchain coupling then naturally generated p + ip superconductivity. To construct a superconducting Z_3 Read-Rezayi analogue (the Fibonacci phase), Secs. IV and V adopted the second approach and analyzed chains of Z_3 generalized Majorana modes nucleated in a ν = 2/3 fractional quantum Hall fluid. This route enabled us to exploit the results of Ref. 114, which derived the relation between lattice and CFT operators at the Z_3 parafermion critical point for a single chain, to controllably study the 2D coupled-chain system. Here we will argue that, as in the integer quantum Hall case, the same physics can also be obtained from spatially uniform ν = 2/3 trenches. This is eminently reasonable, since on the long length scales relevant at criticality the detailed structure of the trenches should become unimportant.

The analysis proceeds in two stages. First we will use results from Lecheminant, Gogolin, and Nersesyan (LGN) [160] to argue that a ν = 2/3 trench with uniform pairing and backscattering perturbations also supports a Z_3 parafermion critical point. The relation between bosonized fields and CFT operators at criticality will then be deduced by coarse-graining the corresponding relationship obtained in Sec. IV for the spatially non-uniform case. At that stage our results from Sec. V carry over straightforwardly, allowing us to immediately deduce the existence of a Fibonacci phase in the uniform-trench setup.
We start by reviewing the critical properties [160] of a toy Hamiltonian of the form

H_LGN = ∫_x [ (v/2π)((∂_x θ)² + (∂_x φ)²) − u_1 cos(3θ) − u_2 cos(3φ) ],  (88)

where the fields satisfy [161]

[φ(x), θ(x′)] = (2πi/3) Θ(x′ − x).  (89)

The u_{1,2} perturbations in H_LGN are both relevant at the Gaussian fixed point and favor locking θ and φ to the three distinct minima of the respective cosines. Because of the nontrivial commutator above, however, these terms compete and favor physically distinct gapped phases, very much like the tunneling and pairing terms in our quantum Hall trenches. Using complementary non-perturbative methods, LGN showed that the self-dual limit corresponding to u_1 = u_2 realizes the same Z_3 parafermion critical point as the three-state quantum clock model [160].

To expose the connection to our quantum Hall setup, consider the Hamiltonian H introduced in Sec. IV A for a single trench in a ν = 2/3 fluid with backscattering and Cooper pairing induced uniformly [Eq. (90)]. As before, φ_{ρ/σ} and θ_{ρ/σ} represent fields for the charge/spin sectors, while t and ∆ denote the tunneling and pairing strengths. In writing the first line of H we have assumed a particularly simple form for the edge density-density interactions that can be described with velocities v_{ρ/σ}. Upon comparison of Eqs. (42) and (89), one sees that the charge-sector fields obey the same commutation relation as those in the model studied by LGN. Furthermore, modulo the spin-sector parts, the u_{1,2} perturbations in Eq. (88) have the same form as the tunneling and pairing terms above. This hints at common critical behavior for the two models.

The simplest way to make this relation precise is to include a perturbation that explicitly gaps the spin sector (while leaving the charge sector intact). One such perturbation arises from correlated spin-flip processes, δH ∝ ∫_x ψ†_{1↑} ψ_{1↓} ψ†_{2↓} ψ_{2↑} + H.c., where ψ_{1α} and ψ_{2α} are spin-α electron operators acting on the top and bottom sides of the trench, respectively. In bosonized language this yields a term of the form −u_σ ∫_x cos(2θ_σ). Suppose that the coupling u_σ dominates over t, ∆ and drives an instability in which θ_σ is pinned by the cosine potential above. At low energies the Hamiltonian H in Eq. (90) that describes the remaining charge degrees of freedom then maps onto the LGN Hamiltonian in Eq. (88). Consequently, the self-dual critical point at which |t| = |∆| is likewise described by the Z_3 parafermion CFT.

For the following reasons we believe it likely that the same critical physics arises without explicitly invoking the u_σ perturbation. Recall that both t and ∆ favor pinning the spin-sector field θ_σ in precisely the same fashion, but gap the charge sector in incompatible ways [see Eqs. (44) and (45)]. Suppose that we start from a phase in which the tunneling t gaps both sectors. Increasing ∆ at fixed t must eventually induce a phase transition in the charge sector. Provided the spin sector remains gapped throughout, it suffices to replace the cos θ_σ term in Eq. (90) by a constant across the transition. The model then once again reduces to H_LGN and hence exhibits a Z_3 parafermion critical point at |t| = |∆|. We stress that although it is difficult to make rigorous statements about this nontrivial, strongly coupled field theory, this outcome is nevertheless intuitively very natural given our results for criticality in spatially modulated trenches.
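The three-state quantum clock model invoked here can be probed directly on small systems. The sketch below (our own illustration; the operator conventions for σ and τ are ours) exact-diagonalizes the periodic clock chain and shows the finite-size gap closing at the self-dual point f = J:

```python
# Exact-diagonalization sketch of the three-state quantum clock (Potts) chain,
# H = -J sum_j (sigma_j^dag sigma_{j+1} + h.c.) - f sum_j (tau_j + tau_j^dag),
# with periodic boundary conditions. Conventions below are ours.
import numpy as np

omega = np.exp(2j * np.pi / 3)
sigma = np.diag([1.0, omega, omega**2])   # clock operator
tau = np.roll(np.eye(3), 1, axis=0)       # shift operator: tau|k> = |k+1 mod 3>

def op(site, O, N):
    """Embed the 3x3 operator O at `site` in an N-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, O if j == site else np.eye(3))
    return out

def H(N, J, f):
    h = np.zeros((3**N, 3**N), dtype=complex)
    for j in range(N):
        bond = op(j, sigma, N).conj().T @ op((j + 1) % N, sigma, N)
        h -= J * (bond + bond.conj().T)
        tj = op(j, tau, N)
        h -= f * (tj + tj.conj().T)
    return h

for N in [4, 5, 6]:
    E = np.linalg.eigvalsh(H(N, J=1.0, f=1.0))   # self-dual point f = J
    print(N, "finite-size gap:", E[1] - E[0])    # closes roughly as 1/N
```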
Our primary interest lies in 'stacking' such critical 1D systems to access new exotic 2D phases. Physical interchain perturbations can easily be constructed in terms of bosonized fields, as in Sec. V, though at the Z_3 parafermion critical point these fields no longer constitute the 'right' low-energy degrees of freedom. An essential technical step is identifying the correspondence between bosonized and CFT operators at criticality, so that one can systematically disentangle high- and low-energy physics. We will now deduce this relationship for the quasiparticle creation operators that are relevant for interchain processes in our ν = 2/3 setup with uniform trenches.

To do so we first revisit the non-uniform system analyzed in Sec. IV. By combining Eqs. (48), (60a), and (60b), we obtain expansions, valid at the parafermion critical point, for the operators e^{iφ_{1α}(x_j)} and e^{iφ_{2α}(x_j)} that create charge-2e/3 quasiparticles on the top and bottom trench edges at position x_j in domain wall j [φ_{1/2α} relates to the charge- and spin-sector fields through Eqs. (41)]. These expansions [Eqs. (92)] contain a smooth piece as well as a piece that oscillates with the domain-wall index j; as before, a and b denote non-universal constants, and terms with subleading scaling dimensions are dropped. Connection with the uniform trench can now be made upon coarse-graining these expressions, specifically by averaging over sums and differences of quasiparticle operators at adjacent domain walls in a given unit cell. (Each unit cell contains two domains, as shown in Fig. 5.) The oscillating terms clearly cancel for the sum, leaving a smooth expansion [Eq. (93)] in which x now denotes a continuous coordinate. One can instead isolate the parafermion fields by averaging over differences of quasiparticle operators at neighboring domain walls [Eq. (94)]. The extra derivatives on the left-hand side of Eq. (94) reflect the fact that the parafermions acquire a relative minus sign under parity P compared to the fields on the right-hand side of Eq. (93) [114]. More generally, the coarse-graining procedure used here merely ensures that the quantum numbers carried by the bosonized and CFT operators agree with one another.

We are now in position to recover the physics discussed in Sec. V, but starting instead from a system of spatially uniform critical trenches. Equations (93) and (94) allow us to construct interchain quasiparticle hoppings that reproduce the λ_{a,b} terms in Eq. (67). The effective low-energy Hamiltonians in the two closely related setups are then identical, and hence so are the resulting phase diagrams. In particular, as Fig. 6(a) illustrates, if the interchain coupling λ_a > 0 dominates then the uniform-trench system flows to the Fibonacci phase. Determining the microscopic parameters (in terms of the underlying electronic system) required to enter this phase remains an interesting open issue, though such a state is in principle physically possible in either setup that we have explored.
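The sum/difference averaging used in this coarse-graining step can be illustrated with a toy lattice signal (purely schematic; the actual operator content resides in Eqs. (92)-(94)):

```python
# Toy illustration of the coarse-graining step: averaging sums of neighbors
# keeps the smooth component of a lattice signal, while averaging differences
# keeps the component oscillating as (-1)^j.
import numpy as np

j = np.arange(200)
x = 0.1 * j
signal = np.cos(0.15 * x) + (-1.0)**j * np.sin(0.1 * x)

sums = 0.5 * (signal[1:] + signal[:-1])
diffs = 0.5 * (signal[1:] - signal[:-1])
mid = 0.5 * (x[1:] + x[:-1])

print(np.abs(sums - np.cos(0.15 * mid)).max())  # small: oscillation cancelled
print(np.abs(diffs).max())                      # order one: oscillation kept
```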
VIII. SUMMARY AND DISCUSSION

The introduction to this paper provided a broad overview of the main physical results derived here. Having now completed the rather lengthy analysis, we will begin this discussion with a complementary and slightly more technical summary.

Our setup begins with a spin-unpolarized ν = 2/3 Abelian fractional quantum Hall state, also known as the (112) state, as the backbone of our heterostructure. The (112) state is a strongly correlated phase built from spin-up and spin-down electrons partially occupying their lowest Landau level. At the boundary with the vacuum its edge structure consists of a charge mode (described by φ_↑ + φ_↓) and a counterpropagating neutral mode (described by φ_↑ − φ_↓). We first showed that a long rectangular hole, a 'trench', in this fractional quantum Hall system realizes a Z_3 parafermion critical point when coupled to an ordinary s-wave superconductor. This nontrivial critical theory with central charge c_L = c_R = 4/5 is well-known from earlier studies of the three-state quantum clock model, and moreover is important for characterizing the edge states of the Z_3 Read-Rezayi phase whose properties we sought to emulate. We presented two related constructions. The first utilizes an alternating pattern of superconducting and non-superconducting regions in the trench, as described in Sec. IV, to essentially engineer a non-local representation of the three-state clock model. The second, explored in Sec. VII, employs a 'coarse-grained' variation wherein the trench couples uniformly to a superconductor throughout. Tuning to the Z_3 parafermion critical point follows by adjusting the coupling between domain walls (in the case of modulated trenches) or the electron tunneling across the trench (in the uniform-trench setup). In both scenarios, the neutral excitations are gapped out while the charge modes provide the low-energy degrees of freedom. One remarkable feature of our mapping is that we can identify the relation between 'high-energy' operators and the chiral fields describing the low-energy physics near criticality, given by Eqs. (27) for the lattice construction and Eqs. (93) and (94) for the continuum version. This key technical step enabled us to perform calculations parallel to those for the coupled Majorana chains described in Sec. II, but at a nontrivial, strongly interacting critical point.

To construct a 2D non-Abelian phase reminiscent of the Z_3 Read-Rezayi state, we consider an array of these critical trenches in the ν = 2/3 quantum Hall fluid, with neighboring trenches coupled via charge-2e/3 quasiparticle hopping (see Fig. 6 for the lattice setup). With the correspondence between quasiparticle operators and CFT fields in hand, we find that the second-most-relevant interchain coupling corresponds to a term that couples the right-moving parafermion field ψ_R(y) from trench y with the adjacent left-mover ψ_L(y + 1) from trench y + 1. This perturbation gaps out each critical trench except for the first right-mover and the final left-mover. The system then enters a stable 2D chiral topological state, as shown in Sec. V, which we dubbed the 'Fibonacci phase'. Since this phase exhibits a bulk gap, its topological properties are stable; it is therefore neither necessary to tune the individual chains exactly to criticality, nor to set the most-relevant interchain coupling precisely to zero.
We uniquely established the universal topological properties of the Fibonacci phase by identifying its two-fold ground state degeneracy on a torus (which implies two anyon species), its fusion rules, and its quantum dimensions via the topological entanglement entropy. The quasiparticle structure present here is elegant in its simplicity yet rich in content, consisting of a trivial particle 1 and a Fibonacci anyon ε obeying the simple fusion rule ε × ε ∼ 1 + ε. One of the truly remarkable features of this state is that the ability to exchange Fibonacci anyons, and to distinguish the Fibonacci anyon from the vacuum, is sufficient to perform any desired quantum computation in a completely fault-tolerant manner [145, 146].

The Fibonacci phase supports gapless edge excitations. When this state borders the parent Abelian quantum Hall fluid from which it descends [as in Fig. 1(b)], they are described by a chiral Z_3 parafermion CFT with central charge c = 4/5, exactly as in the Z_3 Read-Rezayi phase modulo the charge sector. The edge states arising at the interface with the vacuum can be obtained upon shrinking the outer Abelian quantum Hall liquid, thereby hybridizing the parafermion and quantum Hall edge fields. If the Fibonacci phase descends from a bosonic analogue of the spin-unpolarized ν = 2/3 state, i.e., the bosonic (221) state, then the boundary with the vacuum exhibits edge modes described by the G_2 Kac-Moody algebra at level 1. This fully chiral edge theory has central charge c = 14/5, contains two primary fields associated with the bulk excitations 1 and ε, and occurs also in the pure Fibonacci topological quantum field theory discussed in Sec. VI. If instead the Fibonacci phase emerges out of the fermionic (112) state, then the corresponding edge is not fully chiral and does not in general admit a decomposition into independent left- and right-movers. However, we find that the edge theory may be reconstructed such that it factorizes into two left-moving fermions with central charge c_L = 2 and a right-moving sector identical to the bosonic case with central charge c_R = 14/5.
Because of the superconductivity in our setup, the Fibonacci phase admits gapless order parameter phase fluctuations but is otherwise fully gapped away from the edge. Nevertheless, its low-energy Hilbert space consists of a tensor product of the states for a topologically trivial superconductor and those of a gapped topological phase. In this sense the superconductivity is peripheral: it provides an essential ingredient in our microscopic construction, but does not influence the Fibonacci phase's universal topological properties. This stands in stark contrast with the case of a spinless p + ip superconductor. There an h/2e superconducting vortex binds a Majorana zero-mode and thus exhibits many characteristics of σ particles (i.e., Ising anyons), despite being logarithmically confined by order parameter energetics. If superconductivity is destroyed by the condensation of double-strength h/e vortices, then the h/2e vortex becomes a bona fide deconfined σ particle in the resulting insulating phase. On the other hand, destroying superconductivity by condensing single-strength h/2e vortices produces a trivial phase. The physics is completely different in the Fibonacci phase, where an h/2e vortex braids trivially with an ε particle. (Here we assume that the vortex does not 'accidentally' trap a Fibonacci anyon.) Condensation of h/2e vortices therefore simply leaves the pure Fibonacci phase with no residual order parameter physics. It is interesting to note that richer physics arises upon condensing nh/2e vortices, which yields the Fibonacci phase tensored with a Z_n gauge theory; this additional sector is, however, clearly independent of the Fibonacci phase.

A number of similarities exist between our Fibonacci phase and previously constructed models that harbor Fibonacci anyons. We have already emphasized several parallels with the Z_3 Read-Rezayi state. Teo and Kane's coupled-wire construction of this non-Abelian quantum Hall phase is particularly close in spirit to this paper (and indeed motivated many of the technical developments used here). The Z_3 Read-Rezayi state, however, certainly represents a distinct state of matter with different universal topological properties. For instance, there the fields ψ and σ (with appropriate bosonic factors) represent deconfined, electrically charged quasiparticles, whereas the Fibonacci anyon ε provides the only nontrivial quasiparticle in the Fibonacci phase. Fibonacci anyons also occur in the exactly soluble lattice model of Levin and Wen [162]. Important differences arise here too: their model is non-chiral, and has the same topological properties as two opposite-chirality copies of the Fibonacci phase constructed in this paper. (See also the related works of Refs. 163 and 164 for loop gas models that may support such a non-chiral phase.) Finally, recent unpublished work by Qi et al. accessed a phase with Fibonacci anyons using Z_n lattice operators as building blocks, similar to those that arise in our spatially modulated trenches [165]. It would thus be interesting to explore possible connections with our study.

We now turn to several other outstanding questions and future directions raised by our results, placing particular emphasis on experimental issues.
Realizing non-Abelian anyons with universal braid statistics in any setting carries great challenges, yet correspondingly great rewards if they can be overcome. Our proposal is no exception. The price that one must pay to realize Fibonacci anyons as we envision here is that a fractional quantum Hall system must intimately contact an s-wave superconductor. For several reasons, however, accessing the Fibonacci phase may be less daunting than it appears. First of all, Abelian fractional quantum Hall states appear in many materials, and not just in buried quantum wells such as GaAs. Among the several possible canvases, graphene stands out as particularly promising due to the relative ease with which a proximity effect can be introduced [166-168]. Graphene can also be grown on metallic substrates [169], and if such a substrate undergoes a superconducting transition a strong proximity effect may result.

Another point worth emphasizing is that weak magnetic fields are not required, which is crucial given that our proposal relies on the fractional quantum Hall effect. This stems from the fact that superconducting vortices in the Fibonacci phase need not carry topologically nontrivial particles. Assuming that Fibonacci anyons do not happen to energetically bind to vortex cores (which again they need not), any field strength up to the (type II) superconductor's upper critical field H_{c2} should suffice. By contrast, in the case of a spinless p + ip superconductor the density of vortices must remain low because they necessarily support Majorana modes. Appreciable tunneling between these, which will arise if the spacing between vortices becomes too small, therefore destabilizes the Ising phase.

We also reiterate that preparing precisely the somewhat elaborate, fine-tuned setups explored here is certainly not necessary for accessing the Fibonacci phase. Many of the features we invoked in our analysis, including the multi-trench geometry and all of the fine-tuning that went with it, served purely as a theoretical crutch that enabled us to decisively show that our model supports this state and to identify its properties. The Fibonacci phase is stable to (at least) small perturbations, and the extent of its stability remains a very interesting open question. It seems quite possible that this stability regime extends across a large swath of the parameter space for a quantum Hall state coupled to a superconductor. Hinting that this may be so is the fact that the Fibonacci phase we have constructed is actually isotropic and translationally invariant in the long-wavelength limit. Hence, it is even possible that a completely 'smeared' Abelian quantum Hall/superconductor heterostructure enters this phase even in the absence of trenches. Although the methods used in this paper are not applicable to this case, it may be possible to study such a scenario by applying exact diagonalization or the density-matrix renormalization group to small systems of electrons in the lowest Landau level. Numerical studies along these lines are analogous to previous studies of the fractional quantum Hall effect, but with the added wrinkle that U(1) charge conservation symmetry is broken. This almost entirely untapped area seems ripe for discovery.
As a final remark on experimental realizations, we stress that superconductivity may be altogether inessential, even at the microscopic level. To see why, it is useful to recall that the superconductors in our construction simply provide a mechanism for gapping the edge states opposite a trench that is 'incompatible' with the gapping favored by ordinary electronic backscattering. When balanced, these competing terms thus drive the system to a nontrivial critical point that we bootstrapped off of to enter the Fibonacci phase. In beautiful theoretical studies, Refs. 58 and 83 showed that similar incompatible gap-generating processes can arise in certain quantum Hall bilayers without Cooper pairing; for instance, if one cuts a trench in the bilayer, electrons can backscatter by tunneling from 'top to bottom' or 'side to side'. It may thus be possible to realize the Fibonacci phase in a bilayer fractional quantum Hall setup by regulating the inter- and intra-layer tunneling terms along trenches, following Refs. 58 and 83. Such an avenue would provide another potentially promising route to Fibonacci anyons that is complementary to the superconductor/quantum Hall heterostructures that we focused on here.

Our construction naturally suggests other interesting generalizations as well. The ν = 2/3 state is not the only spin-singlet fractional quantum Hall phase; another can occur, e.g., at ν = 2/5. These may provide equally promising platforms for the Fibonacci phase or relatives thereof. Moreover, our construction is by no means limited to fermionic quantum Hall phases. As we noted earlier, the bosonic (221) state, for instance, leads to nearly identical physics (which is actually simpler in some respects). By following a similar route to that described here, it may be possible to build on these quantum Hall states to construct other non-Abelian topological phases, perhaps realizing Z_k parafermions, SU(2)_k, or yet more exotic phases.
To conclude, we briefly discuss the longer-term prospects of exploiting our model for quantum computation. Quantum information can be encoded in a many-ε state using either a dense or sparse encoding. There are two states of three ε particles with total charge ε and also two states of four ε particles with total charge 1, and either pair can be used as a qubit. The unitary transformations generated by braiding are dense within the projective unitary group on the many-anyon Hilbert space and, therefore, within the unitary group on the computational subspace [145,146]. However, this presupposes that we can create pairs of Fibonacci anyons at will, and braid and detect them. Since they carry neither electric charge nor any flux, this is challenging. In this respect, the rather featureless ε particles are analogous to ψ particles in an Ising anyon phase. This suggests the following approach. Consider the case of a single Ising or three-state clock model on a ring. If we make one of the bond couplings equal to −∞, then it breaks the ring into a line segment and the spins at the two ends are required to have opposite values. In the Ising case, this means that if one end is 'spin up', the other is 'spin down', and vice versa. This forces a ψ into the chain. However, this particle is not localized and can move freely. If we now couple many such chains, some of which have ψ's, then they can also move between chains and annihilate. However, we can in principle trap a ψ by reducing the gap at various locations. In the Z_3 clock case, if one end of a chain is A, then the other end is 'not-A'. (Here, we are calling the three states A, B, C.) This forces an ε particle into a single chain. It is plausible that when the chains are coupled through their parafermion operators, these ε particles will be able to move freely between chains. They could then similarly be trapped by locally suppressing the gap, as in the Ising case. Showing that this scenario is correct or designing an alternate protocol for manipulating Fibonacci anyons poses an important challenge for future work.

…the Z_3 and Z_3^dual transformations, translations T_x, parity P, charge conjugation C, and a time-reversal transformation T. In this Appendix we illustrate that each of these symmetries exhibits a physical analogue in the quantum Hall architectures discussed in Secs. IV and V. To this end, consider the geometry of Fig. 5, in which a single trench hosted by a ν = 2/3 system yields a chain of coupled Z_3 generalized Majorana operators; the Hamiltonian describing the hybridization of these modes is given in Eq. (57). Below we identify the realization of the clock-model symmetries in this specific setup. The results apply straightforwardly to the multi-trench case as well. Note that we frequently make reference to the bosonized fields, and to the integer operators describing their pinning induced by tunneling t or pairing ∆, defined in Sec. IV.

(i) In the limit where ∆ = t = 0, the electron number on each side of the trench is separately conserved. This is reflected in independent global U(1) symmetries that send θ_ρ → θ_ρ + a_1 and φ_ρ → φ_ρ + a_2 for arbitrary constants a_1,2. Restoring ∆ and t to non-zero values breaks these continuous symmetries down to a pair of discrete Z_3 symmetries, which is immediately apparent from Eq. (43).
The remaining invariance under φ_ρ → φ_ρ + 2π/3, which transforms n_j → 1 + n_j, corresponds to the clock-model symmetry Z_3; similarly, the transformation θ_ρ → θ_ρ + 2π/3 sends m_j → 1 + m_j and corresponds to Z_3^dual.

(ii) The symmetry T_x corresponds to a simple translation along the trench that shifts m_j → m_{j+1} and n_j → n_{j+1}.

(iii) In the clock model, parity P corresponds to a reflection that interchanges the generalized Majorana operators α_{Rj} and α_{Lj}. Since the analogous operators defined in Eqs. (49) involve quasiparticles from opposite sides of the trench, here the equivalent of P corresponds to a π rotation in the plane of the quantum Hall system. We seek an implementation of this rotation that leaves the total charge and spin densities ρ_+, S_+ invariant; changes the sign of the density differences ρ_−, S_−; and preserves the bosonized form of δH in Eq. (43). The following satisfies all of these properties: θ_ρ(x) → −θ_ρ(−x) − π/3, φ_ρ(x) → φ_ρ(−x) + 4π/3, θ_σ(x) → −θ_σ(−x), and φ_σ(x) → φ_σ(−x). (We have included the factor of 4π/3 in the transformation of φ_ρ so that the generalized Majorana operators in our quantum Hall problem transform as in the clock model under P. This factor transforms all electron operators trivially and thus corresponds to an unimportant global gauge transformation.) Taking the rotation about the midpoint of a pairing-gapped section, the integer operators transform as M → −M, m_j → −m_{−j−1}, n_j → n_{−j} + M + 2 under this operation.

(iv) Charge conjugation C arises from a particle-hole transformation on the electron operators ψ_{1α} → ψ†_{1α}, ψ_{2α} → −ψ†_{2α}, which leaves the perturbations in Eq. (40) invariant. In bosonized language this corresponds to θ_ρ → −θ_ρ − π/3, φ_ρ → −φ_ρ + π/3, θ_σ → −θ_σ, and φ_σ → −φ_σ. The integer operators in turn transform as M → −M, m_j → −m_j, and n_j → −n_j under C. Note that it is easy to imagine adding perturbations that violate this symmetry in the original edge Hamiltonian (e.g., spin flips acting on one side of the trench); however, such perturbations project trivially into the ground-state manifold. Hence one should view C as an emergent symmetry valid in the low-energy subspace in which we are interested.

(v) Finally, for the equivalent of the clock-model symmetry T we need to identify an antiunitary transformation exhibited by our ν = 2/3 setup that squares to unity in the ground-state subspace and swaps the α_{Rj} and α_{Lj} operators. Physical electronic time-reversal T_ph composed with a reflection R_y about the length of the trench (which can be a symmetry for electrons in a magnetic field) has precisely these properties, i.e., T = T_ph R_y. This operation transforms the electron operators as ψ_{1α} → iσ^y_{αβ} ψ_{2β}, ψ_{2α} → iσ^y_{αβ} ψ_{1β} and sends the bosonized fields to θ_ρ → θ_ρ, φ_ρ → −φ_ρ + π/3, θ_σ → −θ_σ, and φ_σ → φ_σ + π. The integer operators correspondingly transform under T as M → −M, m_j → m_j + M, and n_j → −n_j. Notice that whereas this composite operation squares to −1 when acting on the original electron operators, in the projected subspace (T_ph R_y)² = +1 as desired.

Appendix B: M(6,5) edge structure via boson condensation
This Appendix deals with the setup shown in the left side of Fig. 10, in which a parent state described by an SU(2)_4 TQFT hosts a descendant SU(2)_3 ⊗ SU(2)_1 phase [90]; see Tables I and II for summaries of the field content in each region. Our specific goal is to substantiate the claim made in Sec. VI that the Z and (ξ, η) bosons supported in the bulk of the parent and descendant states, respectively, are equivalent at their interface. [We are again using notation where fields from SU(2)_3 ⊗ SU(2)_1 are labeled (A, B), with A in SU(2)_3 and B in SU(2)_1.] To meet this objective, we will describe how one can recover, via edge boson condensation, the M(6,5) minimal model describing gapless modes at the interface between the parent and descendant phases. As we will see, this viewpoint makes the identification of the Z and (ξ, η) bosons immediately obvious.

First, observe that the gapless modes bordering the SU(2)_4 and SU(2)_3 ⊗ SU(2)_1 topological liquids are naively captured by an SU(2)_3 ⊗ SU(2)_1 ⊗ SU(2)_4-bar CFT, where the bar indicates a reversed chirality. For concreteness, we will assume that the barred sector describes left-movers while the others correspond to right-movers. Adopting similar notation as above, we describe fields from the product edge theory as triplets of fields from the constituent sectors, e.g., (ε, η, X). (Note that this Appendix will employ the same symbols for primary fields at the interface and bulk anyons to facilitate the connection with Sec. VI.) In total, forty such triplets exist, far more than the ten fields found in M(6,5). Any non-chiral boson in this edge theory can, however, condense at the interface, thereby reducing the number of distinct deconfined fields. To avoid possible confusion, we stress that in contrast to Sec. VI we assume throughout this Appendix that the bulk properties of the parent and descendant phases remain intact.

Ignoring chirality for the moment, we find only three such bosonic combinations (i.e., triplets with integer conformal spin and quantum dimension d = 1). They are (1, 1, Z), (ξ, η, 1), and (ξ, η, Z). The right- and left-moving conformal dimensions of these fields are respectively given by (0, 1), (1, 0), and (1, 1). Consequently, the first two fields form chiral bosons and so cannot condense without an accompanying bulk phase transition in the parent or nucleated liquid, which again we preclude here. The last field, (ξ, η, Z), represents a non-chiral Z_2 boson, and as we now argue, when condensed it results in the M(6,5) minimal model on the edge.

To see this, note that one can divide the forty fields of SU(2)_3 ⊗ SU(2)_1 ⊗ SU(2)_4-bar into sets of fields A_i and B_i (with i = 1, ..., 20) related by fusion with the Z_2 boson (ξ, η, Z).
That is, A_i × (ξ, η, Z) ∼ B_i. This reduces the number of fields from forty to twenty, still more than are present in the M(6,5) minimal model. There is, however, an additional criterion that one needs to consider. Namely, only when the conformal spins of A_i and B_i match (mod 1) can a well-defined spin be assigned to the new field A_i ≡ B_i following the condensation of (ξ, η, Z); otherwise those fields become confined. One can readily verify that there are ten pairs of fields A_i and B_i for which the conformal spins agree in the above sense, and these deconfined fields correspond to the ten fields of the M(6,5) minimal model. This picture of M(6,5) as an SU(2)_3 ⊗ SU(2)_1 ⊗ SU(2)_4-bar edge theory with (ξ, η, Z) condensed is very useful. In particular, since (1, 1, Z) × (ξ, η, Z) ∼ (ξ, η, 1), it follows that the Z and (ξ, η) bosons native to the parent and descendant phases are indeed identified at their interface, which is what we set out to show.

FIG. 1. Schematic illustration of main results. Abelian quantum Hall states interlaced with an array of superconducting islands (left column) realize analogues of exotic non-Abelian quantum Hall states (right column). The interface between the superconducting regions and the surrounding Abelian quantum Hall fluids supports chiral modes similar to those on the right, but without the bosonic charge sector. (We suppress the edge states at the outer boundary of the Abelian quantum Hall states for simplicity.) Solid circles denote deconfined non-Abelian excitations, while open circles connected by dashed lines represent confined h/2e superconducting vortices. Quasiparticle charges are also listed for the non-Abelian quantum Hall states. In (a) σ particles represent Ising anyons, which in the p + ip phase on the left correspond to confined vortex excitations. In (b) ε is a Fibonacci anyon that exhibits universal braid statistics. The superconducting Fibonacci phase is topologically ordered and supports deconfined ε particles, similar to the Read-Rezayi state. Vortices in this nontrivial superconductor do not lead to new quasiparticle types, but can in principle trap Fibonacci anyons.

FIG. 3. (a) Variation on the setup of Fig. 2(a) that also supports a p + ip superconducting state with Ising anyons. Here a ν = 1 quantum Hall system hosts spatially modulated trenches whose edge states are gapped in an alternating fashion by backscattering t and Cooper pairing ∆. When the trenches decouple and the gapped regions are 'large', each domain wall binds a Majorana zero-mode. Electron hopping across the domains hybridizes the chain of Majorana modes in each trench through couplings λ_∆ and λ_t shown above. These couplings favor competing gapped phases, and when λ_∆ = λ_t each chain realizes a critical point with counterpropagating gapless Majorana modes in the bulk, similar to the uniform trench setup of Fig. 2(a). Turning on a weak coupling t_⊥(j − j′) between domain walls j and j′ in adjacent trenches then generically drives the system into a p + ip phase (or a p − ip state with opposite chirality). (b) Phase diagram for the 2D array of coupled Majorana modes near criticality. Here λ_⊥ and λ′_⊥ represent interchain couplings between gapless Majorana fermions at the critical point, which follow from t_⊥(j − j′) according to Eq. (15).
FIG. 4. Spin-unpolarized ν = 2/3 setup with a long, narrow trench producing counterpropagating sets of edge states described by fields φ_1 on the top and φ_2 on the bottom. One way of gapping these modes is through electron backscattering across the interface, which essentially 'sews up' the trench. A second gapping mechanism can arise if an s-wave superconductor mediates spin-singlet Cooper pairing of electrons from the top and bottom sides of the trench, as illustrated above. These processes lead to physically distinct gapped states that cannot be smoothly connected, resulting in the formation of Z_3 generalized Majorana zero-modes at domain walls separating the two.

FIG. 6. (a) Multi-chain generalization of Fig. 5 in which a sequence of trenches labeled by y = 1, ..., N is embedded in a spin-unpolarized ν = 2/3 quantum Hall system. Once again the edge modes opposite each trench are alternately gapped by electron backscattering and Cooper pairing, with m_i(y) and n_i(y) characterizing the pinned charge-sector fields in a given domain [see Eqs. (44) and (45)]. We assume that the Z_3 generalized Majorana operators bound to each domain wall hybridize strongly within a trench and weakly between neighboring trenches. Underlying this hybridization is tunneling of 2e/3 charges, which can only take place through the fractional quantum Hall fluid; examples of allowed and disallowed processes are illustrated above. (b) Phase diagram for this system of weakly coupled chains, starting from the limit where each chain is tuned to a critical point described by Z_3 parafermion conformal field theory. The couplings λ_{a/b} represent interchain perturbations defined in Eq. (67).

FIG. 7. (a) Phase diagram of the 'ladder' Hamiltonian in Eq. (71) for complex λ_a. At λ_a = 0, the ladder resides at a Z_3 parafermion critical point. Along the three solid lines the ladder remains gapless, but flows instead to the tricritical Ising point. Everywhere else the system is gapped and exhibits two symmetry-unrelated ground states together with the 'Fibonacci kink spectrum' described in the main text. The dotted lines indicate integrability. (b) Effective double-well Ginzburg-Landau potential of the ladder Hamiltonian, which provides an intuitive picture for the ground-state degeneracy and Fibonacci kink spectrum. The equal-depth wells represent the two ground-state sectors. Excitations in these sectors are non-degenerate, and correspond to massive modes about the asymmetric well minima. 'Kinks' and 'antikinks' interpolate between ground states, and turn out to have the same energy as the 'oscillator' excitations in one of the ground states. This is the hallmark of the Fibonacci kink spectrum. (c) Energy versus momentum obtained via the truncated conformal space approach for each superselection sector. (The [ε1] spectrum is identical to that of [1ε] with k → −k.) Notice the two ground states, the nearly identical single-particle bands in [1ε] and [εε], as well as the multi-particle continuum in all sectors.

FIG. 8.
(a) Bipartition of the superstructure that cuts between two chains on a cylinder. (b) Entanglement entropy S_E of the |1⟩ (red) and |ε⟩ (blue) ground states of the 2D Fibonacci phase as a function of the cylinder circumference L_x, computed numerically via the truncated conformal space approach. Fitting S_E for state |1⟩ to the form sL_x − γ at large L_x, we extract the intercept −γ ≈ −0.65; see the solid line in the figure. This yields a total quantum dimension D ≈ 1.9 for the Fibonacci phase. Taking the difference S_E[|ε⟩] − S_E[|1⟩] = log d_ε, we deduce the quantum dimension d_ε ≈ 1.62 ≈ ϕ, which confirms that ε corresponds to the Fibonacci anyon (see the short consistency check after the table captions below).

FIG. 9. Cylinder geometry used to deduce the properties of h/2e superconducting vortices in the Fibonacci phase. We initially assume that pure ν = 2/3 quantum Hall states border the Fibonacci phase from above and below. This results in two well-defined boundaries: the Fibonacci phase/quantum Hall edge, and the quantum Hall/vacuum edge. Adiabatically inserting h/2e flux through the cylinder (which is topologically equivalent to an h/2e vortex in the bulk of a planar Fibonacci phase) pumps charge e/3 across each quantum Hall region as shown above. Because the charge difference across the trenches then changes, the upper Fibonacci phase/quantum Hall edge binds either a ψ or σ excitation that carries charge 2e/3 mod 2e. The upper quantum Hall/vacuum edge, however, binds charge e/3, so that in total the vortex carries only fermion parity. If one shrinks the pure quantum Hall regions so that the two boundaries hybridize, ψ and σ lose their meaning since other sectors mix in. The final conclusion is that an h/2e vortex traps either a trivial particle or a Fibonacci anyon depending on non-universal details, but does not lead to new quasiparticle types.

TABLE II. Properties of the SU(2)_3 and SU(2)_1 topological quantum field theories, which describe the descendant phase on the left side of Fig. 10. In the table, c is the chiral central charge, j is an SU(2) spin label, h denotes conformal spin, d represents the quantum dimension, and ϕ is the golden ratio.

…expected from a hypothetical TQFT describing our ν = 2/3 state with domain walls binding Z_3 zero-modes. First, one should have Abelian fields Y_1 and Y_2 corresponding to charge 2e/3 and 4e/3 excitations (which can live either on the gapped regions of the trenches or in the bulk of the quantum Hall fluid). Conservation of charge mod 2e suggests the fusion rules Y_1 × Y_1 ∼ Y_2, Y_2 × Y_2 ∼ Y_1, and Y_1 × Y_2 ∼ 1, where 1 denotes the neutral identity channel. One also might expect non-Abelian fields X corresponding to domain walls separating pairing- and tunneling-gapped regions of the trenches. Recalling that the Cooper-paired regions can carry charge 0, 2e/3, or 4e/3 mod 2e, the merger of two adjacent superconducting islands in a trench should be captured by the fusion rule X × X ∼ 1 + Y_1 + Y_2. From this perspective X quite […]
TABLE III. Field content and fusion rules for SU(2)_4 upon condensing the bosonic Z field listed in Table I. As in the other tables, j is an SU(2) spin label, h denotes conformal spin, and d represents the quantum dimension for each particle. The X field is confined by the condensation and hence exhibits an ill-defined conformal spin; this field obeys the same fusion rules and projective non-Abelian statistics as the (also confined) domain wall defects in our ν = 2/3 trenches. Additionally, Y_1 and Y_2 represent Abelian fields that correspond to charge 2e/3 and 4e/3 excitations in our quantum Hall setup. If one ignores the confined excitation X, the remainder is a pure Abelian Z_3 theory with only the 1, Y_1, and Y_2 particles.

TABLE IV. The fields of Fib, along with their corresponding conformal spin h, quantum dimension d, and nontrivial fusion rule. This TQFT arises from SU(2)_3 ⊗ SU(2)_1 upon condensing the boson (ξ, η) in Table II, and describes the topologically ordered sector of the Fibonacci phase in our ν = 2/3 setup.

…phase upon boson condensation. Let us denote fields from SU(2)_3 ⊗ SU(2)_1 as (A, B), where A and B respectively belong to SU(2)_3 and SU(2)_1, and explore the consequences of (ξ, η) condensing. (According to Table II this field is indeed bosonic.)
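As a quick consistency check on the quantum dimensions quoted in the FIG. 8 caption and on the ε-state counting used in the quantum-computation discussion above, the fusion rule ε × ε = 1 + ε can simply be iterated numerically. The short Python sketch below is illustrative only (it is not code from the original work):

```python
# Minimal consistency check on the Fibonacci fusion algebra (illustrative
# sketch; not code from the original paper).
import numpy as np

# Fusing in one more epsilon maps (V_1, V_eps) -> (V_eps, V_1 + V_eps),
# encoding 1 x eps = eps and eps x eps = 1 + eps.
M = np.array([[0, 1],
              [1, 1]])

v = np.array([0, 1])  # a single epsilon: 0 states of charge 1, 1 of charge eps
for n in range(2, 5):
    v = M @ v
    print(f"{n} anyons: {v[0]} states of total charge 1, "
          f"{v[1]} of total charge eps")
# -> three epsilons give 2 states of total charge eps, and four epsilons
#    give 2 states of total charge 1, matching the qubit encodings above.

phi = (1 + np.sqrt(5)) / 2           # golden ratio
d_eps = np.linalg.eigvalsh(M).max()  # largest eigenvalue of the fusion matrix
D = np.sqrt(1 + phi**2)              # total quantum dimension of {1, eps}
print(d_eps, phi)                    # 1.618..., vs. the fitted d_eps ~ 1.62
print(D, np.log(D))                  # 1.902..., gamma = ln D ~ 0.643 ~ 0.65
```

Both outputs agree with the truncated-conformal-space estimates d_ε ≈ 1.62 and γ ≈ 0.65 quoted above.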
Biginelli Reaction Synthesis of Novel Multitarget-Directed Ligands with Ca2+ Channel Blocking Ability, Cholinesterase Inhibition, Antioxidant Capacity, and Nrf2 Activation

Novel multitarget-directed ligands BIGI 4a-d and BIGI 5a-d were designed and synthesized with a simple and cost-efficient procedure via a one-pot three-component Biginelli reaction, targeting acetyl-/butyrylcholinesterase inhibition, calcium channel antagonism, and antioxidant ability. Among these multitarget-directed ligands, BIGI 4b, BIGI 4d, and BIGI 5b were identified as promising new hit compounds showing in vitro balanced activities toward the recognized AD targets. In addition, these compounds showed suitable physicochemical properties and a good druglikeness score predicted by Data Warrior software.

Introduction

Alzheimer's disease (AD) is considered an extensively complex and multifactorial neurodegenerative disorder. AD is the leading cause of neurocognitive disorders and presents a high social and economic cost estimated at USD 604 billion worldwide [1]. Despite the large number of scientific publications (196K in PubMed), the available drugs are limited to a temporary relief of symptoms [2], while specific and effective treatments for this disease are struggling to reach the market. Indeed, AD has a complex physiopathology including various biological phenomena and highly interconnected pathological mechanisms that are hallmarked by: (i) neuronal death leading to dysfunction of the acetylcholine (ACh) receptor system in neurons; (ii) extracellular deposits of amyloid-beta (Aβ) peptide resulting from the accumulation of abnormal levels of its insoluble aggregates; (iii) the presence of hyperphosphorylated tau proteins, causing the formation of neurofibrillary tangles; (iv) neuroinflammation induced by high concentrations of pro-inflammatory cytokines released by activated microglia and astrocytes; and (v) disrupted homeostasis of mitochondria and of biometals (Cu, Fe, and Zn) related to their contribution to Aβ peptide aggregation, or hydrogen peroxide (H2O2) resulting from monoamine-oxidase-catalyzed deamination of biogenic amines such as adrenaline, dopamine, and serotonin [3], irreversibly leading to oxidative stress (OS).

Faced with this pathophysiological complexity, the multitarget strategy initially introduced by Melchiorre and colleagues [4], based on the development of new ligands able to bind simultaneously to various enzymatic systems or receptors involved in the progress of AD, seems to be the most appropriate strategy to find new effective drugs. Accordingly, by following this paradigm, several multitarget-directed ligands for AD with promising profiles were developed by many research groups [5-9]. Our contributions in this area to developing novel multitarget-directed ligands are based on the use of multicomponent reactions (MCRs) for their easy performance, time saving, versatility, and the diversity of the resulting scaffolds [10-13]. In addition, MCRs are atom economic, as most atoms of the reactants, if not all, are found in the product [14], and therefore respond to the challenges of sustainability ("green chemistry") [15].
Continuing with our contributions in this area, we report here the design, synthesis via the Biginelli MCR, and biological evaluation of a new family of compounds, BIGI 4a-d and BIGI 5a-d, as new multitarget-directed ligands resulting from the combination of three scaffolds of interest for AD: (i) a typical acetylcholinesterase (AChE) inhibitor motif, such as the benzylpiperidine present in donepezil; (ii) the central dihydropyrimidone core, which has potential calcium channel modulation activity similar to classical dihydropyridines [16], such as SQ32926 [16,17]; and (iii) a propargyl ether residue, an electrophilic substrate, as an analogue of propargyl amine, which is known as an antioxidant able to directly scavenge ROS/RNS [18-20] and is present in selegiline and rasagiline, both able to induce nuclear translocation of nuclear factor erythroid 2-related factor (Nrf2) and increase binding to the antioxidant response element (ARE) [21] (Figure 1).

Indeed, AD is associated with low levels of ACh. This dysfunction can be reversed by the use of cholinesterase (ChE) inhibitors, one of the main treatment options available [22]. Cytosolic calcium overload leads to mitochondrial damage and activation of cell apoptosis [23] and has been identified as a crucial factor in AD. Consequently, calcium channel blockade is commonly recognized as a useful pharmacological tool in the treatment of AD. Antioxidants and Nrf2 pathway induction in the AREc32 cell line are also interesting pharmacological targets for the development of new drug candidates. The Keap1-Nrf2-ARE signaling pathway constitutes one of the most important endogenous antioxidant mechanisms able to regulate the production of oxidants. This is based on the activation of nuclear factor (erythroid-derived 2)-like 2 (Nrf2) [24], a very important protein in the organization of antioxidant defenses: it triggers the endogenous expression of detoxifying enzymes and leads to the downregulation of the iNOS and COX-2 enzymes. Thus, the new multitarget-directed ligands BIGI 4a-d and BIGI 5a-d (Scheme 1, Table 1) were investigated for their calcium channel blockade, ChE inhibition, antioxidant power, and Nrf2 activation. From these studies, we identified BIGI 4b, BIGI 4d, and BIGI 5b as new and very promising hit agents for potential AD therapy combining activities against three biological targets, as these compounds are good Ca2+ channel blockers with potent cholinesterase inhibition and an Nrf2-ARE-activating effect.
Synthesis

The synthesis of the new multitarget-directed ligands BIGI 4a-d and BIGI 5a-d was carried out in a one-pot Biginelli reaction of aldehydes 3a-b with ethyl acetoacetate and ureas 2a-d, using sodium bisulfate as a catalyst in acetic acid, stirring at room temperature for 48 h and then refluxing for an additional 4 h (Scheme 2). Aldehyde 3a was prepared from 4-hydroxybenzaldehyde and propargyl bromide under typical Williamson reaction conditions (Scheme 1), whereas aldehyde 3b was prepared by the Mitsunobu reaction, under the conditions described by Mertens [25], from but-3-yn-1-ol and 3-substituted 4-hydroxybenzaldehydes, in the presence of PPh3 and diisopropyl azodicarboxylate (DIAD), in THF at room temperature (rt) (Scheme 1). Ureas 2a-d were obtained from commercial benzylpiperidines 1a-b (n = 0, 1) with benzoyl isocyanate or benzoyl isothiocyanate, in CH2Cl2 at ambient temperature for 1 h, followed by hydrolysis of the resulting compounds with NaOH for 72 h [26] (Scheme 2). All new compounds were characterized by 1H and 13C NMR and ESI-MS, showing data in good agreement with their structures, which are collected in the Experimental Section and Supplementary Information.

Scheme 1. Synthesis of the targeted multitarget-directed ligands BIGI 4a-d and BIGI 5a-d using the one-pot Biginelli reaction.
Biological Evaluation

To verify the effectiveness of our compounds to simultaneously hit the selected targets, compounds BIGI 4a-d and BIGI 5a-d were submitted to inhibition of human ChE (hChE) and blockade of the calcium channel, as well as antioxidant activity and Nrf2 transcriptional activation.

Inhibition of hAChE and eqBuChE

For the ChE inhibition experiments, we used the Ellman protocol [27] with hAChE and eqBuChE, with donepezil and tacrine as references. As shown in Table 1, four compounds, BIGI 4b, BIGI 4d, BIGI 5b, and BIGI 5d, showed good hAChE inhibition, with IC50 equal to 342, 462, 352, and 1271 nM, respectively, compared with donepezil, which showed an IC50 equal to 12.7 nM. The two best compounds, BIGI 4b and BIGI 5b, are only 27-fold less active than donepezil. For the structure-activity relationship (SAR), the linker length between the benzylpiperidine and the central dihydropyrimidine nitrogen atom seems to play an important role. Indeed, no activity was observed when the linker length was n = 0, while all compounds with n = 1 were AChE inhibitors. Regarding the length of the linker between the oxygen and the triple bond, a slight variation in activity was observed between BIGI 4b (m = 1, 342 nM) and BIGI 4d (m = 2, 462 nM), both carrying a dihydropyrimidinone. However, a strong variation was observed between the analogues bearing dihydropyrimidinethiones, BIGI 5b (m = 1, IC50 = 352 nM) and BIGI 5d (m = 2, IC50 = 1271 nM), suggesting an effect of the sulfur atom on the activity.
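For readers reproducing this kind of analysis, a minimal sketch of how the reported potencies relate to raw dose-response data is given below. The percent-inhibition definition and the log-concentration plot follow the Methods section later in the paper; the data points and the Hill-type model are illustrative assumptions, not the authors' exact GraphPad procedure.

```python
# Illustrative sketch: estimate an IC50 from percent-inhibition data by
# fitting a Hill-type curve against log10(concentration), mirroring the
# "% inhibition vs. log concentration" procedure described in the Methods.
# The data points below are synthetic, not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

def hill(logc, log_ic50, n_h):
    """Percent inhibition as a function of log10 concentration (M)."""
    return 100.0 / (1.0 + 10 ** (n_h * (log_ic50 - logc)))

conc = np.array([1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6])  # six concentrations
inhib = np.array([5.0, 12.0, 28.0, 49.0, 73.0, 90.0])  # synthetic % values

(log_ic50, n_h), _ = curve_fit(hill, np.log10(conc), inhib, p0=(-6.5, 1.0))
print(f"IC50 ~ {10**log_ic50 * 1e9:.0f} nM, Hill slope ~ {n_h:.2f}")

# Fold difference vs. the donepezil reference quoted in the text:
print(f"fold vs. donepezil: {342 / 12.7:.0f}x")  # ~27, as stated above
```

The final line reproduces the "27-fold less active than donepezil" comparison from the reported IC50 values.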
Regarding eqBuChE inhibition, except for compound BIGI 4c, all compounds showed micromolar inhibition ranging from 4.78 µM for BIGI 4b to 33.8 µM for BIGI 5c, compared with tacrine, which showed an IC50 equal to 2.2 nM. Interestingly, the two best AChE inhibitors, BIGI 4b and BIGI 5b, are also the best BuChE inhibitors, with IC50 equal to 4.78 µM and 5.43 µM, respectively. For SAR, the compounds with linker length n = 1 always showed better activity than their analogues with n = 0.

The Ca2+ channel blockade capacity of compounds BIGI 4a-d and BIGI 5a-d, with nimodipine as the standard, at 10 µM concentration was determined following the usual methodology [28]. As shown in Table 1, the observed % values ranged from 40 (BIGI 4c) to 74 (BIGI 5b). Seven of the eight compounds showed better calcium inhibition than the standard nimodipine. The most potent compounds were, in increasing order, BIGI 5c (63%), BIGI 4b (67%), BIGI 5d (68%), and BIGI 5b (74%), thus comparing very favorably with nimodipine (50%). From the point of view of the structure-activity relationship (SAR), compounds bearing n = 1 as the linker length always showed better results than those bearing n = 0. For the same lengths of the linkers m and n, the compounds with thioureas were always more active than the analogues with the urea moiety.

Antioxidant assay. The antioxidant activity of compounds BIGI 4a-d and BIGI 5a-d was determined by the ORAC-FL method [29]. Their radical scavenging properties are expressed as Trolox equivalent (TE) units. Melatonin was used as the positive control, with an ORAC value of 2.45 [30]. As shown in Table 1, all compounds showed good antioxidant capacity. The best compounds were, in ascending order, BIGI 4b (1.60 TE), BIGI 5c (1.75 TE), and BIGI 4d (1.85 TE), and were found to be on average only 1.4 times less active than melatonin. Concerning the structure-activity relationship, no clear SAR could be established.

Nrf2 transcriptional activation potencies of compounds BIGI 4b, BIGI 4d, BIGI 5b, and BIGI 5d. The Nrf2-ARE-activating effect of BIGI 4b, BIGI 4d, BIGI 5b, and BIGI 5d, the most balanced compounds across cholinesterase inhibition, calcium channel blockade, and antioxidant power, was determined in vitro using a cell-based luciferase assay in the AREc32 cell line, which represents a good model for the redox-dependent activation of Nrf2 [31]. TBHQ was used as the positive control. AREc32 cells were treated over 24 h with increasing concentrations of each compound (5, 10, 50, 100 µM), and then luciferase activity was measured. Preliminarily, the cytotoxicity of the compounds against AREc32 cells was evaluated, showing no toxicity up to 50 µM for compounds BIGI 4b, BIGI 4d, and BIGI 5b, and up to 100 µM for compound BIGI 5d. As shown in Figure 2, compounds BIGI 4b, BIGI 4d, and BIGI 5b were able to successfully induce the Nrf2 transcriptional pathway at concentrations of 5, 10, and 50 µM, and up to 100 µM for compound BIGI 5d.
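Since the ORAC-FL values above are reported in Trolox-equivalent units, a brief sketch of how such values are typically derived from raw fluorescence decay curves may be useful. The trapezoidal net-area-under-the-curve formula is the standard ORAC readout; the curves and concentrations below are invented placeholders, not data from this study.

```python
# Illustrative sketch of the standard ORAC-FL readout: integrate the
# fluorescein decay curves, subtract the blank, and express the sample's
# net area under the curve (AUC) relative to Trolox at matched
# concentration. All numbers below are placeholders.
import numpy as np

t = np.linspace(0, 60, 61)          # minutes, one read per minute
blank = np.exp(-t / 8.0)            # fast decay: no antioxidant present
trolox = np.exp(-t / 20.0)          # slower decay: Trolox protection
sample = np.exp(-t / 25.0)          # slower still: test compound

def net_auc(curve):
    # Trapezoidal AUC of the normalized fluorescence, minus the blank AUC
    return np.trapz(curve, t) - np.trapz(blank, t)

c_trolox, c_sample = 1.0, 1.0       # matched micromolar concentrations
te = (net_auc(sample) / net_auc(trolox)) * (c_trolox / c_sample)
print(f"ORAC value ~ {te:.2f} Trolox equivalents")
```

In the actual assay, the curves would come from the plate reader and TE values would additionally be referenced to a Trolox calibration series.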
CD values (i.e., the concentration required to double the specific activity of the luciferase reporter) were calculated to compare the relative potencies. As shown in Table 2, compounds BIGI 4b, BIGI 4d, and BIGI 5b were the most potent, with CD values equal to 7.1, 7.7, and 13.5 µM, respectively, compared with TBHQ, which showed 0.8 µM. These compounds are thus only 9- to 17-fold less active than TBHQ, one of the most potent activators of Nrf2. Compound 5d also showed an interesting CD value equal to 33.4 µM and was therefore 42 times less active than TBHQ but, for example, 2 times more active than melatonin (CD = 60 µM) [32], which is able to induce the transcriptional pathway through several mechanisms [33].

ADME Studies

To predict the physicochemical properties of the compounds, "Data Warrior", a physical and chemical property visualization and analysis software developed by Idorsia Pharmaceuticals Ltd. (Allschwil, Switzerland), was used. This software allows the prediction of drug-like properties using different parameters of Lipinski's rule of five (molecular weight, LogP, LogS, H-acceptors, H-donors, and topological polar surface area (TPSA)). As shown in Table 3, all the compounds presented MW values slightly higher than 500 g/mol, the value corresponding to most orally administered drugs and used as a basis to establish Lipinski's rule of five. Lipophilicity is an important physicochemical property that determines whether the molecule will cross the biological membrane, with a predicted CLogP less than 5. Interestingly, all compounds showed suitable lipophilicity, with CLogP values between 4.3929 and 4.9412, which compare very favorably with donepezil, which showed a CLogP equal to 4.2149. In order to gain more insight into the lipophilicity, which plays a crucial role in determining the solubility of drug candidates in the biological system, we also calculated the log S values of these compounds.
Table 3 shows that the CLogS values of these compounds slightly exceed the limits set for this parameter and compare very favorably with donepezil. The numbers of hydrogen-bond donors and acceptors are in agreement with Lipinski's rule of five: the number of donors is lower than 5 and the number of acceptors is lower than 10 for all compounds. TPSA corresponds to the van der Waals surface of the molecule's polar atoms (usually oxygen and nitrogen) and their attached hydrogens; Veber's rule suggests a polar surface area no greater than 140 Å². Interestingly, all the compounds have a TPSA of <100 Å². Moreover, compounds 4a-d exhibit lower PSA than compounds 5a-d. Data Warrior also calculates "Druglikeness" as a qualitative concept to predict whether synthesized compounds are drug-like. This parameter is calculated using data including LogP, LogS, and molar mass, but also using other parameters such as the presence of substructures with specific pharmacological properties (such as enones, which can be mutagenic and carcinogenic). It can be noticed that all compounds have an interesting "Druglikeness" prediction even though they have a molecular weight higher than 500 g/mol.

Materials and Methods

Melting points (°C) were determined with a Kofler hot bench and are uncorrected. Analytical thin-layer chromatography (TLC) on silica-gel-precoated aluminum sheets (type 60 F254, 0.25 mm thickness; from Merck, Darmstadt, Germany) was employed to follow the progress of the reactions and to check the purity and homogeneity of the synthesized products. Nuclear magnetic resonance (NMR) spectra were recorded on a Bruker DRX-400 AVANCE spectrometer (at 400 MHz for 1H and 100 MHz for 13C), using dimethylsulfoxide (DMSO-d6) as the solvent and tetramethylsilane (TMS) as the internal standard. The chemical shifts are expressed in parts per million (ppm); the multiplicities of 1H NMR signals are designated as follows: s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet; and coupling constants are expressed in hertz (Hz). High-resolution mass spectra (HRMS) were recorded using a Bruker micrOTOF-Q II spectrometer (Bruker Daltonics) in positive electrospray ionization time-of-flight mode at UCA, Clermont-Ferrand, France.

General Procedure for the Synthesis of Ureas and Thioureas

Ureas and thioureas were prepared according to the literature [26]. To a solution of the appropriate amine (1 eq, 6.9 mmol) in dichloromethane (13 mL), a solution of benzoyl isocyanate or benzoyl isothiocyanate (1.2 eq) in dichloromethane (26 mL) was added dropwise at 0 °C. The mixture was stirred for 1 h at room temperature. The solvent was then evaporated under reduced pressure, and the residue was triturated in Et2O and filtered. It was then dissolved in a mixture of iPrOH (26 mL) and aqueous NaOH (26 mL, 3.75 M). The mixture was stirred for 72 h. The alcohol was evaporated and the aqueous phase extracted three times with AcOEt. The organic phases were collected, washed with brine, dried over anhydrous sodium sulfate, and evaporated. The residue was triturated in ether to give the desired compound.

Synthesis of 4-(prop-2-yn-1-yloxy)benzaldehyde (3a)

The aldehyde 3a was synthesized according to the literature [34]. A suspension of 4-hydroxybenzaldehyde (1 eq, 1.5 g) and potassium carbonate (1.3 eq, 2.2 g) in acetone (30 mL) was refluxed for 30 min. Once cooled to room temperature, 3-bromoprop-1-yne (1.6 eq, 1.5 mL) was added slowly and the resulting mixture was refluxed for 2 h.
The solvent was then evaporated and the residue solubilized in water and extracted three times with AcOEt. The organic phases were collected and dried over anhydrous sodium sulfate. Activated carbon was added and the solution left under stirring at 40 °C for 15 min. It was then filtered on Celite. The filtrate was evaporated and the residue recrystallized using a mixture of AcOEt/hexane (1:2) to give the desired aldehyde.

Synthesis of 4-(but-3-yn-1-yloxy)benzaldehyde (3b)

The aldehyde 3b was synthesized according to the literature [25]. A solution of 4-hydroxybenzaldehyde (1 eq, 1 g), triphenylphosphine (2 eq, 4.3 g), and but-3-yn-1-ol (1.5 eq, 0.93 mL) in THF (60 mL) was cooled to 0 °C. Diisopropyl azodicarboxylate (1.5 eq, 4.418 mL) was added dropwise and the mixture left under stirring at room temperature overnight. After evaporation of the solvent, the residue was solubilized in AcOEt, washed with NaOH (1 M) and brine, dried over anhydrous Na2SO4, and evaporated. The residue was triturated with hexane and filtered. The filtrate was evaporated and purified by flash column chromatography using a mixture of hexane/AcOEt (90:10) to give the desired aldehyde.

General Procedure for the Synthesis of Biginelli Products

The appropriate aldehyde (1 eq) and urea or thiourea were solubilized in glacial acetic acid and the resulting mixture was stirred at room temperature for 2 h. Ethyl acetoacetate (1.2 eq) and sodium hydrogen sulfate (1 eq) were added and the suspension left under stirring for 48 h at room temperature, then refluxed for 4 h. The catalyst was filtered off. The filtrate was evaporated and purified by flash column chromatography using a mixture of CH2Cl2/MeOH/NH3 (95:4.05:0.05) to obtain the desired hybrid.

hAChE and eqBuChE

The inhibitory capacity of the compounds on AChE biological activity was evaluated using the spectrometric method of Ellman [27]. Acetyl- or butyrylthiocholine iodide and 5,5′-dithiobis-(2-nitrobenzoic) acid (DTNB) were purchased from Sigma-Aldrich. Lyophilized BuChE from equine serum (Sigma-Aldrich) was dissolved in 0.2 M phosphate buffer, pH 7.4, to obtain an enzyme stock solution with 2.5 units/mL enzyme activity. AChE from human erythrocytes (buffered aqueous solution, ≥500 units/mg protein (BCA), Sigma-Aldrich) was diluted in 20 mM HEPES buffer, pH 8, 0.1% Triton X-100, to obtain an enzyme solution with 0.25 unit/mL enzyme activity. In the procedure, 100 µL of 0.3 mM DTNB dissolved in phosphate buffer, pH 7.4, was added to the 96-well plate, followed by 50 µL of test compound solution and 50 µL of enzyme (0.05 U final). After 5 min of preincubation at 25 °C, the reaction was initiated by the injection of 50 µL of 10 mM acetyl- or butyrylthiocholine iodide solution. The hydrolysis of acetyl- or butyrylthiocholine was monitored by the formation of the yellow 5-thio-2-nitrobenzoate anion, the product of the reaction of DTNB with thiocholine released by the enzymatic hydrolysis of acetyl- or butyrylthiocholine, at a wavelength of 412 nm using a 96-well microplate reader (TECAN Infinite M200, Lyon, France). Test compounds were dissolved in analytical-grade DMSO. Donepezil was used as a reference standard. The rate of absorbance increase at 412 nm was followed every minute for 10 min. Assays were performed in singlicate during three independent tests, with a blank containing all components except acetyl- or butyrylthiocholine in order to account for the non-enzymatic reaction.
The reaction slopes were compared, and the percent inhibition due to the presence of test compounds was calculated as 100 − (vi/v0 × 100), where vi is the rate calculated in the presence of inhibitor and v0 is the enzyme activity without inhibitor. The first screening of AChE and BuChE activity was carried out at a 10⁻⁶ or 10⁻⁵ M concentration of the compounds under study. For the compounds with significant inhibition (≥50%), IC50 values were determined graphically by plotting the % inhibition versus the logarithm of six inhibitor concentrations in the assay solution using GraphPad Prism 6.

Calcium Channel Blockade

The evaluation of the calcium channel blockade of compounds BIGI 4a-d and BIGI 5a-d was carried out using the FLIPR Calcium 6 indicator according to previously described protocols [35,36]. In brief, FLIPR-loaded SH-SY5Y cells were exposed to nimodipine (10 µM, reference inhibitor), DMSO (0.1%, vehicle), or our compounds of interest (10 µM) for 10 min. Calcium flux was triggered using KCl and CaCl2 (90 and 5 mM, respectively) and the resulting change in fluorescence was recorded (λEx = 485 nm; λEm = 525 nm). Data were gathered in three independent experiments with eight technical replicates per experiment. Outlier detection by Grubbs' test was performed and outlying values were excluded from further analysis.

Oxygen Radical Absorbance Capacity Assay

The antioxidant activity of hybrids BIGI 4a-d and BIGI 5a-d was measured by the ORAC-FL assay using fluorescein as a fluorescent probe. Briefly, fluorescein and antioxidant were incubated in a black 96-well microplate (Nunc) for 15 min at 37 °C. 2,2′-Azobis(amidinopropane) dihydrochloride was then added quickly using the built-in injector of a Varioskan Flash plate reader (Thermo Scientific, Waltham, MA, USA). The fluorescence was measured at 485 nm (excitation wavelength) and 535 nm (emission wavelength) every minute for 1 h. All the reactions were made in triplicate, and at least three different assays were performed for each sample.

Nrf2 Transcriptional Activation Potencies of Compounds BIGI 4b, BIGI 4d, BIGI 5b, and BIGI 5d

Treatment of stable ARE-luciferase reporter cells with the tested compounds and evaluation of the luciferase activity: the NRF2/ARE-luciferase reporter HEK293 stable cell line (Signosis, Santa Clara, CA, USA) was maintained in Dulbecco's MEM high-glucose (DMEM) medium supplemented with 10% FBS and penicillin-streptomycin at 37 °C in 95% air/5% CO2. For treatment, the cells were seeded at a density of 2 × 10⁴ per well in 96-well white microtiter plates. After 48 h, the culture medium was replaced with fresh DMEM supplemented with 0.1% FBS containing different concentrations of the tested compounds or DMSO (0.1%) in duplicate. Luciferase activity was measured after 24 h of treatment using the Bright-Glo luciferase assay system (Promega) according to the manufacturer's instructions.

Treatment of stable ARE-luciferase reporter cells with the tested compounds and evaluation of cellular viability: the NRF2/ARE-luciferase reporter HEK293 stable cell line was seeded and treated as described for the luciferase activity determination, except that transparent culture plates were used instead of white plates. After 24 h of incubation with the tested compounds, the percent of cell viability was measured using the MTT assay.

Conclusions

In the present study, we designed and synthesized, via the Biginelli multicomponent reaction, eight new compounds.
From all the biological and physicochemical results gathered in this study, we identified compounds BIGI 4b, BIGI 4d, and BIGI 5b as new multitarget-directed ligands able to simultaneously address cholinesterase inhibition, calcium channel blockade, antioxidant power measured by the ORAC assay, and the Nrf2-ARE-activating effect. In addition, these compounds showed suitable physicochemical properties and "Druglikeness" scores for druggability predicted by the Data Warrior software. This study revealed that compounds BIGI 4b, BIGI 4d, and BIGI 5b may be promising agents for further research into the treatment of Alzheimer's disease. Work is now in progress in our laboratories to develop analogues with the best pharmacological profile. The results will be reported in due course.
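As a companion to the ADME discussion above, a small sketch of a rule-of-five/Veber-style screen over reported property values is given below. The thresholds are the standard ones quoted in the text (MW ≤ 500, CLogP ≤ 5, H-donors ≤ 5, H-acceptors ≤ 10, TPSA ≤ 140 Å²); the example property set is a hypothetical compound, not a row from Table 3.

```python
# Minimal rule-of-five / Veber-style screen matching the thresholds quoted
# in the ADME discussion. The example values are hypothetical, not taken
# from Table 3 of the paper.
RULES = {
    "MW":          lambda p: p["MW"] <= 500,
    "cLogP":       lambda p: p["cLogP"] <= 5,
    "H_donors":    lambda p: p["H_donors"] <= 5,
    "H_acceptors": lambda p: p["H_acceptors"] <= 10,
    "TPSA":        lambda p: p["TPSA"] <= 140,
}

def screen(props: dict) -> dict:
    """Return a pass/fail flag per criterion plus the violation count."""
    flags = {name: rule(props) for name, rule in RULES.items()}
    flags["violations"] = sum(not ok for ok in flags.values())
    return flags

example = {"MW": 520.6, "cLogP": 4.6, "H_donors": 2,
           "H_acceptors": 7, "TPSA": 96.0}
print(screen(example))  # one violation (MW > 500), as for the BIGI series
```

A single MW violation with all other criteria satisfied mirrors the paper's observation that the compounds remain drug-like despite molecular weights slightly above 500 g/mol.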
Research on Market Stock Index Prediction Based on Network Security and Deep Learning

Introduction

Finance is an important core competitiveness of a country, and its proportion in the national economy has been increasing year by year [1]. As an important part of the financial service system supporting the real economy, the stock market will also become a part of the country's core competitiveness [2]. With the vigorous development of the national economy, strong policy support, and the gradual improvement of the public's awareness of financial management, more and more institutions and individuals are actively participating in stock market transactions [3,4]. The demand for related financial services has followed, so stock price forecasting has become an issue to which professional analysts and investors attach great importance [5]. With the increasing influence of the stock market on economic trends, forecasting the trend of stocks has become a hot research topic. Many researchers have conducted scientific and meticulous research on the stock market, trying to formulate rules for its operation. However, the research has found that the changes in the stock market seem to be unrelated [6,7]. The efficient market hypothesis proposed by Eugene Fama is a more authoritative explanation in current financial circles for studying the law of stock market changes. In this theory, the stock price is mainly affected by future information, namely news, rather than being driven by current or past prices [8-10]. As a long-term concern of the capital market, stock market forecasting attracts people to use various methods for related research because of its predictable and generous returns. The improvement of forecasting methods has further improved the forecasting results [11]. For example, Lin et al. proposed an end-to-end hybrid neural network, which uses convolutional neural networks (CNNs) to extract data features and uses long short-term memory recurrent neural networks to capture the long-term dependence in the historical trend sequence of the time series; the learned contextual features are used to predict the trend of stock market prices [12]. Hu et al. designed a hybrid attention network (HAN) to predict stock trends based on related news sequences [13]. Li et al. proposed a multitask recurrent neural network (RNN) and a high-order Markov random field to predict the movement direction of stock prices [14]. Through a multitask RNN framework, feature information was extracted from the raw market data of individual stocks. Most investors' investment decisions are not made solely through technical analysis of listed companies. Therefore, technical analysis can be combined with the news available to investors and the sentiment in response to the news to quantitatively predict the price trend of the stock [15]. The traditional approach adopts a forecasting model based on stock time series analysis, but the time series model cannot consider the influence of investor sentiment on stock market changes. In order to use investor sentiment information to make more accurate predictions on the stock market, this paper establishes a stock index prediction model based on time series and deep learning. Based on the time series model, it is proposed to use a CNN to extract deep emotional information to replace basic emotional features at the emotion extraction level.

Related Technology Overview

Network Security Overview
Network security situation prediction refers to the time-sequence prediction of the network security state over a future period, based on the current network environment state combined with historical network security situation data, so as to prevent possible network attacks in advance [16]. The extraction of situation elements is the basis of network security situation awareness. Comprehensive and accurate collection of network security situation data and the effectiveness of the established situation index system are important guarantees for the correctness of situation assessment [17]. The extraction of situation elements requires that situation indicators can be extracted from the network environment according to the established situation index system [18,19]. After a series of technical processing steps of cleaning, integration, reduction, transformation, and fusion, they are used as situation elements for subsequent situation assessment and fully prepare for the situation forecast. Relevant technologies for situation element extraction include situation index extraction and data preprocessing [20]. Market Stock Index. There are many ways to predict stocks. The two commonly used methods are fundamental analysis and technical analysis [21,22]. The two methods are briefly described below. This article adopts a technical approach, so this section focuses on the research status of analytical methods based on technical means. Fundamental analysis is also called qualitative analysis [23]. Fundamental analysis is a subjective analysis method that relies on the experience of financial practitioners [24]. This method is based on macrolevel information, such as the company's financial and operating conditions. Experts rely on this macroinformation, coupled with personal experience and judgment, to predict and infer the future trend of the stock [25][26][27]. Conventional methods include the Delphi method, principal probability method, cross probability method, and leading indicator method. The effectiveness of qualitative forecasting methods largely depends on the expert's own knowledge of the stock market and the expert's ability and experience. When the expert's knowledge and experience level is high, the prediction of the stock market will be accurate, but if the expert lacks experience or ability, the prediction result will differ considerably from the actual situation [28]. This method has great uncertainty and subjective dependence, so it cannot describe the stock market objectively in accurate language. Figure 1 shows the distribution map of the influencing factors of the financial market index. The analysis method based on data mining algorithms is the process of mining potentially valuable, fixed, and regular stock prediction patterns from a large amount of data. In the era of big data, stock market data is also increasing multiplicatively [29]. It is becoming increasingly unrealistic to summarize the changing laws of the stock market by human statistics. Therefore, current technical research on the stock market is based on analysis methods from data mining algorithms [30]. The stock prediction model constructed in this paper is also based on deep learning, a specific direction within data mining. According to the efficient market hypothesis, the price of stocks contains all the effective information of the stock market, and the generation of news information in real life is often random.
On this basis, the stock price will also follow random walk theory, so the use of technical means to analyze stock market changes would be invalid [31,32]. However, with the emergence of more and more studies, especially the theoretical perspectives of integrated finance, behavioral economics, and behavioral finance, researchers have gradually begun to believe that the efficient market hypothesis is not completely correct [33]. Because of the influence of various factors in the market, investors may make irrational behavioral decisions in response to information. This also indirectly shows that the stock market, in reality, is not a strongly efficient market in the true sense, which provides the possibility for technical analysis. In actual situations, the market is far from fully efficient, and many factors that affect stock prices, such as investor sentiment, cannot be fully known to investors. In addition, investors are emotional and unable to respond in a timely manner, and it is difficult for a strongly efficient market to exist [34,35]. There is room for excess profits in the market. Research on the herd effect shows that the sentiments of other investors will affect the investment decisions of individual investors. Market Stock Prediction Based on Deep Learning Stock price prediction has great value in seeking to maximize the profit of a stock investment, and related technologies have been studied for decades. According to the efficient market hypothesis, news can have an impact on stock prices, which also shows that events have a driving effect on the stock market. In the field of natural language processing (NLP), public news and social media are the two main data sources for stock prediction [36]. Time Series Model. The object of the stock model based on time series is the historical data of stocks. The core step is to divide the historical data of stocks to facilitate subsequent stock market forecasts. In this model, the first and most important step is to collect and process time series data. When predicting a time series, one mainly observes the trend changes of the time series first and predicts future changes by learning the law of past changes. Time series data often have a large volume and are difficult to process directly. This requires dividing the series by finding the key trend points. Through this division method, the originally complex data can be compressed, while also removing some noise in the stock sequence and some points that are not helpful for prediction, so that the retained information is more effective for the model to learn the changes in the time series data, and the time series rules can be found more clearly. Deep Learning Model. It has been mentioned in the introduction that the theoretical basis of the model based on financial time series is the efficient market hypothesis. It is believed that investors will make investment decisions objectively in accordance with financial laws, without being affected by subjective factors. However, in the real investment environment, investors may not necessarily invest in a completely rational way. They will be subject to other external interferences, such as financial news and news events on social media, which cause emotional changes and interfere with investment decisions. In this section, two improved models are proposed.
First, to address the general problem that traditional classifiers (such as SVM and KNN) handle time series data classification poorly, a deep-learning-based stock prediction model is built with the help of a recurrent neural network, which facilitates the modeling of time series data; on the basis of this model, the sentiment analysis results of stock-related social media text are added to construct a trend prediction model that integrates basic emotional features. Among the deep learning technologies that have emerged in recent years, convolutional neural networks are the most widely used. Figure 2 shows the index prediction process based on deep learning. Traditional image features are often artificial features, that is, features designed by hand to complete the task, and the quality of these artificial features directly affects how well the task is completed. In a convolutional neural network, the work of feature extraction is completed by the convolution kernel without manual participation. At present, with the development of Internet big data, the improvement of hardware computing power, and the optimization of software algorithms, the structures of convolutional neural networks are diverse and are no longer limited to the former shallow networks; many deep networks can be trained well. But no matter how the structure of the convolutional neural network model changes, its basic components are similar, including the input layer, convolution layer, pooling layer, activation layer, and fully connected layer. In a convolutional neural network, each neuron in the hidden layer can be regarded as a convolution kernel, and each convolution kernel performs a sliding convolution operation on the image. The convolution kernel is used to extract the features of the image, thanks to its sparse connections and weight sharing. A given convolution kernel is updated only when one iteration is completed; therefore, within the same round of iteration, the weights of each convolution are unchanged, which is why this is called weight sharing. The size of the image after the convolution operation is related to factors such as the size of the convolution kernel, the stride, and the pooling size. Usually, several consecutive convolutional layers are used to extract more features, but this also means a large amount of calculation and many parameters. Therefore, in order to reduce the amount of calculation and compress the image feature map, a pooling layer is generally added between consecutive convolutional layers. The operation of the pooling layer is very similar to that of the convolutional layer, and without padding the output image can be reduced to half the size of the input image. According to different needs, there are two main pooling operations, namely maximum pooling and average pooling. The essence of convolutional neural network training is to make the model fit the data well while retaining good generalization ability. The convolution operation is essentially a linear operation; in order to give the model better expressive ability, it is often necessary to add a certain degree of nonlinearity, that is, to add an activation layer after the convolution layer. The activation layer structure is relatively simple: generally just an activation function used to add nonlinearity to the output of the convolutional layer.
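To make the layer roles just described concrete, the following is a minimal, illustrative PyTorch sketch (our construction, not the paper's implementation) of a small 1-D convolutional network that maps windows of daily stock features to a next-day closing-price prediction; the window builder mirrors the use of n consecutive days of the 7 daily attributes as one input sample.

import numpy as np
import torch
import torch.nn as nn

def make_windows(features, n_days):
    """Slice a (T, F) feature matrix into (T - n_days, F, n_days) input windows;
    the target is the next day's closing price (assumed to be column 0)."""
    X = np.stack([features[t:t + n_days].T for t in range(len(features) - n_days)])
    y = features[n_days:, 0]
    return torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.float32)

class StockCNN(nn.Module):
    def __init__(self, n_features=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),  # weight-shared kernels
            nn.ReLU(),                                            # activation layer
            nn.MaxPool1d(2),                                      # pooling halves the length
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 1),                                     # fully connected output
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Usage on synthetic data shaped like the 7-attribute daily records:
data = np.random.rand(1219, 7).astype(np.float32)   # e.g. 1,219 trading days
X, y = make_windows(data, n_days=10)
model = StockCNN()
loss = nn.MSELoss()(model(X), y)
loss.backward()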
Commonly used activation functions include the Sigmoid function, Tanh function, and ReLU function: Sigmoid(x) = 1/(1 + e^(-x)), Tanh(x) = (e^x - e^(-x))/(e^x + e^(-x)) = 2·Sigmoid(2x) - 1 (equivalently, Sigmoid(x) = (1 + Tanh(x/2))/2), and ReLU(x) = max(0, x). It can be seen from the Tanh function and its derivative that it is very similar to the Sigmoid in form, and the function graphs are very similar. Simulation Environment and Data. Compared with individual stocks, the volatility of stock indexes is generally smaller, because stock indexes are composed of many stocks in different industries and can better reflect the overall economic momentum and overall conditions. Therefore, the most representative indexes, the Shanghai Composite Index (code 000001) and the Shenzhen Component Index (code 399001), are selected as the research objects. Historical stock data with a time span from January 1, 2015, to December 31, 2019, were selected. The data include 7 attributes: date, closing price, opening price, highest price, lowest price, price change, and volume. All data were downloaded from the Tushare financial big data platform. According to the time span, three different experimental data sets are set up. The data of 1,219 trading days in the 5 years from 2015 to 2019 form the first group, the data of 731 trading days in the 3 years from 2017 to 2019 form the second group, and the data of 244 trading days in 2019 form the third group. Deep learning models are used to train on these three data sets and predict the closing prices of the two stock indexes. Index Forecasting Effect Analysis. Using the 1,219-day data samples of the Shanghai Composite Index over the 5 years from 2015 to 2019, stock data of 10 consecutive days and of 20 consecutive days were used as input samples to establish prediction models for closing price prediction. These two models are called SHY5D10 and SHY5D20, respectively. Figures 3 and 4 show their prediction results. Figure 3 shows the prediction results for the Shanghai Composite Index using 10 consecutive days of data; Figure 4 shows the results using 20 consecutive days of data. The naming rules of the models in this article are as follows: SH and SZ refer, respectively, to prediction of the Shanghai Composite Index or the Shenzhen Component Index; Ym refers to a data sample time span of m years; and Dn refers to the use of n consecutive days of data as the input sample. These rules are not repeated below. Using the 731-day data sample of the Shanghai Composite Index over the 3 years from 2017 to 2019, 5 consecutive days and 10 consecutive days of stock data were used as input samples to establish prediction models for closing price prediction. These two models are called SHY3D5 and SHY3D10, respectively. Figures 5 and 6 show their prediction results. Figure 5 shows the prediction results using 5 consecutive days of data; Figure 6 shows the prediction results for 2017 to 2019 using 10 consecutive days of data. It can be seen from the above that both models achieved good results when predicting the closing prices of the two stock indexes and four stocks. The method used in the comparative analysis of the two models is the same as that in the previous chapter. The comparison of the convolutional neural network with other methods in stock index prediction is shown in Figure 7. To verify the performance of the proposed method against previous methods, this paper compares the deep learning prediction model with the radial basis function neural network and the Kalman filter neural network [37][38][39].
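The comparison below is reported in terms of the mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE), and the accuracy of predicted up/down price moves. As a minimal, illustrative sketch (our construction, not the authors' evaluation code), these metrics could be computed as follows:

import numpy as np

def evaluate(y_true, y_pred):
    """Standard regression metrics plus up/down direction accuracy; assumes
    y_true contains no zeros (closing prices), so MAPE is well defined."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0
    rmse = np.sqrt(np.mean(err ** 2))
    # Fraction of days where the predicted price change has the same sign
    # as the actual change:
    direction_acc = np.mean(np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true)))
    return {"MAE": mae, "MAPE_%": mape, "RMSE": rmse, "direction_acc": direction_acc}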
The comparison result of the convolutional neural network and other methods in stock index prediction is shown in Figure 7. Compared with the ordinary neural network models, the mean absolute error of the convolutional neural network model is reduced by 11.60% and 12.40%, the mean absolute percentage error is reduced by 11.10% and 10.40%, and the root mean square error is reduced by 8.0% and 9.80%, respectively. The accuracy of price change forecasts increased by 4.5% and 2.9%, respectively. The mean absolute percentage error of the forecast is within 2%, and the accuracy of the up/down forecast is above 53%. The model has good generalization ability and can make more accurate stock forecasts. At the same time, through comparative analysis of four groups of experiments with time steps of 10, 20, 30, and 50, it is found that the prediction performance of the deep learning neural network is indeed related to the selected time step. Conclusion The changes in the stock market play a vital role in the country's economic trends, and future research on the stock market is bound to be a hot topic in the field of intelligent forecasting. The main research topic of this paper is short-term trend forecast modeling of stocks based on investor sentiment extraction, and a comparison of the influence of multiple information sources on the accuracy of the model. In order to solve the above-mentioned problems, this article has carried out research work from two aspects. As a long-term concern of the capital market, stock market forecasting attracts people to use various methods for related research because of its predictable and generous returns. The improvement of forecasting methods has further improved forecasting results. In order to use investor sentiment information to make more accurate predictions on the stock market, this paper establishes a stock index prediction model based on time series and deep learning. Based on the time series model, it is proposed to use a CNN to extract deep emotional information to replace basic emotional features at the emotion extraction level. At the data source level, additional information sources such as fundamental features are introduced to further improve the prediction performance of the model. The results show that the proposed algorithm is feasible and effective, and it can better predict changes in the market stock index. In the future, we will further carry out relevant research in order to provide reference and suggestions for the development of the financial market. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
4,501
2021-04-23T00:00:00.000
[ "Computer Science", "Business" ]
The roles of glucagon-like peptide-2 and the intestinal epithelial insulin-like growth factor-1 receptor in regulating microvillus length Microvilli are tiny projections on the apical end of enterocytes, aiding in the digestion and absorption of nutrients. One of their key features is uniform length, but how this is regulated is poorly understood. Glucagon-like peptide-2 (GLP-2) has been shown to increase microvillus length, but the requirement of its downstream mediator, the intestinal epithelial insulin-like growth factor-1 receptor (IE-IGF-1R), and the microvillus proteins acted upon by GLP-2 remain unknown. Using IE-IGF-1R knockout (KO) mice, treated with either long-acting human (h) (GLY2)GLP-2 or vehicle for 11 d, it was found that the h(GLY2)GLP-2-induced increase in microvillus length required the IE-IGF-1R. Furthermore, IE-IGF-1R KO alone resulted in a significant decrease in microvillus length. Examination of the brush border membrane proteome as well as of whole jejunal mucosa demonstrated that villin was increased with h(GLY2)GLP-2 treatment in an IE-IGF-1R-dependent manner. Under both basal conditions and with h(GLY2)GLP-2 treatment of the IE-IGF-1R KO mice, changes in villin, IRTKS-1, harmonin, β-actin, and myosin-1a did not explain the decrease in microvillus length, in either the brush border or jejunal mucosa of KO animals. Collectively, these studies define a new role for the IE-IGF-1R within the microvillus, in both the signaling cascade induced by GLP-2, as well as endogenously. Differential proteomic expression in isolated brush border membranes. To understand the changes underlying the alterations in microvillus length, both with GLP-2 treatment and in the IE-IGF-1R KO mice, brush border membranes (BBM) were isolated from control and KO, vehicle- and h(GLY2)GLP-2-treated mice. Western blot demonstrated an 11.6-fold enrichment in the brush border protein, villin, as compared to the cytoplasmic protein, GAPDH (preliminary data, not shown). Mass spectrometry analysis identified over 1,200 proteins in the samples (Supplemental Table 1). Mouse Mines (www.mousemines.org) was used to identify the presence of ontologies within the category of "cellular component", including "cytoskeleton", "mitochondrial", "nuclear", "organelle", "membrane" and "microvillus". However, further analysis focused on the "actin filament" gene ontology (GO) category, which was significantly enriched (p < 0.05) and included a total of 71 known proteins, including those found in the microvillus. Protein expression levels were then determined for each group of animals relative to the vehicle-treated control mice (Fig. 3a). Marked differences in expression patterns were observed between vehicle- and h(GLY2)GLP-2-treated control and IE-IGF-1R KO mice, as well as between control and KO animals for each treatment group. To better understand these differential expression patterns, BBM structural proteins were categorized according to: 1) effects of GLP-2 that required the IE-IGF-1R; and 2) effects of the loss of the IE-IGF-1R alone. Select structural proteins, chosen for their roles in regulating microvillus length, were then analyzed in greater detail. [Figure legend: n = 5 for microvillus length; microvilli in 28 cells from 5 different villi were measured per mouse. n = 3 for packing angle; a minimum of 40 angles were taken per mouse. n = 3 for microvillus density in a 600 × 600 µm box. *P < 0.05 for the difference between treatment groups, #P < 0.05 for the difference from controls. Scale bar is 100 µm.]
Villin. In the isolated BBM, VILLIN demonstrated a significant correlation (R² = 0.97; p < 0.05) with effects of h(GLY2)GLP-2 that require the IE-IGF-1R (category 1; Fig. 3b). VILLIN expression was increased 1.43-fold in h(GLY2)GLP-2-treated control mice, with no change seen in h(GLY2)GLP-2-treated IE-IGF-1R KO animals. To further understand villin expression changes, the jejunal mucosa was analyzed in greater detail. Although Villin mRNA levels showed no difference between groups (Fig. 4a), changes in VILLIN protein levels, as determined by immunoblot, paralleled the BBM findings, with an 8.13-fold increase in the h(GLY2)GLP-2-treated control mice (p < 0.05; Fig. 4b). Immunofluorescence analysis of jejunal sections supported these findings (Fig. 4c), as control mice given h(GLY2)GLP-2 showed apparent increases in the intensity of VILLIN staining at the brush border. Furthermore, the GLP-2-induced increase in VILLIN required the IE-IGF-1R, as KO mice failed to show significant increases in protein expression as well as staining intensity. IRTKS-1. IRTKS-1 levels in the BBM demonstrated a significant correlation (R² = 0.93; p < 0.05) with effects of the IE-IGF-1R KO alone, being higher in both the vehicle- (2.39-fold) and h(GLY2)GLP-2-treated (1.38-fold) KO mice, as compared to the control animals (category 2; Fig. 3b). Interestingly, although mRNA transcript levels were increased by GLP-2 treatment in both groups of animals (p < 0.05; Fig. 5a), the increase in IRTKS-1 in the IE-IGF-1R KO BBM was not observed by western blot of the jejunal mucosa. Instead, there was a 6.80-fold increase in IRTKS-1 levels in the h(GLY2)GLP-2-treated control mice, and a smaller, non-significant increase in the KO animals (Fig. 5b). These changes were not observable by immunofluorescence, with the intensity and localization remaining unchanged (Fig. 5c). Harmonin. In the BBM, HARMONIN also had a significant correlation (R² = 0.95; p < 0.05) with effects of the IE-IGF-1R KO alone, with 3.85-fold and 1.69-fold increases in expression in vehicle- and h(GLY2)GLP-2-treated IE-IGF-1R KO mice, respectively (category 2; Fig. 3b). However, harmonin in the entire jejunal mucosa showed different results. Harmonin transcript levels were significantly decreased in the IE-IGF-1R KO vehicle mice, compared to those in the controls (p < 0.05; Fig. 6a), whereas the mucosa showed no change in HARMONIN expression in response to either h(GLY2)GLP-2 treatment or the KO (Fig. 6b). Furthermore, this trend was not apparent by harmonin immunostaining of the jejunum, as relative intensities and localization appeared similar between groups, with localization predominantly in the brush border (Fig. 6c). β-actin. In the BBM, β-ACTIN had a significant correlation (R² = 0.95; p < 0.05) with effects of the IE-IGF-1R KO alone (category 2; Fig. 3b). However, only the vehicle-treated KO mice had higher β-ACTIN levels, with a 1.75-fold increase as compared to h(GLY2)GLP-2-treated KO mice, which demonstrated only a non-significant 1.18-fold increase. No changes in β-actin transcript levels were detected in the jejunal mucosa (Fig.
7a), while examination of protein levels in these tissue samples showed different results as compared to the isolated BBM, such that β-ACTIN showed a significant 4.3-fold increase in the jejunal mucosa of h(GLY2)GLP-2-treated control mice (p < 0.05; Fig. 7b). This finding was supported by immunostaining, where observable increases in fluorescent intensity were seen in h(GLY2)GLP-2 control mice, in both the brush border and basolaterally, whereas KO mice treated with h(GLY2)GLP-2 failed to demonstrate increases in protein expression, or in the intensity of the immunostaining in the brush border and basolaterally. Myosin-1a. Finally, in the BBM, MYO-1A did not show a significant correlation with either GLP-2 treatment or IE-IGF-1R KO (Fig. 3b). Analyses of the jejunal mucosa also failed to demonstrate any changes in jejunal mucosal expression of myo-1a at either the mRNA or protein level (Fig. 8). Discussion Microvilli display uniform length, which is of functional importance 1. Proteins within the BB region have therefore been studied to better understand their influence on microvillus length; however, the factors that regulate these proteins are poorly understood. The results of the present study indicate that the expression of several proteins with key roles in brush border structure is differentially regulated by the intestinotrophic hormone, GLP-2, as well as by one of its downstream signaling mediators, the IE-IGF-1R. The BBM proteome has been isolated previously, and was reported to contain approximately 650 identified proteins 2. Unexpectedly, the isolation conducted herein contained over 1,200 proteins. Although the isolation protocols were identical, one key difference that may account for such a drastic difference in the number of identified proteins was the mass spectrometry instrument. The Orbitrap Fusion Lumos, used in the current study, has a faster scanning time and is more sensitive than the Thermo Scientific LTQ linear ion trap utilized previously 2, and therefore represents a technological advance, given the almost ten-year gap between the two proteomes. This may allow for the identification of previously unidentified brush border proteins, leading to enhanced understanding of the development and maintenance of this critical intestinal epithelial cell compartment. The BBM isolation reported in this paper was validated in two ways. First, through western blot, probing for the ratio of VILLIN to GAPDH in both BBM and jejunal mucosa samples. VILLIN is a protein that, within enterocytes, is found only in the brush border 34; thus, the ratio of VILLIN:GAPDH should be, and was found to be, higher in the BBM isolate. Validation was also conducted by analysis of the proteome using Mouse Mines, with the results showing protein categories as previously reported 2, and significant up-regulation of specific GO terms, under "cellular component" ontologies, that one would expect to find in the brush border, such as "brush border", "microvillus", "cluster of actin-based projections", and "actin filament bundles". h(GLY2)GLP-2 has been previously reported to increase microvillus length 23, but how this is achieved is poorly understood. Through the use of the IE-IGF-1R KO mice, we have now shown that the increase in microvillus length with h(GLY2)GLP-2 treatment requires the IE-IGF-1R.
One key microvillus protein that was regulated by the GLP-2–IE-IGF-1R pathway was villin, an actin-bundling protein [35][36][37], with a well-established role in epithelial wound repair 38. Although its deletion is not sufficient to abolish the formation of microvilli 5, villin over-expression has been shown to induce the expression of membrane projections in cells that do not naturally form them 39. In the present study, the protein but not the transcript levels of villin were increased with h(GLY2)GLP-2 treatment, in an IE-IGF-1R-dependent manner. Decreases in VILLIN levels have been reported previously in patients with Crohn's Disease, while mRNA levels also remained unchanged 40,41. These findings suggest possible translational regulation of villin, as opposed to transcriptional modulation. Furthermore, given links to decreased IGF-1 levels in Crohn's disease patients 42, studies involving GLP-2 administration to intestinal epithelial villin-deficient animals, with and without induced enteritis, may prove useful to further interrogate this pathway. Another interesting result was the lack of increase in the core-forming protein, β-actin, in the BBM of control mice treated with h(GLY2)GLP-2. With an increase in microvillus length, it would follow that there should be an increase in β-actin within the BBM, as this protein forms the core. However, the overall increase in microvillus length, although significant, was only 0.2 µm and thus may have been too small to permit detection of an increase in β-actin expression. It has been shown that the ratio of β-actin to villin expression is 1:4 44; thus, increases in villin expression are more likely to be detected than increases in β-actin within the BBM itself. However, when the entire jejunal mucosa was analyzed, there was a significant increase in β-actin expression with h(GLY2)GLP-2 treatment, which required the IE-IGF-1R. This result is also consistent with the immunostaining, which showed an increase in the intensity of the basolateral staining. Thus, the GLP-2–IE-IGF-1R pathway is also important for regulating the expression of β-actin in the jejunal mucosa. Unexpectedly, the IE-IGF-1R KO mice presented with shorter microvilli. The IE-IGF-1R has not previously been reported to play a role in regulating microvillus length, with IGF-1 and its receptor better known for their roles in whole-body growth 45,46, metabolism 47, and cancer 48. Focusing on the intestine, IGF-1 and its receptor are major determinants in colorectal cancer 49 and may be decreased in some patients with Crohn's disease 42. IGF-1 signaling has also been shown to mediate the intestinotrophic effects of h(GLY2)GLP-2 17,29,31-33, as well as to stimulate Na+/K+-ATPase activity and enhance Na+-coupled glucose absorption in enterocytes 50. The present studies therefore define a new role for the IGF-1R in regulating the length of the intestinal brush border. Decreases in microvillus length in response to BBM protein KO have been reported previously 8,9,11,13,14, but these models also demonstrate changes in the actin core cytoplasmic projections (rootlets) 8, the integrity of the plasma membrane 9, and/or the microvillus membrane 11, hallmark features that were not present in the IE-IGF-1R KO mice. Plastin-1a KO mice have 20% shorter microvilli, but also lack rootlets, which was not observed with IE-IGF-1R KO 8.
Similarly, combinatorial KO of villin, espin, and plastin-1a decreases both microvillus and rootlet length 14. In contrast, KO of desmoplakin, a desmosomal protein, results in microvilli that are shorter but also misshapen, the latter of which was not seen in the IE-IGF-1R KO mice 51. Harmonin KO animals also display shorter microvilli, in addition to changes in architecture and density that were not observed in the IE-IGF-1R KO 13. Finally, KO of myo-1a results in longer, less densely packed microvilli, with extrusions of cellular cytoplasm into the intestinal lumen 9; again, a phenotype that was not noted in the IE-IGF-1R KO model. Collectively, these findings indicate a unique role for the IGF-1R in regulating proteins that contribute to microvillus length without affecting microvillus density or structure. IRTKS-1 is known to play an important role in membrane bending and microvillus formation 3,52 and was recently reported to increase microvillus length during enterocyte differentiation 4. IRTKS-1 may be regulated by the IE-IGF-1R, as the isolated BBM showed a significant correlation wherein the KO mice had higher expression levels than controls. What is still unclear is whether the IE-IGF-1R negatively regulates the expression of IRTKS-1, or whether this protein is increased in the KO mice in compensation for the decrease in microvillus length. The jejunal mucosa analysis indicates the latter of the two possibilities. IRTKS-1 is not exclusively located in the BBM; thus, upon measuring protein levels in the mucosa, they remained unchanged in the vehicle KO mice compared to vehicle controls, showing that the IE-IGF-1R does not regulate the overall expression of IRTKS-1. Furthermore, although h(GLY2)GLP-2 increased IRTKS-1 in the mucosa, with a smaller concentration of IRTKS-1 located in the BBM, the effects of GLP-2 were undetectable in this compartment. Instead, the BBM proteome showed a compensatory action of the enterocyte to elongate its microvilli. Interestingly, espin-8 expression in the BBM also showed increases in the IE-IGF-1R KO, as compared to the controls (data shown in Supplemental Table 1). Espin-8 has been reported to play a role in the effects of IRTKS-1 on the brush border 4. The increase in espin-8 therefore supports a role for the increased IRTKS-1 in the KO mice as a compensatory attempt to enhance microvillus length. The change in harmonin expression within the BBM may also indicate another compensatory change in the IE-IGF-1R KO to increase microvillus length. Harmonin levels were positively correlated with the effects of the KO alone, such that the KO mice, regardless of treatment, had higher levels of harmonin as compared to the control animals. Harmonin is known to be expressed primarily in the BBM 13, as confirmed by the pattern of immunostaining, so it was surprising that the jejunal mucosa did not mirror this result. However, the differential sensitivities of the two methods, and therefore their ability to detect changes in a protein which is expressed in relatively small amounts, must be considered. Myosin-1a is a protein that is exclusive to the brush border 53 and, accordingly, was found to be expressed in both the BBM and jejunal mucosa, with predominant localization to the brush border.
However, myosin-1a is known to be important for membrane-actin cytoskeletal adhesion 9, and myosin-1a KO mice present with membrane extrusions and decreased microvillus density, hallmark features that were not seen here in response to either GLP-2 treatment or IGF-1R KO. Although GLP-2 treatment is known to increase intestinal digestive and absorptive capacity 16,19,54,55, KO of the GLP-2R reduces amino acid absorption but does not impact whole-body growth 56. It still remains unclear whether there are any functional consequences of IE-IGF-1R KO other than an inability to increase proliferation 29 or barrier function 17 in response to GLP-2 administration. However, no changes in body weight have been reported for the IE-IGF-1R KO mice 17,29, as also found in the present study (data not shown). Notwithstanding, these studies have all been conducted for only two weeks following induction of the KO which, given the relatively short time in addition to the small decrease in microvillus length, may not be sufficient to detect a robust change in body weight. Indeed, this is consistent with other KO models, wherein key brush border proteins are absent but no overall physiological effect is seen 5,8,9,13,14,57. Consistent with this possibility, myo-1a, which was unaffected by IE-IGF-1R KO, has been shown to also be important in membrane trafficking within the microvillus 9, and sucrase isomaltase and alkaline phosphatase were detected at normal levels in the BBM from the IE-IGF-1R KO (Supplemental Table 1), suggestive of normal trafficking. Finally, one limitation of this study is the use of the whole small intestine to prepare the brush border proteome, as compared to the molecular analyses of jejunal mucosa and full-thickness cross-sections. As the GLP-2 receptor is expressed at highest levels in the jejunum 24, many studies have focused on this region of the gut 17,29,54,55. However, as a consequence, changes in the proteome may also reflect differences in duodenal and/or ileal microvillar proteins that are not occurring in the jejunum. Further studies will be required to determine the possibility of site-specific differences in the regulation of microvillus length. In conclusion, the results of the present study demonstrate not only that the IE-IGF-1R is essential for the GLP-2-induced increase in microvillus length, but also that it plays a key role in the maintenance of basal microvillus length under non-stimulated conditions. Although changes in several key microvillus proteins were demonstrated in response to experimental manipulation of these systems, the complex interplay of these proteins and the downstream mechanism(s) by which they regulate microvillus length remain to be fully elucidated. Following induction, 0.1 μg/g h(GLY2)GLP-2 (American Peptide Company, Sunnyvale, CA) or vehicle (PBS) was injected sc q24hr for 11 d, with the final treatment given 3 hr prior to euthanasia 17,29,32,33. The small intestine was collected and flushed with cold PBS. Multiple 2 cm mid-jejunal sections were then fixed in 10% formalin for 24 hr for paraffin embedding and sectioning (University Health Network, Toronto, ON), fixed in paraformaldehyde for 24 hr for frozen sectioning, fixed in 2.5% glutaraldehyde in Sorensen buffer for transmission electron microscopy (TEM), or frozen on dry ice and stored at −80 °C for mRNA and protein analysis; alternatively, the entire small intestine was collected for brush border membrane (BBM) isolation. BBM isolation.
As previously described 2, with 0.5 mM Pefabloc-SC (500 g, 8 min), and then washed twice in Solution A (75 mM KCl, 10 mM imidazole, 1 mM EGTA, 5 mM MgCl2, and 0.02% Na-azide; pH 7.2). After resuspension of the pellet in Solution A, a 60% sucrose solution was added to a final concentration of 50%. A 40% sucrose solution was layered on top, and the samples were spun at 146,900 g for 1.5 hr (Beckman Optima XPN 80, SW70Ti rotor). The interface between the two gradients, containing the isolated BBM, was then removed, washed twice with Solution A (500 g, 8 min), and stored at −80 °C. The isolated protein was quantified by BCA assay (Pierce BCA Assay, Thermo Scientific) and underwent western blot validation. Mass spectrometry was performed by the SPARC BioCentre (The Hospital for Sick Children, Toronto, ON, Canada), using a Thermo Scientific Orbitrap Fusion Lumos Tribrid mass spectrometer (ThermoFisher, San Jose, CA) outfitted with a nanospray source and an EASY-nLC 1200 nano-LC system (ThermoFisher, San Jose, CA) and equipped with ETD mode. Lyophilized peptide mixtures were dissolved in 0.1% formic acid and loaded onto a 75 μm × 50 cm PepMax RSLC EASY-Spray column filled with 2 μm C18 beads (ThermoFisher, San Jose, CA) at a pressure of 900 bar and a temperature of 60 °C. Peptides were eluted over 240 min at a rate of 250 nL/min using a gradient set up as follows, where Buffer A is 0.1% formic acid and Buffer B is 80% acetonitrile, 0.1% formic acid, all v/v in HPLC-grade water: 0-100 min, 3-25% B; 100-228 min, 25-44% B; 228-230 min, 44-100% B; 230-240 min, 100% B. Peptides were introduced by nano-electrospray into the Fusion Lumos mass spectrometer (ThermoFisher). Data were acquired using MultiNotch MS3 acquisition with synchronous precursor selection (SPS) with a cycle time of 5 sec. MS1 acquisition was performed with a scan range of 550-1800 m/z, resolution set to 120,000, maximum injection time of 50 msec, and AGC target set to 4e5. Isolation for MS2 scans was performed in the quadrupole, with an isolation window of 0.7. MS2 scans were done in the linear ion trap with a maximum injection time of 50 msec and a normalized collision energy of 35%. For MS3 scans, HCD was used, with a collision energy of 30%, and scans were measured in the orbitrap with a resolution of 50,000, a scan range of 100-500 m/z, an AGC target of 3e4, and a maximum injection time of 50 msec. Dynamic exclusion was applied using a maximum exclusion list of 500 with an exclusion duration of 20 sec. Mouse mines analysis. A total of 1,282 proteins were identified and run through Mouse Mines (www.mousemines.org), returning a list of 1,223 genes. Parent ontologies, as well as significantly enriched gene ontologies, were determined. The significantly enriched "actin filament" gene ontology was used to narrow down the list to 71 proteins relevant to the brush border region. Protein spectral counts were then normalized to the highest count, giving new values ranging from 0.01 to 1.0. These normalized values were then analyzed by Pearson correlations against models which showed differential expression between groups. A change was defined as any normalized value which was ≥0.3 compared to other groups, and no change between groups was defined as a ≤0.1 difference in normalized expression, with 0.5 as the baseline.
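As a minimal, hedged sketch of this categorization step (the group ordering and model templates below are illustrative assumptions, not the paper's exact values):

import numpy as np
from scipy.stats import pearsonr

# Assumed group order for illustration:
# [control + vehicle, control + GLP-2, KO + vehicle, KO + GLP-2]
MODELS = {
    # Category 1: effect of GLP-2 that requires the IE-IGF-1R
    "GLP2_requires_IGF1R": np.array([0.5, 0.8, 0.5, 0.5]),
    # Category 2: effect of the IE-IGF-1R KO alone
    "KO_effect_alone": np.array([0.5, 0.5, 0.8, 0.8]),
}

def categorize(spectral_counts, alpha=0.05):
    """Normalize spectral counts to the highest count and return the first
    model template the normalized profile correlates with significantly."""
    norm = np.asarray(spectral_counts, dtype=float)
    norm = norm / norm.max()
    for name, model in MODELS.items():
        r, p = pearsonr(norm, model)
        if p < alpha and r > 0:
            return name
    return None

Microscopy.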
Crypt-villus height was measured on hematoxylin/eosin-stained slides, with a minimum of 20 villi and crypts per mouse to make n = 1 (AxioVision software, Carl Zeiss, Canada). For immunofluorescent staining, one mouse from each treatment and each genotype group was represented on each slide. After xylene dewaxing and rehydration for paraffin-embedded slides, or warming to room temperature for frozen slides, the slides underwent heat-induced antigen retrieval, using citrate buffer for VILLIN and Tris-EGTA for MYOSIN-1A, β-ACTIN, HARMONIN, and IRTKS-1 (Baiap2l1). Tissues were blocked in 10% normal goat serum and incubated with primary antibodies (Supplemental Table 2) overnight at 4 °C. Secondary antibodies (Supplemental Table 2) were added for 1 hr, in the dark, at room temperature. An AxioPlan epifluorescence microscope (Carl Zeiss, Canada) was used for all imaging, with constant exposure times used for all 4 samples per slide. Z-stacking and deconvolution were conducted using the AxioVision software when required. Negative controls were run by omission of all primary antisera (Supplemental Fig. 6). For TEM, glutaraldehyde-fixed tissues were post-fixed in 2% phosphate-buffered OsO4, followed by dehydration in acetone, and embedding and polymerization in Epon-Araldite (Hospital for Sick Children, Toronto, Ontario, Canada). For determining microvillus and terminal web length, a minimum of 4 measurements was taken for each cell, for a total of 28 cells from at least 5 different villi per mouse to make n = 1, on a Hitachi H-7000 electron microscope. Microvillus length was determined, using the Hitachi software, by measuring from the distal tip to the plasma membrane, and the terminal web was measured from the plasma membrane to the end of the cytoplasmic projection. Packing angle was determined by taking a minimum of 40 measurements of the angle between three adjacent microvillus cross-sections, to make n = 1 for each mouse. Microvillus density was determined as the number of intact microvillus cross-sections within a 3.6 × 10⁸ µm² box, for n = 3 mice per group. RNA isolation and real-time quantitative reverse-transcription polymerase chain reaction (q-RT-PCR). Total RNA was isolated from jejunal mucosal scrapes using the RNeasy kit with QiaShredder (Qiagen, Inc, Mississauga, ON, Canada), converted into complementary DNA (5x All-In-One RT Master Mix, abm), and assessed by Taqman Gene Expression Assay (Applied Biosystems, Foster City, CA), using TaqMan primers as shown in Supplemental Table 3. The delta-delta CT method was used to normalize relative mRNA levels 60. Western blot. Jejunal mucosal scrapes were sonicated in RIPA lysis buffer, and protein contained in BBM isolates and mucosal scrapes was quantified by BCA assay (Pierce BCA Assay, Thermo Scientific). Proteins were run on an 8% PAGE gel, transferred onto a PVDF membrane, and incubated with primary antibodies overnight at 4 °C (Supplemental Table 4). Secondary antibodies were incubated for 1 hr at room temperature (Supplemental Table 4) and visualized using ECL detection reagent (Cell Signaling Technology). Samples were normalized to small intestinal controls where appropriate, and all samples were normalized to vehicle-treated control mice. Each
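For reference, the delta-delta CT normalization cited above computes relative expression with the standard formula (our notation):

\[ \Delta C_T = C_T^{\text{target}} - C_T^{\text{reference}}, \qquad \Delta\Delta C_T = \Delta C_T^{\text{sample}} - \Delta C_T^{\text{calibrator}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_T}. \]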
5,906.6
2019-09-10T00:00:00.000
[ "Biology", "Medicine" ]
Flow-dependent myosin recruitment during Drosophila cellularization requires zygotic dunk activity Actomyosin contractility underlies force generation in morphogenesis ranging from cytokinesis to epithelial extension or invagination. In Drosophila, the cleavage of the syncytial blastoderm is initiated by an actomyosin network at the base of membrane furrows that invaginate from the surface of the embryo. It remains unclear how this network forms and how it affects tissue mechanics. Here, we show that during Drosophila cleavage, myosin recruitment to the cleavage furrows proceeds in temporally distinct phases of tension-driven cortical flow and direct recruitment, regulated by different zygotic genes. We identify the gene dunk, which we show is transiently transcribed when cellularization starts and functions to maintain cortical myosin during the flow phase. The subsequent direct myosin recruitment, however, is Dunk-independent but requires Slam. The Slam-dependent direct recruitment of myosin is sufficient to drive cleavage in the dunk mutant, and the subsequent development of the mutant is normal. In the dunk mutant, cortical myosin loss triggers misdirected flow and disrupts the hexagonal packing of the ingressing furrows. Computer simulation coupled with laser ablation suggests that Dunk-dependent maintenance of cortical myosin enables mechanical tension build-up, thereby providing a mechanism to guide myosin flow and define the hexagonal symmetry of the furrows. Summary: During Drosophila cellularisation, myosin recruitment to the cleavage furrows proceeds in temporally and mechanistically distinct phases separately regulated by dunk and slam. INTRODUCTION Contraction of filamentous actin networks by non-muscle myosin II (hereafter 'myosin') provides a widely used mechanism for force generation during cell and tissue morphogenesis (Munjal and Lecuit, 2014;Martin and Goldstein, 2014). For example, a variety of morphogenetic processes ranging from cytokinesis to tissue spreading and elongation are driven by constriction of contractile actomyosin rings (Green et al., 2012;Martin and Lewis, 1992;Hutson et al., 2003;Behrndt et al., 2012;Sehring et al., 2014). Recruitment of myosin to specific regions of the cell cortex is a crucial step for the subsequent assembly of contractile actomyosin structures and essentially determines the spatiotemporal distribution of forces. Studies in various organisms suggest two general strategies for this recruitment. In the first mechanism, myosin filaments are assembled and recruited to relatively broad regions of the cell cortex and subsequently undergo motor-dependent flow ('cortical flow'). Myosin flows are thought to be powered by asymmetrical contraction and have been demonstrated in many actomyosin-based processes, such as cytokinesis (DeBiasio et al., 1996;Yumura et al., 2008;Uehara et al., 2010), embryo polarity establishment (Munro et al., 2004), convergent extension (Rauzi et al., 2010), cell sheet spreading (Behrndt et al., 2012) and apical constriction (Munjal et al., 2015). The second mechanism suggests direct recruitment of myosin filaments to the designated regions of the cell cortex without undergoing cortical flow ('direct recruitment'; Zang and Spudich, 1998;Yumura et al., 2008;Zhou and Wang, 2008;Vale et al., 2009;Beach and Egelhoff, 2009;Ma et al., 2012). These mechanisms appear to function redundantly in different systems to ensure the proper assembly of the actomyosin contractile machineries. 
However, the timing and extent of the contribution of each of the mechanisms to myosin recruitment and localization remain elusive. Drosophila cellularization, an atypical cytokinesis that cleaves the syncytial embryos, provides a unique system to study the assembly and reorganization of actomyosin structures. Before cellularization, the embryo undergoes 13 nuclear divisions without cytokinesis, resulting in a syncytium with ∼5000 nuclei spread out at the periphery of the embryo. At the beginning of interphase cycle 14, membrane furrows invaginate from the surface of the embryo and form a honeycomb-like hexagonal array, with each hexagonal unit enclosing one nucleus (Mazumdar and Mazumdar, 2002). As soon as the membrane furrows start to invaginate, actin and myosin accumulate at the invagination front and form a network ( Fig. 1A-C; Kiehart, 1990;Young et al., 1991;Schejter and Wieschaus, 1993;Field and Alberts, 1995;Foe et al., 2000;Royou, et al., 2004). As furrow ingression proceeds, the actomyosin network reorganizes into individual actomyosin rings, which then constrict to close the basal side of the newly formed cells in a manner resembling typical animal cytokinesis (Schejter and Wieschaus, 1993). Although the initial geometries are different, a number of proteins involved in cytokinesis also function in cellularization, such as the scaffolding proteins Scraps (Anillin homolog; Field et al., 2005) and Septins (Adam et al., 2000), the small GTPase Rho1 (RhoA homolog; Crawford et al., 1998), and the formin-family actin nucleator Diaphanous (Dia; Afshar et al., 2000;Grosshans et al., 2005), to name a few. Drosophila cellularization also requires additional regulation apart from that in typical cytokinesis. Although most proteins functioning in cellularization are maternally provided, the timing of cellularization is critically controlled by a small set of zygotic genes that does not function in post-cellularization cytokinesis. Previous genome-wide deficiency screens have identified a small number of genomic regions that are zygotically required for cellularization (Merrill et al., 1988;Wieschaus and Sweeton, 1988). Several genes have been subsequently cloned. Four of them, bottleneck (bnk), nullo, Serendipity α (Sry-α) and slam, are specifically expressed during cellularization and regulate the organization of the actomyosin network (Schweisguth et al., 1990;Schejter and Wieschaus, 1993;Postner and Wieschaus, 1994;Lecuit et al., 2002;Stein et al., 2002;Grosshans et al., 2005;Sokac and Wieschaus, 2008a;Wenzl et al., 2010;Zheng et al., 2013). Despite the identification of these genes, it remains unclear how myosin is recruited to the invagination front and how the actomyosin network is established and regulated. Here, we demonstrate that the recruitment of myosin during Drosophila cellularization proceeds in two temporally distinct phases. First, a tension-driven cortical flow brings myosin to the leading edge of the cleavage furrows. Subsequently, additional myosin is directly recruited from the cytoplasm to the leading edge. The myosin flow is anisotropic and is similar to the tension-based flow that drives myosin into the cleavage furrow during typical cytokinesis. By cloning and characterizing a cellularization-specific gene dunk, we demonstrate that the cortical flow-mediated myosin recruitment requires a Dunk-dependent mechanism that prevents myosin loss from the cortex. 
We also present genetic evidence that the direct recruitment of myosin in the second phase is Dunk-independent but requires Slam. Our findings demonstrate that separate myosin recruitment mechanisms are developmentally modulated by different zygotic genes to regulate cellularization in a coordinated manner. RESULTS Biphasic recruitment of myosin to the leading edge of the cleavage furrows during cellularization In order to elucidate how myosin is recruited to the invagination front during cellularization, we made high-resolution, time-lapse movies of myosin using embryos expressing Sqh-GFP (myosin regulatory light chain fused to GFP; Royou et al., 2002; Movies 1 and 2). Fig. 1D shows the 3D rendering of Sqh-GFP during early and mid-cellularization with the myosin structures pseudocolor-coded according to their depth from the apical surface. Fig. 1E shows the corresponding projections at the invagination front. At the transition between telophase 13 and interphase 14, myosin first appears at the base of the retracting metaphase furrows (the old furrows) along the circumference of the previous mitotic figure, ∼5 µm below the apical surface (shown as 'blue myosin' in Fig. 1D at t=0 min; see Fig. 1F for an illustration of the old and new furrows). Myosin puncta appear at the apical cortex approximately 1 min later (shown as 'magenta myosin' in Fig. 1D at t=0 min), and are slightly more enriched near the prospective furrow (the new furrow) between the two corresponding daughter nuclei. As that furrow begins to invaginate, myosin puncta flow towards and become enriched at the base of the furrow (Fig. 1D,E, arrows). To illustrate the myosin flow better, we generated kymographs in the direction either perpendicular or parallel to the newly formed furrow (Fig. 1G). In the direction perpendicular to the furrow, myosin trajectories converge towards the furrow (Fig. 1G, left), whereas in the direction parallel to the furrow, myosin trajectories remain parallel to each other (Fig. 1G, right). These results demonstrate that myosin predominantly moves in the direction perpendicular, but not parallel, to the edge. We further examined the velocity distribution of myosin flow using particle image velocimetry (PIV) analysis. The result confirmed the strong velocity anisotropy biased towards the direction parallel to the furrow (Fig. 1H). The onset of apical myosin flow coincides with the formation of the new furrow, which in our experiments we define as the beginning of cellularization (t=0). At t=4-5 min, the old and new furrows reach the same depth of ∼3 µm from the surface of the embryo and become nearly indistinguishable (Fig. 1D,E; Fig. S1). At this point, myosin at the base of old and new furrows appears to join and forms an interconnected network across the entire invagination front (Fig. 1E at t=6 min). At t=20 min, as the invagination front passes approximately one-third of the nuclear length, the actomyosin network starts to reorganize into individual rings surrounding each nucleus (Fig. 1E at t=20 min). These rings become well resolved at t=30 min, as furrow ingression transitions from a slow-growing to a fast-growing phase (Merrill et al., 1988;Lecuit and Wieschaus, 2000;Figard et al., 2013). During the first 12 min after the onset of cellularization, as myosin flows continuously towards the base of the furrows, the interfaces of the myosin network separating adjacent nuclei (which we call 'edges') narrow in width (Fig.
1I), whereas the total myosin intensity in the forming network plus apical cortex remains constant (Fig. 1J). During the next 20 min, the width of edges no longer changes (Fig. 1I), but total myosin intensity increases (Fig. 1J). These observations identify two temporally distinct phases in myosin recruitment to the invagination front. In the first phase, the myosin puncta present at the apical cortex undergo a cortical flow towards the base of the ingressing furrow (henceforth 'the flow phase'). In the second phase, new myosin appears to be directly recruited from the cytoplasm to the invagination front without cortical flow (henceforth 'the recruitment phase'). Therefore, during Drosophila cellularization, myosin cortical flow and direct myosin recruitment are used at distinct times to recruit myosin to the base of the newly formed furrows (Fig. 1F). The actomyosin network at the invagination front is under tension The flow of myosin during early cellularization is reminiscent of the tension-driven myosin flows thought to play a role in contractile ring formation during cytokinesis. To test whether the cortex is under tension, we used a focused UV laser beam to ablate the invagination front in flow-phase embryos. If the cortex is under tension, the surrounding tissues will undergo recoil, and the initial velocity of recoil is proportional to the resting tension divided by the viscous drag, which is assumed to be constant between experiments (Hutson et al., 2003;Martin et al., 2010). Single spot incision in the middle of an edge resulted in an immediate displacement of the surrounding tissues away from the incision site ( Fig. 2A, tissue movement is indicated by arrows; Movie 3). This tension appears to arise at the beginning of cycle 14 simultaneous with myosin recruitment to the surface, as laser incision made at the apical cortex before this time point did not induce appreciable tissue recoil (Fig. 2B, as indicated by lack of changes in the apical cortex; Movie 3). These results suggest that tension at the invagination front is due to actomyosin contractility. If the tension and myosin flow described above is analogous to motor-dependent myosin flow in cytokinesis, it will move and align cytoskeletal elements, thereby accounting for the narrowing of the myosin band. PIV analysis demonstrated that the movement of the surrounding tissue after laser ablation is anisotropic. As demonstrated in Fig. 2C,D and quantified in Fig. 2E, the velocity vectors parallel to the ablated edge (V x ) are larger than those perpendicular to the edge (V y ). This anisotropy in tissue tension is an expected pattern because if the initial broad contractile network drives a flow that is perpendicular to the edge, tension in that direction will be released. The interconnectedness of the forming network would not allow flow between vertices (Fig. 1F,G) and thus tensions will remain high in directions parallel to the edge. In the following sections, we present genetic and molecular evidence that the separate phases of myosin recruitment require distinct zygotic gene activities and that the anisotropy of the myosin flow requires maintenance of myosin at the cortex that is developmentally regulated by a novel gene dunk. 
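As a hedged formalization of the recoil argument above (our notation, not the paper's): in the overdamped cortical regime, the initial recoil velocity after ablation reports the resting tension,

\[ v_0 = \frac{T}{\zeta}, \]

so that, for a viscous drag coefficient \(\zeta\) assumed constant between experiments, the measured initial recoil velocity \(v_0\) is directly proportional to the pre-ablation cortical tension \(T\).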
Mutation in dunk causes cortical myosin loss specifically during the flow phase

In a genome-wide mapping and transcriptional profiling study of Drosophila heterochromatin, we identified dunk (CG34137; FlyBase Genome Annotators, 2006), a blastoderm stage-specific gene located near the centromere of the second chromosome. dunk is an intron-less gene that encodes a 246-amino-acid protein with no previously characterized homologs or well-defined structural motifs (data not shown). dunk has close homologs in several other Drosophila species and a more distant homolog in house flies, but has no obvious homologs in other species. Interestingly, we identified two conserved binding sites for the zinc-finger transcriptional activator Zelda near the transcription start site of dunk, a feature shared by many Zelda-dependent early transcribed zygotic genes in Drosophila (Liang et al., 2008). In situ hybridization demonstrates that dunk transcripts are not present in pre-syncytial embryos and only become detectable at cycle 13 (Fig. 3). dunk transcripts peak in early cellularization, are distributed uniformly across the whole embryo, and then rapidly diminish during late cellularization.

We identified a P-element insertion allele of dunk generated by the Berkeley Drosophila Genome Project gene disruption project (Bellen et al., 2004; henceforth dunk1). The dunk1 allele is predicted to generate truncated proteins that lack the C-terminal three-quarters of the normal sequence. No transcript was detected in embryos homozygous for dunk1, suggesting that this mutant is a protein null (see below).

dunk1 mutant embryos show defects in myosin organization shortly after the onset of cellularization. In wild-type embryos, myosin is evenly distributed across the invagination front and forms a network. By contrast, in dunk1 homozygous mutant embryos, myosin distribution becomes inhomogeneous, with myosin preferentially accumulating at the vertices (Fig. 4A, red arrows) and being depleted from many of the edges (Fig. 4A, green arrows; hence the name disrupted underground network). By contrast, F-actin and Rho1 remain homogeneously distributed at the invagination front with no obvious reduction in protein levels (Fig. 4B,C, the early cellularization panels). Therefore, the myosin defect observed in dunk1 mutants is not likely to be due to loss or redistribution of F-actin or Rho1. At later stages, the distributions of myosin, F-actin and Rho1 all become abnormal in dunk1 mutants (Fig. 4B, the mid-cellularization panels), which probably reflects the defect in the morphology of the invagination front (see below).

To illuminate how the myosin phenotype arises in dunk1 mutant embryos, we examined Sqh-GFP in live embryos (Fig. 4D-F; Movie 4). At the beginning of cellularization, the initial cortical recruitment of myosin is similar in wild-type and dunk1 mutant embryos (Fig. 4D, t=2.4 min). In the flow phase, however, myosin rapidly becomes depleted from most edges but remains at vertices and short edges, consistent with the observation in fixed embryos (Fig. 4D, t=8 min). Quantification of myosin intensity at the invagination front demonstrated that the drop in cortical myosin intensity occurs within the first 5 min and myosin remains low throughout the flow phase (Fig. 4E). During the recruitment phase, however, myosin intensity increases at a rate comparable to that in wild type (Fig. 4E).
To illustrate the biased myosin distribution towards vertices in dunk1 mutants, we plotted the ratio of myosin density at vertices to that at edges (Fig. 4F). In wild type, the ratio remains close to 1. By contrast, in dunk1 mutant embryos, the ratio quickly increases to >2 and remains at this peak during the flow phase. The ratio then gradually returns to 1 during the recruitment phase as new myosin is recruited to the invagination front (Fig. 4F). Taken together, these results suggest that Dunk is specifically required for maintenance of cortical myosin during the flow phase. In the subsequent recruitment phase, a Dunk-independent process appears to function to recruit additional myosin to the invagination front.

The loss of cortical myosin seen in dunk mutant embryos might result from alterations in the rate of myosin recruitment to, and dissociation from, the cortex. We tested this possibility by measuring the rate of fluorescence recovery after photobleaching (FRAP) of cortical Sqh-GFP (Fig. S2A; Movie 5). Interestingly, we found that the rate of myosin fluorescence recovery in dunk mutant embryos is identical to that in wild type, once the FRAP recovery is adjusted for the decreasing levels of myosin in the mutant at the time of FRAP (half recovery time in dunk mutants is 70.5±15.0 s, mean±s.d., compared with 71.2±18.4 s in wild type; Fig. S2B-D). Therefore, Dunk does not appear to directly regulate the rate of myosin turnover at the cortex and probably promotes cortical myosin stability through other mechanisms.

Myosin flow is misdirected in dunk mutant embryos

In wild-type embryos throughout the flow phase, myosin flow is perpendicular to the furrow, and within each edge myosin does not flow parallel to the edge (Fig. 1G,H). This flow pattern tightens the network, while maintaining global network architecture and a relatively uniform distribution of myosin along each edge. In dunk1 embryos, however, the flow of cortical myosin is no longer restricted to the direction perpendicular to the edges (Fig. 5A-F; Movies 6, 7). In most cases (as represented in Fig. 5A,C,E), myosin flows towards the neighboring vertices and becomes depleted from the center of the edge. In the remaining cases (as represented in Fig. 5B,D,F), myosin remains on the edge and flows towards the center. Kymograph analysis shows that in the direction parallel to the edge, the myosin trajectories are no longer parallel to each other as in the wild type (Fig. 1G), but instead either diverge from (Fig. 5C) or converge towards (Fig. 5D) the center of the edge. PIV analysis also demonstrates that the velocity vectors of myosin flow become largely parallel to the edge, either pointing away from (Fig. 5E) or pointing towards (Fig. 5F) the center of the edge.

The flow of myosin parallel to the edge can have one of two consequences for the morphology of the edge. When myosin flows towards the neighboring vertices, the edge stretches in length (Fig. 5A). By contrast, when myosin flows towards the center of the edge, the edge undergoes a contraction and effective shortening (Fig. 5B). As a result, the length of an edge and the myosin intensity along that edge become negatively correlated (i.e., short edges have more myosin; Fig. 5G). The changes in edge length disrupt the hexagonal symmetry of the ingressing furrows, resulting in an irregular network composed of angular units, as shown by staining of basal adherens junctions (Fig. 5H). The actomyosin rings subsequently formed also acquire irregular shapes (Fig. 4A, the mid-cellularization panels).
Together, these observations suggest that the biased accumulation of myosin at vertices and the irregular packing geometry of the ingressing furrows seen in dunk1 mutants are a direct consequence of the misdirected myosin flow that occurs coincident with cortical myosin loss.

A Dunk-dependent mechanical mechanism that guides the anisotropic cortical myosin flow

What is the mechanistic link between Dunk-dependent maintenance of cortical myosin and the anisotropy of the myosin flow? If the anisotropic myosin flow seen in wild-type embryos requires tension to be maintained in directions parallel to the edge, cortical myosin loss in dunk1 mutants may disrupt the interconnectedness of the actomyosin network and thereby release the tension that is necessary to constrain the direction of the flow. To test this paradigm, we generated a computer model to investigate the behavior of a two-dimensional, interconnected contractile network (Fig. 6A; supplementary Materials and Methods). Each edge of this virtual network contains multiple constricting parallel fibers connected by myosin nodes that resemble the puncta of cortical myosin as well as other cortical components that myosin associates with. The myosin nodes are initially spaced broadly to reflect the initial meshwork-like appearance of the actomyosin cytoskeleton (Fig. 1E). During simulation, the myosin nodes undergo dynamic turnover with a recruitment rate of k_on and a dissociation rate of k_off (see supplementary Materials and Methods for details). It is worth noting that k_on and k_off do not necessarily reflect the dynamics of cortical myosin turnover as measured in our FRAP experiments, because we do not distinguish whether changes in k_on and k_off are a result of alterations in the rate of myosin recruitment and dissociation to and from its cortical binding sites, or a result of alterations in the availability of these sites.

Fig. 3 In situ hybridization of wild-type embryos with an antisense dunk probe. Note that dunk is transiently induced immediately before cellularization starts and rapidly diminishes during late cellularization. Arrows indicate the depth of the invagination front. Boxed regions are enlarged to the right. Scale bar: 100 µm.

When k_on ≫ k_off, the number of myosin nodes is constant and the network remains interconnected. As the model moves towards its minimum energy, the myosin nodes move anisotropically, similar to the myosin flow observed in wild-type embryos. Within each edge, myosin nodes move perpendicular but not parallel to the edge, effectively reducing the width of the edge without affecting the global architecture of the network (Fig. 6B,D; Movie 8; [k_on = 0.05, k_off = 0.001]). As we reduced k_on/k_off, our model generated phenotypes that mimic the effect of Dunk loss. The final energy minimum configuration that best approximates the dunk phenotype is given by the parameters [k_on = 0.003, k_off = 0.001]. In this scenario, loss of myosin nodes generates local breaks within the network that cannot be reconnected promptly. As myosin continues to contract, the network ruptures, with most myosin nodes moving towards the neighboring vertices and becoming depleted from edges. As a result, the myosin nodes accumulate at discrete foci that are usually centered at vertices (Fig. 6C,E; Movie 8).
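The qualitative effect of the turnover rates can be illustrated with a much-simplified, one-dimensional toy version of such a network. The R sketch below is not the authors' two-dimensional model (which is specified in their supplementary Materials and Methods); the update rules and all parameter values are illustrative assumptions: nodes on a single edge are pulled together by zero-rest-length springs between neighbours that lie within a connection range, detach stochastically (k.off) and are recruited onto free sites (k.on), with the vertices at 0 and 1 acting as fixed anchors.

simulate_edge = function(k.on, k.off, n0 = 40, steps = 4000,
                         r.max = 0.08, eta = 0.2) {
  x = sort(runif(n0))                     # node positions on the unit edge
  for (s in seq_len(steps)) {
    x = x[runif(length(x)) >= k.off]      # stochastic dissociation
    n.new = rpois(1, k.on * max(0, n0 - length(x)))  # recruitment to free sites
    if (n.new > 0) x = sort(c(x, runif(n.new)))
    n = length(x)
    if (n == 0) next
    p = c(0, x, 1)                        # vertices at 0 and 1 are fixed anchors
    conn = as.numeric(diff(p) < r.max)    # only nearby neighbours stay linked
    i = 2:(n + 1)
    ## contractile (zero-rest-length) springs pull linked neighbours together
    force = conn[i] * (p[i + 1] - p[i]) + conn[i - 1] * (p[i - 1] - p[i])
    x = sort(pmin(pmax(x + eta * force, 0), 1))
  }
  x
}

set.seed(42)
near.vertex = function(x) mean(pmin(x, 1 - x) < 0.1)    # fraction near a vertex
near.vertex(simulate_edge(k.on = 0.04, k.off = 0.001))  # high k.on/k.off
near.vertex(simulate_edge(k.on = 0.0008, k.off = 0.001))# low k.on/k.off

With a high recruitment-to-dissociation ratio the chain stays connected and the nodes remain spread along the edge; with a low ratio, gaps opened by dissociation exceed the connection range, the chain ruptures, and the anchored fragments tend to collapse into discrete clusters at or near the vertices, qualitatively echoing the dunk phenotype.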
Importantly, the biased vertex accumulation of myosin nodes results from an unbiased distribution of myosin turnover dynamics within the contractile network, presumably as a result of the geometrical constraints of an interconnected network. Our simulation therefore demonstrates that the destabilized myosin flow and the resulting altered myosin distribution, in particular the accumulation of myosin at the vertices, can be a simple mechanical outcome of myosin loss that disrupts the tension balance within a contractile network.

Dunk colocalizes with myosin at the invagination front during early and mid-cellularization

We generated polyclonal antibodies against full-length Dunk protein and examined its subcellular localization during cellularization. Dunk displays extensive colocalization with myosin at the invagination front throughout early to mid-cellularization (Fig. 7A). At the onset of cellularization, Dunk is first detected at the old furrows surrounding the previous mitotic figure (Fig. 7A, arrowheads), followed by an enrichment at the furrows between neighboring nuclei where myosin puncta are also enriched (Fig. 7A,B, arrows). Dunk localizes to the actomyosin rings during mid-cellularization but quickly becomes undetectable at late cellularization (Fig. 7A). Basal adherens junctions, as detected by staining of DE-cadherin (Shotgun), localize immediately apically to both myosin and Dunk (Fig. 7C, compare arrowhead and arrows). No specific protein signal was detected in dunk1 mutant embryos when we stained them with the Dunk antibody, which confirmed that dunk1 is a protein null (Fig. 7B; Fig. S3). We also made a rescue construct of dunk, in which the expression of the full-length dunk coding sequence plus a C-terminal 3×HA tag is under the control of the nullo promoter (Pnullo; Hunter et al., 2002). Dunk-3HA showed similar localization to endogenous Dunk and rescued the myosin phenotype in dunk1 mutant embryos (Fig. S4).

Ectopically expressed Dunk localizes to contractile actomyosin structures in post-blastoderm-stage embryos

dunk is not normally expressed in post-blastoderm-stage embryos. The rescue construct of dunk contains 14 UAS GAL4-binding sites upstream of Pnullo, allowing us to express dunk ectopically at later stages using different GAL4 drivers. When expressed at post-blastoderm stages using maternally supplied GAL4 (67.15; Hunter and Wieschaus, 2000), Dunk-3HA invariably accumulates at locations where myosin is also enriched, such as the cytokinetic rings of dividing cells and the apical cortex of apically constricting cells during the formation of the ventral furrow and posterior midgut (Fig. 7D-F). These observations suggest that Dunk can be recruited to actomyosin contractile structures independently of other cellularization-specific gene products.

Simultaneously eliminating dunk and slam disrupts both myosin flow and direct myosin recruitment

Despite the early myosin loss during the flow phase in dunk1 mutant embryos, new myosin can still be recruited to the invagination front during the recruitment phase at a rate similar to that in wild-type embryos (Fig. 4E). This myosin recruitment replenishes myosin at the invagination front and allows the formation of the actomyosin network, which eventually rearranges into individual rings (Fig. 4D; t=25 min). Although these rings are often less rounded and distorted in shape, they are capable of driving basal closure during late cellularization (data not shown).
dunk1 mutant embryos show no obvious defects in the rate of furrow ingression (Fig. S5), and subsequent development is normal. What accounts for the Dunk-independent myosin recruitment during the recruitment phase? A possible candidate is Slam. Slam plays a crucial role in extension of the cleavage furrows during cellularization (Lecuit et al., 2002; Acharya et al., 2014). In slam mutants, the assembly of the basal-lateral surface at the invagination front is defective, and the rate of furrow ingression is greatly reduced. Interestingly, Slam has also been shown to promote the accumulation of myosin at the invagination front (Lecuit et al., 2002; Acharya et al., 2014). Slam directly binds to RhoGEF2 and is required for recruitment of RhoGEF2 to the invagination front (Wenzl et al., 2010). RhoGEF2 is a guanine nucleotide exchange factor for Rho1 and has been shown to promote actin assembly and myosin activation at the invagination front through Rho1 and its downstream effectors (Crawford et al., 1998; Padash Barmchi et al., 2005; Grosshans et al., 2005).

In order to test whether Slam is required for new myosin recruitment during the recruitment phase, we examined slam dunk double mutant embryos and compared them with the individual single mutants. In the double mutant, the increase in myosin intensity during the recruitment phase seen in both wild-type and dunk mutant embryos is completely abolished (Fig. 8A,B). The edges where myosin is lost during the flow phase remain devoid of myosin throughout cellularization (Fig. 8A, arrows). As a result, myosin never forms an interconnected network and the subsequent reorganization into individual contractile rings completely fails (Fig. 8A; Movie 9). In slam single mutants, myosin intensity barely increases during the recruitment phase (Fig. 8A,B), yet the actomyosin network remains partially connected (Fig. 8C, arrows). This is in contrast to the slam dunk double mutant, in which the actomyosin network completely breaks down into discrete foci (Fig. 8C, arrowheads). Therefore, the Dunk-dependent stabilization of myosin during the flow phase and the Slam-dependent new myosin recruitment synergistically contribute to actomyosin organization at the invagination front.

Overall, our results suggest that cortical myosin flow and direct myosin recruitment are separately regulated by Dunk and Slam during cellularization. Whereas the flow phase requires Dunk, the recruitment phase is Dunk-independent but requires Slam. It is worth noting that slam mutant embryos also show an overall reduction in cortical myosin intensity shortly after the onset of cellularization (Fig. 8B). It is conceivable that this early myosin loss is an indirect effect of the defects in the assembly of the basal-lateral surface previously seen in slam mutant embryos (Lecuit et al., 2002; Acharya et al., 2014). However, we cannot formally exclude the possibility that Slam functions in both phases of myosin recruitment.

DISCUSSION

Mechanisms regulating the recruitment and maintenance of myosin at the cell cortex play a crucial role in actomyosin-mediated force generation, which drives many morphogenetic processes. During Drosophila cellularization, 5000 contractile actomyosin rings are formed simultaneously with great temporal and spatial precision to drive the cleavage of the syncytial blastoderm. In this study, we found that myosin is recruited to the cellularization front by two distinct mechanisms that act in consecutive phases.
In the flow phase (t=0-12 min), myosin recruited to the apical cortex rapidly flows towards the base of the newly formed furrows while the intensity of cortical myosin remains constant. In the recruitment phase (t=12-30 min), more myosin is directly recruited to the leading edge without undergoing cortical flow (Fig. 1I,J). Both cortical flow and direct recruitment have been implicated in the recruitment of myosin to the equatorial cortex during animal cytokinesis, but the timing and extent of their involvement remain unclear. Our results show, for the first time, that the two mechanisms can be used sequentially in a cytokinetic process. In addition, we show that these distinct phases are under separate developmental regulation by the transcription of different zygotic genes.

Drosophila cellularization occurs coincident with a key developmental transition called the mid-blastula transition (MBT), which is characterized by activation of zygotic gene expression (Edgar and Schubiger, 1986). We identified dunk, which we show to be transcriptionally activated immediately before cellularization (Fig. 3) and to be required specifically during the flow phase to maintain myosin at the cortex. In dunk mutant embryos, myosin rapidly dissociates from the cortex after the onset of the flow phase (Fig. 4D-F). Residual myosin preferentially flows towards the vertices, causing nearly complete depletion of myosin from edges (Fig. 5A,B). In the recruitment phase, however, myosin is replenished at the invagination front through a Slam-dependent but Dunk-independent, direct myosin recruitment pathway (Fig. 4D-F; Fig. 8). Drosophila cellularization therefore provides an example of a process in which the independent mechanisms of cortical myosin flow and direct myosin recruitment are used in separate phases with distinct genetic regulation.

The identification of two phases of myosin recruitment also enables us to define the Dunk-dependent cellular mechanism that regulates myosin flow. The flow of myosin immediately after the onset of cellularization is similar to the tension-driven myosin flow observed in Dictyostelium, Drosophila and mammalian cells during cytokinesis (Yumura et al., 2008; Uehara et al., 2010; DeBiasio et al., 1996). Using laser ablation, we demonstrate that the invagination front is under tension (Fig. 2). The build-up of tension is coincident with myosin recruitment to the cortex and probably results from the intrinsic contractility of the actomyosin structures induced by spatially restricted Rho1 activation near the equatorial cortex (Green et al., 2012). Geometric constraints at the invagination front further define the tension balance across the actomyosin network and impose an anisotropy on the myosin constrictions such that myosin flow mainly occurs perpendicular to the edges. We propose that Dunk-dependent stabilization of myosin recruitment sites enables the establishment of an interconnected actomyosin network that maintains tension in directions parallel to the edges, thereby providing a mechanical mechanism to guide the anisotropic myosin flow. In the absence of Dunk, the loss of cortical myosin disrupts the interconnectedness of the actomyosin network and causes myosin to flow parallel to the edge. In support of this paradigm, our computer simulation demonstrates that the observed effect of Dunk loss on myosin flow, in particular the biased flow towards vertices, could be a simple mechanical consequence of reducing myosin at the cortex (Fig. 6).
Our results suggest that maintaining myosin levels at the cortex is a general requirement for reinforcing the mechanical stability and directionality of actomyosin flow, although in different systems the specific molecular players might differ. It will be of interest to compare the mechanism used in Drosophila cellularization with those employed in other processes, such as the cortical actomyosin flow in Caenorhabditis elegans embryos during polarity establishment (Munro et al., 2004) and the yolk actomyosin flow during zebrafish epiboly (Behrndt et al., 2012).

Previous experiments reducing actomyosin contractility argue against a major role of basal contractility in the rate of furrow ingression during cellularization (Royou et al., 2004; Thomas and Wieschaus, 2004). Our analysis of the dunk mutants, however, reveals an important role of the actomyosin network in regulating basal morphology. When tension balance is lost, myosin flow is redirected to the axis parallel to the edge. As myosin flows away from the edge, the edge elongates. Conversely, as myosin becomes concentrated at the edge, the edge shortens (Fig. 5A-G). In extreme cases, the shortening of the edges causes the neighboring vertices to merge, resulting in abnormal basal morphology with an increase of quadrilaterals at the cost of hexagons and pentagons (Fig. 5H). As a consequence, the actomyosin rings later formed at the invagination front also acquire irregular shapes (Fig. 4A). Our findings suggest that actomyosin contractility prior to the formation of discrete cytokinetic rings may serve the following functions: (1) enrichment of myosin at the invagination front through cortical flow, (2) alignment of an initially less organized meshwork of actomyosin filaments into arrays parallel to the edge, and (3) maintenance of tension balance at the invagination front as a mechanism for regulating the hexagonal packing of the ingressing furrows.

Actomyosin-based contractility has been widely implicated in a variety of morphogenetic processes (Munjal and Lecuit, 2014; Martin and Goldstein, 2014). It is crucial to understand how myosin is recruited to the right place where forces are generated and how developmental control of this recruitment regulates force generation and the resulting tissue mechanics. Because cellularization in Drosophila allows us to distinguish temporally the cortical flow and direct recruitment mechanisms thought to play roles in most cytokinetic processes, it offers an advantageous system in which to investigate these mechanisms separately. Our identification of zygotic genes that independently regulate each phase also provides a unique opportunity to study the interplay between cell signaling, actomyosin organization and tissue mechanics during a morphogenetic process.

MATERIALS AND METHODS

The dynamics of myosin in wild-type and dunk1 mutant embryos were monitored in embryos from the yw sqhAX3; sqh-GFP stock (Royou et al., 2002) and the dunk1; sqh-GFP stock (this study), respectively. sqh encodes the Drosophila regulatory light chain of non-muscle Myosin II (Karess et al., 1991).

Generation of the dunk-rescuing construct

For phenotypic rescue experiments, a fusion DNA containing (from 5′ to 3′) the nullo promoter (Pnullo, the 498 bp sequence upstream of the nullo CDS), the dunk (CG34137) CDS and a sequence encoding a 3×HA tag was synthesized by GenScript USA and was subsequently inserted into a transformation vector containing the attB site (pTiger, courtesy of S. Ferguson, State University of New York at Fredonia, Fredonia, NY, USA).
The vector also contains 14 UAS GAL4-binding sites upstream of Pnullo, allowing us to express dunk ectopically at later stages using different GAL4 drivers. The resulting construct was sent to Genetic Services for integration into the attP40 and attP2 sites using the phiC31 integrase system (Groth et al., 2004). Dunk-3×HA expressed under the control of Pnullo shows a similar spatial distribution and temporal pattern to the endogenous Dunk.

Generation of Dunk antibody

An N-terminal His-tagged Dunk full-length fusion protein was expressed in Escherichia coli and purified by GenScript USA. Purified antigen was injected into rats and guinea pigs by a commercial supplier (Panigen). Raw serum was used for immunostaining of fixed embryos. No specific protein signal was detected at the invagination front in dunk mutant embryos when stained with the Dunk antibody.

In situ hybridization

In situ hybridization was performed following standard procedures (Tautz and Pfeifle, 1989) using a 0.7 kb antisense RNA probe to dunk. The RNA probe was generated by in vitro transcription using the following primers to produce the DNA template: gcggatccatgtcagcattcacctgcacacag and taatacgactcactatagggtatgctcagccgaccttttt.

Live imaging

To prepare embryos for live imaging, manually staged embryos expressing Sqh-GFP were collected at room temperature (22-25°C) on agar plates, dechorionated in 50% bleach for 2-4 min, rinsed thoroughly with water, and transferred onto a 35 mm MatTek glass-bottom dish (MatTek Corporation). Distilled water was then added to the dish well to completely cover the embryos. All imaging was performed in water at room temperature.

For quantification of Sqh-GFP intensities, Sqh-GFP videos were obtained on a Leica SP5 confocal microscope with a 63×/1.3 NA glycerin-immersion objective lens. A 5× zoom was used. Fifteen confocal z-sections with a step size of 1 µm were acquired every 12 s. The image size was 512×512 pixels, which corresponds to a lateral pixel size of 96 nm. The total imaged volume is approximately 49×49×14 µm.

For 3D reconstruction of myosin structures, Sqh-GFP videos were obtained on a Leica SP5 confocal microscope with a 100×/1.4 NA oil-immersion objective lens. A 5× zoom was used. Twelve confocal z-sections with a step size of 0.5 µm were acquired every 4.2 s. The image size was 512×256 pixels, which corresponds to a lateral pixel size of 60.5 nm. The total imaged volume is approximately 31×15×5.5 µm. It is worth noting that this imaging approach (high magnification and a fast frame rate, which is necessary to reveal the morphological details in 3D reconstruction and the fast dynamics of the myosin flow) causes photobleaching over time. For this reason, for experiments in which we quantify cortical myosin intensity, we imaged the sample at lower magnification and a slower frame rate to minimize photobleaching. The latter approach, however, does not provide sufficient resolution (both temporal and spatial) to visualize details of the myosin flow.

Image analysis, quantification and statistics

For quantification of myosin fluorescence intensity at the invagination front, the Sqh-GFP movies were analyzed using MATLAB (Image Processing Toolbox, The MathWorks, Natick, MA, USA) as follows. First, images were subject to background subtraction. The background is defined as the cytoplasmic level of Sqh-GFP at regions right below the nuclei at the beginning of cellularization.
Second, background-corrected images from five adjacent confocal cross-sections (∼4 µm thick) covering the entire invagination front were summed. Third, the total intensity was calculated from the summed image at each time point. Finally, the signal was normalized between embryos according to the intensity of cytoplasmic Sqh-GFP.

To compare myosin fluorescence intensity at the edges and vertices, the Sqh-GFP movies were analyzed as follows. Images were subject to background subtraction and summed over five slices, which covers the entire invagination front as mentioned above. To define signals that belong to edges versus vertices, the basal outlines of the cells (as marked by Sqh-GFP) were segmented using the MATLAB-based software package Embryo Development Geometry Explorer (EDGE; Gelbart et al., 2012). In EDGE, the outlines of individual cells are represented by polygons and tracked over time. Along each polygon, we define points less than 1.2 µm away from the nearest vertex as 'vertex', and points more than 1.2 µm away from the nearest vertex as 'edge'. Mean intensity was integrated at vertices and edges along the corresponding line segments with a width of 0.6 µm. The intensity was then normalized between embryos according to the intensity of cytoplasmic Sqh-GFP. To measure the correlation between edge length and myosin intensity, a correlation coefficient was calculated between the edge length and the mean Sqh-GFP intensity along the edge per time point per embryo. To measure the rate of furrow ingression, kymographs were generated from Sqh-GFP movies using MATLAB, and the depth of the invagination front from the surface of the embryos over time was manually measured using ImageJ.

Laser ablation

Sqh-GFP embryos were prepared for live imaging and were imaged using a spinning disk confocal microscope (Ultraview; PerkinElmer) with a 60×/1.4 NA oil-immersion objective (Nikon), a 488 nm laser, and an electron-multiplying charge-coupled device camera (C9100-13; Hamamatsu). The microscope was controlled with Volocity acquisition software (Improvision). Ablation was performed using a Micropoint laser (Andor Technology) tuned to 365 nm. For each ablation, a focused laser beam was targeted to the middle of an edge marked by Sqh-GFP to generate a point incision. Time-lapse movies of a single z-slice focused at the level of the invagination front were acquired immediately before and after ablation to measure the movement of surrounding tissues upon release of tension. As a control, ablation was performed at the apical cortex in embryos at cycle 13 anaphase. Velocity maps of myosin flow during the flow phase and of tissue movement immediately after laser ablation were generated using the MATLAB-based software OpenPIV (Taylor et al., 2010) with a spacing/overlap of 8×8 pixels and an interrogation window size of 32×32 pixels. For the laser-ablation experiments, an average velocity map was generated from 24 ablations in eight embryos at approximately 5 min after the onset of cellularization.

FRAP analysis of cortical myosin turnover

Sqh-GFP embryos at early cellularization were prepared for live imaging and were imaged on a Leica SP5 confocal microscope with a 63×/1.3 NA glycerin-immersion objective lens. A 5× zoom was used. Photobleaching was performed on a single z-slice focused at the level of the invagination front. A rectangular region (∼35 µm×10 µm) was bleached using the 458, 476, 488 and 496 nm lines of the argon laser operating at 75% laser power.
Fifteen iterations were used for bleaching, which lasted approximately 5.5 s. Six confocal z-sections with a step size of 1 µm, spanning the invagination front, were acquired every 2.35 s before and after photobleaching. The image size was 512×512 pixels, which corresponds to a lateral pixel size of 96 nm. The total imaged volume was approximately 49×49×5 µm.

To acquire the rate of fluorescence recovery, we analyzed the FRAP movies using MATLAB as follows. For each time point, we generated a weighted sum of all six z-slices after background subtraction. Background was defined as 50% of the cytoplasmic level of Sqh-GFP. We used 50% rather than 100% of the cytoplasmic intensity as background in order to capture the initial recovery of the cortical signals. The weight for each slice was proportional to the Sqh-GFP signal from the unbleached control region after subtracting the cytoplasmic signal. We then measured the fluorescence intensities from the summed images for both the bleached region and the control unbleached region. The fluorescence intensities were normalized to between 0 and 1 for each region. To acquire the half recovery time t1/2, we took the ratio between the intensities measured within the bleached region and the control region from the same embryo. In both wild-type and dunk mutant embryos, the bleached region becomes nearly fully recovered once the overall myosin loss is compensated for. We therefore defined the time when the ratio reaches 0.5 as the half recovery time. Measurements from seven wild-type and 16 dunk1 mutant embryos were analyzed, and the average t1/2 was reported.

Computer simulation of a contractile network

In order to demonstrate how tension drives anisotropic myosin flow and how myosin turnover at the cortex affects the mechanics of the network, we generated a computer model to simulate the behavior of an interconnected contractile network. See supplementary Materials and Methods for full details of the model and the source code for image analysis and computational modeling.
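As a small numerical illustration of the half-recovery-time estimate described in the FRAP section above, the following R sketch (not the authors' MATLAB code) corrects the bleached-region trace for acquisition photobleaching using the unbleached control, normalizes the recovery to [0, 1] and linearly interpolates the first crossing of 0.5. The simulated traces are purely illustrative:

frap_half_time = function(t, bleached, control) {
  ## correct for acquisition photobleaching using the unbleached control
  corrected = bleached / (control / control[1])
  ## normalize the corrected recovery to [0, 1]
  r = (corrected - min(corrected)) / (max(corrected) - min(corrected))
  i = which(r >= 0.5)[1]
  if (is.na(i) || i == 1) return(NA_real_)
  ## linear interpolation between the two bracketing frames
  t[i - 1] + (0.5 - r[i - 1]) / (r[i] - r[i - 1]) * (t[i] - t[i - 1])
}

t = seq(0, 600, by = 2.35)                        # frame interval used above
control = exp(-t / 2000)                          # mild acquisition bleaching
bleached = (1 - exp(-t * log(2) / 70)) * control  # recovery with t1/2 = 70 s
frap_half_time(t, bleached, control)              # approximately 70 s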
OpenML: An R Package to Connect to the Machine Learning Platform OpenML

OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its usage in combination with the machine learning R package mlr. We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks. Furthermore, we also show how to upload results of experiments, share them with others and download results from other users. Beyond ensuring reproducibility of results, the OpenML platform automates much of the drudge work, speeds up research, facilitates collaboration and increases the users' visibility online.

Introduction

OpenML is an online machine learning platform for sharing and organizing data, machine learning algorithms and experiments. It is designed to create a frictionless, networked ecosystem (Nielsen, 2012), allowing people all over the world to collaborate and build directly on each other's latest ideas, data and results. Key elements of OpenML are data sets, tasks, flows and runs:

- Data sets can be shared (under a licence) by uploading them or simply linking to existing data repositories (e.g., mldata.org, figshare.com). For known data formats (e.g., ARFF for tabular data), OpenML will automatically analyze and annotate the data sets with measurable characteristics to support detailed search and further analysis. Data sets can be repeatedly updated or changed and are then automatically versioned.
- Tasks can be viewed as containers including a data set and additional information defining what is to be learned. They define which input data are given and which output data should be obtained. For instance, classification tasks will provide the target feature, the evaluation measure (e.g., the area under the curve) and the estimation procedure (e.g., cross-validation splits) as inputs. As output they expect a description of the machine learning algorithm or workflow that was used and, if available, its predictions.
- Flows are implementations of single machine learning algorithms or whole workflows that solve a specific task, e.g., a random forest implementation is a flow that can be used to solve a classification or regression task. Ideally, flows are already implemented (or custom) algorithms in existing software that take OpenML tasks as inputs and can automatically read and solve them. They also contain a list (and description) of possible hyperparameters that are available for the algorithm.
- Runs are the result of executing flows, optionally with preset hyperparameter values, on tasks and contain all expected outputs and evaluations of these outputs (e.g., the accuracy of predictions). Runs are fully reproducible because they are automatically linked to specific data sets, tasks, flows and hyperparameter settings. They also include the authors of the run and any additional information provided by them, such as runtimes.

Similar to data mining challenge platforms (e.g., Kaggle; Carpenter, 2011), OpenML evaluates all submitted results (using a range of evaluation measures) and compares them online. The difference, however, is that OpenML is designed for collaboration rather than competition: anyone can browse, immediately build on and extend all shared results.
As an open science platform, OpenML provides important benefits for the science community and beyond.

Benefits for Science: Many sciences have made significant breakthroughs by adopting online tools that help organizing, structuring and analyzing scientific data online (Nielsen, 2012). Indeed, any shared idea, question, observation or tool may be noticed by someone who has just the right expertise to spark new ideas, answer open questions, reinterpret observations or reuse data and tools in unexpected new ways. Therefore, sharing research results and collaborating online as a (possibly cross-disciplinary) team enables scientists to quickly build on and extend the results of others, fostering new discoveries. Moreover, ever larger studies become feasible as a lot of data are already available. Questions such as "Which hyperparameter is important to tune?", "Which is the best known workflow for analyzing this data set?" or "Which data sets are similar in structure to my own?" can be answered in minutes by reusing prior experiments, instead of spending days setting up and running new experiments (Vanschoren et al., 2012).

Benefits for Scientists: Scientists can also benefit personally from using OpenML. For example, they can save time, because OpenML assists in many routine and tedious duties: finding data sets, tasks, flows and prior results, setting up experiments and organizing all experiments for further analysis. Moreover, new experiments are immediately compared to the state of the art without always having to rerun other people's experiments. Another benefit is that linking one's results to those of others has a large potential for new discoveries (Feurer et al., 2015; Post et al., 2016; Probst et al., 2017), leading to more publications and more collaboration with other scientists all over the world. Finally, OpenML can help scientists to reinforce their reputation by making their work (published or not) visible to a wide group of people and by showing how often one's data, code and experiments are downloaded or reused in the experiments of others.

Benefits for Society: OpenML also provides a useful learning and working environment for students, citizen scientists and practitioners. Students and citizen scientists can easily explore the state of the art and work together with top minds by contributing their own algorithms and experiments. Teachers can challenge their students by letting them compete on OpenML tasks or by reusing OpenML data in assignments. Finally, machine learning practitioners can explore and reuse the best solutions for specific analysis problems, interact with the scientific community or efficiently try out many possible approaches.

The remainder of this paper is structured as follows. First, we discuss the web services offered by the OpenML server and the website on OpenML.org that allows web access to all shared data and several tools for data organization and sharing. Second, we briefly introduce the mlr package (Bischl et al., 2016; Schiffner et al., 2016), which is a machine learning toolbox for R (R Core Team, 2016) and offers a unified interface to many machine learning algorithms. Third, we discuss and illustrate some important functions of the OpenML package. After that, we illustrate its usage in combination with the mlr R package by conducting a short case study. Finally, we conclude with a discussion and an outlook to future developments.
The OpenML Platform

The OpenML platform consists of several layers of software:

Web API: Any application (or web application) can communicate with the OpenML server through the extensive Web API, an application programming interface (API) that offers a set of calls (e.g., to download/upload data) using representational state transfer (REST), which is a simple, lightweight communication mechanism based on standard HTTP requests. Data sets, tasks, flows and runs can be created, read, updated, deleted, searched and tagged through simple HTTP calls. An overview of calls is available on http://www.openml.org/api_docs.

Website: OpenML.org is a website offering easy browsing, organization and sharing of all data, code and experiments. It allows users to easily search and browse all shared data sets, tasks, flows and runs, as well as to compare and visualize all combined results. It provides an easy way to check and manage your experiments anywhere, anytime and discuss them with others online. See Figure 1 for a few screenshots of the OpenML website.

Programming Interfaces: OpenML also offers interfaces in multiple programming languages, such as the R interface presented here, which hide the API calls and allow scientists to interact with the server using language-specific functions. As we demonstrate below, the OpenML R package allows R users to search and download data sets and upload the results of machine learning experiments in just a few lines of code. Other interfaces exist for Python, Java and C# (.NET). For tools that usually operate through a graphical interface, such as WEKA (Hall et al., 2009), MOA (Bifet et al., 2010) and RapidMiner (van Rijn et al., 2013), plug-ins exist that integrate OpenML sharing facilities.

OpenML is organized as an open source project, hosted on GitHub (https://github.com/openml), and is free to use under the CC-BY licence. When uploading new data sets and code, users can select under which licence they wish to share the data; OpenML will then state licences and citation requests online and in descriptions downloaded from the Web API. OpenML has an active developer community and everyone is welcome to help extend it or post new suggestions through the website or through GitHub. Currently, there are close to 1 700 000 runs on about 20 000 data sets and 3 500 unique flows available on the OpenML platform. While still in beta development, it has over 1 400 registered users, over 1 800 frequent visitors and the website is visited by around 200 unique visitors every day, from all over the world. It currently has server-side support for classification, regression, clustering, data stream classification, learning curve analysis, survival analysis and machine learning challenges for classroom use.

Fig. 1 Screenshots of the OpenML website. The top part shows the data set 'autos', with wiki description and descriptive overview of the data features. The bottom part shows a classification task, with an overview of the best submitted flows with respect to the predictive accuracy as performance measure. Every dot here is a single run (further to the right is better).

The mlr R Package

The mlr package (Bischl et al., 2016; Schiffner et al., 2016) offers a clean, easy-to-use and flexible domain-specific language for machine learning experiments in R. An object-oriented interface is adopted to unify the definition of machine learning tasks, the setup of learning algorithms, the training of models, predicting and evaluating the algorithm's performance.
This unified interface hides the actual implementations of the underlying learning algorithms. Replacing one learning algorithm with another becomes as easy as changing a string. Currently, mlr has built-in support for classification, regression, multilabel classification, clustering and survival analysis and includes in total 160 modelling techniques. A complete list of the integrated learners, instructions on how to integrate your own learners, as well as further information on the mlr package can be found in the corresponding tutorial (http://mlr-org.github.io/mlr-tutorial/).

A plethora of further functionality is implemented in mlr, e.g., assessment of generalization performance, comparison of different algorithms in a scientifically rigorous way, feature selection and algorithms for hyperparameter tuning, including Iterated F-Racing and Bayesian optimization with the package mlrMBO. On top of that, mlr offers a wrapper mechanism, which makes it possible to extend learners through pre-train, post-train, pre-predict and post-predict hooks. A wrapper extends the current learner with added functionality and expands the hyperparameter set of the learner with additional hyperparameters provided by the wrapper. Currently, many wrappers are available, e.g., missing value imputation, class imbalance correction, feature selection, tuning, bagging and stacking, as well as a wrapper for user-defined data pre-processing. Wrappers can be nested in other wrappers, which can be used to create even more complex workflows. The package also supports parallelization on different levels based on different parallelization backends (local multicore, socket, MPI) with the package parallelMap (Bischl and Lang) or on managed high-performance systems via the package batchtools. Furthermore, visualization methods for research and teaching are also supplied.

The OpenML package makes use of mlr as a supporting package. It offers methods to automatically run mlr learners (flows) on OpenML tasks while hiding all of the necessary structural transformations (see Section 4.4).

The OpenML R Package

The OpenML R package (Casalicchio et al., 2017) is an interface to interact with the OpenML server directly from within R. Users can retrieve data sets, tasks, flows and runs from the server and also create and upload their own. This section details how to install and configure the package and demonstrates its most important functionalities.

Installation and Configuration

To interact with the OpenML server, users need to authenticate using an API key, a secret string of characters that uniquely identifies the user. It is generated and shown on the user's profile page after registering on the website http://www.openml.org. For demonstration purposes, we will use a public read-only API key that only allows to retrieve information from the server and should be replaced with the user's personal API key to be able to use all features. The R package can be easily installed and configured as follows:

install.packages("OpenML")
library("OpenML")
saveOMLConfig(apikey = "c1994bdb7ecb3c6f3c8f3b35f4b47f1f")

The saveOMLConfig function creates a config file, which is always located in a folder called .openml within the user's home directory. This file stores the user's API key and other configuration settings, which can always be changed manually or through the saveOMLConfig function. Alternatively, the setOMLConfig function allows to set the API key and the other entries temporarily, i.e., only for the current R session.
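A session-only configuration might then look as follows (a sketch: the key is the public read-only demo key from above, and verbosity is one of the optional configuration entries; getOMLConfig prints the configuration currently in effect):

setOMLConfig(apikey = "c1994bdb7ecb3c6f3c8f3b35f4b47f1f", verbosity = 1)
getOMLConfig()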
Listing Information

In this section, we show how to list the available OpenML data sets, tasks, flows and runs using listing functions that always return a data.frame containing the queried information. Each data set, task, flow and run has a unique ID, which can be used to access it directly.

Listing Data Sets and Tasks: A list of all data sets and tasks that are available on the OpenML server can be obtained using the listOMLDataSets and listOMLTasks functions, respectively. Each entry provides information such as the ID, the name and basic characteristics (e.g., number of features, number of observations, classes, missing values) of the corresponding data set. In addition, the list of tasks contains information about the task type (e.g., "Supervised Classification"), the evaluation measure (e.g., "Predictive Accuracy") and the estimation procedure (e.g., "10-fold Crossvalidation") used to estimate model performance. Note that multiple tasks can be defined for a specific data set; for example, the same data set can be used for multiple task types (e.g., classification and regression tasks) as well as for tasks differing in their estimation procedure, evaluation measure or target value. To find data sets or tasks that meet specific requirements, one can supply arguments to the listing functions. In the example below, we list all supervised classification tasks based on data sets having two classes for the target feature, between 500 and 999 instances, at most 100 features and no missing values:

tasks = listOMLTasks(task.type = "Supervised Classification",
  number.of.classes = 2, number.of.instances = c(500, 999),
  number.of.features = c(1, 100), number.of.missing.values = 0)

Listing Flows and Runs: When using the mlr package, flows are basically learners from mlr, which, as stated previously, can also be a more complex workflow when different mlr wrappers are nested. Custom flows can be created by integrating custom machine learning algorithms and wrappers into mlr. The list of all available flows on OpenML can be downloaded using the listOMLFlows function. Each entry contains information such as its ID, its name, its version and the user who first uploaded the flow to the server. Note that the list of flows will not only contain flows created with R, but also flows from other machine learning toolkits, such as WEKA (Hall et al., 2009), MOA (Bifet et al., 2010) and scikit-learn (Pedregosa et al., 2011), which can be recognized by the name of the flow.

When a flow, along with a specific setup (e.g., specific hyperparameter values), is applied to a task, it creates a run. The listOMLRuns function lists all runs that, for example, refer to a specific task.id or flow.id. To also list the evaluations of these runs, as computed by the server, the listOMLRunEvaluations function can be used. In Figure 2, we used ggplot2 (Wickham, 2009) to visualize the predictive accuracy of runs, for which only flows created with mlr were applied to the task with ID 37:

res = listOMLRunEvaluations(task.id = 37, tag = "openml_r_paper")
res$flow.name = reorder(res$flow.name, res$predictive.accuracy)
library("ggplot2")
ggplot(res, aes(x = predictive.accuracy, y = flow.name)) +
  geom_point() + xlab("Predictive Accuracy") + ylab("Flow Name")

Fig. 2 Predictive accuracy of runs on the task with ID 37, grouped by the mlr flows that produced them (the flow names, e.g., mlr.classif.rpart, mlr.classif.ctree, mlr.classif.C50, mlr.classif.kknn, mlr.classif.rpart.bagged, mlr.classif.ksvm, mlr.classif.svm, mlr.classif.C50.bagged and mlr.classif.randomForest, are listed on the vertical axis).

Downloading OpenML Objects

Most of the listing functions described in the previous section will list entities by their OpenML IDs, e.g., the task.id for tasks, the flow.id for flows and the run.id for runs.
In this section, we show how these IDs can be used to download a certain data set, task, flow or run from the OpenML server. All downloaded data sets, tasks, flows and runs will be stored in the cachedir directory, which is located in the .openml folder by default but can also be specified in the configuration file (see Section 4.1). Before downloading an OpenML object, the cache directory is checked to see whether the object is already available in the cache. If so, no internet connection is necessary and the requested object is retrieved from the cache.

Downloading Data Sets and Tasks: The getOMLDataSet function returns an S3 object of class OMLDataSet that contains the data set as a data.frame in a $data slot, in addition to some pieces of meta-information:

ds = getOMLDataSet(data.id = 15)
ds
##
## Data Set "breast-w" :: (Version = 1, OpenML ID = 15)
## Default Target Attribute: Class

To retrieve tasks, the getOMLTask function can be used with the corresponding task ID. Note that the ID of a downloaded task is not equal to the ID of the data set. Each task is returned as an S3 object of class OMLTask and contains the OMLDataSet object as well as the predefined estimation procedure, evaluation measure and the target feature in an additional $input slot. Further technical information can be found in the package's help pages.

Downloading Flows and Runs: The getOMLFlow function downloads all information of the flow, such as the name, all necessary dependencies and all available hyperparameters that can be set. If the flow was created in R, it can be converted into an mlr learner using the convertOMLFlowToMlr function:

mlr.lrn = convertOMLFlowToMlr(getOMLFlow(4782))
mlr.lrn
## Learner classif.randomForest from package randomForest
## Type: classif
## Name: Random Forest; Short name: rf
## Class: classif.randomForest
## Properties: twoclass,multiclass,numerics,factors,ordered,prob,class.weights
## Predict-Type: response
## Hyperparameters:

This allows users to apply the downloaded learner to other tasks or to modify the learner using functions from mlr and produce new runs. The getOMLRun function downloads a single run and returns an OMLRun object containing all information that is connected to this run, such as the ID of the task and the ID of the flow. The most important information for reproducibility, next to the exact data set and flow version, are the hyperparameter and seed settings that were used to create this run. This information is contained in the OMLRun object and can be extracted via getOMLRunParList(run) and getOMLSeedParList(run), respectively. If the run solves a supervised regression or classification task, the corresponding predictions can be accessed via run$predictions and the evaluation measures computed by the server via run$output.data$evaluations.

Creating Runs

The easiest way to create a run is to define a learner, optionally with preset hyperparameter values, using the mlr package. Each mlr learner can then be applied to a specific OMLTask object using the function runTaskMlr. This will create an OMLMlrRun object, whose results can be uploaded to the OpenML server as described in the next section.
For example, a random forest from the randomForest R package (Liaw and Wiener, 2002) can be instantiated using the makeLearner function from mlr and can be applied to a classification task via:

lrn = makeLearner("classif.randomForest", mtry = 2)
task = getOMLTask(task.id = 37)
run.mlr = runTaskMlr(task, lrn)

To run a previously downloaded OpenML flow, one can use the runTaskFlow function, optionally with a list of hyperparameters:

flow = getOMLFlow(4782)
run.flow = runTaskFlow(task, flow, par.list = list(mtry = 2))

To display benchmarking results, one can use the convertOMLMlrRunToBMR function to convert one or more OMLMlrRun objects to a single BenchmarkResult object from the mlr package, so that several powerful plotting functions (see http://mlr-org.github.io/mlr-tutorial/release/html/benchmark_experiments for examples) from mlr can be applied to that object (see, e.g., Figure 3).

Uploading and Tagging

Uploading OpenML Objects: It is also possible to upload data sets, flows and runs to the OpenML server to share and organize experiments and results online. Data sets, for example, are uploaded with the uploadOMLDataSet function. OpenML will activate the data set if it passes all checks, meaning that it will be returned in listing calls. Creating tasks from data sets is currently only possible through the website, see http://www.openml.org/new/task.

OMLFlow objects can be uploaded to the server with the uploadOMLFlow function and are automatically versioned by the server: when a learner is uploaded carrying a different R or package version, a new version number and flow.id are assigned. If the same flow has already been uploaded to the server, a message that the flow already exists is displayed and the associated flow.id is returned. Otherwise, the flow is uploaded and a new flow.id is assigned to it:

lrn = makeLearner("classif.randomForest")
flow.id = uploadOMLFlow(lrn)

A run created with the runTaskMlr or the runTaskFlow function can be uploaded to the OpenML server using the uploadOMLRun function. The server will then automatically compute several evaluation measures for this run, which can be retrieved using the listOMLRunEvaluations function as described previously.

Tagging and Untagging OpenML Objects: The tagOMLObject function can tag data sets, tasks, flows and runs with a user-defined string, so that finding OpenML objects with a specific tag becomes easier. For example, the task with ID 1 can be tagged as follows:

tagOMLObject(id = 1, object = "task", tags = "test-tagging")

To retrieve a list of objects with a given tag, the tag argument of the listing functions can be used (e.g., listOMLTasks(tag = "test-tagging")). The listing functions for data sets, tasks, flows and runs also show the tags that were already assigned; for example, we already tagged data sets from UCI (Asuncion and Newman, 2007) with the string "uci" so that they can be queried using listOMLDataSets(tag = "uci").
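Putting the steps from this and the previous sections together, a complete download-run-upload workflow might look as follows. This is a sketch: the task ID and hyperparameter value are arbitrary, and it assumes that uploadOMLRun returns the ID assigned to the new run and that listOMLRunEvaluations accepts a run.id filter analogous to the task.id filter used earlier:

library("OpenML")
library("mlr")
task = getOMLTask(task.id = 37)                       # download a task
lrn = makeLearner("classif.randomForest", mtry = 2)   # define the flow/setup
run = runTaskMlr(task, lrn)                           # create the run locally
run.id = uploadOMLRun(run, tags = "my_experiment")    # share it on the server
## once the server has computed the evaluation measures:
listOMLRunEvaluations(run.id = run.id)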
In order to remove one or more tags from an OpenML object, the untagOMLObject function can be used; however, only self-created tags can be removed, e.g.:

untagOMLObject(id = 1, object = "task", tags = "test-tagging")

Further Features

Besides the aforementioned functionalities, the OpenML package allows users to fill up the cache directory by downloading multiple objects at once (using the populateOMLCache function), to remove all files from the cache directory (using clearOMLCache), to get the current status of cached data sets (using getCachedOMLDataSetStatus), to delete OpenML objects created by the uploader (using deleteOMLObject), to list all estimation procedures (using listOMLEstimationProcedures) as well as all available evaluation measures (using listOMLEvaluationMeasures), and to get more detailed information on data sets (using getOMLDataSetQualities).

Case Study

In this section, we illustrate the usage of OpenML by performing a small comparison study between a random forest, bagged trees and single classification trees. We first create the respective binary classification learners using mlr, then query OpenML for suitable tasks, apply the learners to the tasks and finally evaluate the results.

Evaluating Results

We now apply all learners from lrn.list to the selected tasks using the runTaskMlr function and use the convertOMLMlrRunToBMR function to create a single BenchmarkResult object containing the results of all experiments. This allows using, for example, the plotBMRBoxplots function from mlr to visualize the experiment results (see Figure 3). We can upload and tag the runs, e.g., with the string "study_30", to facilitate finding and listing the results of the runs using this tag:

lapply(runs, uploadOMLRun, tags = "study_30")

The server will then compute all possible measures, which takes some time depending on the number of runs. The results can then be listed using the listOMLRunEvaluations function and visualized using the ggplot2 package:

evals = listOMLRunEvaluations(tag = "study_30")
evals$learner.name = as.factor(evals$learner.name)
evals$task.id = as.factor(evals$task.id)
library("ggplot2")
ggplot(evals, aes(x = data.name, y = predictive.accuracy,
  colour = learner.name, group = learner.name,
  linetype = learner.name, shape = learner.name)) +
  geom_point() + geom_line() +
  ylab("Predictive Accuracy") + xlab("Data Set") +
  theme(axis.text.x = element_text(angle = -45, hjust = 0))

Figure 4: Results of the produced runs. Each point represents the averaged predictive accuracy over all cross-validation iterations generated by running a particular learner on the respective task.

Figure 4 shows the cross-validated predictive accuracies of our six learners on the considered tasks. Here, the random forest produced the best predictions, except on the tic-tac-toe data set, where the bagged C50 trees achieved a slightly better result. In general, the two bagged trees performed marginally worse than the random forest and better than the single tree learners.

Conclusion and Outlook

OpenML is an online platform for open machine learning that aims at connecting researchers who deal with any part of the machine learning workflow. The OpenML platform automates the sharing of machine learning tasks and experiments through the tools that scientists are already using, such as R.
The OpenML package introduced in this paper makes it easy to share and reuse data sets, tasks, flows and runs directly from the current R session, without the need to use other programming environments or the web interface. Current work focuses on making it possible to connect to OpenML via browser notebooks (https://github.com/everware) and to run analyses directly on online servers, without having R or any other software installed locally. In the future, users will also be able to specify with whom they want to share, e.g., data sets.
5,838
2017-01-05T00:00:00.000
[ "Computer Science" ]
Two-Dimensional Study of Methanol Internal-Overall Rotation in an Argon Matrix

In the B3LYP/cc-pVTZ approximation, the geometry of a methanol molecule surrounded by eight argon atoms has been optimized. By independent rotation of the methyl and hydroxyl groups at a fixed position of the C-O bond relative to the argon atoms, a two-dimensional grid of values of the internal-overall rotation energy was obtained. Although the energy was initially calculated for 65 points in the square $2\pi \times 2\pi$, the $2\pi/3$ period of the methyl group rotation allowed the number of points to be increased to 195. An analytical approximation for the internal rotation energy was found. The two-dimensional Schrödinger equation for internal rotation and overall rotation of a rotator with a fixed axis was solved; energy levels, wave functions and transition probabilities were found. According to the results of these computations, the degeneracy of the E-type states is relieved, with an increase in the splitting of the ground torsional state.

Introduction

It is known that low-temperature IR spectra of molecules in matrix isolation are characterized by narrow absorption bands, due to the absence of intermolecular interactions and of rotational structure. However, matrix effects do exist [1][2][3] and they are often able to significantly complicate the interpretation of the spectra. These effects are mainly determined by the stacking order of the matrix atoms around the trapped molecule. Therefore, modelling of the matrix structure and its influence on the vibrational spectra of trapped molecules is an interesting and important task. Theoretical study of the matrix influence on trapped molecules began only two decades after the first experimental works. Effective interaction potentials were studied first, using semi-empirical and ab initio calculations [4,5]. Later, the supermolecule approach was applied to examine the effects of adding 3 Ne or 3 Ar atoms on the structure and spectra of ammonia-hydrogen halide complexes [6,7]. Significant effects were calculated, particularly for the HBr complex. Latajka [8] examined the effects of adding up to 4 N2 molecules to the ammonia-hydrogen chloride complex and found that 3 N2 molecules gave results similar to a cavity model in a medium with a relative permittivity of 1.5. Matrix effects on proton transfer in hydrogen-bonded molecular complexes were studied by Barnes [9]. As demonstrated by recent studies [10][11][12], the matrix affects the values of the internal rotation barriers in a methanol molecule, which is reflected in changes of the splitting of the torsional states compared with the gas phase. Previously [13], in FTIR spectra of methanol recorded at a temperature of 10 K in an argon matrix, such an effect was considered as one of the possible mechanisms responsible for the appearance of a multiplet structure of some absorption bands, in particular a doublet of bands at 1033.25 and 1036.5 cm-1 caused by the stretching vibration of the C-O bond. As far as we know, at the moment there are no theoretical works that investigate the influence of the matrix environment on the torsional motion of the methanol molecule. In [14] a geometric model of a methanol molecule in the environment of eight argon atoms was proposed and, approximately, using a one-dimensional approach, the problem of finding the torsional energy levels was solved.
According to the computational results obtained, the splitting of the ground torsional state is somewhat increased. Besides, splitting of the E-type degenerate states occurs due to the relieved degeneracy. However, a more accurate two-dimensional solution is possible and is presented in this paper.

Computation Method

In the B3LYP/cc-pVTZ approximation, a configuration of the complex including a molecule of methanol and eight argon atoms was optimized using the GAMESS package [15]. The gradient convergence tolerance parameter OPTTOL was set to $10^{-5}$. No imaginary frequencies were found for the optimized configuration. The positions of the Ar, C and O atoms were fixed, and independent rotation of the O-H and CH3 groups with respect to the argon atoms was performed. Zero values of $\varphi_{\mathrm{OH}}$ and $\varphi_{\mathrm{CH}_3}$ were assigned to the optimized configuration. As it was found that the internal forces in a methanol molecule are greater than the forces of interaction with the matrix, on optimization over all the parameters characterizing the position of the CH3OH molecule in the matrix, rotation of the methyl group relative to the argon atoms was accompanied by rotation of the hydroxyl group, and vice versa. Because of this, when the methyl group was rotated in steps of 50º, the hydroxyl group position was additionally fixed relative to the matrix, and vice versa: for rotation of the hydroxyl group, the position of the methyl group was fixed. Besides, all other internal parameters of CH3OH were optimized. Then, for every value of $\varphi_{\mathrm{OH}}$ = 0º, 50º, 100º, 150º, 200º, 250º, 300º, 350º relative to the argon atoms, the values $\varphi_{\mathrm{CH}_3}$ = 0º, 50º, 100º, 150º, 200º, 250º, 300º, 350º relative to the argon atoms were taken. As an approximation in the subsequent computations, the geometric parameters of the methyl and hydroxyl tops were considered constant; the moments of inertia were taken as in [14]. Because of the 120º period of rotation of the methyl group, cloning of the points was performed using the periodicity relation for the internal rotation energy

$$U(\varphi_{\mathrm{CH}_3} + 2\pi/3,\, \varphi_{\mathrm{OH}}) = U(\varphi_{\mathrm{CH}_3},\, \varphi_{\mathrm{OH}}), \qquad (1)$$

and an analytical expression for the potential energy of internal rotation was derived (2). Substituting (2) into (1), we obtain the analytical expression (3) of the internal rotation energy for methanol in the argon matrix in the coordinates s, t; in expression (3), however, the coefficient of s is not a rational number, and the internal rotation of a methanol molecule in the argon matrix is therefore aperiodic. At the same time, as seen from Fig. 1, the principal change in the internal energy of the molecule is caused by a change of the internal rotation angle through s.

No Rotation of the Methanol Molecule as a Whole with Respect to the Argon Matrix

Experimental spectra of methanol in the argon matrix demonstrate the absence of rotational structure of the vibrational absorption bands, as there is actually no rotation of the molecule as a whole, yet they exhibit the bands due to torsional motion in CH3OH. Because of this, it is desirable to consider the case of pure internal rotation in this molecule. According to the computational results, a minimum of the potential surface (2), and likewise of the surface U(s, t), is attained at the point with coordinates s = t = 0. When there is no rotation of the molecule as a whole, internal rotation is realized along the axis s. We can derive the form of the curve of the internal rotation potential energy from U(s, t) by setting t equal to zero. The curve U(s, 0), its periodic part and the perturbation, are given in Fig. 2.
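As a reading aid for the analytical approximation referred to in (1)-(3), a truncated double Fourier expansion consistent with the $2\pi/3$ periodicity of the methyl top has the generic form (the coefficients $a_{nm}$ and $b_{nm}$ are placeholders, not the paper's fitted values):

$$U(\varphi_{\mathrm{CH}_3},\varphi_{\mathrm{OH}}) \approx \sum_{n,m}\left[a_{nm}\cos\!\left(3n\,\varphi_{\mathrm{CH}_3}+m\,\varphi_{\mathrm{OH}}\right)+b_{nm}\sin\!\left(3n\,\varphi_{\mathrm{CH}_3}+m\,\varphi_{\mathrm{OH}}\right)\right],$$

which automatically satisfies relation (1) for any integers n and m.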
The Schrödinger equation for internal rotation takes the form (4), where the function U(s, 0) has more than 30 terms. As the perturbation potential we take all terms in U(s, 0) apart from the two periodic ones, which are represented by the function U_S(s) (5). Equation (4) then assumes the form (6). In this case, finding the eigenvalues and eigenfunctions in the zeroth approximation is completely equivalent to [14]. We then calculate the Hamiltonian matrix with elements of the form (7). By diagonalization of the Hamiltonian matrix, we have found the energies and wave functions of the torsional states of a methanol molecule in the argon matrix. The energy values are listed in Tab. 1; their positions with respect to the potential curve are given in Fig. 3 together with some wave functions. It is obvious that a change in the molecular dipole moment is due to rotation of the hydroxyl group. As the projection of the dipole moment onto the C-O axis is invariant under rotation of this group, its components change only in the plane perpendicular to the rotation axis.

Rotation of the Methanol Molecule in the Argon Matrix with Respect to the C-O Axis

If we assume that there is no internal rotation in methanol, but there is rotation of the molecule as a whole with respect to the axis coincident with the C-O bond, then such a rotation is no longer free in the surroundings of argon atoms. The function for the molecular rotation potential energy may be derived from U(s, t) provided s is zero. The curve U(0, t) is shown in Fig. 5. To find the energy levels, we solve a Schrödinger equation of the form (10), with the potential energy given by (11, 12). The solution of (10)-(12) is similar to that of equations (6, 7). Eigenvalues of the energy for the pure rotational levels are listed in Tab. 3. Positions of the energy levels and some squared wave functions are shown in Fig. 6. As seen, the positions of the pure rotational levels differ significantly from those for free rotation of a fixed-axis rotator, for which

$$E_k = D k^2. \qquad (13)$$

Using (13) and considering that D = 4.209 cm-1, one can easily find the energies of the rotational levels of the fixed-axis rotator.

The Case of Motion of Two Types

The results presented in Sections 3 and 4 show that the potential barriers of internal rotation are considerably higher than those associated with rotation of the methanol molecule as a whole about the C-O bond in the surroundings of argon atoms. In principle, there are no factors capable of strictly forbidding both types of motion simultaneously. Because of this, we must consider this case as well. As the potential energy is a periodic function of both coordinates $\varphi_{\mathrm{OH}}$ and $\varphi_{\mathrm{CH}_3}$, the problem of two-dimensional large-amplitude motion is readily solved with the use of precisely these coordinates. The Schrödinger equation in this case takes the form (14), where the coefficients are given by (15). Let the potential energy be given in a more general form than (1), as in (16). Substituting (15) and (16) into (14), we obtain (17) and (18); instead of (17), we have (19). Then we construct a finite matrix with dimensions $(2c+1)^2 \times (2c+1)^2$, $c \in \mathbb{N}$; this means that n and m vary from -c to c in steps of one. From (19) we derive (20), which we then use; the computed levels are shown in Fig. 7. Matrix elements of the dipole moment components can be computed with the use of the obtained wave functions. The squared matrix elements of the dipole moment operator are given in Tab. 5. Fig. 8 shows the computed IR absorption spectrum.
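The diagonalization used above is the standard free-rotor (plane-wave) basis construction; writing the periodic potential through its Fourier coefficients, the Hamiltonian matrix elements take the generic form below (a sketch of the standard technique, with F the internal-rotation constant, rather than the paper's specific expression (7)):

$$H_{mn} = F\,n^{2}\,\delta_{mn} + U_{m-n}, \qquad U(s)=\sum_{k}U_{k}\,e^{iks},$$

so that truncating $|n|,|m|\le c$ and diagonalizing the resulting $(2c+1)\times(2c+1)$ matrix yields the torsional levels; the two-dimensional case treated later leads in the same way to matrices of dimension $(2c+1)^2 \times (2c+1)^2$.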
Discussion

Despite the fact that in the case of torsional-rotational motion there is a greater number of computed vibrational frequencies over the spectral region 0-500 cm-1 than in the case of pure torsional motion, with due regard for the dipole moment matrix elements the spectra shown in Figs. 4 and 8 turn out to be very similar. Inclusion of the rotation leads to a blue shift of the doublet of absorption bands in the region 0-50 cm-1 and to a considerable increase of its relative intensity. A similar blue shift is also observed for the doublet of absorption bands in the interval 200-250 cm-1. The presence of a high-intensity absorption band in the spectral region 275-300 cm-1 seems to be the principal feature distinguishing the torsional spectrum from the torsional-rotational IR spectrum. With inclusion of the rotational motion, a low-intensity doublet of absorption bands is observed in this interval. Though the proposed model of the argon matrix effect on the torsional-rotational spectrum of a methanol molecule is far from perfect, it is interesting to correlate the computational results with experimental data. Unfortunately, in [13] IR spectra of a methanol molecule in the argon matrix were recorded only over the region 400-4000 cm-1. In [17] the spectrum given for methanol in the argon matrix exhibits an absorption band at 271.5 cm-1 and reveals a growth of intensity in the region of 213 cm-1, where the recording of the spectrum comes abruptly to an end. As a result, computations of normal vibrations were performed considering only the band at 271.5 cm-1. In [18] an IR spectrum of a methanol molecule in the argon matrix is given in the region 40-4000 cm-1. Taking into account the limitations of the model and the assumptions made in the computations, the similarity of the computational results and the experimental spectra is remarkable. Some of the absorption bands in the interval 40-120 cm-1 are assigned by the authors as phonon bands. In the spectral region 40-400 cm-1, apart from these bands, two more absorption bands are observed, at 223 and 272 cm-1, the first of them having the higher intensity. Besides, the half-width of the band at 223 cm-1 is no less than 10 cm-1, which is not typical of matrix-isolation spectra recorded at 10 K. The band profile seems to be indicative of a doublet of overlapping absorption bands. As the intensity of the absorption band at 272 cm-1 is rather high, the computed torsional spectrum of the methanol molecule seems to be the closer to the experimental one. Thus, the band at 272 cm-1 may be correlated with the computed absorption band at 288.06 cm-1, and the wide high-intensity band at 223 cm-1 with the doublet of computed bands at 204.82 and 217.76 cm-1. One might assume that the good agreement between the theoretical and experimental spectra is due to the fact that in all cases we deal with practically undisturbed torsional motion in the methanol molecule. But, according to [14,[19][20][21][22], a pure torsional spectrum of a methanol molecule recorded at a temperature below 10 K and constructed on the basis of the selection rules (A ⇔ A, E ⇔ E) should be represented, in the spectral interval 0-400 cm-1, by two absorption bands with frequencies 294.5 and 353.2 cm-1. In the case of an insignificant population of the first degenerate torsional state of the E type, there is a possibility of observing a low-intensity absorption band with frequency 199.8 cm-1.
It is clear that the matrix effect leads both to the frequency shift and to a significant change of the transition probability, and this is supported by the presence of two high-intensity absorption bands in the IR spectrum of methanol in the argon matrix over the region 200-300 cm-1.

Conclusions

For the case in which a molecule of methanol is surrounded by eight argon atoms, the authors have considered internal rotation, rotation of the molecule as a whole with respect to the C-O bond, and their joint motion. The splitting of the deepest doubly degenerate torsional state was found to be 29.7 cm-1. Rotation of the methanol molecule as a whole with respect to the C-O bond takes place in a small decelerating potential, the height of the barrier being no more than 60 cm-1. Because of its elongated form, rotation of a methanol molecule with respect to the axes perpendicular to the C-O bond is unlikely. The computations carried out indicate that a methanol molecule may be considered as a probe in the analysis of matrix properties. By simulating the positions of the argon atoms surrounding a molecule of methanol, we can vary the two-dimensional potential surface and hence the theoretical IR spectrum of the molecule in the matrix. Correlation with experiment is still required to reveal which of the types of argon-atom surroundings is most probable.
3,724
2013-01-07T00:00:00.000
[ "Physics" ]
A Fuzzy Multi-Criteria Evaluation Model for the Coordination of Industrial Agglomeration and Regional Economic Growth

The coordinated development of industrial agglomeration and regional economic growth is of great significance. Based on this theory, this article constructs a fuzzy multi-criteria evaluation model for the coordination of industrial agglomeration and regional economic growth. The model reveals the impact of industrial agglomeration on economic growth by establishing a regression model. The output values of the secondary and tertiary industries are classified, and the initial value method is used for dimensionless processing. The experimental results, based on empirical analysis using the panel unit root test, panel cointegration test, panel regression analysis and gray correlation analysis, show that both capital agglomeration and labor agglomeration in regional industries promote economic growth, and that the correlation degree of the sustainable agglomeration components reached 89.7%, which significantly, if indirectly, promoted the coordinated development of regional economic growth.

Introduction

Netizens conduct various activities related to the economy, daily life, communication and digital information collection through the Internet, and the Internet plays an increasingly important role in the coordinated development of economic growth in production and life [1][2][3][4]. Various industrial agglomerations are carried out through the Internet to meet people's material and spiritual needs. Digital industrial agglomeration is an emerging mode of industrial agglomeration under the network economy. This model has many advantages and has characteristics different from traditional industrial agglomeration. The characteristics of digital industrial agglomeration and its inherent network rules have contributed to the vigorous development of digital industrial agglomeration [5][6][7][8]. Based on the definition and characteristic analysis of digital industrial agglomeration, this article further expounds on the current situation and existing problems of regional digital industrial agglomeration and briefly expounds, at the theoretical level, on the future development trend of regional digital industrial agglomeration. From the perspective of the time-space connection of digital industrial agglomeration, digital industrial agglomeration is completed in the virtual network. This means that a large amount of personal digital information and materials, data, audio, video, etc. is stored in the virtual network. These online storages are convenient and efficient, which facilitates the online industry agglomeration behavior of netizens. However, the question is how to deal with the digital information, materials, audio, video and other network products stored in the virtual space when these netizens pass away [9][10][11]. This creates another problem that will be faced in the digital age: the problem of digital heritage. Digital heritage means that, with the expansion and continuous advancement of digital industrial agglomeration, economic and legal issues have arisen between network platform providers and demanders. This problem is not significant at present, but with the passing of the first generation of netizens, the dispute over digital heritage will come into people's field of vision [12][13][14][15]. This article focuses on the coordinated development of industrial agglomeration and regional economic growth.
The main contributions are as follows: (i) From a macro perspective, this article studies, through regression analysis, the impact of regional residents' income, education level and the Internet on the behavior of digital industry agglomerators. (ii) At the micro level, based on the theory of planned behavior and the technology acceptance model, this article constructs a model of the influences on the willingness of digital industry agglomerators. Through theoretical analysis, we know that the behavior of digital industrial agglomeration depends on the perception and usefulness of digital industrial agglomeration. The rest of the article is structured as follows: Section 2 describes related work. In Section 3, the growth of digital information and the industrial agglomeration of collaborative technology are analyzed in detail. In Section 4, based on the collaborative technology of digital information growth, this article constructs the collaborative development model of industrial agglomeration and regional economic growth. In Section 5, the application of the proposed model is analyzed, and concluding remarks are presented in Section 6.

Related Works

There are many ways to classify the externalities of the coordinated development of regional economic growth, according to differences in the mode of production. A direct network externality means that mutual economic growth and synergistic development arise between industrial agglomerations using the same product or service: the use by one user affects the efficiency of other users, and when the number of users increases, the social value of the tool itself also increases. When, instead, the number of users increases and the complementary products of the product or service also increase while their price decreases, this is an indirect network externality, classified according to the different final results produced by the synergistic development of mutual economic growth among users [16][17][18][19]. Aiming at some shortcomings of the growth pole theory, Nathan et al. [20] proposed the dual (geographical and economic) structure theory by using the dynamic disequilibrium analysis method. In addition to research on developed and backward regions, Voronkova et al. [21] proposed how developed regions can take a coordinating role in leading economic growth and stimulate the development of backward regions, so as to reduce the gap between developed and backward regions. Gureev et al. [22] systematically introduced foreign theoretical results on industrial agglomeration and applied theoretical results from outside the region to conduct a classification and empirical study of regional industrial clusters. Eisebith et al. [23] believe that an industrial cluster is an organized concentration of capital, labor, growth-synergistic technology and entrepreneurs in certain industries, with very strong growth ability and rapid market development, so it must be very attractive to enterprises and organizations outside the cluster. Relevant enterprises and organizations will migrate to cluster areas if they have the means to do so, and this is most prominent in reality in the attraction of outside investment by industrial agglomeration. Yuan et al. [24] studied the nonlinear impact of manufacturing agglomeration on GEE and its action path, theoretically and empirically, by using a dynamic spatial panel Durbin model and a mediating effect model. Shi et al.
[25] measured and analyzed the coupling coordination and spatiotemporal heterogeneity of economic development and the ecological environment in 17 tropical and subtropical regions by geographically and temporally weighted regression. The endogenous growth model emphasizes that knowledge spillovers not only generate increasing returns themselves, but also give other factors such as physical capital and labor increasing returns, resulting in unconstrained growth. For the diffusion of growth-synergistic technology, industrial agglomeration has obvious advantages. Because of the close geographical location of, and close connections between, enterprises within a cluster, the diffusion of digital information and growth-synergistic technology is very fast. The rapid spread of digital information and growth-synergistic technologies enables every enterprise to quickly update equipment, adopt new production processes and adjust the optimal input mix of factors, which can generally improve the productivity and output of enterprises, thereby increasing regional economic aggregates and market competitiveness and promoting clusters; social credit, social norms and social networks, for example, will also be greatly improved [26,27]. In view of this, this article constructs a fuzzy multi-criteria evaluation model for the coordination between industrial agglomeration and regional economic growth. The model reveals the impact of industrial agglomeration on economic growth by establishing a regression model.

Transformation of the Digital Information Domain. When a digital product information field is downloaded from the Internet, it is easy to copy, and the same is true for digital product manufacturers: once the first copy of a digital product is produced, it can be reproduced later at low cost. The reproducibility of digital products thus raises new economic problems, forcing companies to change their traditional competitive strategies, and it also complicates the supply and demand of digital products in the market. When the industrial scale location quotient is greater than 1, there was a significant difference in performance between simple and complex conditions, and this difference was not due to insufficient processing time. From the average observation, it is found that the correct rate under complex conditions is lower than under simple conditions for all presentation times. The industry in Figure 1 is at a comparative disadvantage and its competitiveness is weak. The centrality of the information domain is an index measuring the degree to which an industry is located at the center of the entire industrial network. Specifically, it is measured by four indicators: degree centrality, betweenness centrality, closeness centrality and eigenvector centrality. Among them, degree centrality is the most direct measure of network centrality: the larger the value of an industry's degree centrality, the more important the status of that industry. Betweenness centrality measures the ability of an industry to act as a mediator for other industries by occupying the position of "middleman" on the shortest paths connecting other industries. The greater the betweenness centrality, the stronger the industry's indirect restriction of the economic growth and coordinated development of other industries.

Network Mode Training.
The network model finds that this mainly measures the comparison of a certain industry in a certain region with the regional average, so as to gauge the agglomeration ability and relative competitiveness of that industry. The larger an industry's location quotient, the stronger the agglomeration and competitiveness of the industry in the region. Since we choose time-series data, this article uses the Granger causality test to illustrate the relationship between financial industry agglomeration, economic growth and financial industry development. The basic principle of Granger causality is: when regressing A on other variables (including past values of A itself), if including lagged values of B significantly improves the prediction of A, then B Granger-causes A. The specific process includes examining whether each variable is integrated of the same order, performing unit root tests on the time-series data for each variable, conducting a cointegration test, and using the Granger causality test to examine the relationships between the variables. It is calculated that the in-degree of the Internet of Things industry in Figure 2 is 10 and the out-degree is 20, which to a certain extent directly reflects the significant input-output relationship between the regional Internet of Things industry and many other industries. Secondly, the industry's out-degree ranks in the top ten among the 40 industries, while its in-degree ranks relatively low, at 22, indicating that the consumption level of the IoT industry with respect to other industries is relatively low, while its role in the production processes of other industries is likewise relatively low.

Industry Agglomeration Signal Compression. The closeness centrality of industrial agglomeration signals measures the average distance between an industry and other industries. If a certain industry is denoted as node i and the average distance from this node to all nodes in the network is d_i, then the reciprocal of d_i is the closeness centrality of node i. The shorter the average distance between an industry and other industries, the greater its closeness centrality. The eigenvector centrality is an index that measures the quality of an industry's related industries using the eigenvectors of the correlation matrix. The larger the calculated values of the relevant industrial centrality indicators, the more often the industry is a dominant industry or a bottleneck industry. The path coefficient of the impact of human capital accumulation on economic growth is 0.153 (t = 2.259); the accumulation of human capital has a negative impact on economic growth, which contradicts the null hypothesis, so hypothesis H4b is rejected. The path coefficient of development affecting energy utilization efficiency is 0.804 (t = 36.187) and the path coefficient of the impact of energy utilization efficiency on economic growth is 0.290 (t = 4.278), indicating that development will improve social energy utilization efficiency and reduce environmental pollution while promoting economic growth, accepting hypotheses H5a and H5b. The path coefficients of the influence of the digital information perception level, digital information transmission level and digital information processing level on the industry are 0.348 (t = 20.214), 0.305 (t = 14.876) and 0.441 (t = 28.629), indicating that higher levels of digital information perception, transmission and processing in Figure 3 significantly improve the development level of the industry, and hypotheses H1a, H1b and H1c are accepted.
The path coefficient of the direct effect of IoT-type industry development on the coordinated development of economic growth is 0.333 (t = 2.408), indicating that the development of IoT-type industry has a positive impact on economic growth, and hypothesis H2 is accepted. The path coefficient of IoT-type industry development affecting growth-synergistic technological innovation is 0.895 (t = 55.975) and the path coefficient of growth-synergistic technological innovation affecting economic growth is 0.406 (t = 3.193), indicating that the development of IoT-type industry promotes the improvement of society's growth-synergistic technological innovation ability, and that this improvement further drives economic growth, accepting hypotheses H3a and H3b. The path coefficient of the impact of IoT industry development on human capital accumulation is 0.739 (t = 13.888), indicating that IoT industry development accelerates human capital accumulation, and hypothesis H4a is accepted.

Clustering Weight Distribution. The weight of agglomeration is mainly reflected in the influence of capital on agglomeration, so these three growth-synergy factors are defined as the capital factors of industrial agglomeration; the second common growth-synergy factor has large loadings on the ratio of the number of enterprises and the number of employees, which manifests as the labor force. Its impact on industrial agglomeration is therefore defined as the labor-force growth-synergistic factor of industrial agglomeration. The two common growth-synergy factors both have a certain loading on the fixed-asset-to-equity ratio and, according to the nature of the asset-to-net-value ratio, it is classified under the first common growth-synergy factor. The value of each growth-synergy factor within each common factor is greater than 0, indicating that this classification is meaningful. In this way, two common growth-synergy factors are obtained, one representing the capital factor and the other the labor factor, which also reflects the most essential requirements of industrial agglomeration. This shows that the digital information contained in ln x1 (Internet penetration rate) in Figure 4 can basically be reflected by the information in ln x2 (logarithm of per capita income of urban residents) and ln x3 (logarithm of the level of higher education). In fact, the increase in income level and education level has indeed led to the development of the Internet; as a result, more and more people use the Internet, and the Internet penetration rate rises. The coefficient of the Internet penetration rate is negative because in the model we performed a natural logarithm transformation of the data; the Internet penetration rate is less than 1, so its natural logarithm is negative, and the negative coefficient here indicates a correlation between network industry agglomeration and digital industry agglomeration.

Construction of a Collaborative Development Model of Industrial Agglomeration and Regional Economic Growth Based on Digital Information Growth Collaborative Technology

Digital Information Coding Optimization.
The research assumptions of digital information coding are as follows: the level of digital information perception has a positive effect on the development of IoT-type industry and promotes the coordinated development of economic growth; the level of digital information transmission has a positive effect on the development of IoT-type industry; the level of digital information processing has a positive effect on the development of IoT-type industry and promotes the coordinated development of economic growth; the development of IoT-type industry directly promotes economic growth; the development of IoT-type industry has a positive impact on the improvement of growth-synergistic technological innovation capabilities; growth-synergistic technological innovation promotes economic growth; the development of IoT-type industry has a positive impact on the accumulation of human capital; the accumulation of human capital promotes economic growth; and the development of IoT-type industry has a positive impact on the improvement of energy utilization efficiency. Secondly, in the causal relationship between financial agglomeration and the development of the financial industry, there is also mutual causality. It is worth noting that the financial industry actively influences financial agglomeration at a lag of one period, while at a lag of two periods financial agglomeration actively influences the development of the financial industry. This shows that regional financial agglomeration first developed with the support of the development of the financial industry, but at the same time it also had an impact on the development of the financial industry. Reliability is used to test the dependability of the measurement items for the subject to be measured, and to test whether the set of measurement items in Figure 5 is a valid measurement index of this subject. Cronbach's coefficient α is the most commonly used reliability analysis index, and the size of the coefficient represents the size of the reliability: above 0.8, the reliability is the best; between 0.7 and 0.8, there is considerable reliability; above 0.6, the reliability is acceptable; below 0.6, the reliability is insufficient. The validity analysis of a questionnaire refers to the degree to which the analysis methods and means reflect the things that need to be measured.

Shielding Effect of Industrial Agglomeration. However, the cost of collecting digital information increases with the expansion of the shielding scale of industrial agglomeration. Therefore, as the number of people accessing the network increases, the cost of the digital information network decreases, but its marginal cost decreases relatively slowly while the overall benefit increases, and the total benefit and marginal benefit will grow with the increase of the network scale. Secondly, digital information investment can not only bring general investment returns to investors, but also bring value-added returns from the accumulation of digital information. It can be seen that the digital industrial agglomeration in Figure 6 shows a trend of increasing marginal returns. The correlation between IoT-perception manufacturing and economic growth ranks in the middle, but the correlation still reaches 0.770, which shows two things.
Firstly, the correlation between IoT-perception manufacturing and economic growth is very high. Sensing equipment, intelligent instrumentation, measurement and control equipment, and radio frequency identification (RFID) growth-synergy technology have been widely used in electric power, transportation, security, logistics, medical and other industries; they are essential to realizing the intelligence and networking of traditional industries, and basic equipment, products and technologies develop together. It is precisely by virtue of extensive industrial connections that the IoT perception industry substantially promotes economic growth. Secondly, there are still some obstacles to the development of the regional IoT-sensing manufacturing industry. For example, regional manufacturers account for only about 20% of the market share of the sensor industry, and the RFID industry lacks core intellectual property rights and has high costs. These impediments weaken the power of IoT-aware manufacturing to drive economic growth.

Regional Economic Cycle Gains. The regional economic cycle growth-synergy factor graph can be regarded as a graphical representation of the rotated growth-synergy factor loading matrix, and the variables in the graph are all logarithmic. It can be seen from the figure that WR and ER are relatively close, and AIR and TIP are relatively close. According to the nature of FA, FA is classified with AIR and TIP, and the same conclusion as for the common growth-synergistic factors extracted above can be seen. The abscissa is the serial number of the growth-synergy factor, and the ordinate is the value of the corresponding characteristic root. It can be seen from the figure that the values of the first two growth-synergistic factors are generally high and form a steep line, while the eigenvalues after the third growth-synergy factor in Table 1 are generally lower and form a gentle line. This further shows that it is more appropriate to extract two growth-synergistic factors. From the perspective of the Internet of Things subsectors, the Internet of Things communication industry has the highest correlation with per capita GDP, reaching 0.929, followed by perception manufacturing, with a correlation of 0.770, and finally the Internet of Things service industry, with a correlation of 0.689. This is in line with the current development of the various subsectors of the Internet of Things in the region. It is inseparably related to the digital information industry and the Internet industry: since the development of the digital information industry, a relatively complete digital information infrastructure has been established in the region, and communication capability has been greatly improved.

Preprocessing of Digital Information Data. The alpha coefficient of each variable in the digital information data is above 0.7, so the questionnaire has considerable reliability and high validity. In terms of the price of the industrial agglomeration layer or service of digital industry agglomeration, the average score is 3.76 points with a standard deviation of 0.73, which is at the upper-middle level.
This shows that respondents' ratings of digital industry agglomeration prices relative to industrial agglomeration prices in the real-economy market lie between "uncertain" and "agree" that digital industry agglomeration is cheaper than real-economy industrial agglomeration, which indicates that the weight of the price factor in people's decision to choose digital industrial agglomeration is not as high as we usually think: there are other reasons why people choose the mode of digital industry agglomeration. From the perspective of the perceived usefulness and the perceived ease of use of digital industry agglomeration, the average score of perceived usefulness in Figure 7 is 3.75 points, but the average score of ease of use is only 3.45 points, which shows that people generally believe that digital industrial agglomeration has brought convenience and improved efficiency to our lives, but that, in terms of ease of use, mastery of the procedures and functions of digital industrial agglomeration needs to be improved. In terms of subjective speculation and purchase intention regarding digital industrial agglomeration, the average scores of the two questions are 3.69 and 3.62, both in the upper-middle range, indicating that in the context of today's Internet age, people's recognition of digital industrial agglomeration and their inclination to purchase it are at the middle-to-upper level, and merchants need to further improve the quality of the digital industry agglomeration layer so that digital industry agglomerators can be recognized by more people.

Simulation of the Coordinated Development of Regional Economic Growth. In this article, with the aid of the econometric analysis software EViews 5.0, the ADF test method is used to test the LNRGDP, LNK and LNL series of the regional industry, with lag lengths determined by the AIC and SC minimum information criteria. The so-called cointegration relationship means that although the economic variables are nonstationary, a certain linear combination of them may be stationary; that is, a group of variables maintains a set of linear relationship trends within a certain time interval, and there is a cointegration relationship between the variables. The non-equilibrium error must then be a stationary sequence; otherwise it is an I(1) sequence with a unit root. Therefore, when only two variables are tested for cointegration, the order of integration of the two variables should be the same. According to the results in Figure 8, the adjusted R² value is about 0.9908, indicating that the regression model can explain 99.08% of the variation of the dependent variable, and the goodness of fit is excellent. Then the results of the t-test of each explanatory variable are compared. The explanatory variable x1 cannot pass the t-test, indicating that the Internet penetration rate has no significant impact on regional per capita network industry agglomeration, while the explanatory variables x2 and x3 pass the t-test, indicating that per capita urban resident income and the level of higher education have a significant impact on regional per capita digital industry agglomeration.
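The stationarity check described above can be sketched as follows (a minimal illustration with simulated data, since the actual LNRGDP, LNK and LNL series are not reproduced here; the tseries package provides adf.test):

library("tseries")
set.seed(1)
# Simulated stand-in for a log-level series such as LNRGDP:
# a random walk with drift, i.e. an I(1) process.
lnrgdp = cumsum(rnorm(40, mean = 0.02, sd = 0.05))
adf.test(lnrgdp)        # unit root typically not rejected on the level
adf.test(diff(lnrgdp))  # the first difference is typically stationary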
According to the model results, when the Internet penetration rate increases by 1%, regional per capita digital industry agglomeration increases by 0.7% on average; when the level of higher education increases by 1%, regional per capita digital industry agglomeration decreases by 2.45% on average; and when per capita urban income increases by 1%, regional per capita network industry agglomeration increases by about 4.60% on average. Although the regression model is significant, the explanatory variable x1 is not, and there is a negative correlation between the level of higher education and the level of per capita network industry agglomeration, so there may be multicollinearity in the regression equation. It is necessary to test the stationarity of the observed time-series data in Figure 9. If the original data are stationary after analysis, we can directly carry out regression and other analyses; if the data are not stationary, we cannot directly carry out regression and other analyses on the original data, and we need to process the original data accordingly before continuing with the relevant economic analysis.

Example Application and Analysis. In digital information data technology, there are a null hypothesis and an alternative hypothesis. The calculation results show that the standard deviation of the surveyed male agglomerators' perceived ease of use of digital industry agglomeration is about 0.868, while that of women is 0.778; therefore, it is believed that in terms of perceived ease of use, the variances of the two populations are not equal. In the perceived usefulness of digital industrial agglomeration, the standard deviation of male satisfaction is 0.701 and that of women is 0.697, which is relatively close, so it is considered that the surveyed men and women have equal variance in perceived usefulness. Then, the independent-samples t-test in Table 2 was carried out on the perceived ease of use and usefulness of digital industrial agglomeration in SPSS. Firstly, we set the total output value of the IoT industry as the parent sequence, denoted x0, and set per capita GDP and the added values of the primary, secondary and tertiary industries as the comparison sequences, denoted X1, X2 and X3, and use the gray correlation model to calculate the correlation between the Internet of Things industry and the three industries. Secondly, we set the measurement indicators of the development level of the three subindustries of the Internet of Things as the parent sequences, and per capita GDP and the added values of the three industries as the comparison sequences (see Figure 9 for the process of coordinated development of regional economic growth), and use the gray correlation model to calculate the correlation between the subindustries of the Internet of Things and the three industries. Digital industrial agglomeration has a significant positive relationship with the price of the industrial agglomeration layer or service: when the perceived usefulness and ease of use of digital industrial agglomeration increase, the price of the service also goes up.
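The gray correlation (grey relational) computation described above can be sketched as follows (a minimal illustration with simulated series; x0 stands for the parent sequence, the columns of X for the comparison sequences, and ρ = 0.5 is the usual resolution coefficient):

# Deng's grey relational degree with initial-value normalization,
# as described in the text.
grey_degree = function(x0, X, rho = 0.5) {
  x0n = x0 / x0[1]                    # dimensionless parent sequence
  Xn  = sweep(X, 2, X[1, ], "/")      # dimensionless comparison sequences
  d   = abs(Xn - x0n)                 # absolute difference matrix
  xi  = (min(d) + rho * max(d)) / (d + rho * max(d))  # relational coefficients
  colMeans(xi)                        # relational degree per comparison sequence
}
set.seed(1)
x0 = cumsum(runif(10, 1, 2))                     # e.g., IoT industry output
X  = cbind(gdp  = x0 * runif(10, 0.9, 1.1),      # simulated placeholders
           ind2 = x0 * runif(10, 0.8, 1.2))
grey_degree(x0, X)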
The quality of the industrial agglomeration layer has a significant positive relationship with the perceived ease of use of digital industrial agglomeration (p = 0.02 < 0.05), but no significant correlation with the perceived usefulness of digital industrial agglomeration (p = 0.772 > 0.05). The better the quality of the purchased industrial agglomeration layer, the more people will use the Internet for industrial agglomeration, but a purchased high-quality industrial agglomeration layer may not be needed in daily life. The regression results of the three models are all significantly positive, and the individual fixed-effects model in Figure 10 has the highest goodness of fit (R² = 0.9779), indicating that this model has the best explanatory power; by this comparison, we choose the individual fixed-effects model for the regression. The D-W statistic in the individual fixed-effects model is 0.4625, which is greater than 0 and less than 2, indicating that the model has no significant autocorrelation and the estimated result is stable. The individual fixed effects (IFE) of the 11 provinces, autonomous regions and municipalities in the region were obtained by the fixed-effects regression; the coefficient is 0.7501, indicating that the direct contribution of the digital information industry sector to economic growth is significantly positive. That is, the contribution of the direct, synergistic development of the digital information industry to economic growth is 0.7501: assuming other factors remain unchanged, if the output of the digital information industry increases by one yuan, the digital information industry will effectively drive economic growth by 0.7501 yuan.

Conclusion

The digital information industry is a digital information service industry and digital information equipment manufacturing industry integrating collection, processing, storage and circulation. Firstly, this article combs the basic theories of the development of the digital information industry, including industrial correlation theory and new economic growth theory. Then it analyzes the mechanism by which the digital information industry promotes the coordinated development of economic growth, as well as the mechanisms and paths through which it affects economic growth. In addition, this article establishes a gray correlation model from an empirical perspective to preliminarily explore the correlation between the industry and economic growth. For the industrial network of 40 industries, including the Internet of Things industry, the social network analysis method is used to measure the impact of the Internet of Things industry on the optimization of the industrial structure. Finally, on the basis of the theoretical and empirical analysis, this article puts forward relevant policy suggestions to vigorously develop the Internet of Things industry and promote economic growth. Further work will study the impact of different factors in industrial agglomeration on the coordination of regional economic growth.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
7,079.8
2022-06-06T00:00:00.000
[ "Economics" ]
All-optical switching of a signal by a pair of interacting nematicons

We investigate a power-tunable junction formed by two interacting spatial solitons self-trapped in nematic liquid crystals. By launching a counter-propagating copolarized probe, we assess the guided-wave behavior induced by the solitons and demonstrate a novel all-optical switch. Varying the soliton power, the probe gets trapped into one, two or three guided waves by the soliton-induced index perturbation, an effect supported by the nonlocal nonlinearity. © 2012 Optical Society of America

OCIS codes: (190.6135) Spatial solitons; (160.3710) Liquid crystals.

References and links
1. Yu. S. Kivshar and G. P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic, 2003).
2. M. Peccianti and G. Assanto, "Nematicons," Phys. Rep. 516, 147-208 (2012).
3. A. Piccardi, A. Alberucci, N. Tabiryan, and G. Assanto, "Dark nematicons," Opt. Lett. 36, 1456-1458 (2011).
4. M. Peccianti, G. Assanto, A. De Luca, C. Umeton, and I. C. Khoo, "Electrically assisted self-confinement and waveguiding in planar nematic liquid crystal cells," Appl. Phys. Lett. 77, 7-9 (2000).
5. M. Peccianti and G. Assanto, "Signal readdressing by steering of spatial solitons in bulk nematic liquid crystals," Opt. Lett. 26, 1690-1692 (2001).
6. J. Beeckman, K. Neyts, X. Hutsebaut, C. Cambournac, and M. Haelterman, "Simulations and experiments on self-focusing conditions in nematic liquid-crystal planar cells," Opt. Express 12, 1011 (2004).
7. Ya. V. Izdebskaya, A. S. Desyatnikov, G. Assanto, and Yu. S. Kivshar, "Multimode nematicon waveguides," Opt. Lett. 36, 184-186 (2011).
8. M. Peccianti, K. A. Brzdakiewicz, and G. Assanto, "Nonlocal spatial soliton interactions in bulk nematic liquid crystals," Opt. Lett. 27, 1460-1462 (2002).
9. M. Peccianti, C. Conti, G. Assanto, A. De Luca, and C. Umeton, "All-optical switching and logic gating with spatial solitons in liquid crystal," Appl. Phys. Lett. 81, 3335-3337 (2002).
10. S. V. Serak, N. V. Tabiryan, M. Peccianti, and G. Assanto, "Spatial soliton all-optical logic gates," IEEE Photon. Technol. Lett. 18, 1287-1289 (2006).
11. A. Piccardi, A. Alberucci, U. Bortolozzo, S. Residori, and G. Assanto, "Soliton gating and switching in liquid crystal light valve," Appl. Phys. Lett. 96, 071104 (2010).
12. M. Peccianti, A. Dyadyusha, M. Kaczmarek, and G. Assanto, "Tunable refraction and reflection of self-confined light beams," Nat. Phys. 2, 737-742 (2006).
13. Ya. V. Izdebskaya, V. G. Shvedov, A. S. Desyatnikov, W. Krolikowski, and Yu. S. Kivshar, "Soliton bending and routing induced by interaction with curved surfaces in nematic liquid crystals," Opt. Lett. 35, 1692-1694 (2010).
14. R. Barboza, A. Alberucci, and G. Assanto, "Large electro-optic beam steering with nematicons," Opt. Lett. 36, 2611-2613 (2011).
15. J.-F. Henninot, J.-F. Blach, and M. Warenghem, "Experimental study of nonlocality of spatial optical soliton excited in nematic liquid crystal," J. Opt. A 9, 20-25 (2007).
16. Ya. V. Izdebskaya, V. G. Shvedov, A. S. Desyatnikov, W. Z. Krolikowski, M. Belic, G. Assanto, and Yu. S. Kivshar, "Counterpropagating nematicons in bias-free liquid crystals," Opt. Express 18, 3258-3263 (2010).
17. W. Krolikowski, O. Bang, N. I. Nikolov, D. Neshev, J. Wyller, J. J. Rasmussen, and D. Edmundson, "Modulational instability, solitons and beam propagation in spatially nonlocal nonlinear media," J. Opt. B 6, S288 (2004).
18. M. Szaleniec, R. Tokarz-Sobieraj, and W. Witko, "Theoretical study of 1-(4-hexylcyclohexyl)-4-isothiocyanatobenzene: molecular properties and spectral characteristics," J. Mol. Model. 15, 935 (2009).
19. D. Buccoliero, A. S. Desyatnikov, W. Krolikowski, and Yu. S. Kivshar, "Laguerre and Hermite soliton clusters in nonlocal nonlinear media," Phys. Rev. Lett. 98, 053901 (2007).
20. S. Lopez-Aguayo, A. S. Desyatnikov, Yu. S. Kivshar, S. Skupin, W. Krolikowski, and O. Bang, "Stable rotating dipole solitons in nonlocal optical media," Opt. Lett. 31, 1100-1102 (2006).
21. Ya. V. Izdebskaya, A. S. Desyatnikov, G. Assanto, and Yu. S. Kivshar, "Dipole azimuthons and vortex charge flipping in nematic liquid crystals," Opt. Express 19, 21457-21462 (2011).
22. C. Conti, M. Peccianti, and G. Assanto, "Route to nonlocality and observation of accessible solitons," Phys. Rev. Lett. 91, 073901 (2003).
23. A. W. Snyder and D. J. Mitchell, "Accessible solitons," Science 276, 1538 (1997).
24. M. Izutsu, Y. Nakai, and T. Sueta, "Operation mechanism of the single-mode optical-waveguide Y junction," Opt. Lett. 7, 136-138 (1982).

Introduction

The interaction of optical spatial solitons [1] has been studied extensively as a robust mechanism for all-optical, i.e. power-dependent, and reconfigurable spatial switching and routing of optical signals. In this paper we experimentally investigate the all-optical confinement and switching of a weak probe counter-propagating (CP) with respect to two interacting co-propagating (CO, forward) nematicons forming a power-dependent (Y or X) junction by way of their mutual attraction. In particular, we study the transverse output profile of the CP probe versus the launch power of the two CO nematicons. The probe signal tends to split into the two arms of a Y-junction for low nematicon powers; it gradually gives rise to three outputs (two guided signals and a beam) at intermediate soliton excitations and, eventually, forms a single output beam at powers large enough for the nematicons to interlace into an X-junction. This symmetric switching and redistribution of signal power stems from the nonlocal index distribution produced by the reorientational solitons and can be illustrated by a simple analytical model. The phenomenon presented hereby could become the core of a novel all-optically reconfigurable interconnect and/or signal router.
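Although the study is experimental, the nonlocal model commonly used to describe such soliton-induced guides (cf. refs. [2, 22]) can be summarized in dimensionless form; this is a generic sketch rather than the specific analytical model invoked above:

$$i\frac{\partial \psi}{\partial z}+\frac{1}{2}\nabla_{\perp}^{2}\psi+\theta\,\psi=0,\qquad \nu\,\nabla_{\perp}^{2}\theta-2q\,\theta=-2\,|\psi|^{2},$$

where $\psi$ is the slowly varying envelope of the optical field, $\theta$ the light-induced perturbation of the director orientation, $\nu$ the degree of nonlocality and $q$ a screening parameter; the highly nonlocal limit yields the accessible-soliton regime of ref. [23].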
Individual nematicon waveguides
We use an unbiased cell with an NLC layer sandwiched between two parallel polycarbonate slides separated by 110 μm. The NLC 4-(trans-4'-hexylcyclohexyl)-isothiocyanatobenzene (6CHBT [18], with birefringence Δn ≈ 0.16) was planarly oriented in (x, z), with its elongated organic molecules anchored with the optic axis (molecular director) at 45° with respect to z. The cell was sealed at input and output by extra glass interfaces with rubbing along x in order to prevent the formation of a meniscus and beam depolarization [4]. We excited CO spatial solitons in the plane (x, z) by injecting Gaussian beams with a waist of about 3 μm from a cw laser of wavelength λ1 = 532 nm, with the electric field extraordinarily polarized (E||x) in order to induce nonlinear reorientation even below the Fréedericksz threshold. Figure 1(a) displays the image of a forward-propagating beam launched with an input power P = 2 mW, forming a nematicon with an xz trajectory along the Poynting vector at a walk-off of nearly 5° with respect to the input wave-vector along z, due to the birefringence of the NLC. We launch a CP weak (147 μW) Gaussian beam from a cw laser of wavelength λ2 = 671 nm through the opposite side of the cell, using a 10X microscope objective, resulting in an input waist w ≈ 3 μm. Figure 1(b,c) shows the evolution of the extraordinarily polarized CP signal undergoing diffraction [Fig. 1(b)] or guided-wave trapping [Fig. 1(c)] in the absence or in the presence of a CO nematicon, respectively. In all the experiments we keep the CP launch power constant while varying the nematicon power P from 1 to 3 mW. Normalized intensity profiles of the CP signal, acquired in the plane (x, z) for various P, are shown in Fig. 1(d) after backward propagation over about 960 μm, as indicated by dashed lines in Fig. 1(b,c). It is apparent that, owing to the nonlocal character of the all-optical response [4], the nematicon waveguide excited at 532 nm effectively confines the backward-propagating signal at the longer wavelength of 671 nm.
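As a rough numerical cross-check of the quoted walk-off, the standard expression for the Poynting-vector walk-off of the extraordinary wave in a uniaxial medium can be evaluated; the sketch below assumes ordinary and extraordinary indices for 6CHBT at 532 nm (only Δn ≈ 0.16 is given in the text):

```python
import numpy as np

# Walk-off of the extraordinary wave in a uniaxial medium:
# tan(delta) = eps_a*sin(theta)*cos(theta) / (eps_perp + eps_a*cos(theta)**2),
# with eps_a = n_e**2 - n_o**2 and theta the director-wavevector angle.
# n_o and n_e are assumed values for 6CHBT at 532 nm (only dn ~ 0.16 is given).
n_o, n_e = 1.52, 1.68
theta = np.deg2rad(45.0)

eps_perp = n_o**2
eps_a = n_e**2 - n_o**2                          # optical anisotropy (~0.51 here)
delta = np.arctan(eps_a*np.sin(theta)*np.cos(theta)
                  / (eps_perp + eps_a*np.cos(theta)**2))
print(f"walk-off ~ {np.degrees(delta):.1f} deg")  # ~5.7 deg for these indices
```

With these assumed indices the expression returns a walk-off of about 5.7°, of the same order as the trajectory observed in Fig. 1(a).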
Nematicon Y-junction
Here we study the evolution of the CP signal interacting with, and guided by, a symmetric Y-junction stemming from the attraction and merging of two CO nematicons. Figure 2 is a sketch of the experimental setup. We employ a Mach-Zehnder arrangement (beam splitters BS1 and BS2, mirrors M1 and M2) to launch two closely spaced CO nematicons. In order to guarantee mutual attraction and prevent interference, M2 was mounted on a piezoelectric transducer, rendering the two input beams mutually incoherent. Two equi-power extraordinarily polarized beams at 532 nm are focused by a 10X lens and launched with parallel wave-vectors into the cell. The extraordinarily polarized probe beam is injected from the other end of the cell using the lens MO2. The beam dynamics along the cell is monitored by the camera CCD1 by collecting the light scattered through the top plate of the cell. The transverse dynamics (output images) of the signal is monitored by CCD2 with the aid of the semi-transparent mirror STM. The green light is blocked by band-pass filters (RF). Figure 3 shows some typical experimental results demonstrating the power-dependent dynamics of two interacting incoherent CO green nematicons initially separated by about 33 μm [(x, z) propagation, (a-d)] and the corresponding evolution of the weak signal (147 μW constant power) backward propagating in the presence of the light-induced index perturbation [(x, y) transverse (e-h) and (x, z) longitudinal (i-l) views]. For high enough green excitation, two spatial solitons are generated with trajectories depending on power: Fig. 3(a) shows two 0.6 mW solitons which travel in parallel while the CP red beam diffracts [Fig. 3(e,i)]; as the power P increases up to 1.6 mW, the CO solitons attract and merge, forming a Y-junction [Fig. 3(b)], while the signal gets confined in a pair of soliton-induced waveguides and splits into two guided-wave outputs [Fig. 3(f,j)]; for P > 2 mW the two green solitons interlace [Fig. 3(c)] and the signal propagates in the two arms of the Y as well as between them [Fig. 3(g,k)], with more and more power in the middle spot as P increases, until eventually we observe just one centered output for P > 3.2 mW [Fig. 3(h,l)]. The signal mid-spot is substantially smaller than the diffracted spot [Fig. 3(e,i)], suggesting that the probe is actually guided by the index perturbation induced by the soliton pair. Noticeably, the results are similar if the separation between the nematicons increases by up to 1.5 times.

Next we measure the signal powers in the three output spots versus the input nematicon power P [Fig. 4(a)], placing an aperture and a power meter 1.5 m away from the output. The inset in Fig. 4(a) identifies the various data sets: power transfer is apparent from the outputs P1 and P2 to the mid spot P3 versus the soliton excitation P. This trend is better illustrated in Fig. 4(b), where we compare the intensity profiles of the three signal outputs versus P. Different input powers P correspond to the various profiles, as indicated in the legend: just two signal spots are visible for P = 1.3 mW, whereas the third output is observed for P > 2.2 mW.
Discussion
In order to explain the unexpected splitting of the CP signal, we recall the theory of higher-order nonlocal solitons [19-21]. In fact, pairs of CO nematicons can form bound states, similar to dipole solitons [20] and belonging to the broader class of soliton clusters [19], including spiraling dipoles [21]. Here we consider the propagation of a paraxial beam in a dielectric medium with a Kerr-type nonlinearity described by the nonlocal nonlinear Schrödinger equation (NNLSE), i∂E/∂z + ∇²E + N(I)E = 0, where z and (x, y) stand for the propagation and the two transverse coordinates, respectively, and ∇ = (∂x, ∂y) [2,22]. The nonlinear correction to the refractive index, N(I) = ∫K(|r − ρ|)I(ρ)dρ, describes the nematicon-induced waveguide potential. The kernel K of the convolution integral is determined by the physical mechanism supporting the nonlinear response [17]. Here we assume a Gaussian response K(r) = (1/πσ²)exp(−r²/σ²), with σ the nonlocality range. When σ → 0 we recover the (local) Kerr model with K(r) → δ(r) and N(I) ∝ I, whereas in the limit of large nonlocality, σ ≫ a (with a a characteristic transverse scale of the intensity localization), the waveguide effectively approaches a harmonic trap, N(I) ∼ −Pr² (see Ref. [23]). For two interacting beams we use a dipole ansatz, E(x, y, z) = Ax exp(−r²/2a² + ikz), with real amplitude A, half-width a, and propagation constant k. Variational solutions can then be derived [19] by writing A and a as functions of the soliton constant k and the spatial scale σ. However, the NNLSE scaling property is such that the solution for any σ, A = A₁/σ², a = a₁σ, and k = k₁/σ², can be expressed in terms of A₁, a₁, k₁ obtained for σ = 1. The scale-invariant soliton power, P = ∫|E|²dxdy = (π/2)A²a⁴ = (π/2)A₁²a₁⁴, can thus be used as a universal parameter. The corresponding refractive index N(I(x, y)) follows from the convolution of the kernel with the dipole intensity (Eq. (2)). Figure 5 graphs the changes in the index profile (Eq. (2)) with soliton power P; we use a constant soliton width a = 1 and allow σ to vary. The power P clearly plays the role of a scaling parameter for the solitons, the shape of which in turn defines the profile of the induced waveguides. The guided modes with propagation constant β, E_linear = U(x, y)exp(iβz), can be found as the stationary solutions of the NNLSE, −βU + ∇²U + N(x, y)U = 0. The dipole soliton itself describes the antisymmetric mode with β = k. At low powers, a CP signal input in the Y-junction generates the symmetric mode [24] of the double-hump potential in Fig. 5(a), as observed in Fig. 3(f). As the soliton power increases, the index profile resembles a harmonic potential [23], as in Fig. 5(c); hence, the lowest-order symmetric mode is bell-shaped, as in Fig. 3(h).

Conclusions
We demonstrated all-optical switching based on a signal confined by two interacting nematicons. At low powers the "nematicon beam-splitter" can guide the counter-propagating signal to the two outputs of a Y-junction; at higher powers a third spot appears and progressively drags power from the guided modes of the junction, eventually carrying the whole signal excitation. The effect stems from the highly nonlocal nonlinearity, providing a wide guiding potential even when the nematicons do not overlap. The reported phenomenon is promising for the implementation of novel all-optically reconfigurable interconnects and signal processors.
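To make the double-to-single-hump transition of the induced potential concrete, the convolution N = K * I can be evaluated numerically for the dipole intensity. The following is a minimal sketch (arbitrary units; the grid and nonlocality values are assumptions chosen for illustration, reproducing the qualitative behavior of Fig. 5, not the variational solution itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy evaluation of N = K * I for the dipole ansatz E = A*x*exp(-r^2/(2*a^2)),
# with the Gaussian kernel K(r) = exp(-r^2/sigma^2)/(pi*sigma^2). Arbitrary units;
# grid size and sigma values are assumptions chosen for illustration.
L, npts = 20.0, 512
x = np.linspace(-L/2, L/2, npts)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
a, A = 1.0, 1.0
I = (A*X)**2 * np.exp(-(X**2 + Y**2)/a**2)       # dipole intensity |E|^2

for sigma in (0.3, 1.0, 4.0):                    # nonlocality range
    # exp(-r^2/sigma^2) kernel == Gaussian filter of std sigma/sqrt(2) (here in pixels)
    N = gaussian_filter(I, sigma/np.sqrt(2)/dx)
    mid = N[npts // 2]                           # index profile along y ~ 0
    n_max = np.sum((mid[1:-1] > mid[:-2]) & (mid[1:-1] > mid[2:]))
    print(f"sigma = {sigma}: {n_max} maxima in N(x, 0)")
# Small sigma: double-hump potential (two guided outputs, as in Fig. 5(a));
# large sigma: single, nearly parabolic trap (one central output, as in Fig. 5(c)).
```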
Fig. 1. Individual nematicon waveguide and counter-propagating probe signal: (a) green forward nematicon excited at P = 2 mW and propagating along the extraordinary walk-off angle with respect to the input wave-vector along z; (b) diffracting CP probe beam in the absence of a nematicon; (c) nematicon-guided CP signal; (d) intensity profiles of the CP probe for various nematicon excitations P; the transverse profiles are acquired from images of the signal evolution in the plane (x, z) near z = 0, as marked by white dashed lines in (b,c).
Improvement of transmission loss of bellows through thickness improvement and structural modification

The bellows of a vehicle are vulnerable to noise because of their low transmission loss among the vehicle components. Therefore, in this study, we modified the thickness and the structure of the bellows to improve transmission loss. Based on the impedance tube test, the transmission loss of the silicone rubber specimen – the main material of the bellows – was analyzed; the results confirmed low transmission loss in the low-frequency region. An analysis of the natural vibration modes of a simple model indicated that, in the low-frequency region of the bellows, a number of vibration modes occurred because of the vibration of the outer and inner components. Accordingly, to improve transmission loss, the improvement introduced by varying the thickness was analyzed, and the results confirmed that the noise performance improved by more than 3 dB for a thickness of 3.5 mm in the double-layer structure. In addition, the transmission loss improved in the low-frequency region after an acoustic-structure coupling analysis was performed based on the simple model. To compare the actual performance of the existing and improved bellows, a noise comparison test was performed. The test results confirm that the existing noise reduction index improved by 3 dB, from 30 to 33 dB, when the thickness was increased to 3.5 mm; in the frequency domain, the noise performance improved the most at 160 Hz, with an improvement of 5.6 dB.

Introduction
Railroad vehicles operate in various driving environments, such as open areas, tunnels, and bridges, and the noise generated there enters the vehicle through the vehicle body. 1,2 In particular, railway vehicles are exposed to noise with complex and varied frequency characteristics owing to the noise generated in these driving environments. Noise tends to flow in through components with the lowest transmission loss, and therefore, to minimize such noise effectively, it is necessary to increase the transmission loss of the vehicle body. To this end, the bellows of a railroad vehicle are components vulnerable to noise because of their low transmission loss relative to the other components of the vehicle. The main reason for this low transmission loss is the silicone rubber used to fabricate the bellows, chosen to facilitate contraction and expansion during curved driving of the vehicle. Silicone rubber is characterized by a low weight per unit area, which considerably affects transmission loss, and its weight per unit area is significantly lower than that of the aluminum materials used for the vehicle body. However, studies on improving the transmission loss of the bellows of railway vehicles are lacking. Research on improving the transmission loss of railway vehicles has focused on the vehicle body. In particular, honeycomb panels of various types have been investigated to increase transmission loss and reduce the vibration of railway vehicles. [3][4][5] Thus, the transmission loss of the vehicle body was predicted, and a study was conducted on floating floor structures to lighten the vehicle body and improve vehicle performance. 6 Another study analyzed the effect on transmission loss of the boundary conditions of the vehicle body. 7 In addition, the use of viscoelastic materials was investigated to reduce the noise and vibration of the vehicle body. 8
In general, studies on improving the transmission loss of railroad vehicles have been conducted for vehicle bodies made of aluminum, and therefore, their consideration of bellows made of silicone rubber is limited. Noise studies on the bellows of railroad vehicles have focused on the external noise generated by the vehicle. It was confirmed that high noise was generated at the bellows when the railway vehicle traveled at high speed [9][10][11]; this was attributed to the large amount of noise caused by the turbulent flow around the vehicle when driving at high speed. These noises are generated in the low-frequency range, and they tend to increase when driving through a tunnel. 12 Furthermore, it was confirmed that indoor noise can be significantly reduced when side barriers are installed at the bellows to minimize the inflow of noise into the vehicle. 13 However, research on improving noise performance by changing the structural body of the bellows of railway vehicles is insufficient. Research on the structure of bellows in railways has been performed to improve the durability of materials composed of silicone rubber. 14 In that study, the various deformations that can occur when the vehicle negotiates a curve were analyzed by considering the nonlinearity of silicone rubber. Furthermore, considering that the bellows are devices that connect vehicles, an analysis of the suspension system for the related part was performed. 15 Thus, research on increasing the transmission loss of the bellows of railway vehicles in terms of noise remains insufficient.

Therefore, this study aimed to improve the transmission loss of bellows by changing the structure of the connecting membrane. First, the transmission loss of the silicone rubber specimen, the main material of the connecting membrane, was analyzed based on the results of the impedance tube test. The results confirmed that the transmission loss in the low-frequency region was low for the connecting membrane. Furthermore, based on a two-dimensional simple model, we analyzed the natural frequencies and modes to identify the cause of noise generation in the bellows. To derive a method for improving the transmission loss of the bellows, a transmission loss analysis was performed for various thickness-change models. To analyze the transmission loss of the proposed model, the transmission loss was analyzed through an acoustic-structure coupling analysis based on the simple model. Through this, an effective design in terms of the thickness and structure that can improve the noise performance was derived. A verification test was conducted based on the actual size of the bellows, and a noise performance comparison test with the existing connecting membrane was performed.

Performance analysis of existing bellows
A method to improve the noise transmission performance of the existing bellows was reviewed based on transmission loss and structural dynamic analysis. First, based on a specimen extracted from the existing bellows, a transmission loss measurement test using an impedance tube was performed. The transmission loss measurement method using the impedance tube is shown in Figure 1. 16 The specimen was placed at the center of the impedance tube, and two microphones (MP1 and MP2) were placed in the space where the noise was incident. Furthermore, two microphones (MP3 and MP4) were placed on the opposite side, where the sound travels after passing through the specimen.
All microphones were placed at the same height within the tube. The surface of the specimen was selected as the reference position (x = 0), and the positions of the microphones were x1, x2, x3, and x4, respectively. When a plane wave (p·e^(j(ωt−kx))) is incident in the tube in the one-dimensional plane, the sound pressure measured at each microphone can be expressed, excluding the time dependence, as p = A·e^(−jkx) + B·e^(jkx) in the upstream tube and p = C·e^(−jkx) + D·e^(jkx) in the downstream tube, where k denotes the wave number; A and B indicate the incident and reflected wave components in the upstream tube, respectively; and C and D indicate the transmitted and reflected wave components in the downstream tube, respectively. Developing these equations around A, B, C, and D expresses the four wave components in terms of the four measured pressures. To simplify the calculation of the transmission loss, the two microphones of each pair are placed at an equal spacing s. In this case, the transmission coefficient can be expressed as C/A, and the transmission loss is calculated as

TL = 20 log|(e^(jks) − H12)/(e^(jks) − H34)| − 20 log|Ht| dB (9)

where H12 = p2/p1 is the transfer function, i.e., the ratio of the Fourier-transform components of the sound pressures at positions 1 and 2; H34 = p4/p3 is the transfer function between the sound pressures at positions 3 and 4; and Ht represents the ratio between the autospectrum in the upstream tube, su, and the autospectrum in the downstream tube, sd. For this analysis, white noise was generated through the spectrum analyzer, and the noise signal was simultaneously measured through each of the four microphones.

The specimen measured in this study was fabricated from the material of the actual bellows. The thickness of the specimen was 2.5 mm and the diameter was 30 mm; it was installed in the impedance tube, as shown in Figure 2. The impedance measurement equipment for the transmission loss measurement was composed as shown in Figure 3. White noise was emitted into the tube through the loudspeaker, and the noise was measured by two microphones before it was transmitted through the specimen. In addition, the noise passing through the specimen was measured on the opposite side by two microphones. The inner diameter of the impedance tube was 30 mm. White noise was generated using a spectrum analyzer (B&K 3550) and amplified through a power amplifier (B&K 2706). Based on the analysis of the transmission loss for the specimen of the bellows (shown in Figure 4), the transmission loss drops sharply in the resonance region of approximately 35 Hz in the low-frequency region. Furthermore, in the region above 100 Hz, it increases following the mass law of transmission loss; it was confirmed that the transmission loss is approximately 13-17 dB up to 1000 Hz. The specimen had a considerably lower transmission loss than the other materials used in railway vehicles, such as extruded aluminum.
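The wave decomposition and C/A ratio described above can be sketched numerically. The helper below is a minimal illustration, assuming an approximately anechoic termination and freely chosen microphone positions; it is not the authors' processing code:

```python
import numpy as np

def transmission_loss(p, x, k):
    """Four-microphone wave decomposition in an impedance tube.

    p : complex pressure spectra at mics 1..4 (single frequency)
    x : mic positions (m); x[0], x[1] upstream, x[2], x[3] downstream, specimen at x = 0
    k : wavenumber (rad/m)
    Returns TL in dB, assuming an approximately anechoic termination (D ~ 0).
    """
    p = np.asarray(p, dtype=complex)
    A, B = np.linalg.solve([[np.exp(-1j*k*x[0]), np.exp(1j*k*x[0])],
                            [np.exp(-1j*k*x[1]), np.exp(1j*k*x[1])]], p[:2])
    C, D = np.linalg.solve([[np.exp(-1j*k*x[2]), np.exp(1j*k*x[2])],
                            [np.exp(-1j*k*x[3]), np.exp(1j*k*x[3])]], p[2:])
    return 20*np.log10(abs(A/C))     # transmission coefficient C/A, as in the text

# Synthetic check: A = 1, B = 0.3, C = 0.1, D = 0 should give TL = 20 dB.
c0, freq = 343.0, 500.0
k = 2*np.pi*freq/c0
x = [-0.10, -0.05, 0.05, 0.10]       # assumed microphone positions (m)
p = [1.0*np.exp(-1j*k*xi) + 0.3*np.exp(1j*k*xi) if xi < 0 else
     0.1*np.exp(-1j*k*xi) for xi in x]
print(transmission_loss(p, x, k))    # -> 20.0
```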
Bellows are configured in a tube shape (as shown in Figure 5), and they are connected to the vehicle through various mechanical devices. If this full shape were considered in the analysis, it would require a considerable amount of time, and it would be difficult to perform an accurate analysis. Therefore, an analysis model (Figure 6) based on the cross section of the tube-shaped bellows was adopted in this study. The bellows had a shape in which eight semi-spherical corrugations formed both the outer and inner sides. The end parts of each corrugation were fixed by a connecting pin; moreover, the analysis was performed using physical properties intended to reflect the dynamic properties of the actual bellows. The Young's modulus, density, and Poisson's ratio were equal to 170 × 10^9 Pa, 2329 kg/m^3, and 0.28, respectively. In addition, the analysis model consisted of 13,874 meshes, and the analysis was performed using the analysis program COMSOL. The natural frequencies and natural modes were examined through an eigenvalue analysis of the model. The natural frequencies were derived starting from the low-frequency region (Table 1). The derived results confirm that a number of natural frequencies exist from 54.2 Hz upward. Sequentially deriving the calculated natural frequencies from the low-frequency region confirmed that 20 natural frequencies exist up to 694.4 Hz. In particular, because the low-frequency region is sensitive to the vibration modes, a review of the related modes was performed, as shown in Figure 7. At 54 Hz, the outer corrugation part vibrates outward; furthermore, the inner corrugation part vibrates inward at 56 Hz. In particular, because the model has a shape wherein the outer and inner corrugations are separated from each other, the two eigenmodes exist at frequencies close to 55 Hz. Furthermore, in the 78 Hz region, the vibration took the form of a number of corrugations shaking in the lateral direction.

An acoustic-structure coupling analysis was performed on the external acoustic incidence model based on the simple model to analyze the effect on transmission loss when an external sound is incident. First, as shown in Figure 8, a model for the air layer was added in the space, and the noise transmitted through the inside of the bellows was analyzed in an environment in which a plane wave of 1 Pa was horizontally incident from the outside. The number of meshes in the analysis model was 88,569, and the mesh was dense because of the complex shape around the bellows. The transmission loss for sound incident on the bellows was analyzed (Figure 9), focusing on the low-frequency region. Based on the analysis results in the 31.5 Hz region (1/3-octave band), it can be seen that the noise incident from the outside of the bellows passes through the inside of the bellows and is transmitted to the interior area of the railway vehicle. Furthermore, the noise was reduced at each step. When noise of 31.5 Hz is generated outside and is incident, the vibration of the bellows shows the outer corrugations vibrating inward. Furthermore, it can be confirmed that the noise generated outside the bellows is transmitted through the inside of the bellows in the 63 Hz region, as in the 31.5 Hz region. At this time, the vibration shows the inner corrugations vibrating inward; the most stressed parts were the inner corrugations at both ends.
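The eigenvalue analysis itself was performed in COMSOL and cannot be reproduced here; as a toy stand-in, the closed-form natural frequencies of a clamped-clamped uniform beam illustrate how such bending modes scale with the quoted material constants. The geometry below is assumed purely for illustration:

```python
import numpy as np

# Closed-form bending frequencies of a clamped-clamped uniform beam,
# f_n = (lam_n^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)), used here only as a toy
# stand-in for the COMSOL eigenvalue analysis. L, b, t are assumed for
# illustration; E and rho are the values quoted in the text.
E, rho = 170e9, 2329.0                 # Young's modulus (Pa), density (kg/m^3)
L, b, t = 0.30, 0.02, 2.5e-3           # length, width, thickness (m) -- assumptions
A = b * t                              # cross-sectional area (m^2)
I = b * t**3 / 12.0                    # second moment of area (m^4)

lam = np.array([4.730, 7.853, 10.996]) # clamped-clamped mode constants (beta_n * L)
f = lam**2 / (2*np.pi*L**2) * np.sqrt(E*I/(rho*A))
print(np.round(f, 1))                  # first three bending frequencies (Hz)
```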
Performance analysis of thickness improvement model
A review was conducted to improve the noise transmission performance of the bellows by changing the structure without filling the inside with a sound-absorbing material. To improve the transmission loss, it is effective to increase the thickness of the bellows; therefore, based on the basic theory of transmission loss, the effect of the silicone rubber thickness on the increase in transmission loss was examined. Although the bellows comprise a number of corrugations, a flat silicone rubber plate was assumed to simplify the analysis. In the case of a single partition, the transmission loss was calculated as follows. It was assumed that the partition is surrounded by an air layer of the same medium on both sides and is flat, nonporous, and flexible. Neglecting the stiffness and damping terms, the sound power transmission coefficient (τ) can be calculated as 17

τ = 1/[1 + (ωm/2ρc)^2]

where m denotes the mass per unit area of the bellows; ρ and c denote the density and speed of sound of the air medium, respectively; and ω and η denote the angular frequency and the in-vacuo loss factor, respectively (the loss factor drops out in this limit). The transmission loss (TL) can then be obtained as

TL = 10 log10(1/τ)

The surface density of the silicone rubber was 3.33 kg/m^2, and the frequency range of interest was set from 20 to 3000 Hz. According to this general transmission loss (mass) law, the quantitative relation between the thickness and the transmission loss of the bellows can be described as follows: the mass per unit area is proportional to the thickness, m ∝ t, where t denotes the thickness of the bellows. Therefore, in the mass-law region the transmission loss increases by 20 log10 of the thickness ratio; thus, as the thickness increases, the TL increases as well. Therefore, from the viewpoint of transmission loss, the thickness of the bellows should be increased. However, an increased thickness may increase the manufacturing cost of the bellows and may interfere with their movement. Therefore, a suitable thickness of the bellows should be chosen considering various aspects. Based on this relationship, when the thickness of the silicone rubber is 2.5, 3.0, 3.5, and 4.0 mm, the transmission loss can be calculated as shown in Figure 10. Because it is not easy to measure or analyze the spring constant of the bellows, the comparison focuses on the region above the resonance frequency; there, the transmission loss increases as the thickness increases. When the thickness increases to 4.0 mm, there is a 4.1 dB increase compared with the 2.5 mm thickness. Further, when the thickness is increased to 3.5 and 3.0 mm, the increase is 2.9 and 1.6 dB, respectively.

The bellows are in the form of a tube, and therefore, they can be viewed as a two-layer structure rather than a single-layer structure. Accordingly, an analysis of the transmission loss was performed based on the two-layer structure. The theoretical treatment of the transmission loss of a two-layer structure is as follows. First, m1 and m2 are taken as the masses per unit area of the two silicone panels, and the distance between the two panels is taken as d. Each panel rests on a viscously damped elastic suspension with stiffness coefficients s1 and s2 and damping coefficients r1 and r2, respectively. Assuming that the double panels are in the same medium (air), the pressure amplitude transmission coefficient can be calculated from the panel impedances. Furthermore, z1 and z2 are each expressed as specific mechanical impedances, where η1 and η2 denote the respective mechanical loss factors, and ω1 and ω2 represent the in-vacuo natural frequencies (Hz). In this case, the impedance of each panel can be divided into an acoustic radiation term and an acoustic stiffness term, respectively. If acoustic damping, mechanical damping, and stiffness are neglected, the transmission coefficient exhibits a maximum at

f0 = (1/2π)·√[ρc^2(m1 + m2)/(d·m1·m2)]

This is called the "mass-air-mass resonance frequency," and it tends to decrease as the distance between the two panels increases.
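As a quick numerical check of these textbook relations, the normal-incidence mass law and the mass-air-mass resonance can be evaluated for the four thicknesses. In the sketch below, the air properties and the panel gap d are assumed values, and the 3.33 kg/m^2 surface density at 2.5 mm from the text is used to scale the mass with thickness:

```python
import numpy as np

# Normal-incidence mass law for a single limp panel, TL = 10*log10(1 + (w*m/(2*rho*c))^2),
# plus the double-panel mass-air-mass resonance f0; standard textbook forms, with the
# stiffness/damping terms of the full expressions neglected.
rho, c = 1.21, 343.0                       # air density (kg/m^3) and sound speed (m/s)
rho_r = 3.33 / 2.5e-3                      # bulk density implied by 3.33 kg/m^2 at 2.5 mm
f = np.array([125.0, 500.0, 1000.0])       # example frequencies (Hz)

for t in (2.5e-3, 3.0e-3, 3.5e-3, 4.0e-3): # panel thickness (m)
    m = rho_r * t                          # mass per unit area scales with thickness
    TL = 10*np.log10(1 + (2*np.pi*f*m/(2*rho*c))**2)
    print(f"t = {t*1e3:.1f} mm : TL = {np.round(TL, 1)} dB")

m1 = m2 = rho_r * 3.5e-3                   # double panel, 3.5 mm each
d = 0.01                                   # panel gap (m) -- assumed value
f0 = np.sqrt(rho*c**2*(m1 + m2)/(d*m1*m2)) / (2*np.pi)
print(f"mass-air-mass resonance ~ {f0:.0f} Hz")  # ~390 Hz with this assumed gap
```

With these assumptions, the single-panel increments relative to 2.5 mm come out near 1.6, 2.9, and 4.1 dB, matching the figures quoted above, and a gap of about 10 mm would place the mass-air-mass resonance near the ~400 Hz reported in the verification test later in this paper.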
Based on the above equation, the transmission loss of the silicone rubber in the double structure is as shown in Figure 11 for thicknesses of 2.5, 3.0, 3.5, and 4.0 mm. In this case, the result is governed by the "mass-air-mass resonance frequency" in addition to the panel resonance frequency. The panel with the highest transmission loss is the 4.0 mm one, which shows an increase of 8.2 dB in the region above the "mass-air-mass resonance frequency" compared with the existing 2.5 mm panel. In addition, the 3.5 and 3.0 mm thick panels showed increases of 5.8 and 3.2 dB, respectively. However, in the region below the resonance frequency, when the thickness increased to 4.0, 3.5, and 3.0 mm, the transmission loss decreased by 4.2, 2.9, and 1.6 dB, respectively. The improvement in the noise performance of the bellows targeted in this study is more than 3.0 dB over the entire frequency range. As the thickness of the bellows increases, the manufacturing cost increases, and the resistance to curved running caused by contraction and expansion may also increase. Therefore, verification was performed by setting the actual thickness of the bellows to 3.5 mm, based on the transmission loss analysis of the simple model performed previously. However, if the thickness is increased to 3.5 mm, the weight of the bellows increases. Therefore, a model in which the number of corrugations was reduced from eight to seven was designed, as shown in Figure 12.

The natural frequencies and eigenmodes of the proposed model were analyzed. First, the natural frequencies were derived from the lowest frequency region (as listed in Table 2). From the derived results, it was confirmed that a number of natural frequencies exist from 90.1 Hz upward. Sequentially deriving the calculated natural frequencies from the low-frequency region confirmed that 20 natural frequencies exist up to 1212.7 Hz. These results confirm that the number of natural frequencies in the low-frequency region of the proposed model is reduced compared with that of the existing model. In addition, because the noise performance in the low-frequency region is sensitive to the vibration modes, a review of the related modes was performed, as shown in Figure 13. First, at 90 Hz, the first natural vibration mode, the outer corrugation part vibrates outward. In the second natural mode, at 100 Hz, the inner corrugation part vibrates inward. It was confirmed that the interval between the first and second natural frequencies was slightly larger than for the existing bellows model. In addition, in the region of 109 Hz, the third natural frequency, vibration was generated in the form of a number of corrugations shaking in the lateral direction. Next, an acoustic-structure coupling analysis was performed on the external acoustic incidence model based on the simple model to analyze the effect on the transmission loss when an external sound is incident on the bellows. First, as shown in Figure 14, a model for the air layer was added in the space, and the noise transmitted through the inside of the bellows was analyzed in an environment where a plane wave of 1 Pa is horizontally incident from the outside. The number of meshes in the analysis model was 70,804, and the mesh was dense because of the complex shape around the bellows. The transmission loss for sound incident on the bellows in the low-frequency region was analyzed as shown in Figure 15.
The analysis results in the 31.5 Hz region (1/3-octave band) indicate that the noise incident from the outside of the bellows passes through the inside of the cavity and is transmitted to the interior area of the train. In addition, it can be seen that the noise was reduced step by step. When noise of 31.5 Hz is generated outside and is incident, the vibration of the bellows shows the outer corrugations vibrating inward. Next, noise generated from the outside of the bellows is transmitted through the inside of the bellows in the 63 Hz region, as in the 31.5 Hz region. The vibration at this time shows that the outer corrugations vibrated inward. The transmission loss was calculated as shown in Figure 16 from the incident sound pressure and the sound pressure transmitted through the bellows. From the results, the transmission loss increased in the low-frequency region when the thickness was increased to 3.5 mm. Sound propagation was found in the low-frequency regions of 31.5 and 63 Hz, and the transmission loss improved there as the vibration modes decreased.

Performance verification test
The noise reduction performance of the improved model was validated by comparison with the existing bellows. To compare and examine the noise reduction performance of the improved model, a comparative test was performed with the existing bellows. The noise was measured based on the widely used measurement and analysis criterion 18 for the noise reduction performance of bellows. In this measurement, the transmission loss of the bellows can be obtained as

TL = L1 − L2 + 10 log10(A0/A)

where L1, L2, A0, and A denote the energy-averaged sound pressure level in the source room, the energy-averaged sound pressure level in the receiving room, the surface area of the bellows, and the equivalent absorption area in the receiving room, respectively. In this measurement, the surface area of the bellows was 3.64 m^2. The reverberation chamber method was employed, in which the sound absorption rate and transmission loss are measured from the reverberation time once the acoustic energy density corresponds to a diffuse sound field within the reverberation chamber. Furthermore, the test area of the bellows was calculated based on the horizontal incidence area. The acoustic attenuation index of the bellows was calculated based on the difference between the average sound pressures in the source room and the receiving room, the area of the sample, and the sound absorption capacity of the receiving room. As shown in Figure 17, a bellows was installed between the two reverberation chamber spaces. In one space, two MicroVee speakers (Velodyne) were placed as the sound source. In addition, five B&K noise sensors were installed at a height of 1.5 m in each room to measure the generated noise. To measure the noise reduction index of the specimen, the source room must first be energized with sound from the speaker. Then, a sound level meter in the receiving room measures the reverberation time from the decay of the sound level. The reverberation time is the time the sound pressure level takes to decrease by 60 dB after the sound source is abruptly switched off.
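A minimal sketch of this room-to-room evaluation is given below; the sound pressure levels, room volume, and reverberation time are illustrative placeholders (only the 3.64 m^2 specimen area is taken from the text), with the equivalent absorption area estimated via the Sabine relation:

```python
import numpy as np

# Room-to-room noise reduction index: TL = L1 - L2 + 10*log10(A0/A).
# Levels, room volume, and reverberation time are illustrative placeholders;
# only the 3.64 m^2 specimen area comes from the text.
L1, L2 = 95.0, 60.0            # average SPL in source / receiving room (dB)
A0 = 3.64                      # bellows surface area (m^2)
V, T60 = 80.0, 1.8             # receiving-room volume (m^3), reverberation time (s)
A = 0.161*V/T60                # Sabine equivalent absorption area (m^2)
TL = L1 - L2 + 10*np.log10(A0/A)
print(f"A = {A:.2f} m^2, TL = {TL:.1f} dB")
```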
In railway vehicles, the bellows have a central passage to facilitate movement between cabins. Therefore, a sound insulation wall, with a height, width, and depth of 2500, 1700, and 95.5 mm, respectively, was implemented (Figure 18) to ensure that the sound did not propagate through the passage of the bellows. The sound insulation wall was composed of medium-density fiberboard, concrete plaster, and rock wool, and was primarily made of 2400 mm concrete. The transmission losses of the improved and existing bellows were compared, as shown in Figure 19. The noise reduction performance was measured based on the widely used measurement and analysis criterion 18, in which the noise reduction index is evaluated between 100 and 5000 Hz. It can be seen that the transmission loss of the improved bellows increased compared with the existing bellows: compared with the 30 dB transmission loss of the existing bellows, the transmission loss of the improved bellows was 33 dB. In addition, because the bellows have a large number of corrugations and a complex structure, the actual measurement results confirmed that a number of valleys exist in the curves. For both the improved and existing models, the "mass-air-mass resonance frequency" was found to be approximately 400 Hz; this is because both models have the same panel spacing. In addition, it was confirmed that there was a difference in transmission loss in the low-frequency region because of the difference in the spring constant caused by the thickness change. Furthermore, the comparison in the frequency domain was analyzed as shown in Figure 20. From the analysis results, it was confirmed that the noise performance improved the most at 160 Hz, by 5.6 dB. The natural frequency analysis above confirmed that, if the thickness is increased, the natural frequencies increase because of the increase in spring stiffness, and thus, the transmission loss in the low-frequency region is improved. Furthermore, it was confirmed that the transmission loss of the enhanced model also improved above 4000 Hz.

Conclusion
The bellows of a railway vehicle are an area vulnerable to noise because of their low transmission loss among the components of the vehicle. Therefore, this study investigated an approach to improve the transmission loss of bellows by changing the thickness. From the analysis of the natural vibration modes of the simple model, it was found that, in the low-frequency region of the bellows, a number of vibration modes occurred because of the vibration of the outer and inner parts. Therefore, to improve the transmission loss of the bellows, an analysis was performed based on transmission loss theory by increasing the thickness from 2.5 to 3.0, 3.5, and 4.0 mm. The results of the analysis confirmed that the noise performance improved by 3.66 dB when the thickness was increased to 3.5 mm in the double-layer structure. In addition, the noise due to transmission significantly decreased in the low-frequency region after an acoustic-structure coupling analysis was performed based on the simple model. A verification test for actual performance was then performed: a noise comparison test between the existing and improved bellows. The results confirmed that when the thickness was increased to 3.5 mm, the existing noise reduction index improved by 3 dB, from 30 to 33 dB. In the frequency domain, the noise performance improved the most at 160 Hz, with an improvement of 5.6 dB. The bellows developed in this study are meaningful in that the weight was reduced by lowering the number of corrugations even though the thickness was increased to improve the transmission loss.
Declaration of conflicting interests The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
New Insights on Solvent-Induced Changes in Refractivity and Specific Rotation of Poly(propylene oxide) Systems Extracted from Channeled Spectra

Investigation of chiroptical polymers in the solution phase is paramount for designing supramolecular architectures for photonic or biomedical devices. This work is devoted to a case study of poly(propylene oxide) (PPO) optical activity in several solvents: benzonitrile, carbon disulfide, chloroform, ethyl acetate, and p-dioxane. To attain information on the interactions in these systems, rheological testing was undertaken, showing distinct variations of the rheological parameters as a function of the solvent type. These aspects are also reflected in the refractive index dispersive behavior, from which linear and non-linear optical properties are extracted. To determine the circular birefringence and specific rotation of the PPO solutions, the alternative method of the channeled spectra was employed. The spectral data were correlated with molecular modeling of the PPO structural unit in the selected solvents. Density functional theory (DFT) computational data indicated that rotation around the O1-C2-C3-O4 dihedral angle of the polymer repeating unit was hindered in solvation environments characterized by high polarity and the ability to interact via hydrogen bonding. This was in agreement with the optical characterization of the samples, which indicated a lower circular birefringence and specific rotation for the solutions of PPO in ethyl acetate and p-dioxane. Also, the shape of the optical rotatory dispersion curves was slightly modified for PPO in these solvents compared with the other ones.

Introduction
Most natural macromolecular compounds display an intriguing structural feature known as chirality or handedness [1,2]. The latter denotes that it is not possible to superimpose a chemical structure on its mirror image, even if rotation or translation movements are applied. This aspect has tremendous importance in a variety of biological and physiological processes [3]. Chiral materials have attracted the scientific community's attention in efforts to elucidate how nature's refined self-assembly procedures can be harnessed to create new functional architectures with unique properties [4,5]. After many efforts, it was demonstrated that the chirality of polymers can be generated by stereogenic centers in the backbone or by the chirality of peculiar segments (i.e., nucleotides). In other cases, it was found that certain macromolecules exhibit chiral architectures as a result of the induced conformation of the chains, which are built from achiral subunits [5].
A prevalently encountered chiral architecture is the helical one, and the sense of helicity may be tuned in several ways. At the molecular level, the chiral structure of macromolecular compounds exists in two principal forms, namely configurational and conformational chirality. The first refers to the asymmetric spatial arrangement of atoms/groups in the chains [6]. It is worth mentioning that, in this situation, the length and the dimensions of the substituents are responsible for potential racemization. Thus, it is essential to monitor the stereoregularity of all macromolecular segments. Changes in the configuration entail the breaking and reorientation of covalent bonds to produce new ones, resulting in products with similar molecular structures but distinct stereoregularities [4]. Regarding conformational chirality, transitions can be easily attained when rotations of single bonds occur, modifying the spatial disposition of chemical groups [4]. Based on these findings, tailoring the handedness by adapting the chemical structure of natural polymers [7] or by the preparation of synthetic polymers [8] has been attempted. Both categories of materials have practical implications in biomedicine [9], catalysis [10], pharmaceutics [11], and optoelectronics [12].

Particular interest was given to the study of the interaction of optical radiation with chiral polymers to emphasize the factors that affect the optical activity in connection with the compound's stereochemistry [13]. Some reports show that the optical rotation is influenced not only by the chemical structure but also by the insertion of a salt additive in the polymer solution [14,15], pH [16], temperature [17,18], and the solvent type and mixture [19][20][21]. Concerning the solvent's implications for the optical activity, it was proved that the solvent's nature has effects on the interactions in the system and also on the conformation (via changes in some dihedral angles) [19,22]. Deep knowledge of these aspects is beneficial for designing novel materials with tailored optical activity.
Generally, the optical activity is experimentally determined by means of polarimetry, which involves the travelling of polarized light through the examined sample and evaluates the rotation angle of the vibration plane of the radiation's electric field upon exiting the chiral medium. Most polarimeters enable experiments at a fixed wavelength, while the devices that allow for the evaluation of the optical rotatory dispersion (ORD) are more expensive. An alternative procedure to determine the ORD was recently developed, relying on the channeled spectrum [23][24][25]. The latter is defined as a spectrum in which the intensity presents a periodic modulation as a function of the wavenumber, and it is a useful tool for the examination of dispersion properties [26,27]. Thus, a spectrum channel can be regarded as the region of the spectrum between two successive minima. For such measurements, two polarizers are introduced in the measurement beam of the spectrophotometer with the cell containing the studied solution placed between them, while two other polarization filters are inserted in the compensatory beam [24,28]. Upon exiting the anisotropic medium, the projection of the electric field intensity can be described via an expression relating the circular birefringence (∆n), concentration, and thickness of the sample. The radiation emerging from the chiral polymer solution presents components with various degrees of rotation. By depicting the minima and maxima of the emergent flux density (based on the recorded channeled spectra) as a function of the wavelength, it is possible to determine the birefringence and its dispersion parameter (δ). Since the specific rotation ([θ]) is linked to the birefringence, it is then straightforward to compute the optical rotation at various wavelengths.

Among the optically active polymers, poly(propylene oxide) (PPO) is a synthetic polymer that displays alternating hydrophobic and hydrophilic units in its chains [29]. Thus, PPO is characterized by good solubility in many solvents. Also, this macromolecular material lacks toxicity, being used in the preparation of cosmetics, food products, excipients for drugs, and components for non-linear optics or photonics [30,31]. In a previous work [32], the ORD of PPO was measured in a benzene medium using the channeled spectra technique, and it was found that the specific rotation decreased as the light wavelength became higher.
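The conversion from the circular birefringence extracted from a channeled spectrum to the rotation angle and specific rotation can be sketched as follows, using the standard relation α = πΔnL/λ and the usual polarimetric normalization; all numerical values below are assumed placeholders rather than the measured PPO data:

```python
import numpy as np

# From circular birefringence to rotation: alpha = pi*dn*L/lambda (rad), and
# specific rotation [theta] = alpha_deg/(l_dm*c) in deg dm^-1 (g/mL)^-1.
# All numbers are assumed placeholders, not the measured PPO data.
lam = np.array([450e-9, 550e-9, 650e-9])      # wavelengths (m)
dn = np.array([2.6e-8, 1.9e-8, 1.5e-8])       # circular birefringence (placeholder)
L = 0.05                                      # cell path length (m) -- assumed
c = 0.02                                      # concentration (g/mL) -- assumed

alpha_deg = np.degrees(np.pi*dn*L/lam)        # rotation angle (deg)
spec_rot = alpha_deg/(L*10*c)                 # path length converted to dm
for w, a, s in zip(lam*1e9, alpha_deg, spec_rot):
    print(f"{w:.0f} nm: alpha = {a:.3f} deg, [theta] = {s:.1f}")
```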
This paper contributes to previous efforts [32] by examining the solvent-induced effects on the ORD of PPO solutions in benzonitrile, carbon disulfide, chloroform, ethyl acetate, and p-dioxane. A deeper investigation was conducted to acquire a better comprehension of the impact of solvent features on variations in the PPO optical behavior. Rheological testing was performed to capture information on the correlation between the shear response and PPO-solvent interactions. Density functional theory (DFT) computations performed on the PPO structural unit in the presence of solvent molecules revealed the possible formation of hydrogen bonding depending on the solvent type. The latter also affected the refractive index dispersive properties; thus, the linear and non-linear optical parameters of PPO can be tailored by proper selection of the solvation medium. Further, the channeled spectrum approach was applied to the prepared systems to gain new insights into the solvent's implications for the PPO circular birefringence, its dispersion parameter, and the specific rotation at various wavelengths. It was found that polymer-solvent interactions produce significant modifications in the aspect of the channeled spectra, including in the ORD curves. This was also supported by the DFT studies on the torsional potential energy for the O1-C2-C3-O4 dihedral angle in PPO, which indicated more hindered rotation in the polar solvents. Such a correlation among molecular modeling and rheological and optical data has not yet been reported, and it brings novel insights into the role of chiral polymer-solvent compatibility in tailoring the rheological and optical properties of PPO. Another original aspect of the present paper resides in determining the circular birefringence and optical rotatory dispersion curves by an alternative new method involving the channeled spectrum. These new findings contribute to the comprehension of the solution behavior of chiral polymer structures in environments having distinct polarities and variable hydrogen bonding abilities in relation to the material's optical performance. This lies at the basis of the careful design of macromolecular architectures of practical importance in photonics and biomedical devices.

Rheological Behavior
Rheological testing of the PPO solutions was carried out to probe their flow behavior under imposed stress. Figure 1a depicts the shear viscosity against the shear rate for the PPO systems. The studied samples displayed a pseudoplastic behavior, and no zero-shear viscosity plateau was recorded, indicating that under shear deformation the PPO chains aligned. It was found that the sample resistance to flow differed as a function of the solvent peculiarities. At low shear rates, the PPO solutions' viscosity was affected by the intermingling effects of solvent viscosity and solubility peculiarities (i.e., polarity and hydrogen bonding ability). As a general tendency, less viscous and low-polar solvation media resulted in lower solution viscosity. Further information was extracted from the dependence of the shear stress on the shear rate (shown in Figure 1b), which can be fitted with the power-law relation (1): σ = K0(γ̇)^nflow, where σ is the shear stress,
γ̇ is the shear rate, nflow is the flow behavior index, and K0 is the solution consistency index. By linearization of Equation (1), it is possible to evaluate the flow behavior index and the solution consistency index of each PPO system. The attained values for nflow and K0 are listed in Table 1. The deviation of the flow behavior index from unity denotes that the analyzed fluid had no Newtonian character; instead, it presented shear-thinning flow properties (the viscosity was dependent on the applied deformation). Hence, the PPO solutions in benzonitrile, ethyl acetate, and p-dioxane presented values of nflow under 0.5, meaning that the viscosity had a steeper variation with the imposed shear rate. On the other hand, the PPO samples in chloroform and carbon disulfide displayed higher values for the nflow parameter. This was probably due to the smaller size of such low-polar solvent molecules, which determine less polymer uncoiling, so that the chain orientation during shearing was reduced, and hence, the sample viscosity changed less abruptly upon shearing. This means that when specific molecular interactions take place between solute and solvent, the PPO chains are better penetrated by the polar molecules (benzonitrile, p-dioxane, or ethyl acetate), and they have a higher ability to uncoil and align under imposed deformation. The formation of specific interactions, like hydrogen bonding, is reflected in the lowest nflow values because the macromolecular coils suffer more extensive unwinding, which enables stronger orientation in the shear field. The specific molecular interactions that may or may not occur in the PPO samples are additionally reflected in the distinct values of the consistency index. As noted in Table 1, K0 varied as a function of the mixed effects of solvent viscosity and its ability to interact with PPO. When the polymer interacted with the solvent via specific interactions, the sample consistency was higher (e.g., PPO/p-dioxane), while in the case of weaker PPO-solvent interactions, a lower consistency index was observed (e.g., PPO/carbon disulfide). A deeper comprehension of the solvent's effects can be acquired by accounting for the solubility parameter. The dispersive (δd), polar (δp), and hydrogen bonding (δh) components of the Hansen solubility parameter (HSP) for the chosen solvents were taken from the literature [33] and are plotted in Figure 2a in comparison with the solubility features of PPO.
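The linearization of Equation (1) mentioned above amounts to a straight-line fit of log σ versus log γ̇; a minimal sketch with synthetic data is given below (the generated values are placeholders for the measured flow curves):

```python
import numpy as np

# Power-law (Ostwald-de Waele) fit, sigma = K0*gdot**n_flow, by linearization:
# log(sigma) = log(K0) + n_flow*log(gdot). Synthetic data stand in for the
# measured flow curves.
gdot = np.logspace(-1, 3, 20)                               # shear rate (1/s)
noise = 1 + 0.02*np.random.default_rng(0).standard_normal(20)
sigma = 2.1*gdot**0.42*noise                                # synthetic shear stress (Pa)

n_flow, logK0 = np.polyfit(np.log(gdot), np.log(sigma), 1)  # slope, intercept
print(f"n_flow = {n_flow:.3f}, K0 = {np.exp(logK0):.3f}")
```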
In the case of the investigated polymer, the HSP values were obtained with the help of the new group-contribution approach developed by Stefanis and Panayiotou [34]. When the total solubility parameter (δt) values are closer, the solvent and polymer are more compatible, as observed for ethyl acetate and p-dioxane with regard to PPO. Based on these data, the dissolvability of PPO in the analyzed solvents can be assessed using Equation (2):

R0 = [4(δd,s − δd,p)^2 + (δp,s − δp,p)^2 + (δh,s − δh,p)^2]^(1/2)

where R0 is the solute/solvent interaction radius, while the subscripts "s" and "p" of the HSP components refer to the solvent and polymer, respectively.

The changes in magnitude of the interaction radius for each PPO/solvent system are presented in Figure 2b. The results indicate that the solubility distance was higher for PPO in carbon disulfide, benzonitrile, and chloroform, while this parameter was lower for PPO in ethyl acetate and p-dioxane. The latter solvents exhibit a polar character, which, when combined with hydrogen bonding ability, led to a better interaction with PPO (Figure 2a,b). Also, the polymer-solvent interactions were evaluated via the flow activation energy, which was determined by fitting the viscosity data (at shear rates close to zero) at several temperatures with the Arrhenius equation (3): η = C·exp(Ea/RT), where C is a constant, Ea is the flow activation energy, R is the universal gas constant, and T is the absolute temperature.
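Equations (2) and (3) can be evaluated with a few lines of code; the sketch below uses approximate literature HSP values for the solvents and an assumed group-contribution-style estimate for PPO, together with synthetic viscosity data for the Arrhenius fit, so the numbers illustrate the procedure rather than reproduce Figure 2b:

```python
import numpy as np

# Hansen distance, Eq. (2): Ra^2 = 4*(dd_s-dd_p)^2 + (dp_s-dp_p)^2 + (dh_s-dh_p)^2,
# and flow activation energy from eta = C*exp(Ea/(R*T)), Eq. (3).
# Solvent HSP triplets are rough literature values (MPa^0.5); the PPO triplet is
# an assumed estimate, so outputs are illustrative only.
solvents = {
    "carbon disulfide": (20.5, 0.0, 0.6),
    "chloroform":       (17.8, 3.1, 5.7),
    "benzonitrile":     (18.8, 12.0, 3.3),
    "ethyl acetate":    (15.8, 5.3, 7.2),
    "p-dioxane":        (17.5, 1.8, 9.0),
}
ppo = (16.5, 3.5, 8.0)                         # assumed PPO (delta_d, delta_p, delta_h)

for name, (dd, dp, dh) in solvents.items():
    Ra = np.sqrt(4*(dd - ppo[0])**2 + (dp - ppo[1])**2 + (dh - ppo[2])**2)
    print(f"{name:16s} Ra = {Ra:5.2f} MPa^0.5")

R = 8.314                                      # J/(mol K)
T = np.array([293.0, 303.0, 313.0, 323.0])     # K
eta = np.array([0.85, 0.62, 0.47, 0.37])       # synthetic near-zero-shear viscosities (Pa s)
slope, _ = np.polyfit(1.0/T, np.log(eta), 1)   # slope = Ea/R
print(f"Ea = {slope*R/1000:.1f} kJ/mol")
```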
The flow activation energy denotes the energy demanded by the molecules to acquire mobility against the frictional forces of the adjacent ones. When PPO was dissolved in solvents with low polarity (carbon disulfide, chloroform, benzonitrile), the interactions in such systems were weaker, and this was reflected in smaller values of the Ea parameter (Figure 2b). Conversely, the magnitude of Ea was higher for the PPO solutions in polar solvents with a hydrogen bonding ability (like p-dioxane and ethyl acetate). This is because, when solute-solvent interactions are more powerful, a higher restrictive force exerted by the macromolecules must be overcome for the solution to flow. This is supported by literature data [35,36] and by molecular modeling computations for the studied PPO structural unit in the presence of two solvent molecules, as shown in Figure 2c.

Figure 2c depicts an illustrative scheme based on the computational data, revealing the sites in the PPO structure that are available for the formation of hydrogen bonds with certain solvent molecules. The optimized 3D molecular structures reflect the most stable geometries, which have an essential role in the magnitude of the optical rotation. It is expected that the nature of the selected solvent affects the conformation of the PPO repeating unit as well as the torsion angles or, potentially, the lengths of specific bonds. According to the performed simulations, the PPO structural unit prevalently formed hydrogen bonds with the solvents having larger values of δh (e.g., p-dioxane). When the PPO structural unit was under the influence of a non-polar solvent (e.g., carbon disulfide), the solute-solvent interactions were weak. This is supported by the literature [37], which explains that high solvent polarity is responsible for raising the rotation barrier of certain covalent bonds. In a polar solvent, the cohesion energy of the solute-solvent system is higher (stronger intermolecular forces), so that the overall mobility of the solute molecules in such a medium is diminished (greater resistance to torsional deformation). Hence, when the PPO structural unit is placed in a polar solvent with a high probability of forming hydrogen bonds (p-dioxane, ethyl acetate), the free rotation of certain substituents on the asymmetric carbon of the PPO structural unit is constrained, and a higher rotational barrier must be overcome to adopt different molecular conformations. The literature [38,39] states that greater mobility of the substituents of the stereogenic centers leads to a wider range of molecular conformations, each of which contributes to the optical rotation. Moreover, the literature [38] indicates that the solvent's nature has a distinct influence on each substituent of a chiral molecule. For instance, polar substances display powerful intermolecular interactions with each other, producing non-significant changes in the contact radii of the substituents; the latter aspect is correlated with small values of the optical rotation [38]. It can be assessed that, when optical radiation passes through the PPO solutions in polar solvents, the changes in the rotation angle of the light polarization plane are limited compared with those of PPO in less polar media. In addition, solvent molecules having a similar ability to connect via hydrogen bonds with the chiral molecule produce distinct effects on the conformation and, consequently, on the optical activity [22]. This aspect is also reflected in the optical anisotropy of the samples, since PPO contains an asymmetric
This aspect is also reflected in the optical anisotropy of the samples, since PPO contains an asymmetric carbon atom and a chiral plane, which can be twisted as a function of the solvation environment's polarity and hydrogen bonding ability. The literature [40] on molecular modeling of PPO in other solvents reveals that the conformation of this polymer is gauche-like in polar solvents (water and methanol), whereas a preference for the trans conformation prevails in non-polar solvents (carbon tetrachloride and n-heptane) as a result of specific interactions via hydrogen bonding.

The rheological analyses were continued by performing shear oscillatory experiments, and the obtained data are plotted in Figure 3. A frequency sweep of all samples revealed that, at low frequencies, the loss modulus (G″) was higher than the storage one (G′). Thus, the frequency (f) response of the rheological moduli was different, namely G′ ~ f² and G″ ~ f¹. Also, no plateau appeared in the storage modulus curves, indicating a rheological behavior typical of viscoelastic fluids [41]. As the frequency increased further, the two moduli became equal at a specific frequency, and then the elastic component overcame the viscous one. The crossover frequency (fc) varied for the PPO solutions as a function of the solvent in the following order: p-dioxane < benzonitrile < chloroform < ethyl acetate < carbon disulfide. This means that more viscous PPO solutions require a larger relaxation time and, as a result, their rheological moduli become equal at a smaller crossover frequency. Thus, the balance between the viscous and elastic characteristics of the PPO samples is affected by the type of solvent. Another rheological report [42] on the shear oscillatory properties of PPO revealed that the distinct behavior of this polymer was associated with a polysiloxane cross-linker; in that case, the elastic modulus was larger than the viscous one over the entire shear frequency domain.
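As a minimal illustration of how the crossover frequency fc can be extracted from a frequency sweep, the sketch below synthesizes moduli obeying the G′ ~ f² and G″ ~ f¹ scalings noted above and locates the frequency at which G′ = G″; the prefactors are hypothetical stand-ins for the data in Figure 3.

```python
import numpy as np

# Hypothetical frequency sweep of one PPO solution: G' ~ f^2, G'' ~ f^1
f = np.logspace(-1, 1.7, 50)            # frequency grid, Hz (0.1-50 Hz window)
G_storage = 0.8 * f**2                  # storage modulus G' (Pa), illustrative
G_loss = 12.0 * f                       # loss modulus G'' (Pa), illustrative

# The crossover is where log(G'/G'') changes sign
ratio = np.log(G_storage / G_loss)
idx = np.argmax(ratio > 0)              # first point where G' exceeds G''
# Linear interpolation in log-log coordinates between idx-1 and idx
x0, x1 = np.log(f[idx - 1]), np.log(f[idx])
y0, y1 = ratio[idx - 1], ratio[idx]
f_c = np.exp(x0 - y0 * (x1 - x0) / (y1 - y0))
print(f"crossover frequency f_c ≈ {f_c:.2f} Hz")
```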
Linear and Non-Linear Optical Parameters

The refractivity of a polymer solution can be regarded as a good indicator of the light speed and deflection characteristics upon traveling through it. The refractive index (n) of each PPO system was measured at different wavelengths, as illustrated in Figure 4a. The results prove that the samples exhibit normal dispersion curves. Regardless of the incident light wavelength, the refractive index magnitude of the PPO solutions changed as a function of the solvent, as follows: carbon disulfide > benzonitrile > chloroform > p-dioxane > ethyl acetate. This means that more sudden changes in the direction of light waves, with regard to their orientation in the incident medium, occurred in PPO/carbon disulfide and less in PPO/ethyl acetate. The refractive index dispersion of PPO and PPO solutions has not been published yet. However, for PPO, a refractive index of 1.457 has been reported [43].

From the dispersion data, additional information can be obtained by applying the theory of Wemple and DiDomenico [44], expressed through relation (4):

n²(E) − 1 = Ed·E0/(E0² − E²) (4)

where E is the photon energy, E0 refers to the average excitation energy for electronic transitions, and Ed refers to the dispersion energy. By graphical representation of 1/(n² − 1) as a function of the square photon energy, as presented in Figure 4b, and fitting with a linear function, the values of E0 and Ed were obtained. The results are summarized in Table 2.

Table 2. The data of the dispersion energy (Ed), single-oscillator energy (E0), band gap energy (Eg), static refractive index (n0), first-order optical susceptibility (χ(1)), third-order optical susceptibility (χ(3)), and non-linear refractive index (n_nl) of the PPO solutions in several solvents.

These linear optical parameters of PPO denote the influence of the solvent's refractive features. The data indicate that the dispersion energy, which is linked to the medium potency of interband photosensitive transitions, was smaller for PPO in p-dioxane and ethyl acetate due to their higher polar and hydrogen bonding features. Moreover, the single-oscillator energy for electronic transitions had lower values for these systems and therefore smaller band gap energy values compared with the case of PPO in less polar solvents.

Starting from the Ed and E0 data, it was possible to evaluate the static refractive index (n0) at zero photon energy, as depicted in relation (5):

n0 = (1 + Ed/E0)^(1/2) (5)

The results for this parameter are also included in Table 2, showing that solvents like chloroform, carbon disulfide, and benzonitrile led to higher values of n0, while for the polar solvents this parameter had slightly smaller values.

Optical dispersion data also enabled the estimation of certain non-linear optical properties [45], such as the first- and third-order optical susceptibilities (χ(1) and χ(3)) and the non-linear refractive index (n_nl), as shown in relations (6)-(8):

χ(1) = (n0² − 1)/4π (6)

χ(3) = A·(χ(1))⁴, A ≈ 1.7 × 10⁻¹⁰ esu (7)

n_nl = 12π·χ(3)/n0 (8)

All the results concerning the non-linear optical properties are listed in Table 2. It was found that the values of the χ(1) parameter were slightly smaller for the polymer solutions in p-dioxane and ethyl acetate with regard to the other studied systems, whereas χ(3) displayed a change of one order of magnitude for benzonitrile and carbon disulfide in comparison with the remaining solvents. The non-linear refractive index of the PPO samples ranged differently as a function of the solvent characteristics (polarity and molecular symmetry).
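The full chain from dispersion data to the non-linear parameters can be condensed into a few lines. The sketch below fits the linearized form of relation (4), 1/(n² − 1) = E0/Ed − E²/(E0·Ed), and then evaluates relations (5)-(8); the refractive index series is a hypothetical example, not the measured data of Figure 4.

```python
import numpy as np

# Hypothetical dispersion data for one PPO solution (not the paper's values):
# refractive index n at several photon energies E (eV).
E = np.array([1.6, 1.9, 2.2, 2.5, 2.8])
n = np.array([1.471, 1.474, 1.478, 1.483, 1.490])

# Wemple-DiDomenico single-oscillator fit, relation (4), linear in E^2:
# 1/(n^2 - 1) = E0/Ed - E^2/(E0*Ed)
y = 1.0 / (n**2 - 1.0)
slope, intercept = np.polyfit(E**2, y, 1)
E0 = np.sqrt(-intercept / slope)
Ed = -1.0 / (slope * E0)
n0 = np.sqrt(1.0 + Ed / E0)                 # relation (5)

# Non-linear parameters, relations (6)-(8) (Miller's rule, A in esu)
chi1 = (n0**2 - 1.0) / (4.0 * np.pi)        # relation (6)
chi3 = 1.7e-10 * chi1**4                    # relation (7)
n_nl = 12.0 * np.pi * chi3 / n0             # relation (8)

print(f"E0 = {E0:.2f} eV, Ed = {Ed:.2f} eV, n0 = {n0:.3f}")
print(f"chi(1) = {chi1:.3e}, chi(3) = {chi3:.3e} esu, n_nl = {n_nl:.3e} esu")
```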
Circular Birefringence and Specific Rotation via Channeled Spectra Approach

The circular birefringence and optical activity of each studied system were evaluated using the channeled spectrum approach. The benefit of this method is that it facilitates acquiring information on these optical parameters over a wide wavelength interval in the visible range during a single experiment. Figure 5 illustrates the channeled spectra achieved for the PPO-solvent mixtures. The resulting spectra had distinct shapes as a function of the solvent characteristics. When introducing PPO in solvation media characterized by a large polarity and a medium hydrogen bonding ability, such as p-dioxane and ethyl acetate, the spectrum contained a smaller number of channels, which were wider. On the other hand, solvents characterized by a poor capacity to interact with PPO via hydrogen bonding, like benzonitrile, carbon disulfide, or chloroform, led to spectra with narrow and multiple channels, as noted in Figure 5.
Based on the acquired spectral information, the circular birefringence, its dispersion parameter, and the specific rotation were evaluated. The resulting data are presented at several wavelengths in Figures 6 and 7. The circular birefringence is indicative of the level of optical activity of a medium. As seen in Figure 6a, at about 400 nm, the magnitude of Δn of the samples varied depending on the solvent features. Our data revealed that smaller values of Δn were observed for PPO in p-dioxane, followed by the ethyl acetate solutions. Oppositely, larger values of Δn were noticed for the PPO solutions in carbon disulfide, followed by the benzonitrile and chloroform solutions. The achieved circular birefringence results are consistent with other works [46-48]. An advantage of the channeled spectra method is that it allows for the estimation of Δn at several wavelengths. As shown in Figure 6a, the Δn dispersion curves show a smaller rate of variation for the solvents characterized by a medium hydrogen bonding ability. Conversely, for the solvents with a poor predisposition for such interactions, the birefringence dispersion plots have a slightly distinct shape with a bigger rate of variation. The dispersion parameter δ increases linearly with the optical radiation wavelength, as seen in Figure 6b. This dependence was nonsignificantly affected by the solvent in which the PPO was introduced. Moreover, the values of the δ parameter were small for all analyzed samples.
The channeled spectrum approach also allows for the evaluation of the optical activity of the PPO solutions. The results for the specific rotation are displayed in Figure 7a, and they are in agreement with other works employing polarimetry experiments [49,50]. A similar variation of the optical rotation with wavelength was reported for PPO dissolved in p-dioxane, ethyl acetate, and chloroform [49]. It is known that the specific rotation is directly influenced by the circular birefringence. Thus, analogously to Δn, the optical activity of the PPO systems changed when particular solvent features did. As seen in Figure 7a, at about 400 nm, the values of [θ] of the PPO solutions ranked as follows: carbon disulfide > benzonitrile > chloroform > ethyl acetate > p-dioxane. Thus, higher values of [θ] were noticed for the studied polymer dissolved in carbon disulfide and benzonitrile. As reported in other studies [46,47], the polarity and the hydrogen bonding interactions among the solute and solvent molecules caused the modification of the chiroptical response. Also, it is known that polymer-solvent hydrogen bonding affects the solute rotation in that environment [48]. Therefore, the motional dynamics, assessed via the bond rotational barriers, are limited when the intermolecular interactions between the system components become stronger, so the probability of rendering different conformations is obviously reduced. The literature [47] additionally shows that, when a solvent tends to interact with the polymer via hydrogen bonding, the value of the specific rotation is lower compared with the case when the same polymer is placed in a solvent less able to form such interactions.

Additional insights were acquired by performing molecular modeling to extract the torsional potential energy (Ep-tors) profile for the PPO structural unit in different solvation media (Figure 7b). The Ep-tors profiles versus the dihedral angle were computed by undertaking scans in the range from 0° to 360°, in 10° increments, considering the rotation of the O1-C2-C3-O4 group from the PPO structure. During the variation of the torsional angle, distorted and undistorted configurations appeared. Hence, it is possible to detect the dihedral angle interval in which the PPO structural unit is most deformed, which is helpful for understanding the distribution of molecular conformations from a thermodynamic point of view. According to the literature [51], the variation in the torsional barrier might be produced by the supplementary steric repulsion generated by the solvent and by hydrogen bonding. In Figure 7b, the data for Ep-tors against the dihedral angle (achieved using the DFT method) are depicted for the PPO structural unit-solvent systems, showing minima and maxima. The lowest values of Ep-tors denote the equilibrium state (high stability of the molecules), while the maxima reflect an unstable (tensioned) state of the molecules. The plots from Figure 7b indicate that the PPO structural unit presents one or more conformations in distinct positions that can be ascribed to the smallest-energy configuration as a function of the particular solvent features.
The magnitude of the torsional potential energy was higher for the systems where the polar and/or hydrogen bonding interactions were stronger (ethyl acetate, p-dioxane), thus limiting the free rotation of PPO in these solvation media compared with the other ones. To explain this, one must consider that, in this type of solvent, chiral substances (like PPO) display more powerful intermolecular forces (e.g., hydrogen bonding, dipole-dipole forces), which raise the barrier to rotation of the substituents on the asymmetric carbon (steep maxima "blocking" rotation). As a consequence, the torsional potential energy has larger values. Given the aforementioned restriction, the PPO structural unit in polar solvents has a narrow possibility to adopt a variety of molecular conformations that can contribute to the overall optical rotation. In other words, linearly polarized radiation (presumed to consist of two circularly polarized radiations of opposite sign) passing through the chiral system in polar media will encounter fewer conformers able to produce a non-symmetric electromagnetic interaction (both circularly polarized beams are slowed down in almost the same manner). This implies a diminished rotation of the light polarization plane, so the expected optical response in such samples is limited. Conversely, if the optically active molecules interact weakly with the solvent, the torsional potential energy will be low (larger mobility of the substituents on the stereogenic center). This favors acquiring a variety of molecular conformations, which enable an enhanced non-symmetric electromagnetic interaction (the two circularly polarized rays are slowed down differently), therefore introducing circular birefringence after light exits the chiral medium. As a result, the rotation of the light polarization plane is produced to a larger extent. Thus, molecular modeling supports the specific rotation results, indicating that strong solute-solvent interactions are responsible for diminishing the magnitude of [θ]. The obtained data underline the ability of the PPO samples to produce rotation of the light polarization plane and to modulate it as a function of the solvent nature.

Materials

Poly(propylene oxide) (PPO) and the utilized solvents were acquired from Sigma Aldrich (now Merck), St. Louis, MO, USA. The chemical structure of the polymer is displayed in Scheme 1.
The polymer solutions were prepared by dissolving 2.5 g of PPO powder in 100 mL of each of the following solvents, acquired from Sigma Aldrich (now Merck): benzonitrile (anhydrous, ≥99%), carbon disulfide (anhydrous, ≥99%), chloroform (anhydrous, ≥99%, containing 0.5-1.0% ethanol as a stabilizer), ethyl acetate (anhydrous, ≥99.8%), and p-dioxane (anhydrous, ≥99.8%). The pH of the polymer solutions was not varied intentionally during the experiments, and the small variations were caused by the combination of PPO (pH = 4) with the selected solvents, having pH values comprised between 4 and 7.

Characterization

The rheological behavior of the PPO samples was analyzed on a Bohlin CS50 device (Malvern Instruments Ltd., Malvern, UK). The employed measuring system had a cone/plate geometry (4 cm/4°) with a gap of 150 µm. Shear viscosities were registered over a 0.1-100 s⁻¹ shear rate range (at 25 °C), and for the evaluation of the activation energy, the temperature was varied between 25 and 45 °C. Shear moduli were measured within the linear viscoelastic regime of the samples, where the rheological moduli are not affected by the strain amplitude. The frequency sweep experiments involved an applied frequency range of 0.1-50 Hz and a stress of 1 Pa.

The refractometry testing of each PPO system was carried out at 25 °C on multiwavelength Abbe equipment (Anton Paar GmbH, Graz, Austria) having a precision of 10⁻⁴.
The molecular modeling was carried out with the Gaussian G16 software (Gaussian, Inc., Wallingford, CT, USA) [52]. The density functional theory (DFT) approach was used for the proposed computations. The PBE0/6-31+G(d,p) method was employed for optimization and single-point calculations. The scanning of the torsion angle corresponding to the atoms O1-C2-C3-O4 from the PPO repeating unit was performed to search the conformational landscape. This was done by taking into account the dihedral angle and including the solvent effect (implicit model). After attaining the global minimum conformation, two solvent molecules were added in the presence of PPO to test the possible occurrence of specific interactions (e.g., hydrogen bonding). The solute-solvent pair was reoptimized to acquire the equilibrium of the molecular system in an implicit solvent model. Then, the recalculation of the Hessian matrix (frequency calculation) was done to check that negative frequencies were absent, thus confirming the equilibrium state of the system. Based on the theoretical analyses, the distance corresponding to hydrogen bonding was measured to be smaller than 3 Å. The PBE0 functional [53] can predict reliable results concerning the intramolecular/intermolecular structural parameters as well as the conformational effects in organic compounds [54-56]. The 6-31+G(d,p) basis set, employed together with this functional, was reported to give a good estimation of the wave functions of the optimized structures [57].

The channeled spectra of all solutions were recorded at ambient temperature (25 °C) using a UV-VIS spectrophotometer (Carl Zeiss Jena, Jena, Germany). The device had a data acquisition system, which enabled the registering of spectra with a resolution of 0.2 nm. A transparent cell of 2.5 dm in length was employed for the measurements. The experimental setup was described in detail in another work [28]. Briefly, the double-beam spectrophotometer operated using linearly polarized light, which was produced by placing polarizers in the device along the sample beam and along the reference beam. More precisely, the polymer solution cell was inserted in the measurement beam between two polarizers with crossed transmission directions, while in the reference beam the polarizers had a parallel orientation (for absorption compensation). By this procedure, the linearly polarized light passed through the chiral polymer solution, and its ordinary and extraordinary components underwent interference. Thus, the recorded spectral signal intensity presented a periodic modulation as a function of the wavelength. The channeled spectrum was attained with a speed of 2 cm⁻¹/s. In this way, it was possible to discern the changes in the flux density from a channel to the neighboring one.

Theoretical Background

The rotation angle of the polarization plane is affected by several parameters, as depicted in Equation (9):

θ = [θ]·C·L = π·L·Δn/λ0 (9)

where [θ] is the specific rotation, C is the concentration, L is the length of the anisotropic medium, λ0 is the radiation wavelength in a vacuum, and Δn is the rotatory birefringence.
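Equation (9) links a measured birefringence directly to the rotation angle and specific rotation. The short sketch below evaluates it for one wavelength; the birefringence value is an illustrative placeholder, while the cell length and concentration follow the experimental description above.

```python
import numpy as np

# Evaluate Equation (9) for illustrative inputs; delta_n is a placeholder,
# not a measured value from this study.
lambda0 = 400e-9          # vacuum wavelength, m
L = 0.25                  # cell length, m (2.5 dm)
C = 25.0                  # concentration, g/L (2.5 g per 100 mL)
delta_n = 2.0e-7          # circular birefringence at 400 nm, illustrative

theta = np.pi * L * delta_n / lambda0        # rotation angle, rad
specific_rotation = theta / (C * L)          # [theta] = theta/(C*L)
print(f"theta = {np.degrees(theta):.3f} deg, [theta] = {specific_rotation:.3e}")
```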
The polarization plane remains unchanged, or spins by an integer multiple of π, when the radiation displays an electric field orthogonal to the polarizer transmission direction, generating a null flux density. On the other hand, when the polarization plane is changed by spinning with an odd number of π/2, the radiation displays an electric field parallel to the polarizer transmission direction, producing the biggest flux density (the channeled spectrum presents maxima).

The transmission factor of the spectrophotometer, denoted here as T, is defined in Equation (10). The spectrophotometer uses a quasi-equienergetic source of optical radiation, which passes through the polymer solution, and the emergent beams have components of variable degrees of rotation.

Conclusions

This work describes the effects of several solvents on the optical activity of PPO solutions. Rheological analyses revealed that in low-polarity solvents the interactions of the PPO systems were weaker, leading to a lower flow activation energy and a higher solute-solvent interaction radius. Molecular modeling emphasized the sites from the PPO structure that are available for the formation of hydrogen bonds in the presence of certain solvents.

The prepared polymer solutions exhibited normal dispersion curves, and the recorded refractivity was affected by the solvent's optical features. The linear optical parameters, like the dispersion energy and band gap energy, presented lower values for samples in more polar solvents. Also, the first-order optical susceptibility and the non-linear refractive index were slightly smaller for PPO in such solvents. The data attained from the channeled spectrum enabled the evaluation of the circular birefringence, its dispersion parameter, and the specific rotation. At about 400 nm, the circular birefringence was higher for the systems where the solvent had low polarity and a poor ability to connect via hydrogen bonds with PPO. The dispersion parameter of the birefringence displayed small values, while the circular birefringence dispersion curves had slightly different shapes for the systems prepared in polar solvents having a medium capacity to form hydrogen bonds with the polymer. A similar variation was noted for the specific rotation of the samples, which was found to range in the following order: carbon disulfide > benzonitrile > chloroform > ethyl acetate > p-dioxane. This was sustained by molecular modeling, which showed that there is a correlation between the solvent nature, the rotational barrier of the substituents of the asymmetric carbon from PPO, and the optical response. The resulting powerful interactions among the chiral molecules and the solvent limited the mobility of the substituents from the stereogenic center and thereby restricted the number of conformers that can generate a non-symmetric electromagnetic interaction; consequently, the specific rotation was diminished.

In conclusion, this investigation emphasized the role of polymer-solvent interactions in tailoring optical properties, including the optical activity parameter, which has practical implications for designing supramolecular architectures for photonic or biomedical devices.

Figure 1. Dependence of (a) shear viscosity and (b) shear stress on shear rate for PPO solutions in several solvents.
Figure 2. Solubility parameter components for the used solvents and those calculated for PPO (a); variation of the interaction radius (continuous line) and flow activation energy (dashed line) for all studied PPO solutions (b); and interaction representation of the PPO structural unit in the presence of two solvent molecules, where the dashed line shows the hydrogen bonding between the PPO and the solvent (c).

Figure 3. Shear oscillatory tests of the PPO solutions in several solvents.

Figure 5. Channeled spectra obtained for PPO in several solvents.

Figure 6. Circular birefringence dispersion (a) and birefringence dispersion parameters (b) obtained for PPO in several solvents.

Figure 7. The specific rotation obtained for PPO in several solvents (a) and the representation of the optimized configuration of the PPO structural unit with atom order, together with the torsional potential energy profile versus the O1-C2-C3-O4 dihedral angle of the PPO structural unit in the presence of the studied solvents (the numbers in the upper inset index the atoms of the PPO structural unit) (b).

Scheme 1. The chemical structure of PPO.

Table 1. The values of the flow behavior index and consistency index for PPO in several solvents.
10,641.4
2024-04-25T00:00:00.000
[ "Materials Science", "Chemistry" ]
Mathematical Model for Waste Reduction in Aluminum Fabrication Industry in Kuwait Problem statement: Waste generation in the aluminum industry throughout the fabrication processes in Kuwait. Approach: A mathematical model has been developed to analyze the fabrication process, and a special heuristic is designed for solving the model. The model uses actual data presented from an Aluminum Fabrication Industry (AFI). Results: The amount of waste generated during the process was reduced substantially. Conclusion/Recommendations: Considerable savings in waste generated can be realized by using scientific approaches through mathematical modeling. INTRODUCTION Aluminum is widely used worldwide in many forms. Houses, buildings and shops use aluminum-made windows and doors produced by fabrication industries. These industries use aluminum profiles to manufacture different products. The profiles are produced through an aluminum extrusion process and are made in various shapes, sizes and colors. In the Aluminum Fabrication Industry (AFI), profiles are cut into desired lengths to produce various products such as doors and windows. Waste is produced as a by-product; it constitutes 10% of the aluminum used in AFI. Profile cutting for fabrication has been studied thoroughly with the objective of finding ways of reducing the amount of waste generated. A detailed mathematical model was built for this purpose. A heuristic has been developed for solving this model; it was tested on data for 350 windows and found to produce significantly less waste than the current conventional technique in use. The computational study involved in the process is presented, and results and recommendations are included. A step-by-step calculation of the amount of waste generated using the proposed heuristic for a specific profile is given. The amount of waste generated is usually dependent on the profile cutting process used. The Stock Cutting Problem (SCP) is discussed thoroughly in the literature. Gilmore and Gomory (1961) discussed the linear programming approach to the cutting stock problem. They suggested that its expression as an integer programming problem involves a large number of variables, which generally makes computation infeasible. The difficulty presented by the enormous number of columns was overcome by solving a knapsack problem at every pivot step. This approach made it possible to compute with a matrix that never has more columns than rows. Gilmore and Gomory (1964) discussed cutting stock problems involving two or more dimensions and dealt with a wide range of industrial problems, especially those related to multistage cutting. Haessler (1971) described a heuristic procedure for scheduling production rolls of stock through a finishing operation to cut them down to finished roll sizes. The objective was to minimize the cost of trim loss and reprocessing. The procedure generates cutting patterns and usage levels sequentially until the requirements are satisfied. At each step, the search depends upon the characteristics of the unsatisfied requirements. A maximum of three solutions are generated for each problem; if none satisfy a predetermined aspiration level, the best of the three is chosen. Coverdale and Wharton (1976) presented a heuristic procedure for a nonlinear cutting stock problem; the article deals with scheduling cutting operations and introduces the difficulties associated with selecting a few cutting patterns from a vast number of feasible options such that the total cost is minimized.
The problem was solved using the pattern enumeration technique. However, the problem's structure differs from the one presented in this study, which handles the nonlinearity of the product form. Adamowicz and Albano (1976) presented a method for solving a version of the two-dimensional cutting stock problem. One is given a number of rectangular sheets and an order for a specified number of each rectangular shape. The goal is to cut the shapes out of the sheets in such a way as to minimize the waste, without using an excessive amount of computational time. The solution method utilizes a constrained dynamic programming algorithm to lay out groups of rectangular structures, called strips. Christofides and Whitlock (1977) presented a tree search algorithm for a two-dimensional cutting problem in which there is a constraint on the number of each piece to be produced. Their algorithm limits the size of the tree search by deriving and imposing necessary conditions for optimizing the cutting pattern. A dynamic programming approach was used to solve the unconstrained problem, and a node-evaluation procedure was used to produce upper bounds during the search. Tokuyama and Ueno (1986) discussed the cutting stock problem for large pieces in the iron and steel industries. The industrial challenge was characterized by the existence of a large variety of criteria, such as maximizing yield and increasing the efficiency of production lines, and the cutting stock problem is accompanied by an optimal selection dilemma. A two-phase algorithm was developed using a heuristic; it gives a near-optimal solution in real time and is applied to both batch solving and on-line solving of one-dimensional cutting of large pieces. Sumichrast (1986) addressed this issue by interpreting a scheduling problem in the woven fiberglass industry as an example of the cutting stock problem, where wasted production capacity rather than wasted material is to be controlled. A heuristic was produced for scheduling the production. Vanderbeck (2000) proposed an integer programming formulation for the problem that involves an exponential number of binary variables and associated columns, each of which corresponds to selecting a fixed number of copies of a specific cutting pattern. The integer program was solved using a column generation approach where the subproblem is a non-linear integer program that can be decomposed into multiple bounded integer programs. Ragsdale and Zobel (2004) identified and discussed a new type of one-dimensional cutting stock problem, called the ordered CSP, which explicitly restricts the number of jobs in a production process that can be opened, or processed, at any given point in time. A mathematical formulation is provided for the new CSP model, and its applicability is discussed with respect to a production problem in the custom door and window manufacturing industry. A Genetic Algorithm (GA) is used for reducing waste levels. Several production scenarios using the GA were tested, and computational results are provided. Cui and Lu (2009) developed an algorithm that uses both recursive and dynamic programming techniques to solve a rectangular two-dimensional cutting stock problem in steel bridge construction. Poldi and Arenales (2009) examined the classical one-dimensional integer stock cutting problem and developed a heuristic to obtain an integer solution. The objective was to minimize the waste generated from cutting the available stocks.
Dikili et al. (2007) proposed a novel approach for solving a one-dimensional cutting stock problem in shipbuilding. They used cutting patterns obtained by analytical methods and a mathematical modeling stage. By minimizing both the number of different cutting patterns and the material waste, the proposed method was able to capture the ideal solution of the analytical methods. Feng et al. (2002) used artificial neural networks in metal cutting processes, while Al-Wedyan et al. (2001) used fuzzy modeling techniques for a down-milling cutting problem. Singh et al. (2002) illustrated the effectiveness of the Taguchi method in the stock cutting problem.

Aluminum Profile Extrusion (APE): The aluminum profile extrusion process involves several stages:
• Casting: Pure aluminum ingots, aluminum waste and other additives are mixed in a furnace at a specified temperature to produce logs. Some plants forego this stage by importing ready-made billets or logs.
• Log cutting: Each log is cut into standard billets according to demand.
• Extrusion: Billets are passed through an extrusion machine where profiles of different types and shapes are produced according to orders.
• Aging: The extruded aluminum profiles are placed into an aging furnace in order to increase their strength and durability.
• Polishing: At this stage, profiles are thoroughly polished before being either anodized or painted.
• Painting: Profiles are painted with the customers' desired colors.
• Anodizing: Profiles are placed in anodizing tanks and colored according to customers' requests. The coloring must meet specifications, or the profile will be rejected and scrapped.

After the extrusion process, profiles are sent to different local or regional fabrication plants. The AFI consumes more than 100 tons/month of the aluminum produced by the local extrusion plant (AEC). At AFI, profiles are cut into different lengths to produce different products such as windows and doors. Large amounts of waste are generated in the fabrication process (Table 1). In order to reduce the amount of waste generated, a mathematical model is developed, and a heuristic based on the stock cutting problem is produced and used (Fig. 1). The model is:

Minimize Z = ML − pN
Subject to: L, S, U ≥ 0; M, X ≥ 0 and integer.

Decision variables:
L = Length of the profile
X = Number of pieces of the desired length in each profile
M = Number of profiles used

Proposed heuristic: The heuristic initially takes the length of the profile to be as long as possible (U) and finds both the number of profiles to be ordered (M) for a specific demand and the number of pieces produced by each profile (X) for a given piece length (p). It keeps reducing the profile length, calculating X, M and the amount of waste produced (Z) for different profile lengths (L) above the lower bound S (cm) and below the upper bound U (cm), and then takes the minimum of all the waste values (Z*) produced by the different L, X, M. The L*, X*, M* yielding the minimum Z value (Z*) are the best values for the decision variables. A flow chart of the heuristic is given in Fig. 2; a step-by-step presentation of the heuristic (see the sketch after this list) is as follows:

Step 1: Let L = U.
Step 2: M = ⌈pN/U⌉.
Steps 3-6 (detailed in Fig. 2): compute X and the waste Zi for the current candidate, set L = pN/M, and if L < S go to Step 7; otherwise continue with the next candidate.
Step 7: Z* = minimum(Zi); list L*, M*, X* and stop.

⌈a⌉ denotes the smallest integer greater than or equal to a.
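The following Python sketch is one plausible reading of the heuristic above; the original flow chart (Fig. 2) is not reproduced in this text, so the loop over candidate profile counts M is an assumption made to fill Steps 3-6.

```python
import math

def profile_cutting(p, N, S, U):
    """One plausible reading of the waste-minimization heuristic:
    for each candidate number of profiles M, take the shortest profile
    length L = p * X (X pieces per profile) meeting the demand N, and
    keep the candidate with the least waste Z = M*L - p*N."""
    best = None
    M = math.ceil(p * N / U)          # Step 2: fewest profiles at L = U
    while True:
        X = math.ceil(N / M)          # pieces each profile must yield
        L = p * X                     # shortest length giving X pieces
        if L > U:                     # too long: need more profiles
            if X == 1:
                return None           # a single piece does not fit any profile
            M += 1
            continue
        if L < S:                     # below the lower bound: stop (Step 7)
            break
        Z = M * L - p * N             # waste of this candidate
        if best is None or Z < best[0]:
            best = (Z, L, M, X)
        M += 1                        # try one more profile
    return best                       # (Z*, L*, M*, X*)

# Example: 350 pieces of 71.7 cm, profile length allowed in [400, 650] cm
print(profile_cutting(p=71.7, N=350, S=400, U=650))
```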
The results of applying the above heuristic to the data provided by the AFI are shown in Tables 2 and 3, which present the total amount of waste generated by the conventional method currently used by the industry and compare it to the waste produced if the special heuristic is used. Computational study: The proposed algorithm for profile cutting was implemented on two batches of 80 and 250 windows, respectively. Several profiles are used in producing each window. The profile type, the desired piece length and the quantity needed are presented in Table 1, along with the waste generated from cutting each profile by the fabrication industry's conventional method and by the proposed heuristic. The heuristic obviously generates far less waste than the conventional method. Step-by-step calculations of the waste generated when cutting profile number 1440 to provide a 71.7 cm piece and cutting profile number 2117 to produce a 23.7 cm piece are given in two examples. CONCLUSION The aluminum fabrication industries generate large amounts of scrap, mainly due to the techniques used in cutting. An efficient, optimal cutting method would not only minimize the amount of waste produced, but would also result in more efficient usage of time and manpower. In this study, a heuristic was developed that generates less waste than the current procedures. Table 1 demonstrates that the existing conventional procedures should be re-evaluated and replaced by scrap-minimization approaches. For profile number 2060, for example, the conventional method produced 11,000 cm of scrap, whereas the heuristic produced 500 cm. The average waste generated per window is around 1.04 m with the fabrication industry's conventional techniques, but only about 0.196 m using the heuristic. Since the fabrication industry produces an average of around 12,000 windows annually, the amount of waste generated is around 12,480 meters using the existing method and about 2,352 meters using the heuristic, which would thus save around 10,000 m annually. Clearly, the fabrication industry's waste level is unnecessarily high, and the fabrication procedure should be improved. The proposed heuristic could replace the currently used techniques. Given an order for windows in terms of type, size and quantity, the proposed procedure can be used to determine:
• The profile length of each type required
• The number of profiles of each type required
• The number of billets of each length required
• The number of logs of each length to be cut
On the basis of the study presented here, considerable savings in waste can be realized by applying mathematical models and computer-based optimization procedures.
2,867.2
2010-06-30T00:00:00.000
[ "Engineering", "Environmental Science", "Mathematics" ]
Option pricing models without probability: a rough paths approach We describe the pricing and hedging of financial options without the use of probability using rough paths. By encoding the volatility of assets in an enhancement of the price trajectory, we give a pathwise presentation of the replication of European options. The continuity properties of rough paths allow us to generalize the so-called fundamental theorem of derivative trading, showing that a small misspecification of the model will yield only a small excess profit or loss of the replication strategy. Our hedging strategy is an enhanced version of classical delta hedging where we use volatility swaps to hedge the second-order terms arising in rough-path integrals, resulting in improved robustness. The authors would like to dedicate this paper to their late colleague Mark Davis. His acumen, brilliance, and determination in facing fundamental questions, his disarming laughter and good-natured common sense will be missed. Each of the authors benefited greatly from discussions with Mark over the years and did so, in particular, during the preparation of an early version of this manuscript. One aspect of the presentation below is a perspective on the so-called Fundamental Theorem of Derivative Trading. Mark often stressed the importance of this result to the understanding and effectiveness of real-world derivatives trading; indeed he included a version of it in his entry "Black-Scholes Formula" in the Encyclopedia of Quantitative Finance. INTRODUCTION The theory of rough-paths provides a framework for understanding differential systems driven by irregular input signals. An asset-price process arising from a diffusion model may be associated with a rough-path. Conversely, we will find a necessary condition for a rough-path to arise from a given diffusion model, and we will call a rough-path satisfying this condition a diffusive rough-path. An investment strategy gives rise to a rough differential equation (RDE) describing the evolution of the profit and loss (P&L) of the strategy under a given asset price signal. Given an option with a smooth payoff function, we will show that the P&L of a modified version of the classical delta hedging strategy replicates the option payoff for any diffusive rough-path. The modification we make to achieve this replication is to augment the delta-hedging strategy with additional trades determined by a particular type of volatility swap. By assuming that the price of these swaps is well controlled we see that, in the continuous time limit, purchasing these swaps will not influence the P&L. A core property of RDE solutions is their continuity with respect to the input rough-path. A first consequence therefore of our rough-path approach is robustness of our proposed hedging strategy: if the true asset price signal is close to a diffusive signal, our hedging strategy will still approximately replicate the option payoff. This relates to the classical Fundamental Theorem of Derivative Trading (Cont, 2010; Ellersgaard et al., 2017; El Karoui et al., 1998), which shows that if one hedges according to a given diffusion model but the actual asset price process is determined by a nearby diffusion model, the error of the delta hedging strategy will be small. Our approach goes beyond this in that it allows for asset price signals that do not arise from any diffusion model at all.
Due to phenomena such as market-impact and front-running, any differential equation describing the dynamics of the P&L of an investment strategy in terms of the asset price dynamics is likely to contain some error. A perturbation of the second-order term of the asset price dynamics allows us to model such an error, and hence explain the robustness of hedging strategies in more realistic markets than those given by diffusion models. A second consequence of our rough-path approach is that it demonstrates that a theory of hedging is possible without the need for probability theory, despite the central role of probability in the classical treatment of hedging (Harrison & Kreps, 1979; Harrison & Pliska, 1981). Our work clarifies the use of probability theory in justifying prices by identifying two steps: (i) showing that the asset price paths of a diffusion model satisfy our diffusivity condition; and (ii) deducing the uniqueness of the price of an option from the existence of a replicating strategy via a no-arbitrage argument. In a market with an arbitrage any price is possible, so there is no hope of obtaining uniqueness without invoking a no-arbitrage condition, and hence involving probability theory. We see, therefore, that the correct probability-free analogue of classical pricing is demonstrating the existence of a replicating strategy for a given initial endowment. In this way, we may interpret our theory as giving a probability-free approach to pricing. In diffusion models, the quadratic variation is a well-defined pathwise notion which determines the price. Our definition of a diffusive rough-path identifies the exact property needed for the delta hedging strategy to work in a rough-path context. A continuous pricing signal is enhanced with a specification for its rough bracket to obtain a reduced rough path (see Friz & Hairer, 2014, Chapter 5), which we will term an enhanced price path. Our financial model will take the form of a specification for the properties of the rough bracket. Thus, our model specification is tantamount to a choice of enhancer, and it is this rough bracket which provides the appropriate analogue of quadratic variation for our asset pricing model. In our version of the Fundamental Theorem of Derivative Trading, we will study the effect of a misspecification of the financial model by examining the sensitivity of our strategy to the choice of enhancer. The purely pathwise nature of the enhancer, the price and hence the implied volatility are in marked contrast to the statistical (and therefore probabilistic) notion of historical volatility. This dichotomy between pathwise and probabilistic properties has been noted before. For example, it is exploited in (Brigo & Mercurio, 2000), which partly inspired the present work (see also Brigo, 2019), to give examples of diffusion models which are statistically indistinguishable using samples on a fixed time grid yet which have arbitrarily different option prices. The usage of Föllmer calculus in earlier pathwise formulations has the caveat that the continuous-time integrals depend upon the discrete-time approximating sequence, which more or less precludes obtaining robustness in that approach. To circumvent this dependence, our proposed strategy is an augmented version of delta-hedging where one also invests in volatility swaps in order to hedge the second-order part of the pricing signal. This yields a robust trading strategy, albeit at the expense of introducing assumptions on the price of volatility swaps to ensure our strategy is self-financing.
Moreover, the above-mentioned pathwise formulations required paths with semimartingale roughness, that is, of finite p-variation for all p > 2. Using Rough Path Theory we are able to accommodate paths of finite p-variation for 2 < p < 3, hence showing how delta hedging can be extended to a wider class of price signals. One additional assumption that we must make in our approach is that the option payoff is differentiable. We will show that for a European call option with strike K, one can find diffusive rough-paths for which our strategy fails to replicate the option payoff. However, these rough-paths must have a stock price exactly equal to the strike at maturity. In a probabilistic theory, such paths occur with probability zero, and so may be neglected. Our interpretation, however, is that the existence of such paths demonstrates a genuine lack of robustness of the classical delta hedging strategy. The need for a robust strategy becomes more important towards maturity, as classical diffusion models break down and new phenomena occur, such as the "pinning" of stock prices around exchange-traded strikes (see e.g., Avellaneda & Lipkin, 2003; Avellaneda et al., 2012; Golez & Jackwerth, 2012; Jeannin et al., 2008). The failure of our strategy for certain stock paths indicates that one should switch near maturity to a genuinely probabilistic strategy, such as a buy-and-hold strategy. This reflects actual trading practice, where delta hedging strategies are abandoned and quite different strategies adopted near maturity. The article is organized as follows. In Section 2, we recall the classical theory of hedging and establish our notation. In Section 3, we describe the machinery of reduced rough path integration that we will use. In Section 4, we define what is meant by a diffusive rough-path. In Section 5, we demonstrate formally how to obtain a pathwise formulation of the classical formulas of Mathematical Finance in continuous time. Section 6 shows how our continuous-time trading strategy can be interpreted as a limit of discrete-time trading strategies in volatility swaps. Section 7 demonstrates that our proposed strategy fails in the Black-Scholes model for call options when the stock price terminates at the strike. Section 8 presents our conclusions.

NOTATION AND PRELIMINARIES

We will develop a rough-path version of a classical diffusion model, and will begin by describing the classical model. Let S⁰_t denote the value of a riskless asset at time t, and assume that S⁰_t = e^(rt) for a constant rate r ≥ 0. Let S_t ∈ ℝ^n denote the price vector of n non-dividend-paying stocks, representing the risky asset. We let S̃ be the discounted price of the risky asset at time t, namely S̃_t = S_t/S⁰_t = e^(−rt)S_t. We suppose that each component of the price vector S̃ displays the following dynamics in the pricing measure:

dS̃ⁱ_t = σⁱ_j(S̃_t) dW^j_t, i = 1, …, n, S̃_0 = s_0 ∈ ℝ^n,

on a stochastic base (Ω, ℱ, ℚ, (ℱ_t)_t, (W_t)_t) carrying a standard m-dimensional Brownian motion (W_t)_t. We assume σ is in C^(α-Höl)_loc(ℝ^n, ℝ^(n×m)), 0 < α < 1. Einstein's summation convention on double indices is employed here and throughout the paper. Let g(S_T) be the payoff of a Vanilla option on the underlying S. We assume that g is a continuous and bounded function on ℝ^n. Let h(x) := g(e^(rT)x) and h̃ := e^(−rT)h. The payoff is therefore equivalently written as h(S̃_T), and its discounted value is h̃(S̃_T). The classical theory of (Harrison & Kreps, 1979; Harrison & Pliska, 1981) tells us that the option payoff can be replicated at time T for a price Ṽ_t satisfying

Ṽ_t = (P_(T−t) h̃)(S̃_t), (1)

where (P_t)_t is the semigroup on C_b(ℝ^n) generated by the infinitesimal operator 𝓛 of the dynamics of S̃. We call 𝓛 the volatility operator.
To ensure the absence of arbitrage, we must make some assumptions guaranteeing that the solutions to the Black-Scholes PDE are unique. In this paper we will typically assume that the volatility operator is uniformly elliptic. The stochastic process Ṽ is then a deterministic function ṽ = ṽ(t, x) of time and space, evaluated at (t, S̃_t), where ṽ solves

∂_t ṽ + 𝓛ṽ = 0 on [0, T] × ℝ^n, ṽ(T, ·) = h̃. (2)

Equation (2) is the discounted version of the Black-Scholes partial differential equation. We will write v(t, x) = e^(rt) ṽ(t, e^(−rt)x) for the undiscounted value function, and will use the standard notation for the Greeks: the delta ∇_x v, the gamma ∇²_x v and the theta ∂_t v. In our setup, the pricing PDE is justified by the existence of a replicating strategy for the payoff. An investment strategy may be viewed as a pair (φ⁰, φ) indicating the quantities of the riskless and the risky asset to purchase at each time. By Itô's formula, it follows that the delta hedging strategy φ = (φ⁰, φ) given by

φ_t = ∇_x v(t, S_t), φ⁰_t = e^(−rt)(v(t, S_t) − φ_t · S_t), (4)

is such that the undiscounted portfolio process V_t = φ⁰_t S⁰_t + φ_t · S_t is self-financing, that is, it satisfies dV_t = φ⁰_t dS⁰_t + φ_t · dS_t, and replicates the option payoff, that is, V_T = g(S_T).

PATHWISE INTEGRALS

In this section we review the elements of rough path theory that we will need. The results are standard, or minor variations of standard results, and so proofs have been omitted; they may be found in the Arxiv version of this paper. If Ξ is defined on the simplex {0 ≤ s ≤ t ≤ T} and is additive, then it can be extended to an additive function on [0, T] × [0, T] by setting Ξ_(t,s) := −Ξ_(s,t). Additivity characterizes those functions on [0, T] × [0, T] that descend from increments of paths, in the following sense: every additive functional arises from the increments of a path; moreover, if y is another path whose increments coincide with those of x, then x − y is constant. Given a partition π of [0, T] and a time instant t in [0, T], we adopt the corresponding notational convention for sums along π; if Ξ is additive, this notation gives the usual p-variation norm of the underlying path. For s ≤ u ≤ t we introduce the symbol Ξ_(s,u,t) := Ξ_(s,t) − Ξ_(s,u) − Ξ_(u,t). Given a partition of [s, t] ⊂ [0, T] we may use a control function to measure the mesh-size. Definition 3.3. The modulus of continuity of the control on a scale smaller than or equal to the mesh-size |π| is given by (10). By Proposition 3.5, we can regard the integral as a map on approximately additive functionals; by Proposition 3.1, we can unambiguously replace the range {additive functionals} with the space of continuous paths on 𝒲 starting at 0 ∈ 𝒲. Let 𝒜^(p-var)([0, T]; 𝒲) be the family of approximately additive functions Ξ : {(s, t) ∈ ℝ² : 0 ≤ s ≤ t ≤ T} → 𝒲 that are of finite p-variation, p ≥ 1. Then, we have: Corollary 3.6. The restriction of the integral map ∫ to 𝒜^(p-var)([0, T]; 𝒲) takes values in the space C^(p-var)_0([0, T]; 𝒲) of continuous paths on 𝒲 that start at the origin 0 ∈ 𝒲 and are of finite p-variation. Moreover, Proposition 3.7 ("Young integral"): let x and y be Young complementary and set Ξ_(s,t) := y_s x_(s,t). Then, Ξ is approximately additive and of finite p-variation. As a consequence, the integral defines a continuous path of finite p-variation. The integral in (11) does not depend on whether Ξ is defined according to Ξ_(s,t) = y_s x_(s,t) or to Ξ_(s,t) = y_t x_(s,t). The continuity of y is only used to show that the choice to evaluate at the beginning or at the end of the partition subintervals does not affect the integral. The two choices are respectively referred to as adapted evaluation and terminal evaluation. If y is not continuous but of bounded variation, the Young integral is defined (because q = 1), but depends on the evaluation choice. If π is a partition of [0, T], we write y^π for the piecewise constant caglad approximation of y on the grid π.
be the Young integral of $f^\pi$ against $x$ with adapted evaluation. In this way, for $f$ continuous and of finite $q$-variation, $1/p + 1/q > 1$, we can write $\int_0^\cdot f\, dx = \lim_{|\pi| \to 0} (f^\pi . x)$. Compensated integrals When the complementary regularities of integrand and integrator are not sufficient for Young integration, we introduce the enhancement of a path and we define the integral using compensated Riemann sums. In particular this is the case if $f$ and $x$ have the same $p$-variation regularity for some $p$ greater than 2. As above, let $x$ be a continuous path of finite $p$-variation with trajectory in the Banach space $\mathcal{V}$. Recall that $\mathcal{W}$ denotes $\mathrm{Hom}(\mathcal{V}; \mathcal{U})$. We use the identification $\mathrm{Hom}(\mathcal{V}, \mathcal{W}) \cong \mathrm{Hom}(\mathcal{V} \otimes \mathcal{V}; \mathcal{U})$, and we write $\mathrm{Hom}_{\mathrm{sym}}(\mathcal{V} \otimes \mathcal{V}; \mathcal{U})$ for the subset of those $A$ in $\mathrm{Hom}(\mathcal{V} \otimes \mathcal{V}; \mathcal{U})$ such that $A(u \otimes v) = A(v \otimes u)$ for all $u, v \in \mathcal{V}$. Also, the symbol $\mathcal{V} \odot \mathcal{V}$ will denote the symmetric tensor product of the Banach space $\mathcal{V}$, so that we can identify $\mathrm{Hom}_{\mathrm{sym}}(\mathcal{V} \otimes \mathcal{V}; \mathcal{U}) \cong \mathrm{Hom}(\mathcal{V} \odot \mathcal{V}; \mathcal{U})$. We say that a continuous path $f : [0,T] \to \mathcal{W}$ admits a symmetric Gubinelli derivative $f'$ with respect to $x$ if there exists a continuous path $f' : [0,T] \to \mathrm{Hom}_{\mathrm{sym}}(\mathcal{V} \otimes \mathcal{V}; \mathcal{U})$ of finite $q$-variation such that 1. $q$ and $p/2$ are Young complementary; 2. $R_{s,t} := f_{s,t} - f'_s\, x_{s,t}$ is of finite $pq/(p+q)$-variation. In this case we say that the pair $(f, f')$ is $x$-controlled of $(p,q)$-variation regularity. Notice that the regularities of $f'$ and of $R$ imply that $f$ is of finite $p$-variation. The symbol $[x]$ will be referred to as the volatility enhancer when its financial meaning is to be stressed. We say that $\mathbf{x} = (x, \mathbb{X})$ is an enhanced path if $\mathbb{X}_{s,t} := (x_{s,t} \otimes x_{s,t} - [x]_{s,t})/2$ for an additive enhancer $[x]$ of finite $p/2$-variation. Notice that $\delta\mathbb{X}$ does not depend on the enhancer because $[x]$ is additive; moreover, for all $s \le u \le t$ the following reduced Chen identity holds: $\delta\mathbb{X}_{s,u,t} = x_{s,u} \odot x_{u,t}$. Lemma 3.10. Let $\mathbf{x} = (x, \mathbb{X})$ be an enhanced path and let $(f, f')$ be $x$-controlled of $(p,q)$-variation regularity, with $f'$ being symmetric. Then, $\Xi_{s,t} := f_s\, x_{s,t} + f'_s\, \mathbb{X}_{s,t}$ is approximately additive. As a consequence of Lemma 3.10, the integral given by the compensated Riemann sum is well-defined. Analogously to (12), we write $(f^\pi . \mathbf{x})$ for the discretized compensated integral. If $I$ is a time interval, $m$ and $n$ are non-negative integers and $\alpha, \beta$ are in $[0,1)$, consider the space $C^{m+\alpha, n+\beta}_{\mathrm{loc}}$ of $\mathbb{R}$-valued functions that are $m$ times continuously differentiable in time with the $m$-th time derivative of local $\alpha$-Hölder regularity, and $n$ times continuously differentiable in space with all the $n$-th order space derivatives of local $\beta$-Hölder regularity. Notice that nothing is assumed about the cross derivatives in time and space of functions in $C^{m+\alpha, n+\beta}_{\mathrm{loc}}$. Let $\mathcal{D}$ be the space built from these regularity classes. Definition 3.11 ("$p$-Moderation"). Let $u$ be in $\mathcal{D}$ and let $x$ be a continuous path on $\mathbb{R}^d$ of finite $p$-variation, with $p - 2 < \alpha < 1$. We say that the pair $(u, x)$ is $p$-moderate if 1. the sensitivity paths can be continuously extended up to $[0,T]$, and the Gubinelli derivative is of finite $r$-variation for some $1 - 2/p < 1/r < \alpha/p$; 2. there exists a control function $\omega$ such that the stated bound holds for all points in the trace $x([0,T])$ and all $0 \le s \le t \le T$, where $\mathrm{Conv}\, x([0,T])$ is the convex hull of the trace of $x$. ENHANCED PATHS OF DIFFUSION TYPE We now isolate the pathwise features of price trajectories that affect hedging practice. Until further notice, we adopt the perspective of discounted prices, so that only the second-order part of $L$ is considered, with coefficients thought of as functions of the discounted stock price. Given an $\alpha$-Hölder volatility operator $L = \mathrm{Tr}(a \nabla^2)/2$ and a continuous path $x : [0,T] \to \mathbb{R}^d$ of finite $p$-variation, we can consider the $\int a(x_r)\,dr$-enhancement $\mathbf{x} = (x, \mathbb{X})$ of $x$ given by $[x]_{s,t} := \int_s^t a(x_r)\, dr$. For brevity we will henceforth call this the $L$-enhancement of $x$. Notice that such a construction yields a bounded variation enhancement.
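The compensated Riemann sums admit a direct numerical illustration. The sketch below is ours and assumes a Brownian-type sample path with the bounded variation enhancer $[x]_{s,t} = t - s$; it compares the plain left-point Riemann sum with the compensated sum of Lemma 3.10.

import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 2 ** 14
t = np.linspace(0.0, T, n + 1)
x = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), size=n))])

f = np.sin    # integrand f(x_t)
fp = np.cos   # its (symmetric) Gubinelli derivative f'

inc = np.diff(x)                  # increments x_{s,t}
enhancer = np.diff(t)             # [x]_{s,t} = t - s (Ito-type enhancer, assumed here)
XX = 0.5 * (inc ** 2 - enhancer)  # second-order process X_{s,t}

plain = np.sum(f(x[:-1]) * inc)                          # uncompensated sum
compensated = np.sum(f(x[:-1]) * inc + fp(x[:-1]) * XX)  # sum of Lemma 3.10
print(plain, compensated)
# The compensation makes the summand approximately additive, which is what
# guarantees pathwise convergence of the sums as the mesh shrinks.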
The converse construction, which starts from a bounded variation enhancement and defines a differential operator, is formalized in the following Definition 4.2 ("Enhanced path of $L$-diffusion type"). Let $\mathbf{x} = (x, \mathbb{X})$ be an enhanced path of $p$-variation regularity. We say that $\mathbf{x}$ is of $L$-diffusion type, $p - 2 < \alpha < 1$, if, by setting $[x]_{s,t} =: \int_s^t a(x_r)\, dr$, the coefficients $a$ define an $\alpha$-Hölder volatility operator $L = \mathrm{Tr}(a \nabla^2)/2$ satisfying the ellipticity condition (15). Remark 4.3. The ellipticity condition in Equation (15) allows us to apply the theory from (Lorenzi & Bertoldi, 2007, Chapter 2) on the existence and uniqueness of semigroups on $C_b(\mathbb{R}^d)$ associated with the volatility operator $L$. If the PDE associated with $L$ is known to possess a unique solution, the assumed ellipticity can be removed. This is the case, for example, of the classical Black-Scholes partial differential equation with volatility operator $\sigma^2 x^2 \partial^2_x / 2$. Remark 4.4. Definition 4.2 is reminiscent of the class of price trajectories considered in (Schied & Voloshchenko, 2016). In both cases, the idea is to define the minimal pathwise requirements that link the dynamics of the underlying to a parabolic PDE. This link hinges on a differential operator. In the theory of Markov diffusions, this differential operator is the generator of the Markov semigroup and characterizes the probability law of the diffusion. However, the class of trajectories of Markov diffusions is strictly contained in the class of trajectories considered in (Schied & Voloshchenko, 2016), which in turn is strictly contained in the class of enhanced paths of diffusion type. Indeed, in (Schied & Voloshchenko, 2016) trajectories are only required to possess a quadratic variation: for example, the sum $x = B + B^H$ of a standard one-dimensional Brownian motion and a fractional Brownian motion with Hurst exponent $H > 1/2$ is not a Markov diffusion, but it is encompassed by (Schied & Voloshchenko, 2016) and by our framework. The case of a fractional Brownian motion with $H > 1/3$ is encompassed by our framework, but not by (Schied & Voloshchenko, 2016). An enhanced path of $L$-diffusion type is the minimal information that the PDE pricing approach requires from a probabilistic model. Indeed, assume that we wish to use the PDE approach to price a contingent claim $h(x_T)$, where $h$ is in $C_b(\mathbb{R}^d)$ and $x_T$ is the terminal value of a continuous price path of finite $p$-variation. Let $\mathbf{x} = (x, \mathbb{X})$ be an enhancement of $x$ of $L$-diffusion type and consider the associated pricing equation. Then, the Cauchy problem (16) admits a unique solution, and the corresponding estimate holds for all $0 \le s \le t \le T$. See Appendix 4 for the proof. We may use higher-order sensitivities and pathwise integration to estimate errors arising from time discretization of integral quantities. Consider the cost of financing of a hedging strategy, defined as $C_T(\varphi^0, \varphi^1) := \varphi^0_T S^0_T + \varphi^1_T \cdot S_T - (\varphi^0 . S^0)_{0,T} - (\varphi^1 . S)_{0,T}$, where $(\varphi^0, \varphi^1) \in \mathbb{R} \times \mathbb{R}^d$ is the strategy and $S^0$, $S$ are respectively the riskless asset and the risky asset. The symbols $(\varphi^0 . S^0)$ and $(\varphi^1 . S)$ denote the time-marginals of the integral processes of $\varphi^0$ and $\varphi^1$ respectively against $S^0$ and $S$. Thus, the cost of financing in Equation (18) is the difference between the value of the portfolio at time $T$ and the cost of rebalancing the portfolio during the time window $[0,T]$ in order to follow the hedging strategy. If continuous hedging were possible and one were able to take $(\varphi^0, \varphi^1) = (\varphi^0, \Delta)$ as defined in (4), then this cost would match $V_0 = v(0, S_0)$, the price at time $t = 0$ of the option, on a $\mathbb{P}$-full set. We remark that the probability $\mathbb{P}$ is the measure of the stochastic base on which, in the continuous-time case, the Itô integral $(\varphi^1 . S)$ would be defined.
In practice, the cost of financing has two components: the theoretical price $V_0$ and the cost arising from time discretization, which is $C_T - V_0$. For the latter, with the strategy replaced by the discretization $(\varphi^0, \Delta^\pi)$ of (4), we now provide a pathwise estimate that relies on the integration bounds. Recall that $x$ in Proposition 4.5 plays the role of the discounted trajectory $\tilde{S} = e^{-r\cdot} S$. Corollary 4.6. Assume the setting of Proposition 4.5. Let $\omega$ be the control function whose $(2/p + 1/r)$-th power asserts the approximate additivity of $f_s\, x_{s,t} + f'_s\, \mathbb{X}_{s,t}$. Along any partition $\pi$ of $[0,T]$, the discretized strategy $(\varphi^0, \Delta^\pi)$ stemming from (4) with $\tilde{S} = x$ has a cost of financing $C_T(\varphi^0, \Delta^\pi)$ that is bounded as in the stated estimate, where $\mathrm{osc}(\omega, |\pi|)$ is the modulus of continuity of $\omega$ on a scale smaller than or equal to the mesh-size of the partition, and $\varepsilon_{-,T}$ is the difference between $\tilde{v}(T, x_T) = \tilde{h}(x_T)$ and the discounted value $\tilde{v}(t_-, x_{t_-})$ of the option at the second-last node of the partition. The path-dependent constant appearing in the bound involves $\bar{\omega}$, the $pq/(p+q)$-variation control of $R_{s,t} = f_{s,t} - f'_s\, x_{s,t}$. Proof. Let $Y$ be the path $t \mapsto \tilde{v}(t, x_t)$. Fix a partition $\pi$ of $[0,T]$ and recall the notation in (8). We preliminarily decompose the discretization error, using summation by parts in the second line. Then, by adding and subtracting the compensation, we can apply the Sewing Lemma (Proposition 3.5) to complete the proof. □ So far, we have worked with the identification $x = \tilde{S}$, i.e. the enhanced path at hand has represented the actual enhanced path of the discounted stock price. In other words, the market models have been $[\tilde{S}]$-compatible. This amounts to considering the square $a = \sigma \sigma^T$ of co-volatilities a true parameter. In Corollary 4.7 below, we no longer do so, and we distinguish the modeled enhancer of $x$ from the actual enhancer of $\tilde{S}$. The only assumption on $\tilde{S}$ is that it is an enhanced path, that is, its trace $\tilde{S}$ is a continuous path of finite $p$-variation, $2 < p < 3$, and its second-order process $\tilde{\mathbb{S}} = (\tilde{S} \otimes \tilde{S} - [\tilde{S}])/2$ is a continuous two-parameter function of finite $p/2$-variation with values in $\mathbb{R}^d \odot \mathbb{R}^d$; the enhancer $[\tilde{S}]$ is not required to be of bounded variation, and the integrals against it will be interpreted as Young integrals. In Corollary 4.7, the second summand on the right-hand side is a well-defined Young integral. As a consequence, if $\varphi^\pi$ denotes the strategy obtained by discretizing the $L$-delta hedging along $\pi$, then its cost of financing $C_T(\varphi^\pi)$ is bounded as stated, where $\omega$ and $|\varepsilon_{-,T}|$ are as in Corollary 4.6. See Appendix A for the proof. PATHWISE FORMULATION OF FUNDAMENTAL EQUATIONS OF HEDGING By adopting the perspective of undiscounted price paths, we recover the classical formulas of Mathematical Finance within our pathwise setting. Given a price path $S$, we say that a model for $S$ has been specified when a choice for the enhancement $\mathbf{S} = (S, \mathbb{S})$ is made. This means choosing the enhancer $[S]$, see Section 3. We speak of an $L$-diffusive model specification if the enhancer is given by $[S]^{ij}_{s,t} = \int_s^t e^{2ru}\, a^{ij}(e^{-ru} S_u)\, du$, where $a^{ij}$, $1 \le i, j \le d$, are the coefficients of an $\alpha$-Hölder volatility operator and $r$ is the constant interest rate. In other words, an $L$-diffusive model specification is the undiscounted counterpart to an $L$-enhancement of some discounted price path, where $L$ is an $\alpha$-Hölder volatility operator as defined in Definition 4.1. Theorem 5.1. Let $f(S_T)$ be a contingent claim, where $f$ is in $C_b(\mathbb{R}^d)$ and $S_T$ is the terminal value of a continuous $d$-dimensional price path of finite $p$-variation. Let $\mathbf{S} = (S, \mathbb{S})$ be an $L$-diffusive model specification, with $\alpha > p - 2$, and let $L = a^{ij}\, \partial^2_{ij}/2$ be the corresponding volatility operator.
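The effect of time discretization on the cost of financing can be visualized with a small simulation. The sketch below is ours, with illustrative names and parameters: it rebalances the Black-Scholes delta hedge on an n-step grid along one simulated path and reports the terminal replication gap, which shrinks as the grid is refined, in the spirit of the bound in Corollary 4.6.

import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_price_delta(t, x, K, T, r, sigma):
    tau = max(T - t, 1e-12)
    d1 = (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return x * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2), norm_cdf(d1)

def replication_error(n_steps, K=1.0, T=1.0, r=0.01, sigma=0.2, seed=7):
    # Payoff minus terminal value of the discretely rebalanced delta hedge.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = 1.0
    v, delta = bs_price_delta(0.0, x, K, T, r, sigma)
    cash = v - delta * x                      # riskless account after first trade
    for k in range(1, n_steps + 1):
        z = rng.standard_normal()
        x *= exp((r - 0.5 * sigma ** 2) * dt + sigma * sqrt(dt) * z)
        cash *= exp(r * dt)                   # riskless growth
        if k < n_steps:
            _, new_delta = bs_price_delta(k * dt, x, K, T, r, sigma)
            cash -= (new_delta - delta) * x   # self-financing rebalancing
            delta = new_delta
    portfolio = cash + delta * x
    return max(x - K, 0.0) - portfolio

for n in (10, 100, 1000):
    print(n, replication_error(n))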
Then, the Black-Scholes partial differential equation (23) admits a solution in the class $\mathcal{D}$ and this solution is unique. Moreover, there exist a probability space $(\Omega, \mathcal{F}, \mathbb{Q}, (\mathcal{F}_t)_t)$ and a Markov diffusion process $\tilde{S}$ defined on it such that, for all $0 \le t \le T$, the stochastic representation of the solution holds, where $h(x) := f(e^{rT} x)$. Proof of Theorem 5.1. The change of variable $\tilde{v}(t, x) := e^{-rt} v(t, e^{rt} x)$ allows us to rewrite Equation (23) in the form of Equation (16). Therefore, existence, uniqueness and regularity of the solution follow from those of Equation (16). By applying the operator $\lim_{|\pi| \to 0} \sum_{[s,t] \in \pi}$ to both sides of this expansion, we obtain (25), since $v$ solves the Black-Scholes partial differential Equation (23). □ The pathwise differential equation in (25) coincides with the classical SDE for the portfolio process in the delta hedging. In addition, the definition of the pathwise integral $(\Delta, \Gamma) . (S, \mathbb{S})$ explicitly expresses the dependence on the gamma sensitivity, which is not captured by the classical stochastic integral. Fundamental theorem of derivative trading The formulas for pricing and hedging heavily depend on the diffusive model specification. In classical terms of Mathematical Finance, such a specification amounts to specifying the diffusion coefficient (volatility) in Itô's price dynamics. Volatility is not directly observable, and consequently a trader is liable to misspecify volatility and to use coefficients that do not faithfully represent the true price dynamics. The Fundamental Theorem of Derivative Trading addresses such misspecification. It provides a formula that computes the profit&loss that a trader incurs when hedging with the wrong volatility, see (Cont, 2010; Ellersgaard et al., 2017; Karoui et al., 1998). Proposition 5.3 contributes to the assessment of model misspecification in two ways: on the one hand, it shows the pathwise nature of the P&L formula (this aligns with the unifying theme of the section); on the other hand, it provides a generalization of the classical P&L formula. The generalization consists in removing the assumption that the "true" price evolution is governed by an Itô SDE: we capture the misspecification that arises not just between two diffusive enhancements, but between a diffusive enhancement (used by the trader) and a general enhanced path (the "true" dynamics). Remark 5.4. If $\mathbf{S}^{\mathrm{true}}$ arises from a diffusion model then, as the compensation terms in our integrals vanish in probability, our definition of the value of the portfolio, $\hat{V}$, can be justified as a self-financing condition. We will justify this definition for general pricing signals in Section 6 below. In order to recognize the extension of the classical Fundamental Theorem of Derivative Trading, we rewrite the Young integral in Equation (26) as an integral against the difference of the enhancers. In the case where $\mathbf{S}^{\mathrm{true}}$ is a diffusive enhancement, we have that $[S^{\mathrm{true}}]_{0,t} = \int_0^t e^{2ru}\, a_{\mathrm{true}}(e^{-ru} S_u)\, du$, so that the integral is turned into the familiar form of the classical P&L formula. Proof of Proposition 5.3. We manipulate the Taylor expansion in the proof of Proposition 5.2 and, for $0 \le s \le t \le T$, we expand the increment of the value function, where $v$ is the solution to the $d$-dimensional Black-Scholes partial differential Equation (23), $\omega$ is a control function and $\theta > 1$. We sum over the nodes of a partition and then we let the mesh-size shrink to zero, obtaining (26). The well-posedness of the Young integral of $\Gamma$ against $[S^{\mathrm{true}}]$ and $[S]$ holds as in Corollary 4.7. □ ENLARGED HEDGING STRATEGIES Given an enhanced price path $\mathbf{S} = (S, \mathbb{S})$, we have interpreted the pathwise integral $(\Delta, \Gamma) . (S, \mathbb{S})$ as the portfolio trajectory arising from the position $\Delta$ on the risky asset $S$.
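To connect with the classical statement, the sketch below (ours, with illustrative parameters) accumulates the familiar misspecification integrand $\tfrac12(\sigma^2_{\mathrm{true}} - \sigma^2)\, x^2\, \Gamma\, dt$ along a simulated path hedged with the model gamma; the sign convention of the resulting P&L depends on which side of the option the trader holds.

import numpy as np
from math import log, sqrt, exp, erf, pi

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def gamma_bs(t, x, K, T, r, sigma):
    tau = max(T - t, 1e-12)
    d1 = (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return norm_pdf(d1) / (x * sigma * sqrt(tau))

def misspecification_term(sigma_model=0.2, sigma_true=0.3, K=1.0, T=1.0,
                          r=0.0, n=20000, seed=3):
    # Accumulates 0.5*(sigma_true^2 - sigma_model^2)*x^2*Gamma dt along one
    # path: the integrand of the classical P&L formula recovered here.
    rng = np.random.default_rng(seed)
    dt = T / n
    x, acc = 1.0, 0.0
    for k in range(n):
        g = gamma_bs(k * dt, x, K, T, r, sigma_model)
        acc += 0.5 * (sigma_true ** 2 - sigma_model ** 2) * x * x * g * dt
        x *= exp((r - 0.5 * sigma_true ** 2) * dt
                 + sigma_true * sqrt(dt) * rng.standard_normal())
    return acc

print(misspecification_term())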
In this section, we explore the possibility of modifying the interpretation of $(\Delta, \Gamma) . (S, \mathbb{S})$. We will not only consider it as representing the value of the position on $S$, but we will give a financial interpretation to the compensation term as well. This requires analyzing the mechanics of rebalancing portfolios during hedging periods. Given a (continuous) path $\varphi$ in $\mathbb{R}^d$ and a partition $\pi$, we write $\varphi^\pi$ for the piecewise constant caglad approximation of $\varphi$ on $\pi$. Classically, given the partition $\pi$ and the discretized strategy $(\varphi^0, \Delta^\pi)$, the cost of rebalancing the portfolio from $(t_-, t]$ to $(t, t']$ involves the swap prices, where, for $0 \le s < t \le T$ and $1 \le i \le j \le d$, the amount $P_s(\mathbb{S}^{ij}_{\cdot,t})$ denotes the (exogenously given) price at time $s$ of the swap $\mathbb{S}^{ij}_{\cdot,t}$ with maturity $t$. Notice that, since swap contracts are not primitive financial instruments, in the equation above the payoff $\mathbb{S}_{t_-,t}$ at time $t$ is disentangled from the price $P_t(\mathbb{S}_{\cdot,t'})$ required at time $t$ to take a unit position on the next swap $\mathbb{S}_{\cdot,t'}$. We assume that the price $P_s(\mathbb{S}_{\cdot,t})$ of the swap contracts defines an $\mathbb{R}^d \odot \mathbb{R}^d$-valued function on $\{(s,t) \in \mathbb{R}^2 : 0 \le s \le t \le T\}$, null and right-continuous on the diagonal, and of finite $p/2$-variation. Let $\varphi^2$ be a continuous path of finite $q$-variation on $\mathrm{Hom}(\mathbb{R}^d \odot \mathbb{R}^d; \mathbb{R})$, where $q$ and $p/2$ are Young complementary. Then, the integral path $Y := (\varphi^2 . P)_{0,\cdot}$ exists and represents the accumulated cost in the time interval $[0,T]$ consumed by a continuously rebalanced enlarged strategy in order to adopt the positions $\varphi^2$ on the swap contracts. Definition 6.1. Let $f(S_T)$ be a contingent claim, where $f$ is in $C_b(\mathbb{R}^d)$ and $S_T$ is the terminal value of a continuous $d$-dimensional price path of finite $p$-variation. Let $\mathbf{S} = (S, \mathbb{S})$ be an $L$-diffusive model specification, $\alpha > p - 2$, and let $v$, $\Delta$ and $\Gamma$ be as in Proposition 5.2. Let $g$ be a continuous real-valued function on $[0,T]$. Then, the $g$-enlarged delta hedging is the enlarged strategy defined as in (28), where $Y := (\varphi^2 . P)_{0,\cdot}$. A desirable property of a hedging strategy is the self-financing condition, i.e. the fact that the strategy does not require money to readjust its positions during the hedging period. The following Proposition 6.2 gives the explicit formula for $g$ in (28) that guarantees a null rebalancing cost of the $g$-enlarged delta hedging. Proposition 6.2. The continuous real-valued function $g$ given in (29), where $Z := (\Gamma . P)_{0,\cdot}$, is such that the $g$-enlarged delta hedging has zero cost of continuous rebalancing. Proof. We adopt the notation in Definition 6.1 and introduce the required shorthand. We write out the cost of rebalancing along a partition $\pi$ term by term. Hence, summing over $t \in \pi$, $t > 0$, the statement follows from (25) and (30). □ The classical delta hedging is such that the initial endowment $V_0 = v(0, S_0)$ is precisely what the replicating strategy requires in order to yield the amount $f(S_T)$ at maturity $T$. Therefore, the writer of an option invests $V_0$ in the delta hedging strategy, and such a strategy will cover the contingent claim at maturity. Since delta hedging has no additional costs of financing (i.e. rebalancing the portfolio does not consume money), the writer's profit&loss is null. For the $g$-enlarged delta hedging in Proposition 6.2, the self-financing condition holds. Therefore, the option writer's P&L is exclusively given by the cost of replication, namely by the difference between the due payment $f(S_T)$ and the final value $\varphi^0_T S^0_T + \varphi^1_T \cdot S_T$ of the portfolio. Notice that the latter does not comprise the payoff of the swaps, because such endowments are consumed in the rebalancing process. Proposition 6.3. The profit&loss of the $g$-enlarged delta hedging, with $g$ given as in (29), is the cost of replication. Proof. The profit&loss is given by the difference $P\&L = f(S_T) - (\varphi^0_T S^0_T + \varphi^1_T \cdot S_T)$.
Hence, the statement follows immediately from the definitions in Equation (28), with $g$ given as in Equation (29). □ NON-SMOOTH OPTION PAYOUTS We now consider the case of call options in the Black-Scholes model. We will see that, as a result of the non-smooth payoff function, one must employ a different, and truly probabilistic, trading strategy towards maturity. Our setting is the one presented in Section 2, and we take the dimension $d$ equal to 1. The volatility operator is $L = \sigma^2 x^2 \partial^2_x / 2$, where $\sigma > 0$ is the volatility coefficient. Pricing a European option with payoff $f(S_T)$ requires solving the partial differential Equation (2), where the terminal constraint $\tilde{h} = e^{-rT} h$ appearing in this PDE stands in relation to the payoff function $f$ as expressed in Equation (1). The volatility operator in Equation (31) is not locally uniformly elliptic, i.e. it does not satisfy the requirement in Equation (15). However, as pointed out in Remark 4.3, we do not require ellipticity itself, only the existence and uniqueness of solutions to the equation in (16). Existence and uniqueness of solutions to the Black-Scholes PDE are well known. In our framework, the classical Black-Scholes model is specified by the enhancer $[S]_{s,t} = \int_s^t \sigma^2 S_u^2\, du$. Under this specification, we now discuss the application of our pathwise framework to the case of European call options, where the payoff is $f(x) = (x - K)^+$ for some fixed strike $K > 0$. This payoff is not bounded, so in principle it is not included in the general discussion above. However, despite the fact that the semigroup associated with the PDE pricing equation was defined on the set $C_b(\mathbb{R})$, this semigroup extends to a wider class than $C_b(\mathbb{R})$, hence allowing us to treat the European call option. Even if the model specification did not allow for such an extension, pricing European call options could always be reduced to pricing European put options thanks to put-call parity. In order to be able to apply Proposition 5.2, it remains to discuss the assumption on the $p$-moderation of the pair $(v, S)$. Unfortunately, here we see that the non-smoothness of the payoff of the call option (or, equivalently, of the put option) prevents us from applying directly the results established above. We will now discuss this in detail. Recall the three conditions in Definition 3.11. Let $\Delta$ and $\Gamma$ be the delta and the gamma sensitivities, namely $\Delta(t, x) = \Phi(d_1(t, x))$, where $d_1(t, x) = \big(\ln(x/K) + (r + \sigma^2/2)(T - t)\big)/(\sigma \sqrt{T - t})$ and $\Phi$ denotes the distribution function of the standard normal distribution. The fulfilment of the three conditions in Definition 3.11 depends on the terminal value of the price path. If $S_T \ne K$, the sensitivities converge as $t \uparrow T$. Instead, if $S_T = K$, then neither $d_1$ nor $d_2$ has a limit as $t \uparrow T$. To see this, we use the law of the iterated logarithm, which gives a precise statement on the small-time asymptotics of the Brownian path. Writing the price path near maturity in terms of the Brownian motion, the leading factor on the right-hand side has limsup equal to 1 and liminf equal to $-1$ as $t \uparrow T$. Therefore, if $S_T = K$, then $\limsup_{t \uparrow T} d_1(t, S_t) = \limsup_{t \uparrow T} d_2(t, S_t) = +\infty$ and $\liminf_{t \uparrow T} d_1(t, S_t) = \liminf_{t \uparrow T} d_2(t, S_t) = -\infty$. Because of Equation (38), conditions 1 and 2 in Definition 3.11 will not always be satisfied. Moreover, the singularity at $K$ will also impact condition 3. One could circumvent this issue by a smooth approximation of the option payoff that would eliminate the point of non-differentiability.
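The at-the-money singularity is easy to exhibit numerically. The short sketch below (ours, with illustrative parameters) evaluates the Black-Scholes gamma as time to maturity shrinks: at the strike it diverges like $\tau^{-1/2}$, while away from the strike it collapses to zero.

import numpy as np
from math import log, sqrt, exp, pi

def d1(t, x, K, T, r, sigma):
    tau = T - t
    return (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))

def gamma_bs(t, x, K, T, r, sigma):
    z = d1(t, x, K, T, r, sigma)
    return exp(-0.5 * z * z) / sqrt(2.0 * pi) / (x * sigma * sqrt(T - t))

K, T, r, sigma = 1.0, 1.0, 0.0, 0.2
for tau in (0.1, 0.01, 0.001, 0.0001):
    t = T - tau
    print(tau,
          gamma_bs(t, K, K, T, r, sigma),        # at-the-money: blows up
          gamma_bs(t, 1.1 * K, K, T, r, sigma))  # in-the-money: goes to zero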
Here, instead, we comment on what this says about option trading in practice, and on how these singularities, exposed by our pathwise framework, could be regarded as an underpinning of the practicality of option hedging. The unstable behavior of the sensitivities when time is close to maturity is well known in practice, in particular in the case of options that are at-the-money (i.e., the underlying has a price equal or very close to the strike). Because of this, it is common to stop the delta hedging before the actual option maturity and to continue with a simpler strategy such as buy-and-hold. This is described by introducing a time horizon $\hat{T}$ smaller than the option maturity $T$; then the Black-Scholes price at $\hat{T}$ is smooth, and so our framework can be applied up to time $\hat{T}$, subject to assuming that the option can be sold at this time at the Black-Scholes price. After $\hat{T}$, and in the limit as time approaches $T$, the sensitivity in Equation (35) no longer controls the value function of Equation (34) in the sense of Gubinelli. In the case of at-the-money options, the gamma sensitivity diverges to infinity as time approaches $T$. This has an impact on the profit&loss formula of Proposition 5.3, as described in the following proposition. Proposition 7.1. Under the condition in Equation (39), there always exists an arbitrarily fine trading grid such that the profit&loss of the delta hedging on this trading grid diverges to $-\infty$ as time approaches the option maturity. Remark 7.2. Proposition 7.1 says that, in the case of at-the-money options, if the misspecification of the Black-Scholes model is such that the volatility is underestimated, then there exist trading times at which following the delta hedging will make the trader incur unbounded losses. Instead, in the cases of in-the-money and out-of-the-money options ($S_T > K$ and $S_T < K$, respectively), the gamma sensitivity has a limit as time approaches maturity, and this limit is zero. Therefore, in these two cases, the Young integral describing the profit&loss can be bounded relying on the integration bounds of Section 3. Proof of Proposition 7.1. Let $\pi$ be a trading grid up to the option maturity. Consider the approximation of the Young integral in Equation (26) along $\pi$, under the condition in Equation (39). We see that as $\varepsilon \downarrow 0$ the quantity $-\varepsilon^{1-\beta}\, \Gamma_{T-\varepsilon}$ goes to $-\infty$. □ CONCLUSIONS In this work, we have shown that European options may be replicated in a framework that does not use probability. We instead study enhanced price paths defined in the spirit of Rough Path Theory. On the one hand, their enhancements are essential for pathwise integration, as discussed in Section 3. On the other hand, they encapsulate the specification of a model for the valuation of derivatives, carrying the information needed for the hedging (Section 4). Moreover, these enhancements allow us to assess model misspecification: a P&L formula for hedging under the "wrong" volatility was proved, generalizing the so-called fundamental theorem of derivative trading (Section 5). We stated the precise assumptions that allow for the application of Gubinelli integrals in the description of hedging strategies. These assumptions are satisfied in the standard Black-Scholes case of European call and put options only up to a time $\hat{T}$ that strictly precedes the option maturity $T$. On the one hand, this opens the question of suitable approximations for the limiting case as $\hat{T}$ converges to $T$ (without using probability); on the other hand, it provides a mathematical underpinning to some hedging practices linked to unstable option sensitivities, in particular in the at-the-money case.
The fact that our enhanced paths extend to trajectories other than semimartingales would make the no-arbitrage arguments suitable for models with transaction costs and other market imperfections. Indeed, in these cases price trajectories are usually less regular than semimartingales. Moreover, we would like to point out that the classical arguments for no-arbitrage under transaction costs are based on consistent price systems, see (Guasoni, 2006; Guasoni et al., 2008). This means that the absence of arbitrage is ultimately based on support theorems, hence presenting the opportunity to apply Rough Path Theory, whose application in support-type arguments has proved fruitful (see Friz & Victoir, 2010, Chapter 19). In this direction, a recent MSc thesis at Imperial College London took the first step (Pei, 2019). ACKNOWLEDGMENTS Damiano Brigo is grateful to the participants of the conference in (Brigo, 2019) for helpful feedback. The work of Thomas Cass is supported by EPSRC Programme Grant EP/S026347/1. DATA AVAILABILITY STATEMENT Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
COMBINED APPROACH FOR SENTIMENT ANALYSIS IN SLOVAK USING A DICTIONARY ANNOTATED BY PARTICLE SWARM OPTIMIZATION Sentiment analysis in minor languages, such as Slovak, using the dictionary approach is a difficult task. It requires a lot of human effort and it is time-consuming to prepare a reliable source of information, especially a good dictionary. We propose an approach which uses a biologically inspired algorithm to find optimal polarity values for sentiment words. It applies swarm intelligence algorithms, standard Particle Swarm Optimization (PSO) and Bare-bones Particle Swarm Optimization (BBPSO), to replace a human annotator at the moment of dictionary creation. We created two dictionaries, each annotated by a human annotator, by PSO and by BBPSO. These dictionaries were compared, with the result that the versions annotated by PSO and BBPSO outperformed the human annotator. Then a combined approach was used to classify reviews that do not contain words from the dictionary. These reviews decrease the classification performance significantly. The combined approach implements a machine learning method to build a model based on the reviews classified by the dictionary approach. The combined approach finally reduced the number of unclassified reviews from 18% and 40.2% to 0.3% and increased the macro-F1 measure from 0.694 and 0.495 to 0.865 and 0.841. INTRODUCTION The social web produces a huge amount of data every day, which is very difficult to process manually. Several approaches, including machine learning approaches, dictionary-based approaches and deep learning approaches, were proposed to process the data automatically. The approaches used for sentiment analysis aim to distinguish positive from negative (sometimes also neutral [5]) opinions, emotions or wishes towards subjects such as products, movies or people. Machine learning approaches based on well-known machine learning algorithms such as the Naïve Bayes classifier, Support Vector Machines, Maximum Entropy or k-Nearest Neighbors [7,17,22] are used to assign a positive or a negative polarity to the reviews. They require a labeled training dataset to learn models that can be applied on a testing dataset. Deep learning methods use neural networks to discover new features and new information from the current data [18,23]. Lexicon-based approaches usually use lexicons which contain polarity words. The strength of the polarity assigned to each word in the dictionary indicates how strongly the word is correlated with positive or negative polarity.
All these methods require some source of external knowledge. Machine learning algorithms require an annotated dataset to train the sentiment classifier and to build the model. Deep learning methods require an annotated dataset too. On the other hand, dictionary-based approaches require an annotated dictionary which contains sentiment words with assigned polarity. These data provide all the necessary information for sentiment classification. But this information is very unevenly distributed across languages. There are languages, such as English, in which a lot of previous work has been done and which provide many resources. However, annotated sources of information in minor languages are rare and it is not a simple task to create them manually. In this paper, we focus on adapting sentiment analysis from English to Slovak. The Slovak language belongs to the group of Slavic languages (with Czech and Polish) and it has a rich morphology. A combined approach integrates the dictionary approach and a machine learning method to create a more flexible approach. This paper focuses on adapting sentiment analysis from English to Slovak using automatic dictionary annotation and the integration of a dictionary-based approach and a machine learning method. Particle Swarm Optimization (PSO) [12] is used to find optimal values of the polarity for words in the dictionary. It is a challenging task to adapt a dictionary from a major to a minor language (a language which is not commonly used) such as Slovak. It requires a lot of human effort and it is also time-consuming to translate and label all words in a new dictionary. To assign correct polarity values, more annotators are needed, because the polarity values are often biased by the individual preferences of the annotator. Automatic translation cannot be used directly. Translated words can have more than one sense and they might have a different polarity than the original word. The translated dictionary has to be annotated again. We decided to replace the human annotator by PSO in the annotation stage. In some cases, a new dictionary might not cover all sentiment words in the target language. Hence some reviews in the target language will not be classified. To solve this problem, we implemented a combined approach which consists of the combination of the dictionary approach and machine learning. It uses the dictionary to classify all reviews in the dataset. Then the dataset is split into two subsets, the subset classified by the dictionary approach and the unclassified subset. Classified reviews are used as a training dataset for the machine learning method. Then the trained model is applied to the unclassified reviews. We implemented a Naïve Bayes classifier to build the model and classify unlabeled reviews. It is a simple classifier based on probability theory with good results for sentiment analysis. This paper is organized as follows. Section 2 briefly describes dictionary-based approaches and summarizes the related work in the field of sentiment analysis using different types of methods. Section 3 introduces PSO and its modification, which were used in our approach, and Section 4 details our combined approach. Section 5 analyzes the achieved results and Section 6 summarizes the contributions of the paper.
Dictionary-based approach Dictionary-based approaches usually apply sentiment dictionaries to classify reviews into a positive or a negative class. These dictionaries can be generated in three ways: manually, automatically and semi-automatically. Manually generated dictionaries are more accurate and usually involve only single words. They are often translated from another language or collected from a corpus. The value of polarity is copied from the original dictionary or assigned manually. The advantage is that all words in these dictionaries are related to sentiment. However, this approach requires a lot of human effort and it is time-consuming. Manually created dictionaries separate words into positive and negative groups [14] or also provide additional lists of words such as shifters (words that can change polarity) [20], [10]. The Warriner lexicon [25] and the Mikula lexicon [13] provide a value of polarity for each word. Automatically generated dictionaries require less human effort. They assign values of polarity based on relations between words in existing dictionaries, e.g. WordNet. SentiWordNet [1] contains automatically annotated WordNet synsets according to their degrees of positivity, negativity, and neutrality. In WordNet-Affect [19], emotional values were added to each WordNet synset. SenticNet [2] includes commonsense knowledge, which provides background information about words. The main weakness of automatically generated dictionaries is that they might contain words without polarity or with incorrectly assigned polarity. For this reason, the semi-automatic generation of dictionaries was introduced. It creates dictionaries automatically and checks them manually. Evolutionary computation There are several works which used evolutionary computation in text classification. Genetic programming was applied to find new term-weighting schemes in [6]. The schemes were used to improve classification performance. The standard term-weighting schemes were combined with new, more discriminative term-weighting schemes created by the genetic algorithm. In [8], Particle Swarm Optimization was applied to find the most useful features, which were added as an input for a framework based on Conditional Random Fields. PSO was also used to select features and combine them with Support Vector Machines to classify reviews in [3]. In our paper, PSO learns a group of numbers which represents the values of polarity for specific words in the dictionary. Standard PSO Particle Swarm Optimization is an optimization algorithm inspired by bird flocks. PSO converges to the final solution faster than the genetic algorithm, and each particle stores knowledge about its best solution. The possible solutions are called particles. Particles are part of a population called a swarm. Each particle keeps its best position (evaluated by the fitness function), called pbest. The position of the best particle chosen from the whole swarm is called gbest. The standard PSO consists of two steps: change velocity and update position. In the first step, each particle changes its velocity towards its pbest and gbest [9]. In the second step, the particle updates its position. The new position is calculated based on the previous position and the new velocity. Each particle is represented as a vector in a D-dimensional space. The i-th particle can be represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. The velocity of the i-th particle is represented as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$
and the best previous position of the particle is represented as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$. The best particle in the swarm is represented by the index $g$, and $w$ is an inertia weight which balances the exploration and exploitation abilities of the particles. The velocity and position are updated using the following Equations 1 and 2:

$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_1 (p_{id}^{t} - x_{id}^{t}) + c_2 r_2 (p_{gd}^{t} - x_{id}^{t})$ (1)

where $v_{id}^{t+1}$ is the new velocity of the i-th particle in the d-th dimension in iteration t+1, $w$ is the inertia weight, $v_{id}^{t}$ is the velocity of the i-th particle in the d-th dimension in iteration t, $c_1$ is the self-confidence factor, $r_1$ is a uniformly distributed random value in [0,1], $p_{id}^{t}$ is the pbest (personal best) position of the i-th particle in the d-th dimension in iteration t, $x_{id}^{t}$ is the current position of the i-th particle in the d-th dimension in iteration t, $c_2$ is the swarm confidence factor, $r_2$ is a uniformly distributed random value in [0,1], and $p_{gd}^{t}$ is the position of the gbest particle in the d-th dimension in iteration t.

$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$ (2)

where $x_{id}^{t+1}$ is the new position of the i-th particle in the d-th dimension in iteration t+1, $x_{id}^{t}$ is its current position and $v_{id}^{t+1}$ is its new velocity. Here $d = 1, 2, \ldots, D$, and in our system D represents the number of dimensions, which corresponds to the number of words in the dictionary; $i = 1, 2, \ldots, N$, where N is the number of particles in the swarm; and $t = 1, 2, \ldots$ denotes the iteration number. The two numbers $r_1$, $r_2$ are uniformly distributed random values in [0,1] which prevent falling into local optima. $c_1$ and $c_2$ are important parameters, known as the self-confidence factor and the swarm confidence factor, respectively. They define the type of trajectory the particle travels, so they control the searching behavior of the particle [4]. The stopping criterion of the algorithm often depends on the type of problem. In practice, PSO runs until a fixed number of iterations is done or an error bound is reached. Bare-bones PSO (BBPSO) The standard PSO uses pbest and gbest to update the position of the particle. The impact of these values was studied in [11]. In that work, pbest and gbest were set as constants and the trajectories of the particles were investigated. They were plotted, and the obtained histogram had the shape of a tidy bell curve with its center between pbest and gbest. From the results, it was suggested that the trajectory can be determined by the difference between pbest and gbest, so these positions can determine the particle's movement. Based on the results, a new PSO method called Bare-bones PSO (BBPSO) was derived. This model of PSO is based on the Gaussian distribution $N(\mu, \sigma)$ with mean $\mu$ and standard deviation $\sigma$, as shown in Eq. 3:

$x_{id}^{t+1} = N(\mu_{id}, \sigma_{id})$ if $rand() < 0.5$, and $x_{id}^{t+1} = p_{id}^{t}$ otherwise (3)

where $x_{id}^{t+1}$ is the new position of the i-th particle in the d-th dimension in iteration t+1 and $p_{id}^{t}$ is the pbest (personal best) position of the i-th particle in the d-th dimension in iteration t. Here $\mu$ is the center of pbest and gbest, and $\sigma$ is the absolute difference between pbest and gbest. The rand() function is used to speed up convergence by retaining the previous best position pbest.
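The two update rules can be summarized in a compact sketch. The code below is our illustration of Eqs. 1-5 (the clipping to the polarity range [-3, 3] follows the paper's setup; the small swarm size and the demo fitness are ours, for demonstration only), maximizing an arbitrary fitness callable such as the macro-F1 described in Section 4.

import numpy as np

rng = np.random.default_rng(42)

def pso(fitness, dim, n_particles=30, iters=100,
        w=0.729844, c1=1.49618, c2=1.49618, lo=-3.0, hi=3.0):
    # Standard PSO maximizing `fitness`; each particle is one candidate
    # vector of word polarities in [lo, hi] (Eqs. 1 and 2).
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. 1
        x = np.clip(x + v, lo, hi)                                   # Eq. 2
        fit = np.array([fitness(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

def bbpso(fitness, dim, n_particles=30, iters=100, lo=-3.0, hi=3.0):
    # Bare-bones PSO: positions sampled from N(mu, sigma) built from pbest
    # and gbest (Eqs. 4 and 5); rand() < 0.5 keeps pbest (Eq. 3).
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        mu = 0.5 * (pbest + gbest)                    # Eq. 4
        sd = np.abs(pbest - gbest)                    # Eq. 5
        sample = rng.normal(mu, sd + 1e-12)
        keep = rng.random(x.shape) < 0.5
        x = np.clip(np.where(keep, pbest, sample), lo, hi)
        fit = np.array([fitness(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

# Example usage with a placeholder fitness (a real run would score a labeled
# corpus with the candidate polarities and return macro-F1):
demo_fitness = lambda p: -float(np.sum((p - 1.0) ** 2))
best = bbpso(demo_fitness, dim=220)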
PROPOSED METHOD We created two dictionaries to analyze sentiment using dictionaries. The first dictionary (the big dictionary) was translated from English. It was manually extended and contains domain-dependent words (words whose meaning depends on the domain). For this reason, we decided to create a new dictionary (the small dictionary). It is expected that this dictionary is domain-independent, because it was extracted from six English dictionaries and only domain-independent words are included in all of them, as sketched after this section. The dictionaries were analyzed and only the words overlapping across all of them were picked up. The dictionary size is smaller than the size of the big dictionary, which is also important: the number of particles in our PSO implementation depends on the size of the dictionary, and it influences the time needed to find an optimal solution. For each dictionary, three versions were generated. The first version was annotated manually by a human annotator, the second version was annotated by PSO, and the third by BBPSO. Then, all versions were used for sentiment analysis in the Slovak language. To adapt to the target language, the combined approach was used. It combines the dictionary-based approach and a machine learning method. In our work, we decided to apply the Naïve Bayes classifier, which is a simple probabilistic classifier with good results in sentiment analysis. Big dictionary The big dictionary was derived from an English dictionary. The original dictionary [10] consists of 6789 words, including 13 negations. We translated only the positive and negative words. Synonyms and antonyms from the Slovak thesaurus were found for each original word. The thesaurus was also used to determine intensifiers and negations. The big dictionary consists of 598 positive words, 772 negative words, 41 intensifiers and 19 negations. The first version of this dictionary was annotated manually. The range of polarity from -3 (the most negative word) to +3 (the most positive word) was chosen for the polarity values. For each word in the dictionary, the English form was found by double translation. "Double translation" means that each word was translated into English and then translated back to Slovak. In case the word had the same meaning before and after translation, the English form of the word was used. Small dictionary The second dictionary (the small dictionary) was derived from the six different English dictionaries used in the works [2,10,15,16,21,24]. The comparison of these dictionaries can be seen in Table 1; the recoverable part of the table reads:

SenticNet [2]: 27405 positive, 22595 negative, no intensifiers, no negations
AFINN [16]: 878 positive, 1598 negative, no intensifiers, no negations
Sentiment140 [15]: 38312 positive, 24156 negative, no intensifiers, no negations
SentiStrength [24]: 399 positive, 524 negative, 28 intensifiers, 17 negations

The English dictionaries were analyzed and only the overlapping words were picked up into a new dictionary. To translate these words to Slovak, the English translations from the big dictionary were used. The overlapping words were found and their Slovak forms were added to the dictionary. The new lexicon contains 220 words, including 85 positive words and 135 negative words. Intensifiers and negations were not added because they were not included in all of the original dictionaries. The first version of the dictionary was annotated manually with a range of polarity from -3 to 3. Annotation by PSO Another two versions of the dictionaries were annotated by PSO and BBPSO, respectively. PSO is an efficient, robust and simple optimization algorithm which has been successfully applied to optimizing various functions.
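The construction of the small dictionary boils down to a set intersection across the source lexicons. A toy sketch of ours (the word lists are placeholders, not the actual lexicons):

lexicons = [
    {"good", "bad", "great", "awful", "nice"},
    {"good", "bad", "great", "poor"},
    {"good", "bad", "terrible", "great"},
]
small_dictionary = set.intersection(*lexicons)
print(sorted(small_dictionary))   # only the words shared by every lexicon survive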
During the evolutionary process, each particle represents one sub-version of the dictionary and can be encoded as a vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, where $x_{ij} \in [-3, 3]$, $i = 1, 2, \ldots, N$, where N is the number of particles, and $j = 1, 2, \ldots, D$, where D denotes the number of words in a dictionary. The particle size depends on the size of the dictionary. From the big dictionary, only positive and negative words were used, thus the particle size is 1370 polarity values. The particle that represents the small dictionary has a size of 220 polarity values. The designed approach is shown in Figure 1. The standard PSO method The main idea of using PSO is to find an optimal value of polarity for each word. The position of the particle represents one potential solution, which can be represented as a vector with D dimensions corresponding to the size of the dictionary. The initial population is generated randomly and then the PSO algorithm is applied. The algorithm evaluates each particle based on the fitness function, sets up pbest for each particle and searches for the gbest. In the next iteration, the velocity of each particle is calculated based on its pbest and gbest, and the position of the particle is updated. The particle is evaluated again and pbest and gbest are updated. This process runs until a fixed number of iterations is done. For the experiments with standard PSO, the following parameters were used:

• inertia weight = 0.729844
• number of particles = 15000
• number of iterations = 100

The BBPSO method The main idea of using BBPSO is also to find an optimal value of polarity for each word. In contrast to the standard PSO, Bare-bones PSO uses pbest and gbest to calculate the mean and standard deviation of a Gaussian distribution. It also randomly initiates the first population, then evaluates each particle based on the fitness function, sets up pbest for each particle and searches for the gbest. To update the position of the particle, BBPSO uses the Gaussian distribution $N(\mu_{id}, \sigma_{id})$ with mean $\mu_{id}$ and standard deviation $\sigma_{id}$, calculated using the following Eq. 4 and Eq. 5:

$\mu_{id} = (p_{id}^{t} + p_{gd}^{t})/2$ (4)

$\sigma_{id} = |p_{id}^{t} - p_{gd}^{t}|$ (5)

where $p_{gd}^{t}$ is the position of the gbest particle in the d-th dimension and $p_{id}^{t}$ is the pbest (personal best) position of the i-th particle in the d-th dimension; $d = 1, 2, \ldots, D$, where D represents the number of words in the dictionary, and $i = 1, 2, \ldots, N$, where N is the number of particles in the swarm. The particle is evaluated again and pbest and gbest are updated. This process runs until a fixed number of iterations is done. Fitness function The fitness function is based on the simple dictionary approach. It combines the polarity values generated by PSO/BBPSO with the words in the dictionary to create a temporary dictionary. The temporary dictionary classifies the reviews in the dataset. The datasets are stored in preprocessed form. The input review is split into sentences and words. Each word is compared with the words in the dictionary, and if the word is found in the dictionary, the polarity value of the sentence is updated. If the word is positive, the polarity of the sentence increases, and if the word is negative, the sentence polarity decreases. This process can be described by Eq. 6:

$P_s = \sum_{i=1}^{w_s} pw_i$ (6)

where $P_s$ is the sentence's polarity, $pw_i$ is the polarity of the i-th word and $w_s$
is the number of words in the sentence s. The polarity of the review is the sum of the polarities of all its sentences. The class is assigned based on the polarity of the review. Precision and recall are calculated by comparing the classes assigned by the system with the gold-standard labels. Precision is calculated by Eq. 7 and is the ratio of correctly evaluated positive reviews (tp) to all reviews marked by the algorithm as positive (tp + fp). Recall is calculated by Eq. 8 and is the ratio of correctly evaluated positive reviews (tp) to all reviews labeled as positive (tp + fn):

$\text{precision} = \frac{tp}{tp + fp}$ (7)

$\text{recall} = \frac{tp}{tp + fn}$ (8)

The same method of calculation was used to compute the precision and recall for negative reviews. They are applied to calculate the F1 measure, which is the harmonic mean of precision and recall. It is calculated by Eq. 9:

$F1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$ (9)

The final values of the fitness function are derived from the macro-F1 measure, which evaluates the performance on an unbalanced dataset. Macro-F1 is the average of the F1 measures calculated for each class (Eq. 10). Thus, macro-F1 shows the effectiveness in each class independently of the size of the class:

$\text{macro-F1} = \frac{F1_{\text{pos}} + F1_{\text{neg}}}{2}$ (10)

Combined approach In order to adapt to a new language and classify comments which do not contain words from the current dictionary, the combined approach is applied. It combines the dictionary-based approach and a machine learning method. The dictionary-based approach classifies all reviews in the dataset. Then the dataset is split into two subsets: reviews classified by the dictionary approach and unclassified reviews. The classified reviews are divided into the positive and the negative group and they are sorted from the most positive/negative to the least positive/negative. The positive group of reviews is compared with the negative one to find which group contains fewer comments. All reviews from the smaller group and the same number of reviews from the bigger group are selected to form a new balanced dataset. This dataset is used as the training dataset for the machine learning method. The distribution is very important because, if we had more comments with one polarity, it could influence the results. The training dataset is used to calculate the probability that the word w from the sentence s is connected with class c (positive or negative). Thus, if the word is found in the review, the probability P that this word w is from class c is calculated by the simple probability method described in Eq. 11. The classes assigned by the dictionary approach are used to build the model:

$P(w_c) = \frac{w_c}{w_d}$ (11)

where $P(w_c)$ is the probability that the word is from class c, $w_c$ is the number of occurrences of word w in class c, and $w_d$ is the number of occurrences of word w in the whole dataset. In case the word is not assigned to the specific class and the probability would be zero, a method which returns a very low number instead of zero is implemented. After the model is built, it is used to classify the unclassified reviews from the original dataset. The polarity of the review is composed of the probabilities of each word being connected to the positive and the negative class. The probability is computed by Eq. 12:

$P_{sc} = \frac{\sum P(w_{ic})}{w_s}$ (12)

where $P_{sc}$ is the probability that the sentence is from class c, $\sum P(w_{ic})$ is the sum of the probabilities of all words from the sentence s belonging to class c, and $w_s$ is the number of words in sentence s. The reviews are added to the positive class if the final positive probability is higher than the negative probability, and vice versa.
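Eqs. 6, 11 and 12 translate directly into code. The following sketch is our illustration (the data structures and names are assumptions, not the paper's implementation): it scores reviews with the dictionary (Eq. 6) and then applies the Naive Bayes step of the combined approach (Eqs. 11 and 12) to the reviews the dictionary left unclassified.

from collections import Counter

def review_polarity(review, polarity):
    # Eq. 6 applied sentence by sentence: the review score is the sum of the
    # polarity values of its dictionary words (unknown words contribute 0).
    return sum(polarity.get(w, 0.0) for sentence in review for w in sentence)

def combined_classify(labeled, unlabeled):
    # Naive Bayes step of the combined approach: word probabilities are
    # estimated from the dictionary-classified reviews and applied to the
    # reviews the dictionary could not classify.
    in_class = {"pos": Counter(), "neg": Counter()}
    total = Counter()
    for words, label in labeled:
        in_class[label].update(words)
        total.update(words)

    def p(word, label):
        # Eq. 11, with a small floor instead of zero for unseen combinations.
        if total[word] == 0:
            return 1e-9
        return max(in_class[label][word] / total[word], 1e-9)

    results = []
    for words in unlabeled:
        n = max(len(words), 1)
        p_pos = sum(p(w, "pos") for w in words) / n   # Eq. 12, class "pos"
        p_neg = sum(p(w, "neg") for w in words) / n   # Eq. 12, class "neg"
        results.append("pos" if p_pos > p_neg else "neg")
    return results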
Dictionary annotation Our approach was tested on two datasets. The Slovak dataset contains 5242 reviews from different websites. It consists of 2573 positive and 2669 negative comments. Neutral comments were removed. The reviews refer to different domains such as electronics, books, movies and politics. The dataset includes 155,522 words. To compare our approach with other works, a movie dataset [17], which contains 1000 positive and 1000 negative reviews collected from rottentomatos.com, is used. The dataset is preprocessed and translated to Slovak using Google Translator. All datasets are labeled as positive or negative manually. Each dataset was randomly split in a 90:10 ratio, which means that the algorithm searches for an optimal solution on 90% of the dataset and the optimal solution is validated on the remaining 10% of unseen comments. The same subsets are applied in all experiments, including those with the manually labeled dictionary. The manually labeled dictionary was evaluated on the same 10% subset. The performance of all versions of the dictionaries was compared and the results can be seen in Table 3.

Table 3 The comparison of F1-measures achieved by three different versions of two dictionaries.

dictionary | Slovak dataset | movie dataset
big dictionary labeled manually | 0.767 | 0.629
big dictionary labeled by PSO | 0.698 | 0.694
big dictionary labeled by BBPSO | 0.775 | 0.743
small dictionary labeled manually | 0.501 | 0.679
small dictionary labeled by PSO | 0.509 | 0.727
small dictionary labeled by BBPSO | 0.528 | 0.738

It can be seen that standard PSO is able to find better values of polarity than a human: it outperforms the human annotator in three cases, and there is only one experiment in which the human annotator achieved better results than standard PSO. The dictionary labeled by standard PSO achieved better results in three experiments: the classification of the Slovak dataset using the small dictionary, and the classification of movie reviews using both the big and the small dictionary. Then BBPSO was applied, and it achieved even better results than standard PSO in every experiment. It significantly outperforms both previous types of annotation, annotation by a human and annotation by standard PSO. It is interesting that both dictionaries labeled by PSO and BBPSO, respectively, outperformed the human-labeled dictionary. There are several reasons for this. Human labeling can be biased, which means it is based on intuition. It might not work well in some cases. It is not trained in any way and no known data are applied during human labeling. Our PSO-based approach, which is similar to machine learning methods, works well in learning optimal polarity values for words in the dictionary, which contradicts the common understanding in this field. The movie dataset is used to compare our approach with the method described in [21]. They achieved performance between 68.05% (a simple sentiment analysis using just words from the dictionary) and 76.37% (a sentiment analysis based on additional features). Our small dictionary labeled by BBPSO achieved a performance of 73.67% with a dictionary size of 220 words, which represents only 3.2% of the size of the dictionary (6793 words) proposed in the work by Taboada [21]. Combined approach The combined approach was tested on the Slovak dataset. In this case, the dataset was split into two parts. The reviews labeled by the dictionary approach created the training dataset and the unclassified reviews created the testing set.
The training dataset was balanced to an equal number of positive and negative reviews and was used to build a probability model. The model was applied to the testing dataset and the results can be seen in Table 4 and Table 5. These results show that the unclassified comments degrade the performance. The macro-F1 measure with unclassified reviews was around 0.694 using the big dictionary and 0.495 using the small dictionary. The original dictionary approach using the big dictionary was not able to classify 18% of the reviews; the combined approach using the big dictionary classified 99.8%. The original dictionary approach using the small dictionary left 40.2% unclassified, while the combined approach using the small dictionary classified 99.7%. The results show that the combined approaches achieved better results in every experiment. It can be seen that the dictionaries annotated by PSO and BBPSO achieved better results, because they provided slightly better results using the dictionary approach. CONCLUSION Sentiment analysis in minor languages using the dictionary approach is not simple and requires a lot of human effort to prepare a good source of information, especially a dictionary. In this paper, an automated method for dictionary annotation is proposed. It uses optimization algorithms, Particle Swarm Optimization (PSO) and Bare-bones Particle Swarm Optimization (BBPSO), to find optimal values of polarity for words in the dictionary. Then the combined approach is applied to adapt the dictionary and classify the unclassified reviews from the dataset. Two dictionaries were created to demonstrate the better performance of the dictionaries annotated by PSO and BBPSO, respectively. The dictionaries annotated by the optimization algorithms outperformed the dictionary annotated by a human and provided better knowledge for the combined approach. In the next stage, the combined approach built a probability model which was able to classify the reviews that did not contain words from the dictionary, and it reduced the number of unclassified reviews from 18% and 40.2%, respectively, to 0.3%.

Fig. 1 Overview of the system.
Table 1 The comparison of dictionaries used for creation of the small dictionary.
Table 2 Contingency table for precision and recall.
Table 4 The comparison of different approaches for opinion analysis using the big dictionary.
Table 5 The comparison of different approaches for opinion analysis using the small dictionary.
Identification and utility of sequence related amplified polymorphism (SRAP) markers linked to bacterial wilt resistance genes in potato Bacterial wilt caused by Ralstonia solanacearum is one of the most economically important diseases affecting potato (Solanum tuberosum). It is necessary to develop more molecular markers for potential use in potato genetic research. A highly resistant primitive cultivated species, Solanum phureja, was employed to generate an F1 mapping population to perform bulked segregant analysis (BSA) for screening and identifying sequence related amplified polymorphism (SRAP) markers linked to potato resistance to bacterial wilt. A linkage map containing 23 DNA markers distributed on three linkage groups, and covering a genetic distance of 111 cM with an average distance of 5.8 cM between two markers, was developed. Two SRAP markers, Me2em5 linked in repulsion phase and Me2em2 in coupling phase, flanked the resistance genes at genetic distances of 3.5 and 3.7 cM, respectively. These markers and two others were used for early seedling selection in a BC1 population. The results show that this marker system could be used in a marker assisted selection (MAS) breeding program. INTRODUCTION Bacterial wilt caused by Ralstonia solanacearum is one of the most widespread and destructive plant diseases, causing enormous economic losses. The vascular pathogen enters and colonizes the plant vascular (xylem) system, disrupting water transport and causing the characteristic symptoms of wilting, often with vascular discoloration and death of aerial tissues (He, 1983; Hayward, 1991). The bacterial parasite causes disease in over 400 plant species, including many crops such as potato, tomato and eggplant. The severity of attacks by this soil-borne vascular parasite is known to vary considerably according to climate, farming practices, soil type and geographic location (Guidot et al., 2007). Control of wilt diseases is also complicated by the scarcity of sources of disease-resistant host germplasm, and by the soil and vascular habitats of the pathogen (Hayward, 1991; Esposito et al., 2008). Despite decades of interest in the pathology, epidemiology and control of bacterial wilt, very little is known about the sources of resistance to the disease and about the genetic or molecular mechanisms underlying host plant resistance (Qu, 1996). Some nuclear SSR alleles derived from the wild species S. chacoense appeared to be related to bacterial wilt resistance (Chen et al., 2013).
The mode of inheritance of resistance in some tetraploid primitively cultivated species is not yet clearly known. Therefore, it is difficult to select bacterial wilt resistant clones or varieties in potato breeding. Although biological control could be used in some situations to combat the pathogens, development of resistant cultivars is usually the best agronomic solution. Disease resistance breeding has traditionally been done by phenotypic selection. The efficiency of phenotypic selection is reduced by variability in the pathogen, infection and disease development. Molecular markers tightly linked to the resistance genes can eliminate these sources of phenotypic variation and enable more efficient breeding strategies in potato improvement (Li et al., 2013; Virupaksh et al., 2012). Marker-assisted selection can be helpful in bacterial wilt resistance breeding. We have previously identified molecular markers such as AFLP and RAPD markers linked to potato resistance, but these markers were found not to be effective for screening a large population containing up to 200 true potato seeds in practical application (Gao et al., 2002, 2005). To accelerate introgression of the resistance into cultivated lines, it would be desirable to have more and better molecular markers to perform selection for the possible resistance genes. Sequence-related amplified polymorphism (SRAP) technology has been recognized as a new and useful molecular marker system for mapping and gene tagging in many crop plants (Valdez-Ojeda et al., 2008; Cao et al., 2011; Niu et al., 2011; Guo et al., 2012; Deng et al., 2013). SRAP markers are also useful in positional cloning of genes and in elucidating the genetic mode of complex traits that do not display Mendelian segregation (Levi, 2002). Marker-facilitated selection would be particularly effective for pyramiding multiple resistance genes to provide effective and potentially stable resistance to disease (Martin et al., 1991). The objectives of this study were to determine the feasibility of using SRAP markers to develop a molecular linkage map of S. tuberosum, and to identify marker loci associated with bacterial wilt resistance. Both of these objectives were met, and we identified SRAP markers that proved useful in selecting for the gene conferring bacterial wilt resistance in a BC1 population. A longer-term goal of this project will be to use closely linked molecular markers to introduce these resistance loci into potato cultivars as a potential strategy to control potato bacterial wilt disease in the field. In this study, the set of markers provides an effective marker combination for use in a marker assisted selection (MAS) breeding program to identify genotypes containing resistance derived from the original potato cultivars. Taken together, this study sheds light on potential directions for the development of novel management strategies in molecular breeding for controlling wilt diseases.
Plant materials One genotype, ED13, derived from the cross between 772102.37 (resistant) and USW7589.2 (susceptible), both collected from the Department of Plant Breeding, Wageningen University, the Netherlands, was used as the resistant parent. The susceptible parent was derived from USW5337.3 × Solanum phureja (provided by the Institute of Vegetables and Flowers, Chinese Academy of Agricultural Sciences, China). A segregating F1 population (Figure 4A) composed of 230 seedlings from hybrid true potato seed (TPS) was used for mapping the SRAP markers. Another segregating BC1 population was generated by backcrossing CE26 (susceptible, derived from the cross USW5337.3 × 772102.37) with the parent 772102.37, for assessing the utility of candidate SRAP markers in identifying bacterial wilt-resistant BC1 plants (Figure 4B). Wounded-root plus soil inoculation Inocula at a concentration of 1 × 10^8 CFU/ml were prepared from cultures grown on BPG plates at 28°C for 48 h, suspended in SMM without agar, and incubated at 28°C for 4 h. The virulent, wild colony type of R. solanacearum was selected on 2,3,5-triphenyltetrazolium chloride (TZC) medium and used for inoculation. Fifteen potato seedlings were inoculated with strain PO41 of the bacterium (provided by the Institute of Plant Protection, Chinese Academy of Agricultural Sciences, China). Potato plants with six to eight fully expanded leaves were inoculated by pouring 50 ml of bacterial suspension into the soil around the base of the stem after wounding the roots with a knife. Inoculated plants were cultivated at 28°C under a 16/8 h (light/dark) photoperiod in a growth chamber and were not watered for five days before and after inoculation. Control plants were mock-inoculated with sterile water. Each treated plant was rated daily for disease for 21 days after inoculation. Symptoms were scored on a 0 to 5 disease index, where 0 indicates no disease, 1 indicates 1 to 10% of leaves wilted, 2 indicates 10 to 25%, 3 indicates 25 to 50%, 4 indicates 50 to 75%, and 5 indicates 75 to 100% of leaves wilted. Each experiment contained 16 plants per treatment, and experiments were repeated at least three times. Leaf tissues of susceptible plants were sampled 4 to 7 dpi into liquid nitrogen for DNA extraction, while resistant samples were collected 8 to 14 dpi because of the slower disease development in the resistant host. Sequence-related amplified polymorphism analysis DNA was isolated according to Ducreux et al. (2008). Bulked segregant analysis (BSA) (Michelmore et al., 1991) was used to identify SRAP markers linked to the bacterial wilt resistance gene. For BSA, the resistant and susceptible DNA bulks were composed of the 10 most resistant (disease index 0) and the 10 most susceptible (disease index 5) plants, respectively. SRAP analysis was conducted according to previously established protocols with minor modifications (Li et al., 2001). The PCR reaction was set up in a final volume of 20 µl containing 50 ng of DNA, 5.0 pmol of primer, 200 µM dNTPs, 1.5 mM MgCl2, and 0.5 U of Taq polymerase (Sangon Biotech Co.
Ltd., Shanghai, China) in 1× Taq buffer. The PCR program included an initial denaturation at 94°C for 3 min, followed by 8 cycles of 94°C for 30 s, 37°C for 30 s and 72°C for 90 s, then 35 cycles of 94°C for 30 s, 48°C for 30 s and 72°C for 90 s, with a final extension at 72°C for 10 min. PCR products were separated on 8% denaturing polyacrylamide gels and visualized by fast silver staining (Bassam et al., 1991). The primer combinations (Table 1) that generated polymorphic bands between the bulks were tested on the bulked individuals to eliminate false-positive markers. Linkage analysis of markers Several markers obtained previously (three AFLP and one RAPD marker) were employed before the construction of the linkage map to increase reliability. Preliminary screening of candidate seedlings was carried out on the two populations using these markers. As expected, more than 80% of the individual genotypes were detected to contain all or at least one relevant marker DNA fragment. Based on this preliminary selection, linkage analysis was performed with the JoinMap software (Van Ooijen and Voorrips, 2001). Prior to the linkage analysis of segregation ratios, SRAP fragments ranging in size from 75 to 500 bp were scored as markers. Presence or absence of each polymorphic fragment was coded as "1" or "0", where "1" indicated the presence of a specific allele and "0" its absence. For each segregating marker, a chi-square test for deviation from the expected 1:1 segregation ratio in a testcross or BC1 population was performed under the 'locus genotypic frequency' command. Markers were sorted on the basis of the chi-square test with a P-value of <0.05, and skewed markers were excluded from the analysis. All remaining markers were analyzed for linkage, and recombination fractions were converted into map distances (centimorgans) using the Kosambi mapping function. Logarithm of the odds (LOD) scores of 2.0 to 6.0 were used for grouping of markers, followed by higher threshold LOD scores of 7.0 to 10.0 for final mapping of markers in each linkage group. Loci showing weak or suspect linkages were removed from the analysis. Assessing the utility of a marker set in identifying bacterial wilt-resistant BC1 plants Forty-one (41) BC1 plants were generated by backcrossing the bacterial wilt-susceptible CE26 genotype with the recurrent parent 772102.37. The marker set identified here was used to select those progeny that presumably contain the corresponding DNA allele conferring bacterial wilt resistance, and strict phenotypic identification (scoring the disease index for wilt symptoms as described earlier) was then carried out three times to examine the degree of agreement. The Spearman correlation coefficient was used to analyze the correlation between the molecular markers and the disease parameters, using the statistical software package S-Plus Version 6. Probabilities <0.01 were considered significant for the correlation analysis. All p values were based on two-sided tests, and differences were considered statistically significant when p ≤ 0.05. All molecular detection and phenotypic identification were repeated three times.
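As a concrete illustration of the scoring and bulking steps above, the sketch below maps wilt percentages to the 0 to 5 disease index and pools the ten most extreme plants into the two BSA bulks. It is a minimal Python sketch; the function names and the example data are illustrative, not part of the original protocol.

```python
# Disease-index scoring (0-5 scale from the text) and BSA bulk selection.
# Thresholds follow the scale described above; data are invented.

def disease_index(percent_wilted: float) -> int:
    """Map percentage of wilted leaves to the 0-5 disease index."""
    if percent_wilted <= 0:
        return 0
    if percent_wilted <= 10:
        return 1
    if percent_wilted <= 25:
        return 2
    if percent_wilted <= 50:
        return 3
    if percent_wilted <= 75:
        return 4
    return 5

def pick_bulks(plants, n=10):
    """Select the n most resistant (index 0) and n most susceptible
    (index 5) plants whose DNA is pooled into the two BSA bulks."""
    resistant = [p for p in plants if p["index"] == 0][:n]
    susceptible = [p for p in plants if p["index"] == 5][:n]
    return resistant, susceptible

plants = [{"id": i, "index": disease_index(pct)}
          for i, pct in enumerate([0, 5, 80, 100, 30, 0, 95, 12, 60, 0])]
print(pick_bulks(plants, n=2))
```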
RESULTS Polymorphism and screening of SRAP markers One hundred primer combinations were first tested for selective amplification of DNA fragments from the resistant and susceptible bulks to screen for polymorphism in the diploid mapping population. SRAP fragments ranging in size from 75 to 500 bp were scored as markers. Fifty-six (56) of the polymorphic SRAP primer combinations amplified inconsistent band patterns per line. This inconsistency may have been the result of residual heterozygosity or of the amplification of similar sequences in two separate genomic regions. It may also result from the use of polyacrylamide gels, which have a higher resolving power than most agarose gels. A total of 314 unambiguous bands were amplified by 38 of the 54 SRAP primer combinations, of which 187 bands were polymorphic (59.55%), ranging in size from 50 to 1000 bp. The number of polymorphic fragments per SRAP primer combination varied from 2 to 15, with an average of 6.0 fragments per combination. Polymorphisms were amplified with 38 primer pairs (67.8%), resulting in 146 polymorphic bands between the resistant and susceptible parental genotypes. Based on these results, primer pairs that generated polymorphic bands were tested on the resistant and susceptible bulks. SRAP bands present in one pool and absent in the other were regarded as candidate markers linked to bacterial wilt resistance. A relatively small number of these primer combinations (4 of 54 pairs; 7.41%) were not suitable for the mapping experiment within the tested population because they lacked polymorphism in the 75 to 500 bp size range. Hence, of the original 56 primer pairs, only 30 combinations were used. However, three (5.56%) of these combinations were skewed from the expected 1:1 segregation ratio and had to be excluded from the final linkage analysis. Twenty-seven (27) primer combinations produced 86 SRAP candidate markers that were used in the subsequent linkage analysis. Identification of SRAP markers and the linkage map In order to obtain more closely linked markers and to avoid possible mapping errors, stringent linkage analysis criteria were used with the JoinMap analysis (Van Ooijen et al., 2001) to construct the linkage groups. For each primer combination, all primer pairs that generated polymorphic bands were tested on the resistant and susceptible bulks, and SRAP bands present in one pool and absent in the other were regarded as candidate markers linked to bacterial wilt resistance. These SRAP markers were uniquely present in one of the donor parents and in the F1 individual genotypes. Although a total of eighty-six SRAP markers were suitable for the mapping analysis using the F1 population, 23 (26.7%) of these markers were skewed from the expected 1:1 segregation ratio and had to be excluded from the final linkage analysis. Sixty-three (63) SRAP markers were used in the final linkage analysis. Five markers (5.8%) did not show strong linkage to the target and were excluded from the analysis. A total of 58 SRAP markers were analyzed for linkage. Of these, 23 could be mapped with high confidence on the linkage map: four markers clustered on linkage group 1, seven on linkage group 2 and twelve on linkage group 3 (Figures 1 and 2).
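The segregation filtering and map-distance conversion behind the linkage map above can be made concrete with a short sketch: a chi-square test against the expected 1:1 testcross ratio (markers with P < 0.05 were excluded) and the Kosambi function converting a recombination fraction to centimorgans. The counts are invented; 3.841 is the standard chi-square critical value for one degree of freedom at P = 0.05.

```python
# Chi-square test for 1:1 segregation (1 df) and Kosambi map distance.
import math

def chi_square_1to1(present: int, absent: int) -> float:
    """Chi-square statistic against a 1:1 segregation ratio."""
    expected = (present + absent) / 2.0
    return ((present - expected) ** 2 + (absent - expected) ** 2) / expected

def kosambi_cm(r: float) -> float:
    """Kosambi map distance in centimorgans for recombination fraction r."""
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

counts = {"Me2em2": (118, 112), "skewed_marker": (160, 70)}
for name, (a, b) in counts.items():
    keep = chi_square_1to1(a, b) < 3.841   # P >= 0.05 -> keep marker
    print(name, "kept" if keep else "excluded")

print(f"r = 0.035 -> {kosambi_cm(0.035):.1f} cM")  # ~3.5 cM, cf. Me2em5
```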
Markers linked in coupling or repulsion phase Among the twelve polymorphic bands on linkage group 3, we found four markers tightly linked to the resistance locus, two of them linked in coupling phase and the other two in repulsion phase. The primer combinations me2em5 and me5em2 (Table 1) each detected a band that was polymorphic between the resistant bulk and the susceptible bulk; the marker bands, named Me2em5 and Me5em2, were present in all of the susceptible plants in the bulks. They were screened out from the F1 population segregating for bacterial wilt resistance. When the phenotypic identification was performed, these markers also co-segregated with the disease reaction in the same fashion, and the linkage was further tested in the BC1 plants. The band was present in most of the susceptible genotypes and absent in the resistant ones. These results indicated a tight linkage between these markers and the dominant susceptibility allele in the repulsion phase. The markers linked in repulsion phase exhibited similar electrophoretic patterns (Figures 2 and 3). In contrast, the primer combinations me2em2 and me5em1 each detected approximately two closely migrating concomitant polymorphic bands between the resistant and susceptible individuals (Figure 2). These double-banded markers, linked in coupling phase, were designated Me2em2 and Me5em1, respectively. The two polymorphic bands appeared in most of the resistant genotypes of the F1 population, in a proportion that basically fits the expected 1:1 ratio, allowing for the medium-resistant and medium-susceptible genotypes detected in the population by phenotypic identification. As expected, the band types were reproducibly present in the BC1 population (Figure 3). Practical utility of the markers In previous studies, several markers, including AFLP, RAPD and SSR markers, were described as linked to bacterial wilt resistance loci, but their usefulness in MAS programs was not well determined. Here, we tested 41 BC1 plants inoculated with the pathogen for the presence or absence of the new SRAP markers. Four markers were used to select those progeny that presumably contain the corresponding DNA allele conferring bacterial wilt resistance, followed by three rounds of strict phenotypic identification (Figure 3). As shown in Figure 3A, of the 17 plants whose disease index was rated 0 (highly resistant), 11 were detected to contain all four markers as homozygous genotypes and six plants were heterozygous for the four markers; among these, five plants contained three markers, while one plant contained only one SRAP marker. Of the 15 plants whose disease index was rated 5 (highly susceptible), 14 were detected to contain the repulsion marker. These results suggest that there is at least one major locus among the four marker alleles, because only one or two markers were detected in the other nine plants, whose disease indexes indicated mild resistance or mild susceptibility. These results were consistent with expectation.
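The agreement between marker genotype and symptom score that is quantified in Figure 3A rests on a Spearman rank correlation. A minimal Python sketch with invented data, standing in for the original S-Plus analysis:

```python
# Spearman rank correlation between the number of SRAP marker bands per
# BC1 plant and its 0-5 disease index. Data values are invented.
from scipy.stats import spearmanr

marker_counts = [4, 4, 3, 4, 3, 1, 0, 0, 1, 0, 2, 1]   # bands per plant
disease_scores = [0, 0, 0, 0, 1, 2, 5, 5, 4, 5, 3, 4]  # 0-5 index

rho, p = spearmanr(marker_counts, disease_scores)
print(f"Spearman rho = {rho:.2f}, two-sided p = {p:.3g}")
# A weak or moderate rho with p above the 0.01 cutoff would match the
# reported finding that marker number and symptom score were not
# strongly correlated.
```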
DISCUSSION Although SRAP markers readily generate polymorphisms (Levi et al., 2006; Poczai et al., 2013), a relatively small number of the primer combinations were not suitable for the mapping experiment within the tested population because they lacked polymorphism in the 75 to 500 bp size range. Hence, of the original 56 primer pairs, only 30 combinations were used, and three (5.56%) of these combinations were skewed from the expected 1:1 segregation ratio and had to be excluded from the final linkage analysis. The SRAP markers nonetheless proved sufficiently efficient and reliable for the mapping analysis in this study. We initially used the bulked segregant analysis (BSA) strategy (Michelmore et al., 1991) to identify SRAPs. BSA has been widely used in many crop species for detecting markers linked to genes conferring disease resistance (Hyten et al., 2009) and is a powerful method for identifying molecular markers that show association with a gene of interest or a specific region of the genome (Ren et al., 2012; Salinas et al., 2013). Segregation distortion is widespread in plant populations and is a common feature of plant genetic linkage maps; it is frequent in progeny derived from interspecific crosses, and distortion tends to increase with increasing numbers of meioses (Zhang et al., 2012). In this study, 16.8% of the total loci showed segregation distortion (P < 0.05), which is larger than in reports on cotton, wheat and Aegilops tauschii (Faris et al., 1998; Lin et al., 2005; Guo et al., 2007; Kumar et al., 2007; Yu et al., 2007). Skewed segregation frequently occurs in populations derived from interspecific crosses and may be influenced by many factors, such as differential genes controlling the reproduction processes, meiotic drive controlling unique structural features, and genetic properties rendering a selective advantage or disadvantage to the respective gametes or zygotes (Lyttle, 1991; Buckler et al., 1999). Both biological factors and technical problems potentially contribute to segregation distortion. Integration of distorted-segregation markers in linkage construction can lead to untrue distances between adjacent markers in linkage groups (Weber et al., 2003; Lu et al., 2012). Therefore, in order to increase the accuracy of the genetic map, the distorted-segregation markers were ignored in this study. It has been shown that highly skewed markers may contribute to overestimation of recombination frequency and to loose linkages between markers, and that they may cause the merging of two linkage groups (Saliba-Colombani et al., 2000). Thus, in this study, the skewed markers had to be excluded from the mapping analysis. Five markers (5.8%) did not show strong linkage to the target locus and were excluded from the analysis. Although this phenomenon was also manifested in other studies (Levi et al., 2006; Saliba-Colombani et al., 2000) as well as in our results, we speculate that the molecular markers in groups 1 and 2 may represent additional resistance genes; they are more likely to lie on other chromosomes and may be of different origin, derived from remoter ancestors. This result suggests that bacterial wilt resistance may be controlled by quantitative trait loci, or that more such loci may exist in the potato genome. A recent report on resistance to R. solanacearum in eggplant (Lebeau et al., 2013) appears to support this speculation. A more in-depth genetic analysis of bacterial wilt resistance in potato, especially in tetraploid potato, is therefore worth considering. The markers Me2em2, Me5em1, Me2em5 and Me5em2 found to be associated with the locus can be used readily for marker-assisted selection, helping to introgress the recessive resistance allele of this gene into cultivated lines. Although these are not codominant markers, the linkage in coupling phase of Me2em2 and Me5em1 and in repulsion phase of Me2em5 and Me5em2 to the resistance allele makes it possible to identify resistance against almost all R. solanacearum isolates. Use of these markers might in many cases circumvent progeny testing of resistant plants, thereby halving the time required to develop bacterial wilt-resistant lines. Since the BC1 plants generated did not contain the same level of resistance found in the mapping population, and since the smaller population was more likely to exhibit larger segregation distortion and different genetic recombination, a certain deviation in the degree of agreement between symptomatic phenotypic identification and molecular marker detection was possible. It was not surprising that two moderately susceptible genotypes were detected to contain a molecular marker; similar results were reported in comparable experiments in eggplant (Lebeau et al., 2013). Using Spearman correlation analysis, we found that the correlation between the SRAP markers (four primer combinations) and the bacterial wilt disease scores was not very strong. We infer from this that the symptom and disease scores were not significantly correlated with the number of SRAP markers used here. In general, the correlation between symptom scores and gene functional status should be stronger than the correlation between disease scores and gene number. Moreover, the other markers in linkage groups 1 and 2 were not used for the correlation analysis. Those markers deserve special consideration in a MAS breeding program, because the contribution of the gene loci linked to them is not inconsiderable. It has been demonstrated that SRAP markers give good coverage of the genome and are able to rapidly detect markers linked to resistance genes (Guo et al., 2012; Lu et al., 2012; Zhao et al., 2012). The results of this study suggest that resistance to bacterial wilt is not simply inherited, but is possibly controlled by a series of genes. Here, we identified SRAP markers that proved useful in selecting for the gene conferring bacterial wilt resistance in a BC1 potato population. The set of markers provides a robust marker combination for use in a MAS breeding program to identify genotypes carrying the relevant allele conferring bacterial wilt resistance in potato cultivars. Conclusion In this study, based on an F1 segregating generation, we identified SRAP markers that proved useful in selecting for the gene conferring bacterial wilt resistance in a BC1 potato population. The set of markers provides a robust marker combination for use in a MAS breeding program to identify genotypes carrying the relevant allele conferring bacterial wilt resistance in potato cultivars.
Figure 1. Genetic linkage groups of the F1 population constructed with SRAP markers using the JoinMap software. Distances in centimorgans are indicated to the left and marker names to the right, followed by the fragment size in nucleotide bases. Me2em5 and Me2em2 flanked the resistance gene locus (Rbw) at genetic distances of 3.5 and 3.7 cM, respectively. Figure 2. Samples of markers detected by the primer combinations Me2em2, Me5em1, Me2em5 and Me5em2 in the resistant and susceptible DNA bulks. Lanes 1 to 10, most resistant (disease index 0) genotypes; lanes 11 to 20, most susceptible (disease index 5) genotypes. Standard size markers are given on the left side. Figure 3. Assessment of the utility of four markers in a segregating BC1 population. A) Spearman correlation coefficient analysis of the degree of coincidence between the molecular markers and the disease parameters. For the disease index, 0 = no disease, 1 = 1 to 10% of leaves wilted, 2 = 10 to 25%, 3 = 25 to 50%, 4 = 50 to 75%, and 5 = 75 to 100% of leaves wilted or plant death at 21 dpi. B) Distribution of marker genotype data for bacterial wilt resistance polymorphism and phenotype data in the potato BC1 population derived from the bacterial wilt-susceptible CE26 genotype and the recurrent parent 772102.37. The results of one of three independent experiments are shown. C) Electrophoresis patterns of polymerase chain reaction products amplified from genomic DNA of 20 genotypes. M, molecular size markers indicated by arrows followed by base number; lanes 1 to 10, resistant F1 plants; lanes 11 to 20, susceptible F1 plants. Primer combinations are given on the left side. Table 1. The primer sequences of SRAP used in this study. Figure 4. Genetic background of the plant materials used in this study. A) The mapping population, composed of 230 genotypes; B) the testing population, composed of 47 genotypes. *Pedigree involving S. phureja and S. vernei.
State Consensus Analysis and Design for High-Order Discrete-Time Linear Multiagent Systems The paper deals with the state consensus problem of high-order discrete-time linear multiagent systems (DLMASs) with fixed information topologies. We consider three aspects of the consensus analysis and design problem: (1) the convergence criteria of global state consensus, (2) the calculation of the state consensus function, and (3) the determination of the weighted matrix and the feedback gain matrix in the consensus protocol. We solve the consensus problem by proposing a linear transformation that translates it into a partial stability problem. Based on this approach, we obtain necessary and sufficient criteria in terms of Schur stability of matrices and present an analytical expression of the state consensus function. We also propose a design process to determine the feedback gain matrix in the consensus protocol. Finally, we extend the state consensus results to formation control. The results are illustrated by several numerical examples. Introduction In recent years, the consensus problem of multiagent systems (MASs) has become a significant research topic because of its broad practical applications, including work-load balancing in a network of parallel computers [1], clock synchronization [2], distributed decision making [3], consensus filtering and estimation in sensor networks [4-6], rendezvous, and the formation of various moving objects [7-11] such as underwater vehicles, aircraft, satellites, mobile robots, and intelligent vehicles in automated highway systems, to name only a few. Hence, its study has captured the attention of researchers from different disciplines. MASs are comprised of locally interacting agents equipped with dedicated sensing, computing, and communication devices. The consensus problem of MASs is to design a distributed control law for each agent, using only information from itself and its neighbors, such that all agents achieve an agreement on some quantities of interest. To design and analyse this class of systems, one needs to consider three essential elements: (1) a dynamic model describing the states of the agents, which can be either continuous time or discrete time, linear or nonlinear, homogeneous or heterogeneous, time varying or time invariant, low order or high order; (2) an information topology describing the communication network between the agents, which can be either undirected or directed, fixed or switched; (3) a protocol (control input) for each of the agents describing how the agents interact with each other according to the given information topology, which can be synchronous or asynchronous, with or without time delay. Up to now, numerous studies have been carried out for continuous-time MASs in different settings from the above cases [10, 12-18]. This paper focuses on the study of high-order discrete-time linear multiagent systems (DLMASs) by proposing a linear transformation that translates the consensus problem into a partial stability problem. Although this approach can be extended to any setting from the above cases, we restrict attention to the case of fixed information topology in the absence of time delay, to give prominence to the traits of the approach. Here we give an overview mainly of the DLMASs.
Reference [19] first proposed an interesting model for self-propelled particle systems, where all agents move in a plane with the same speed but different headings, and showed that in this model all agents may eventually move in the same direction despite the absence of centralized coordination. Reference [20] further gave a mathematically rigorous qualitative analysis. Then, a theoretical explanation was given for the consensus behavior of the Vicsek model on the basis of graph theory in [21, 22]. A necessary and sufficient condition was given for the average consensus criterion in [23]. Reference [24] further considered the case of switching network topologies for average consensus. Average consensus was investigated for systems with uncertain communication environments and time-varying topologies in [25] and with communication constraints in [26]. Reference [27] presented convergence results for the time-varying protocol in the absence or presence of communication delays. Reference [28] proposed an asynchronous time-varying consensus protocol. Reference [29] further discussed nonlinear systems with time-dependent communication links. References [30, 31] addressed the case with both time-varying delays and switching information topologies and provided a class of effective consensus protocols by repeatedly using the same state information at two time steps. The researches mentioned above were limited to first-order systems. The extension to second-order systems was done for systems with time-varying delays and time-varying interaction topology in [32], and for systems with nonuniform time delays and dynamically changing topologies in [33]. Recent researches have turned to high-order DLMASs in [34-39]. Reference [34] studied a class of dynamic average consensus algorithms that allow a group of agents to track the average of their reference inputs. Reference [35] proposed an observer-type protocol based on the relative outputs of neighboring agents. Reference [36] studied the convergence speed for high-order systems with random networks and arbitrary weights. Reference [37] addressed high-order systems with or without delays. These researches focused on the consensus convergence criteria for the proposed protocols. Another significant topic is the design of the gain matrices of the protocols, considered in [38, 39].
This paper deals with both the analysis and the design problems of state consensus for general high-order DLMASs. Compared with the existing works, the contributions of the paper are summarized as follows. Firstly, motivated by [12], we improve the protocol by adding a self-feedback term for each agent to achieve the expected consensus dynamics, whereas [13] introduced an internal model to change the given dynamics, and [14, 15] introduced a virtual leader to guide the multiagent system to the expected consensus dynamics. Secondly, we propose a state linear transformation to translate the consensus problem into a partial stability problem. The approach is motivated by the error-variable method and the state-space decomposition method in [12, 16]; however, our improvement can more spontaneously and conveniently deal with various settings of the consensus problem. Based on partial stability theory, we educe new necessary and sufficient consensus convergence criteria in terms of the stability of matrices and, moreover, give an explicit analytical expression of the state consensus function based on the different contributions of the initial states of the agents and the protocols. Thirdly, based on a stability theorem, we give a design procedure to determine the gain matrices in the protocol on the basis of an algebraic Riccati inequality, similarly to [38, 39]. Fourthly, we extend the state consensus results to the formation control problem. The remainder of the paper is organized as follows. Section 2 introduces some basic concepts and notations and formulates the problem under investigation. Section 3 first introduces a linear transformation which translates the consensus problem of the multiagent systems into a partial stability problem of the corresponding transformed system, then educes a new necessary and sufficient condition for the multiagent system to achieve global state consensus, and presents an analytical expression of the state consensus function. Section 4 shows a design procedure to determine the gain matrices in the state consensus protocol. Section 5 extends the approach for the analysis and design of state consensus to the formation control problem. Section 6 gives numerical examples to explain the theoretical results. Section 7 concludes the paper. All the proofs of the results are deposited in the appendix for ease of reading. Problem Description Before stating the consensus problem, we give some basic concepts and notations. Let R^{n×m} and C^{n×m} be the sets of n × m real and complex matrices, respectively. Matrices, if not explicitly stated, have appropriate dimensions in all settings. The superscript "T" means transpose for real matrices, and the superscript "H" means conjugate transpose for complex matrices. I_n denotes the identity matrix of dimension n, and sometimes I is used for simplicity. 1_N denotes the vector of dimension N with all entries equal to one. 0 is applied to denote zero matrices/vectors of any size. A matrix M ∈ C^{n×n} is said to be Schur stable if all of its eigenvalues have magnitude less than 1. The Kronecker product is denoted by ⊗ and the Hadamard product by ∘ [40]. Standard properties of the Kronecker product, such as the mixed-product rule (A ⊗ B)(C ⊗ D) = AC ⊗ BD, will be used. We consider DLMASs with N homogeneous agents and assume they are described by x_i^+ = A x_i + B u_i, i = 1, ..., N, (1) where A ∈ R^{n×n}, B ∈ R^{n×m}, (A, B) is assumed to be stabilizable, x_i = x_i(t) ∈ R^n is the state at the current time t, x_i^+ = x_i(t + 1) denotes the state at the next time t + 1, and u_i = u_i(t) ∈ R^m is the control input at the current time t.
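The mixed-product property of the Kronecker product is the workhorse of the stacked-system manipulations used throughout the rest of the paper. A two-line NumPy check, offered only as a numerical sanity test:

```python
# Numerical check of (A (x) B)(C (x) D) = (AC) (x) (BD) on random matrices.
import numpy as np

rng = np.random.default_rng(0)
A, B, C, Dm = (rng.standard_normal((2, 2)) for _ in range(4))
lhs = np.kron(A, B) @ np.kron(C, Dm)
rhs = np.kron(A @ C, B @ Dm)
print(np.allclose(lhs, rhs))   # True
```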
The control input u_i will be constructed based on the available information of agent i. Let N_i denote the index set of the agents which can send their state information to agent i. We call the set N = {N_i : i = 1, ..., N} the information topology of the DLMASs (1). It is well known that one can use a digraph G = (V, E) to express the information topology N, where V = {1, ..., N} is the index set of agents and E ⊆ V × V is the set of directed edges describing the information interaction between agents; that is, (j, i) ∈ E ⇔ j ∈ N_i. Based on the directed edges, one can construct an adjacency matrix A_G = [a_ij]_{N×N}, whose entries are defined as a_ii = 0, a_ij = 1 for j ∈ N_i, and a_ij = 0 for j ∉ N_i. The corresponding in-degree matrix and graph Laplacian are defined as D_G = diag{deg_1, ..., deg_N} and L_G = D_G − A_G, respectively, where deg_i = Σ_{j=1}^{N} a_ij is the in-degree of the vertex i. A directed spanning tree of the digraph is a tree covering all the vertices of the digraph. The following results are well known. Lemma 1 (see [22]). The Laplacian matrix L_G ∈ R^{N×N} has the following properties: (1) all of the eigenvalues of L_G are either in the open right half complex plane or equal to 0; (2) 0 is a simple eigenvalue of L_G if and only if the digraph contains a directed spanning tree. Given the information topology N, we construct the following linear consensus protocol: u_i = K_1 x_i + K_2 Σ_{j∈N_i} w_ij (x_j − x_i), i = 1, ..., N, (2) where K_1, K_2 ∈ R^{m×n} are feedback gain matrices to be determined, which are relative to the consensus state and the convergence rate, respectively, and W =: [w_ij]_{N×N} is a weighted matrix associated with the information topology N. For the sake of expression, we also define the weighted adjacency matrix A_W = W ∘ A_G by using the Hadamard product of matrices, and the weighted Laplacian L_W = D_W − A_W with weights w_ij, where D_W = diag{deg_1^W, ..., deg_N^W} is the corresponding weighted in-degree matrix with weighted in-degrees deg_i^W = Σ_{j∈N_i} w_ij. Definition 2. For the given information topology N, the DLMASs (1) are said to achieve global state consensus via the protocol (2) if for any given initial states x_i(0), i = 1, ..., N, there exists an n-dimensional vector function c(t) depending on the initial states such that lim_{t→∞} ‖x_i(t) − c(t)‖ = 0 for all i. The function c(t) is called a state consensus function. In this paper, we will address the following three aspects of the state consensus problem: (i) to give criteria of global state consensus, that is, for any given information topology N, weighted matrix W and feedback gain matrices K_1 and K_2, to find the conditions under which the DLMASs (1) achieve global state consensus via the protocol (2); (ii) to calculate the state consensus function c(t) if the DLMASs (1) achieve global state consensus via the protocol (2); (iii) to determine the matrices K_1 and K_2 such that the DLMASs (1) achieve global state consensus via the protocol (2). First of all, we transform the state consensus problem into a partial stability problem. Then, within the partial stability framework, we educe new necessary and sufficient consensus convergence criteria and state a procedure to determine the gain matrices in the protocol on the basis of an algebraic Riccati inequality. We also give an explicit analytical expression of the state consensus function based on the respective contributions of the initial states and the protocols. Finally, we extend the results to formation control.
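Lemma 1 gives a spectral test for the spanning-tree condition that is easy to check numerically: build L_G = D_G − A_G from the neighbor sets and count the zero eigenvalues. A small NumPy sketch on an invented four-agent digraph:

```python
# Build the graph Laplacian of a digraph and test whether 0 is a simple
# eigenvalue (Lemma 1: equivalent to a directed spanning tree).
import numpy as np

N = 4
neighbors = {0: [], 1: [0], 2: [0, 1], 3: [2]}   # j in N_i sends to i

A = np.zeros((N, N))
for i, Ni in neighbors.items():
    for j in Ni:
        A[i, j] = 1.0
D = np.diag(A.sum(axis=1))     # in-degree matrix
L = D - A                      # graph Laplacian

eig = np.linalg.eigvals(L)
n_zero = int(np.sum(np.abs(eig) < 1e-9))
print("eigenvalues:", np.round(eig, 3))
print("directed spanning tree:", n_zero == 1)
```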
State Consensus Analysis In this section, we first introduce a linear transformation which translates the consensus problem of the multiagent systems into a partial stability problem of the corresponding transformed system.Then, we educe a necessary and sufficient condition for the DLMASs (1) to achieve global state consensus via the protocol (2), and present an analysis expression of the state consensus function.Finally, we discuss some interesting remarks and corollaries based on the result. Using the linear transformation (5), we transform the linear system (3) into the following system: or the form of two equations where = [ , ] , = [ 1 , . . ., −1 ] , and = .We show that the state consensus problem of the DLMASs (1) with the protocol (2) can be transformed into a partial stability problem. Definition 4 (see [41]).The linear system ( 8) is said to be asymptotically stable with respect to (or asymptotically stable in short) if lim → ∞ () = 0 for any bounded initial state (0) of the system (8). Lemma 5.Under the given information topology N, the DLMASs (1) achieve global state consensus via the protocol (2) if and only if the equilibrium point = 0 of the linear system (8) is asymptotically -stable.Moreover, the state consensus function of the agents is Lemma 5 builds a bridge between the consensus problem and the partial stability problem.Now we focus on the asymptotical -stability of the linear system (8).We can verify the following lemma.Lemma 6.The system (9) is of the following form: where = T( ⊗ ( + 1 ) − ⊗ 2 ) T, = −(1 T ⊗ 2 ) T, and = + 1 . Combining Lemma 5 with Lemma 6, we directly get the following theorem.Theorem 7.Under the given information topology N, the DLMASs (1) achieve global state consensus via the protocol (2) if and only if matrix in (10) is Schur stable.Moreover, the state consensus function is Subsequently, we give some interesting remarks and corollaries based on the result.Remark 8. Since T T = ( − −1 1 1 ) ⊗ , the result of Theorem 7 is in fact independent of the choice of the matrix although both and formula (11) in Theorem 7 contain T and T. Hence, for simplicity, we take it in the following form: The corresponding inverse matrix is Thus, we can write and into where satisfies = 0 and 1 = 1. One can verify that as → ∞ the state consensus functions in formulas (11) and ( 15) are the same. Remark 10.From Schur stability of in the formula (14), we can conclude that if + 1 is not Schur stable, it is a necessary condition of the consensus that the digraph expressing the information topology N has a directed spanning tree.In fact, since the condition of directed spanning tree is equivalent to Hurwitz stability of − T0 T0 , a lack of directed spanning tree means that − T0 T0 has a zero eigenvalue.In this case, we transform into its Jordan form via the matrix ⊗ , where is the matrix such that −1 T0 T0 = is the Jordan form, and thus we have One can verify that the eigenvalues of + 1 are the members of the eigenvalues of , and thus is not Schur stable if + 1 is not Schur stable.On the other hand, if + 1 is Schur stable, one can take 2 = 0 to make the DLMASs (1) achieve global consensus, which implies that for any initial states all the agents always converge to the equilibrium point 0. Hence, from the formula (16) 11) is a constant vector equal to the average of the initial states of all the agents, the consensus is called the average consensus.From the formula (11) we educe the following result on the average consensus. Corollary 13. 
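Theorem 7 reduces consensus to the Schur stability of a single matrix, and the transformed dynamics can be exercised directly in simulation. The sketch below forms the stacked closed loop Phi = I_N ⊗ (A + B K1) − L_W ⊗ (B K2), which is our reconstruction of the protocol's closed-loop form and should be read as an assumption rather than a quote of the paper; A, B, K1, K2 and the weights are invented for illustration.

```python
# Iterate the stacked closed loop of (1) under protocol (2) and watch the
# agent states approach a common trajectory (here a constant state, since
# the root agent starts at zero velocity).
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])          # double integrator
B = np.array([[0.0], [1.0]])
K1 = np.zeros((1, 2))                           # expected dynamics = A
K2 = np.array([[0.2, 0.4]])

Aw = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
Lw = np.diag(Aw.sum(axis=1)) - Aw               # weighted Laplacian
Phi = np.kron(np.eye(3), A + B @ K1) - np.kron(Lw, B @ K2)

X = np.array([1.0, 0.0, -2.0, 0.5, 4.0, -0.5])  # stacked initial states
for _ in range(80):
    X = Phi @ X
print(np.round(X.reshape(3, 2), 3))             # rows approach a common state
```

A Schur-stability test of the relevant matrix is one line: a matrix is Schur stable iff its spectral radius `max(abs(np.linalg.eigvals(M)))` is strictly below 1.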
The DLMASs (1) achieve global average consensus via the protocol (2) if and only the matrix 𝐴 is Schur stable and (∑ If + 1 = , then the last condition in Corollary 13 becomes 1 = 0, or equivalently, the digraph is either undirected connected or directed strong connected and balanced.More specially, if is a symmetric matrix (equivalently, the digraph becomes undirected connected), the condition 1 = 0 is satisfied and thus the average consensus is achieved.Remark 14.When = , 1 = 0, and = , the DLMASs (1) are called a single-integrator one.In this case, = (−1) − T0 T0 ⊗ 2 and = −1 T0 ⊗ 2 .We educe the following result. Corollary 15. Under the given information topology N, the single-integrator DLMASs (1) achieve global state consensus via the protocol (2) if and only if the following two conditions are held simultaneously: (1) the matrix − T0 T0 is Hurwitz stable; that is, the digraph admits a directed spanning tree; (2) the products , = 1, . . ., − 1, = 1, . . ., , are in the open unit circle of the complex plane with the centre at (1, 0), where , = 1, . . ., − 1, are the eigenvalues of the matrix T0 T0 and , = 1, . . ., , are the eigenvalues of the matrix 2 .The corresponding consensus function (11) becomes a constant vector Moreover, the single-integrator DLMASs (1) achieve global average consensus via the protocol (2) if and only both of the above conditions are satisfied and in addition 1 = 0. Remark 16.When = [ 1 1 0 1 ]⊗ , = [ 0 1 ]⊗ , and 1 = 0, the DLMASs (1) are called a double-integrator one, whose state vector can be seen as consisting of the position and velocity in the dimensional space R .Corollary 17.Under the given information topology N, the double-integrator DLMASs (1) achieve global state consensus via the protocol (2) if and only if is Schur stable.Moreover, the consensus function is The consensus function above can be decomposed into the position consensus function which is a linear function of discrete time , and the constant velocity consensus function is as follows: Similarly, we can define the velocity average consensus, that is, if the DLMASs (1) achieve global consensus via the protocol (2), and the velocity consensus function is a constant vector equal to the average of the initial velocities of all the agents. It is obvious that if 1 𝑇 = 0, then = 0; that is, the last condition in Corollary 18 is satisfied, and thus, the state consensus function becomes Design of Gain Matrices In this section, we discuss the third problem, that is, how to determine the weighted matrix and the gain matrices 1 and 2 , such that the DLMASs (1) achieve global state consensus via the protocol (2).Theorem 7 shows that the matrices , 1 , and 2 should be taken to ensure that the matrix is Schur stable.Furthermore, from Corollary 11 we see that if the matrix with respect to the information topology N has been given, we need only to design the gain matrices 1 and 2 to ensure that the matrices + 1 − 2 are Schur stable, where , = 1, . . ., − 1, are the eigenvalues of the matrix T0 T0 .The matrix 1 is often taken to obtain an expected consensus dynamics.The matrix 2 is designed to achieve state consensus and expected convergence rate.Its design needs the following lemma. Theorem 20.Supposing that the matrix (, ) is stabilizable, the gain matrix 1 has been taken such that the expected consensus dynamic matrix + 1 is not Schur stable, and the weighted matrix with respect to the information topology N is given such that − T0 T0 is Hurwitz stable with − 1 eigenvalues − , = 1, . . 
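Condition (2) of Corollary 15 is also easy to verify numerically: every product of a nonzero-Laplacian-part eigenvalue and a K_2 eigenvalue must lie in the open unit disk of the complex plane centred at (1, 0), i.e. |1 − λμ| < 1. A minimal sketch with invented matrices standing in for the reduced Laplacian block and for K_2:

```python
# Check |1 - lambda*mu| < 1 for all eigenvalue products (Corollary 15).
import numpy as np

Lw_reduced = np.array([[1.0, 0.0], [-1.0, 2.0]])   # stands in for the
lam = np.linalg.eigvals(Lw_reduced)                # nonzero Laplacian part
mu = np.linalg.eigvals(np.array([[0.6]]))          # eigenvalues of K2

ok = all(abs(1.0 - l * m) < 1.0 for l in lam for m in mu)
print("single-integrator consensus condition holds:", ok)
```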
., − 1; then, for the DLMASs (1) to achieve state consensus via the protocol (2), the matrix 2 can be designed as 2 = ( ) −1 ( + 1 ), where is an arbitrary constant satisfying = () < , ∈ (0, 1] is a critical value which depends on the unstable eigenvalues of the matrix + 1 , and = > 0 is a solution of the algebraic Riccati inequality (22). Based on Theorem 20, we give the following algorithm of determining the feedback gain matrices 1 and 2 in the protocol (2).Algorithm 21.Design procedure of the gain matrices 1 and 2 . Step 1. Verify the stabilizability condition of (, ) and the spanning tree condition of the information topology N. If neither of them is satisfied, then stop.Otherwise, design the weighted Laplacian such that − T0 T0 is Hurwitz stable with − 1 eigenvalues − , = 1, . . ., − 1. Step 2. Design 1 such that  = + 1 is the matrix of the expected consensus dynamics of the DLMASs (1) and is not Schur stable. Step 4. Calculate the critical value Otherwise, apply Wonham decomposition to the unstable part ( , ) of ( Â, ) to convert the multiple input system to single input subsystems, where is the number of the Jordan blocks of matrix .Specifically, there is a nonsingular real matrix with a compatible dimension such that à = −1 and B = −1 take the form where the symbol * denotes possibly nonzero parts and ( , ) with ∈ R × and ∈ R for all ∈ {1, . . ., } is controllable and , where the index * is defined by * = arg max ={1,...,} (∏ | ( )|) and is the Jordan block of the unstable part of matrix Â. Application to Formation Control In this section, the consensus approach is modified to solve the formation control problem of the DLMASs (1). ∈ R describe a constant formation of the agent network in a reference coordinate frame, where ℎ ∈ R is the formation variable corresponding to the agent . Theorem 23.Under the given information topology N, the DLMASs (1) achieve the formation ℎ via the protocol (24) if and only if the matrix −1 ⊗ ( + 1 ) − T0 T0 ⊗ 2 is Schur stable and ( T0 ⊗ ( + 1 − ))ℎ = 0.Moreover, the reference state consensus function is Similarly to Corollary 11, we get the following corollary for the formation control. Numerical Examples In this section, we give some illustrative examples. Conclusions We considered the state consensus problem of high-order discrete-time linear multiagent systems with fixed directed information topology.A linear transformation approach was proposed to translate the consensus problem of multiagent systems into a partial stability problem of the corresponding transformed systems.We have shown that the approach is powerful in dealing with the three aspects of the consensus problem: (1) the criteria of global state consensus, (2) the calculation of the state consensus function, and (3) the determination the weighted matrix and the feedback gain matrix.Precisely, we have educed new necessary and sufficient consensus criteria in terms of Schur stability of a matrix related to the weighted Laplacian matrix and presented an analytical expression of the state consensus function.In addition, we have stated a design process of determining the feedback gain matrix under the condition of each agent being stabilizable.The consensus algorithm has been further applied to solve the formation control problem of multiagent systems. 
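The design procedure of Section 4 can be prototyped with a discrete algebraic Riccati solver. The gain formula below, K_2 = c (B^T P B)^{-1} B^T P (A + B K_1) with a scaling c inside an admissible range, is an assumption patterned on the statement of Theorem 20, not a quote of the paper, and the matrices are invented for illustration.

```python
# Hedged sketch of the Riccati-based gain design: solve a DARE for P,
# form K2, then check Schur stability of (A + B K1) - lambda_i B K2 for
# every nonzero Laplacian eigenvalue lambda_i.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K1 = np.zeros((1, 2))
Ahat = A + B @ K1                          # expected consensus dynamics

P = solve_discrete_are(Ahat, B, np.eye(2), np.eye(1))
c = 0.5                                    # scaling, assumed admissible
K2 = c * np.linalg.solve(B.T @ P @ B, B.T @ P @ Ahat)

for lam in [1.0, 2.0]:                     # nonzero Laplacian eigenvalues
    rho = max(abs(np.linalg.eigvals(Ahat - lam * B @ K2)))
    print(f"lambda = {lam}: spectral radius {rho:.3f}")
```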
Though the work in this paper focuses on high-order discrete-time linear multiagent systems with fixed information topology and without time delay, the approach can undoubtedly be extended to more complex cases, which will be dealt with in future works. Figure 2: Position and velocity trajectories of mobile robots.
THE TWO-STEP APPROACH FOR WHOLE-CORE RESONANCE SELF-SHIELDING CALCULATION A two-step approach is proposed to accomplish high-fidelity whole-core resonance self-shielding calculation. Direct slowing-down equation solving on the pin-cell scale is performed as the first step to simulate different operating conditions of the reactor. A resonance database is fitted using the results from the pin-cell calculations. Several techniques are used in the generation of the resonance database to estimate multiple types of resonance effects. The second step is the calculation of the practical whole-core problem using the resonance database obtained from the first step. The transport solver is embedded at both the first step and the second step to establish the equivalence relationship between the fuel rod in the practical problem and the pin-cell of the first step. The numerical results show that the new approach has the capability to perform high-fidelity resonance calculations for practical problems. INTRODUCTION The computing accuracy of the resonance self-shielding calculation is crucial in high-fidelity direct whole-core calculation, since the method of characteristics (MOC) [1] is capable of giving an acceptable error in the evaluation of the spatial flux. Advanced methods have been proposed that fuse the equivalence theory [2] with pin-based ultra-fine-group (UFG) [3] calculations to perform high-accuracy resonance self-shielding calculation [4-6]. The approaches for integrating the pin-based UFG calculation into the equivalence theory fall into two categories. The first is online calculation. In Refs. 4 and 5, the local resonance effects, including the resonance interference effect and the intra-pin self-shielding, are resolved by online UFG calculations based on quasi-1D cylindrical geometry [4] or the 2D pin-cell [5], while the shadow effect is treated through the embedded self-shielding method (ESSM) [7] or the neutron current method (NCM) [8]. The computing accuracy of this type of approach is fully verified, but it is reported in Ref. 9 that it suffers from efficiency loss in depletion calculations for the whole core because of the online pin-based UFG calculations. The second category is the heterogeneous resonance integral (RI) established by pre-calculated pin-based UFG calculations [6]. The adoption of the heterogeneous RI in the depleted whole-core calculation avoids low-efficiency online UFG calculations. However, the intra-pin radially self-shielded XS is washed out in the heterogeneous RI, and the consideration of resonance interference in the heterogeneous RI brings extra storage and complicated multi-dimensional interpolation. In this study, a two-step approach is proposed to perform accurate and efficient resonance self-shielding calculations for whole-core problems. At the first step, pin-based UFG calculations considering different operating conditions of the target core are performed to fit a resonance database (RD) instead of the heterogeneous RI. Polynomial fitting is adopted in the RD generation, and accurate pin-level self-shielded XSs are evaluated directly from the fitted RD without excessive treatment. The modified ESSM equation is solved at both steps to establish the equivalence relationship between the actual fuel rod and the pin-cell case of the first step.
The number density-perturbation technique [9] is adopted to resolve the resonance interference during depletion, and a distributed parameter approach is developed to evaluate the intra-pin self-shielding. The methodology is introduced in Section 2 and the numerical tests are given in Section 3. The conclusions are given in Section 4. Generation Procedure of the Resonance Database In the generation of the RD, the pitch of the pin-cell is varied to cover the variation range of the background XSs of the actual rod in the target whole-core problem, and the ESSM is adopted to evaluate the background XS of the designed pin-cell in the UFG calculations. The ESSM fixed-source equation, Eq. (1), is established for each resonant group based on the designed pin-cell; the fixed source is defined as the regional potential scattering XS, corrected by the IR parameter λ: Ω·∇ψ_g(r, Ω) + [Σ_{a,g}(r) + λΣ_{p,g}(r)] ψ_g(r, Ω) = λΣ_{p,g}(r)/4π, (1) where Σ_{p,g} and Σ_{a,g} are the potential scattering XS and the absorption XS for group g, respectively, and φ_g is the spatial flux. Differently from the original ESSM, a black assumption is adopted in the ESSM to enhance the computing accuracy of the two-step approach for lattices with different types of fuel. The black assumption is implemented by fixing the absorption XS of the fuel to a large prescribed value, and the background XS per resonant nuclide is then evaluated from the fuel-region flux as σ_{b,g} = Σ_{a,g} φ_f / [N_r (1 − φ_f)], (2) where N_r is the number density of the resonant isotopes and φ_f is the flux in the fuel region. The iterative process of the original ESSM is eliminated in the black ESSM, since the ratio Σ_{a,g}(r)/[λΣ_{p,g}(r)] is set to a fixed value, which is beneficial to the computing efficiency, especially for the whole-core calculation. The number density-perturbation technique [9] is adopted as an efficient approach to treat the resonance interference effect in depleted fuel. It assumes that the overall impact on the self-shielded XS of the resonant isotopes can be decomposed into the product of the impacts brought by the individual perturbation of each resonant isotope; the validity of this assumption is proven in Ref. 9. Based on this assumption, if the fuel composition at a certain burnup step is set as the base state, the self-shielded XS of isotope iso at any burnup can be written as σ^iso = σ^iso_base · Π_{k=1}^{N} R_k, (3) where N is the number of resonant isotopes in the mixture and R_k represents the ratio of the self-shielded XS of iso between the base state and the state in which only isotope k in the resonant mixture is perturbed. Based on the description above, the procedure for the generation of the RD is as follows: (1) a pin-based depletion calculation is performed for each type of fuel to determine the base state and the perturbation states; (2) at the base state, a series of pin-based UFG calculations with varying temperatures and backgrounds (pin pitches) is performed to obtain accurate pin-level self-shielded XSs, with Eqs. (1) and (2) solved by MOC on the designed pin-cell geometry to evaluate the background XSs; (3) for each grid point in temperature and pin pitch (background XS), pin-based UFG calculations with perturbed compositions are performed to determine the ratios of the self-shielded XS variations; (4) a fitting procedure is applied to the results from steps (2) and (3). Polynomial fitting is adopted: the fitting form for the base state, Eq. (4), and that for the perturbation ratios, Eq. (5), are low-order polynomials in the temperature T, the background XS B and the perturbed number density D, including the cross terms BT, DT and BD. Steps (2)-(4) are performed for all fuel types and the fitting coefficients are stored in the RD.
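The product-of-ratios correction in Eq. (3) is simple to implement. A minimal Python sketch, with invented ratio values standing in for the RD lookups:

```python
# Number-density-perturbation correction: the self-shielded XS at an
# arbitrary burnup is the base-state XS times the per-isotope ratios R_k
# (separability assumption of Ref. 9). Numbers are invented.

def corrected_xs(sigma_base: float, ratios: dict) -> float:
    """sigma = sigma_base * prod_k R_k."""
    sigma = sigma_base
    for iso, r in ratios.items():
        sigma *= r
    return sigma

ratios_for_u238 = {"U235": 0.998, "Pu239": 1.012, "Pu240": 0.985}
print(f"{corrected_xs(22.4, ratios_for_u238):.3f} barn")
```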
At the second step, the black ESSM equation is established for each group on the target problem with the base state of the fuel composition to evaluate the background XSs, which makes the self-shielded XSs in the base state available through Eq. (4). The XSs are then corrected for the perturbation of each resonant isotope using Eq. (5) to obtain the final self-shielded XSs. Distributed Parameter Approach A distributed parameter approach is proposed in this study to calculate the sub-ring self-shielded XSs, which are washed out in the generation of the RD. In fact, 238U is the major resonant isotope in LWR analysis and has the most prominent rim effect; thus the treatment of the intra-pin self-shielding in the two-step approach focuses mainly on 238U. Ref. 10 gives an exponential-form radial distribution function for the 238U RI in the fuel rod, where R is the outer radius of the fuel rod and a_0, a_1, a_2, a_3 are called the distributed RI (DRI) parameters. The DRI parameter values given in Ref. 10 were fitted for fixed conditions; in this study, non-linear least-squares fitting is adopted to evaluate problem-dependent DRI parameters. Firstly, the distribution function is defined as f(r) = σ_r / σ_a, (7) where σ_r represents the self-shielded XS of 238U at radius r and σ_a represents the self-shielded XS of 238U for the whole fuel rod. In the generation of the RD, a certain number of grid points are selected for the radius, and the pin-based UFG calculation is performed to evaluate the self-shielded XS at these radius grids. The self-shielded XS at a certain radius grid is obtained by averaging the XS over a differential element at this grid; that is, the self-shielded XS at radius r_1 is evaluated as the average over the thin annulus [r_1, r_1 + Δr], where Δr must be very small, for example 0.000001 cm. After obtaining the numerical set of radius-XS pairs, the least-squares problem for the problem-dependent DRI parameters can be established: min_{a_0,...,a_3} Σ_{i=1}^{n} [f(r_i) − σ_{r_i}/σ_a]^2, (9) where n is the number of selected radius grids. The Levenberg-Marquardt (LM) method [12] is adopted for the non-linear least-squares fitting. The self-shielded XS for a sub-ring i is then written as Eq. (10), based on the assumption that the flux φ_i of ring i can be regarded as a constant, since its variation is modest compared with the radial variation of the self-shielded XSs. To couple more intimately with the generation of the RD, a basic assumption is introduced in the distributed parameter approach: the ratio of the 238U self-shielded XSs between a sub-ring and the whole fuel rod is constant along the depletion. Therefore, the DRI parameters need to be calculated only for the base state. This assumption was tested on the UO2 cell and the MOX cell from the JAERI benchmark [13]. The direct UFG slowing-down calculation was performed to evaluate the self-shielded XSs with the fuel rod divided into 10 rings of equal volume.
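The LM fit of Eq. (9) can be prototyped with scipy.optimize.curve_fit, whose default unconstrained method is Levenberg-Marquardt. The exact exponential form from Ref. 10 is not reproduced above, so the profile below is an assumed stand-in with the same a_0..a_3 parameterisation; the radius grid and sampled ratios are invented.

```python
# Fit a four-parameter exponential-form radial profile f(r) = sigma_r /
# sigma_a to sampled UFG results (assumed profile shape, invented data).
import numpy as np
from scipy.optimize import curve_fit

R = 0.4095                                       # fuel outer radius, cm

def profile(r, a0, a1, a2, a3):
    """Assumed exponential-form DRI shape, peaking at the rod rim."""
    return a0 + a1 * np.exp(-a2 * (1.0 - r / R) ** a3)

r_grid = np.array([0.05, 0.15, 0.25, 0.32, 0.38, 0.405])
ratio = np.array([0.83, 0.85, 0.90, 1.00, 1.25, 1.80])

popt, _ = curve_fit(profile, r_grid, ratio,
                    p0=[0.8, 1.0, 3.0, 2.0], method="lm")
print("fitted DRI parameters a0..a3:", np.round(popt, 3))
```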
Select a certain burnup step as the base state, and perform three different MOC transport calculations at each burnup step: 1) all the XSs are the accurate sub-ring XSs obtained by the UFG calculations; 2) at the base state, the ratio of the XS between each sub-ring and the whole fuel rod is calculated, and the sub-ring XSs at the other burnup steps are obtained by multiplying this ratio by the whole-rod XS at that burnup step; 3) at the base state, the ratio of the 238U XS between each sub-ring and the whole fuel rod is calculated, and the sub-ring 238U XSs at the other burnup steps are obtained by multiplying this ratio by the whole-rod XS at that burnup step, while the XSs of the other resonant isotopes in the sub-rings are taken directly from their whole-rod XSs. The k-errors are shown in Fig. 1, and two conclusions can be drawn: first, regardless of which burnup step is selected as the base state, the k-error introduced by the basic assumption of the distributed parameter approach is less than 20 pcm; second, the simplification of considering only the radial self-shielding of 238U is reasonable, bringing less than 1.5/7 pcm of k-error for the UO2/MOX cell. NUMERICAL RESULTS In this section, assembly problems, a multi-assembly problem and a multicell lattice are tested for verification. The direct UFG slowing-down calculation and MCNP 5 [14] tallying are adopted for the reference calculations. The WIMS 69-group structure is employed in the calculations, in which groups 15-27 are the resonant groups. The CPU used for the calculations is an Intel® Core™ i7-7700HQ, 2.8 GHz. PWR Assembly The two-step approach calculations are performed for the UO2 and the MOX assembly from the JAERI benchmark [13]. The ray conditions for the self-shielding calculations of the UO2/MOX assembly are 32/8 azimuthal angles in the first quadrant, 80/20 rays in each direction and 8/8 polar angles with the GL quadrature set. The reference results are obtained by direct UFG calculations. The burnup calculation is performed by the HELIOS 1.7 code [15]. The fuel composition at 15 GWd/t is specified as the base state, and the setting of the perturbation states covers the variation of the fuel compositions from 0 to 70 GWd/t. Identical 69-group MOC transport calculations are performed using the self-shielded XSs evaluated by the two-step approach and by the reference calculation, respectively. The MOC ray conditions in the transport calculation are 8 azimuthal angles, 20 rays in each direction and 8 polar angles with the GL quadrature set for both the UO2 assembly and the MOX assembly. Fig. 2 shows the distribution of relative errors over the burnup steps for the self-shielded XSs of the UO2 assembly and the MOX assembly. Table I gives the error of k-infinity during burnup. It can be seen from Fig. 2 that higher errors are introduced in the Gd-bearing fuel rods and in the UO2 rods around the Gd rods. These results are attributed to the fact that the inconsistency between the pin-cell in the uniform lattice used for the RD generation and the fuel rod in the assembly is enhanced by the strong neutron absorption of Gd. However, the self-shielded XSs evaluated by the two-step approach still agree well with the reference results. The RMS errors of the 238U self-shielded XSs in the UO2 fuel pin, the Gd fuel pin and the MOX fuel pin are less than 0.67%, 1.51% and 0.25%, respectively. The k-errors brought by the self-shielded XSs are less than 63.0 and 73.3 pcm for the UO2 and MOX assembly, respectively.
Multi-Assembly Problem A multi-assembly problem designed on the C5G7 geometry [16] is used to test the ability of the two-step approach for whole-core calculations. The material specification is given in Ref. 17. The MOC ray conditions for the self-shielding calculation are 6 azimuthal angles in the first quadrant, 30 rays in each direction, and 16 polar angles with the GL quadrature set. MCNP 5 tallying with 300 batches (50 inactive batches) and 10,000,000 particles per batch is performed for the reference. Fig. 3 shows the distribution of RMS relative errors of the self-shielded XSs; from the data in Fig. 3, the maximum RMS error is less than 1%. Identical 69-group MOC calculations are performed using the self-shielded XSs from the two-step approach and from the MCNP tallying. The MOC ray conditions for the transport calculation are 8 azimuthal angles in the first quadrant, 20 rays in each direction, and 8 polar angles with the GL quadrature set. Table II gives the k-infinity comparison; the error introduced by the self-shielding calculation is -30.6 pcm. Test for Intra-Pin Self-Shielding A self-designed 3×3 multicell lattice, shown in Fig. 4 (geometric configuration and parameters of the multicell lattice), is used to test the accuracy of the distributed parameter approach for the evaluation of intra-pin self-shielding. The material specifications are from the UO2 cell of the JAERI benchmark, and the fuel rod is divided into 10 rings of equal volume. 300 batches (50 inactive batches) with 3,000,000 particles per batch are used for the MCNP 5 tallying to calculate the reference results. The pin-level self-shielded XSs used for the evaluation of intra-pin self-shielding are obtained by the two-step approach, and the conditions for MOC ray generation are 32 azimuthal angles in the first quadrant, 80 rays in each direction, and 8 polar angles with the GL quadrature set. Figs. 5 and 6 compare the intra-pin self-shielded XSs of groups 24 and 27 in the fuel rods marked in Fig. 4; the maximum relative error is less than 1%. The identical 69-group MOC transport calculation for the lattice is performed using the self-shielded XSs obtained by the two-step scheme and by MCNP tallying, respectively. The comparison of k-infinity is tabulated in Table III; the k-error introduced by the self-shielding calculation is 16.7 pcm. CONCLUSIONS A new resonance self-shielding method named the two-step approach is proposed to accomplish high-fidelity whole-core resonance self-shielding calculations. At the first step, pin-based UFG calculations are performed to fit the RD. At the second step, the self-shielding calculation of the target whole-core problem is performed using the RD generated beforehand. The black-assumption ESSM is adopted to establish the equivalence relationship between the designed pin-cell and the actual fuel rod in the target problem, and the number-density perturbation technique is adopted to resolve the resonance interference in depleted fuel. A distributed parameter approach is proposed to evaluate the intra-pin sub-ring self-shielded XSs that are washed out in the original RD. Multiple problems are calculated to test the ability of the two-step approach, and the numerical results show that it can achieve the same level of accuracy as the direct UFG calculation and the MCNP tallying.
3,882.2
2021-01-01T00:00:00.000
[ "Physics" ]
On the structure of spectra of travelling waves The linear stability of the travelling wave solutions of a general reaction-diusion system is investigated. The spectrum of the corresponding second order dieren tial operator L is studied. The problem is reduced to an asymptotically autonomous rst order linear system. The relation between the spectrum of L and the corresponding rst order system is dealt with in detail. The rst order system is investigated using exponential dichotomies. A self-contained short presentation is shown for the study of the spectrum, with elementary proofs. An algorithm is given for the determination of the exact position of the essential spectrum. The Evans function method for determining the isolated eigenvalues of L is also presented. The theory is illustrated by three examples: a single travelling wave equation, a three variable combustion model and the generalized KdV equation. Introduction We investigate the stability of the travelling wave solutions of the system where u : R + × R → R m , D is a diagonal matrix with positive diagonal elements and f : R m → R m is a continuously differentiable function.The travelling wave solution of this equation has the form u(τ, x) = U (x − cτ ), where U : R → R m .For this function we have The stability of U can be determined by linearization.Putting u(τ, x) = U (x − cτ ) + v(τ, x − cτ ) in (1) the linearized equation for v takes the form According to well known stability theorems (see e.g.[21]) the stability of the travelling wave solution U is determined by the spectrum of the second order differential operator EJQTDE, 2003 No. 15, p. 1 endowed with the supremum norm V = max R |V (t)|.It is assumed that Q : R → C m×m is continuous, and the limits Q ± = lim t→±∞ Q(t) exist.The complex number λ ∈ C is called a regular value of L if the operator L−λI has bounded inverse that is defined in the whole space C 0 (R, C m ).That is for any W ∈ C 0 (R, C m ) there exists a unique solution of LV − λV = W in C 0 (R, C m ) ∩ C 2 (R, C m ), and there exists M > 0 such that for any W ∈ C 0 (R, C m ) we have V ≤ M W .The spectrum σ(L) of L consists of non-regular values.The number λ is called an eigenvalue if L − λI has no inverse.The essential spectrum of L consists of those points of the spectrum which are not isolated eigenvalues with finite multiplicity.It is useful to introduce the first order system corresponding to equation LV − λV = W .Let x = (V, V ) T , y = (0, W ) T , then the first order system is ẋ(t) = A λ (t)x(t) + y(t), where The spectrum of L has been widely investigated.The position of the essential spectrum has been estimated, see e.g.[14], and Weyl's lemma in [21].This can be done using exponential dichotomies for the first order system (5), [5,18].This approach was also generalized to infinite dimensional systems, when A is a bounded operator on a Banach space [6] and also for unbounded operators [4].Fredholm properties are also relevant when determining the spectrum of L [11,18].The relation between these two concepts is dealt with in [16,18].The determination of the isolated eigenvalues requires to solve a linear system with non-constant coefficients, which can be done in general only numerically.For the investigation of the isolated eigenvalues two concepts were introduced.The first was the Evans function [10], which is an analytic function on the complex plane, the zeros of which are the isolated eigenvalues of L. 
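For later reference, the coefficient matrix A_lambda(t) of the first-order system (5)-(6) can be assembled numerically. The sketch below is an assumption-laden illustration: it takes the operator in the standard form LV = DV'' + cV' + Q(t)V suggested by the linearization above, in which case the inhomogeneity enters as (0, D^{-1}W)^T, a normalization that may differ from the paper's.

```python
# Assemble A_lambda(t) for x = (V, V')^T and x' = A_lambda(t) x + (0, D^{-1} W)^T,
# assuming L V = D V'' + c V' + Q(t) V (a sketch, not taken verbatim from the paper).
import numpy as np

def a_lambda(t, lam, D_diag, c, Q_of_t):
    """2m x 2m first-order coefficient matrix for a given spectral parameter lambda."""
    Q = np.asarray(Q_of_t(t), dtype=complex)        # m x m matrix Q(t), with limits Q(+-inf)
    m = Q.shape[0]
    Dinv = np.diag(1.0 / np.asarray(D_diag, dtype=float))
    top = np.hstack([np.zeros((m, m)), np.eye(m)])
    bottom = np.hstack([Dinv @ (lam * np.eye(m) - Q), -c * Dinv])
    return np.vstack([top, bottom])

# Example: scalar case m = 1 with d = 1, c = 0.5 and q(t) = -1 + 0.5 / cosh(t)**2
A0 = a_lambda(0.0, 0.1 + 0.2j, [1.0], 0.5, lambda t: [[-1.0 + 0.5 / np.cosh(t) ** 2]])
```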
Later a topological invariant was introduced for the study of the spectrum [1].These methods were applied for several systems in physics [3,12,17], chemistry [2,7,13,19] and biology [10,15]. The aim of the paper is to present a self-contained detailed study of the spectrum of L, and to fill the gap between the abstract results (on exponential dichotomies and on topological invariants) and the applications.The novelties of the paper are as follows. • An algorithm is given for the determination of the exact position of the essential spectrum.The statements concerning the essential spectrum are proved by elementary methods.(Most of the known results give only sufficient conditions for the essential spectrum to lie in the left half plane.) • All the theorems are proved in the finite dimensional case.The presentation does not need abstract techniques, hence for those applying the theory a self-contained method is shown.(According to the author's knowledge a self-contained explanation, including the proofs, is only given for the case of unbounded operators [4].The proof of the finite dimensional case must be compiled from different sources, e.g.[5,8,14,16].) • The relation between the spectrum of L and the corresponding first order system is dealt with in detail.(The standard reference in this context is [14] but the relation is not proved in that book.) 2 Relation between the spectrum of L and the first order system Lemma 1 connects the determination of the spectrum of L with the study of system (5).In order to prove the lemma we will need the following Proposition. Proof We will show that V → 0 at +∞, it can be verified similarly, that V → 0 at −∞.Let us denote by z the real or imaginary part of the k-th coordinate of V , (k is arbitrary), and let d = D kk .We will prove that lim t→+∞ ż(t) = 0 that implies the desired statement.First we prove in the case c = 0. Since V and W tend to 0 at +∞, for any ε > 0 there exists t 0 such that for all t > t 0 . Integrating in [t 0 , t] we get In the case c > 0, there exists t 1 > t 0 for which yielding lim +∞ ż = 0, what we wanted to prove. In the case c < 0 we prove by contradiction.Assume that there exists α > 0 and a sequence t n → ∞, such that | ż(t n )| = α.Let ε = −cα/2 and apply (7) for t 0 = t n when n is large enough.If ż(t n ) = α, then the inequality in the left hand side of (7) ) → −∞ as t → +∞.This contradicts to the boundedness of z. Finally, let us consider the case c = 0. Then from the differential equation we get that z tends to zero at infinity.According to the Landau-Kolmogorov inequality (see e.g.[9]) ż ≤ 4 z z .Defining • as the supremum norm on [T, +∞) for T large enough, we get that ż → 0 at infinity.Lemma 1 (i) If system (5) has a unique solution x ∈ C 0 (R, C 2m ) for any y ∈ C 0 (R, C 2m ) and x depends continuously on y, (i.e.there exists M > 0, such that x ≤ M ( y for all y ∈ C 0 (R, C 2m )), then λ is a regular value of L. (ii) If λ is a regular value of L, then for any differentiable function y ∈ C 0 (R, C 2m ), for which ẏ ∈ C 0 (R, C 2m ), there exists a unique solution x ∈ C 0 (R, C 2m ) of system (5); and there exists M > 0 such that for any y satisfying the above conditions the corresponding unique solution x satisfies x ≤ M ( y + ẏ ). Then there exists a unique solution x ∈ C 0 (R, C 2m ) of (5).Let V = (x 1 , . . ., x m ) T , U = (x m+1 , . . 
., x 2m ) T , then V ∈ C 0 (R, C m ) and U = V , hence V is twice differentiable, and LV − λV = W .The continuity follows from x ≤ M y , namely First we show that for any y ∈ C 0 (R, C 2m ) which is differentiable and ẏ ∈ C 0 (R, C 2m ) there exists a unique solution x ∈ C 0 (R, C 2m ) of (5).Let x = (V, U ) T , y = (y 1 , y 2 ) T , where V, U, y 1 , y 2 : R → C m , then system (5) takes the form The differentiability of y 1 implies that V is twice differentiable and V = U + ẏ1 , hence from the second equation Since λ is a regular value of L, this equation has a unique solution V ∈ C 0 (R, C m ), and there exists Moreover, according to Proposition 1 ) what we wanted to prove.Now we show the continuous dependence of x on y.Let us denote the maximum point of Vk (for some k = 1, . . ., m) by t k .Since Vk → 0 at ±∞, the maximum point is not at infinity, hence Vk (t k ) = 0. Therefore using (8) we get that there exists M 2 > 0 for which Similar inequality can be proved for the minimum of Vk , and for all coordinates of V , hence there exists M 3 > 0 for which V ≤ M 3 ( ẏ1 + y 1 + y 2 ).Since U = V − y 1 , there exists M 4 > 0 for which U ≤ M 4 ( ẏ1 + y 1 + y 2 ).Thus with M = 2(M 1 + M 4 ) we have x ≤ M ( y + ẏ ).Now we turn to the study of general first order systems, the special case of which is system (5). First order systems Now for short let C 0 = C 0 (R, C n ) endowed with the supremum norm x = max R |x(t)|, and let A : R → C n×n be a continuous function for which the limits exist.Let us consider the first order system Our aim is to give necessary and sufficient condition for the existence and uniqueness of a solution x ∈ C 0 of (9) for any y ∈ C 0 , and for the continuous dependence of x on y.Since A is continuous, system (9) has solutions for any y ∈ C 0 , that can be given by the variation of constants formula as where Ψ is the fundamental system of solutions of the homogeneous equation satisfying Ψ(0) = I, i.e. the n columns of the matrix Ψ(t) are n independent solutions of the homogeneous system Hence the question is that for a given y ∈ C 0 does there exist a unique x 0 ∈ C n , such that x ∈ C 0 (x is given by ( 10)), and that does x depend continuously on y. The dimension of the stable, unstable and central subspaces of the matrices A ± play important role.Let us denote the number of eigenvalues (with multiplicity) of A + with positive, negative, zero real part by n + u , n + s , n + c , respectively.We define n − u , n − s , n − c similarly using A − .First we show that the continuous dependence is violated when n + c > 0 or n − c > 0. The main point in these cases is to prove the existence of a bounded solution of the homogeneous equation, which does not tend to zero. Theorem 1 Let us assume that at least one of the following two conditions holds: Then the solution of ( 9) does not depend continuously on y, in the sense that there is no M > 0 for which x ≤ M ( y + ẏ ) holds for any differentiable function y ∈ C 0 , for which ẏ ∈ C 0 . Proof We prove in case (a), the other case is similar.According to Theorem 1.10.1 of [8] there exists z 0 ∈ C n , such that the solution t → Ψ(t)z 0 of ( 11) is bounded in [0, +∞) but does not tend to zero as t → +∞.Then there exist a > 0 and a sequence t k → +∞, such that |Ψ(t k )z 0 | = a for all k = 1, 2, . ... 
Let h k : R → R be continuously differentiable functions satisfying the following conditions Hence ẏk → 0 at +∞, since h k → 0, ḣk → 0 and A(t) is bounded.On the other hand, there exists Then x k is a solution of ( 9) belonging to y k , and In the further considerations we will assume n + c = 0 or n − c = 0.In this case we will use exponential dichotomies to answer the above question. Definition 1 System (11) possesses an exponential dichotomy in the interval J if there exist a projection P and positive numbers K, L, α, β, such that for s ≥ t, t, s ∈ J We will show that the exponential dichotomy on R implies the existence, uniqueness and continuous dependence of the solution of ( 9).It can be shown that system (11) possesses an exponential dichotomy on R if A(t) is constant and it has no eigenvalues on the imaginary axis, that is when n + c = n − c = 0.For the time dependent case there is no exponential dichotomy on R in general when n + c = n − c = 0.However, we will show that system (11) possesses exponential dichotomies on R + and on R − .If the projections of these two dichotomies are the same, then system (11) possesses an exponential dichotomy on R as well, and the existence, uniqueness and continuous dependence follows. We will use the following properties of exponential dichotomies. Lemma 2 (i) Let P 1 and P 2 be projections, for which Im P 1 = Im P 2 .If system (11) possesses an exponential dichotomy in the interval J with the projection P 1 , then it possesses an exponential dichotomy in the interval J with the projection P 2 as well. (ii) System (11) possesses an exponential dichotomy in any closed and bounded interval [a, b], with any projection. (iii) If system (11) possesses an exponential dichotomy in the intervals (a, b] and [b, c) with the same projection P , then it possesses an exponential dichotomy in the interval (a, c) (here a can be −∞, and c can be +∞). Proof (i) This statement is proved in [5] The following Lemma can be proved using the results in [5]. Lemma 3 (i) If n + c = 0, then system (11) possesses an exponential dichotomy in [0, +∞), the projection of which is denoted by P + .Moreover, dim(ker (11) possesses an exponential dichotomy in (−∞, 0], the projection of which is denoted by P − .Moreover, dim(ker Proof We prove only (i), the second statement can be verified similarly.Since n + c = 0, system ẋ = A + x with constant coefficients possesses an exponential dichotomy in [0, +∞) the projection of which, denoted by P + , has an n + s dimensional range and n + u dimensional kernel, see [5] pp.10.For any δ > 0 there exists t 0 > 0, such that A(t) − A + < δ for t > t 0 .Hence according to proposition 1 of [5] on pp.34 system (11) possesses an exponential dichotomy in [t 0 , +∞) with projection P + .Applying statements (ii) and (iii) of Lemma 2 we can complete the proof. Let us introduce the following subspaces. According to the next Proposition the subspace E + s consists of those initial conditions x 0 , for which the solution of the homogeneous equation tend to 0 as t → +∞.Similarly, the subspace E − u consists of those initial conditions x 0 , for which the solution of the homogeneous equation tend to 0 as t → −∞. We will need the following three lemmas.The first is a generalization of Proposition 2 to the inhomogeneous equation. (i) Assume n + c = 0. Then lim +∞ x = 0 if and only if (ii) Assume n − c = 0. 
Then lim −∞ x = 0 if and only if Proof First we show that lim t→+∞ Applying ( 12) for P = P + and s = 0 we obtain Now we prove the convergence of the second term.Let ε > 0 be an arbitrary positive number.Since lim +∞ y = 0, there exists t 1 > 0 such that, for t > t Now we show that ( 15) is equivalent to Applying (13) for P = P + and t = 0 we can see that the integral in (15) We have seen that the second term tends to zero as t → +∞, hence lim t→+∞ Ψ(t)(I − P + )(x 0 + x * ) = 0 implying x 0 + x * = 0 according to Proposition 2, hence x 0 = −x * , yielding (15). According to (17) the first and second terms tend to zero as t → +∞.Hence lim +∞ x = 0 if and only if the sum of the third and fourth term tends to zero as t → +∞.According to (19) this sum tends to zero if and only if that (15) holds, what we had to prove.The proof of statement (ii) of the Lemma follows similarly from (18) and (20). Lemma 5 Let us assume n + c = n − c = 0.The following two statements are equivalent.(i) For any a ∈ E + u and any b ∈ E − s there exists a unique solution x 0 of u , then (I − P + )z = 0 and P − z = 0, hence applying (i) with a = b = 0 we get z = 0. Now we show that for any z ∈ C n there exist Let z 1 be the solution of (i) with a = 0, b = P − z, that is Let z 2 be the solution of (i) with a = (I − P + )z, b = 0, that is Then (I − P + )z = (I − P + )(z 1 + z 2 ) and P − z = P − (z 1 + z 2 ), hence the uniqueness implies z = z 1 + z 2 . The following lemma is proved in [16] Prop.2.1 and in [5] p. 19, but in order to make the paper self-contained we present a short proof. Lemma 6 Let us assume n Then system (11) possesses an exponential dichotomy in R with a projection P * , for which Im P * = E + s , ker P * = E − u . Proof Assumption E + s ⊕ E − u = C n implies that there exists a projection P * , for which Im P * = E + s , ker P * = E − u .Namely, for an arbitrary vector z ∈ C n we can define P * z as Since Im P * = Im P + , Lemma 2 and Lemma 3 imply that system (11) possesses an exponential dichotomy in [0, +∞) with the projection P * .On the other hand Im (I − P * ) = Im (I − P − ), hence Lemma 2 and Lemma 3 imply that system (11) possesses an exponential dichotomy in (−∞, 0] with the projection P * .Since system (11) possesses an exponential dichotomy both in [0, +∞) and in (−∞, 0] with the projection P * , according to Lemma 2 it possesses an exponential dichotomy in R. Theorem 2 Let us assume n (i) If for any differentiable function y ∈ C 0 , for which ẏ ∈ C 0 there exists a unique solution x ∈ C 0 of (9), then 9) has a unique solution x ∈ C 0 for any y ∈ C 0 , and there exists M > 0, such that for any y ∈ C 0 and for the corresponding solution x ∈ C 0 the inequality x ≤ M y holds. Proof (i) Let us assume that system (9) has a unique solution x ∈ C 0 for any differentiable function y ∈ C 0 , for which ẏ ∈ C 0 .Let a ∈ E + u and b ∈ E − s be arbitrary vectors.We will show that there exists a differentiable function y ∈ C 0 , with ẏ ∈ C 0 , such that Namely, let h : R → R be a continuously differentiable function satisfying with some r > 0, and where q ∈ R is chosen to satisfy A(t) ≤ q for all t ∈ R. It can be easily shown that there exists k > 0, such that Ψ(t) ≤ ke qt for all t ∈ R. 
Let Then y and ẏ are continuous (also in zero, because their limits are zero from left and from right), and y, ẏ ∈ C 0 , because for t > 1 we have Let x ∈ C 0 be the solution of ( 9) belonging to y.Then according to Lemma 4 for x 0 = x(0) we have given by ( 21).According to Lemma 5 there exists a unique x 0 ∈ C n satisfying (22), hence Lemma 4 implies x ∈ C 0 .If x * ∈ C 0 is another solution of ( 9), then x * − x ∈ C 0 is a solution of (11).However, according to Corollary 1 x * − x ≡ 0, that is x * ≡ x. Finally we prove the continuous dependence.According to Lemma 6 there exist a projection P * , and positive numbers K, L, α, β, for which Repeating the proof of Lemma 4 replacing P + and P − with P * we get Therefore from the variation of constant formula ( 10) The spectrum of L In Section 1 we have introduced the matrix functions A λ , see (6).Since function Q tends to a limit at ±∞, the limits exist.We have seen in Section 2 that the dimension of the stable, unstable and central subspaces of the matrices A ± λ play important role.Let us denote the number of eigenvalues (with multiplicity) of A + λ with positive, negative, zero real part by n + u (λ), n + s (λ), n + c (λ), respectively.We define EJQTDE, 2003 No. 15, p. 10 Theorem 3 Let us assume that at least one of the following two conditions holds: Proof Let us assume the contrary, i.e. that λ is a regular value of L. Then according to (ii) of Lemma 1 for any differentiable function y ∈ C 0 (R, C 2m ), for which ẏ ∈ C 0 (R, C 2m ), there exists a unique solution x ∈ C 0 (R, C 2m ) of system (5); and there exists M > 0 such that for any y the inequality x ≤ M ( y + ẏ ) holds.However, Theorem 1 yields that this M cannot exist, which is a contradiction. In the further considerations we deal with the case n + c (λ) = 0 = n − c (λ).In this case we introduce the subspaces in the same way as in Section 2. (Now they depend also on λ, because A depends on λ.) (ii) λ is a regular value of L if and only if 0 (otherwise U ≡ 0 and x ≡ 0), and LV = λV .(ii) If λ is a regular value of L, then according to (ii) of Lemma 1 for any differentiable y ∈ C 0 (R, C 2m ) for which ẏ ∈ C 0 (R, C 2m ) there exists a unique solution x ∈ C 0 (R, C 2m ) of (5).Then Theorem 2 implies , then according to Theorem 2 for any y ∈ C 0 (R, C 2m ) there exists a unique solution x ∈ C 0 (R, C 2m ) of ( 5) and it depends continuously on y.Hence by (i) of Lemma 1 λ is a regular value of L. EJQTDE, 2003 No. 15, p. 11 The dimension of E + s (λ) and E − u (λ) can be determined explicitly, because only the eigenvalues of the matrices A ± λ have to be determined to get these dimensions.However, to get dim(E + s (λ) ∩ E − u (λ)) the time dependent system must be solved numerically.This leads to the definition of the Evans function., and the base of the subspace The assumption dim(E + s (λ) ∩ E − u (λ)) > 0 means that the two bases together give a linearly dependent system of vectors.The Evans function is defined as the determinant formed by these 2m vectors.That is the determinant is zero if λ is an eigenvalue. 
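In computations, the dimensions entering this condition come directly from the eigenvalues of the constant limit matrices; a minimal helper (an assumed sketch, not from the paper) is:

```python
import numpy as np

def spectral_counts(A, tol=1e-12):
    """(n_s, n_u, n_c): numbers of eigenvalues of A with negative, positive, zero real part."""
    re = np.linalg.eigvals(np.asarray(A, dtype=complex)).real
    return int(np.sum(re < -tol)), int(np.sum(re > tol)), int(np.sum(np.abs(re) <= tol))

def outside_essential_spectrum(A_plus, A_minus, m):
    """Condition of Corollary 3: no central directions and n_s^+ + n_u^- = 2m,
    i.e. dim E_s^+(lambda) + dim E_u^-(lambda) = 2m."""
    ns_p, _, nc_p = spectral_counts(A_plus)
    _, nu_m, nc_m = spectral_counts(A_minus)
    return nc_p == 0 and nc_m == 0 and ns_p + nu_m == 2 * m
```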
We have proved that the eigenvalues are the zeros of the Evans function.It can be also shown that the multiplicity of an eigenvalue is equal to the multiplicity of the zero of the Evans function, and that the Evans function is an analytic function on the domain Ω [1].Hence the zeros of D are isolated, that is in the domain where dim E + s (λ) + dim E − u (λ) = 2m there can be only isolated eigenvalues.This statement together with Corollary 2 enables us to determine the essential spectrum explicitly. Corollary 3 The essential spectrum of L is The bases of the stable and unstable subspaces can be determined numerically in the following way.We calculate the eigenvalues of A + λ with negative real part, and its corresponding eigenvectors.Let us denote these eigenvalues by µ 1 , . . ., µ k , and the eigenvectors by u 1 , . . ., u k (for short we used the notation k = n + s (λ)).Similarly, let us denote the eigenvalues of A − λ with positive real part by ν 1 , . . ., ν l , and the corresponding eigenvectors by v 1 , . . ., v l (for short we used the notation l = n − u (λ)).Then choosing a sufficiently large number we solve the homogeneous equation ẋ(t) = A λ (t)x(t) in [0, ] starting from the right end point with initial condition x( ) = u i e µi for i = 1, . . ., k. Hence we get k = n + s (λ) linearly independent (approximating) solutions of the differential equations, therefore their values at 0 give a base of E + s (λ).Similarly, solving the differential equation in [− , 0] we get a base of E − u (λ), and the determinant defining the Evans function can be computed.We note that if is very large and there is a significant difference between the real parts of the eigenvalues µ 1 , . . ., µ k , then the solution belonging to the eigenvalue with largest real part will dominate and the solutions starting from linearly independent initial conditions will be practically linearly dependent at zero.(Similar case can occur in [− , 0] as well.)To overcome this difficulty the problem can be extended to a wedge product space of higher dimension [3].Now we show a method to determine the eigenvalues and eigenvectors of A ± λ , which determine the dimensions of E + s (λ) and E − u (λ).We will deal with the two cases together, therefore for short we introduce where Q can be Q + or Q − .Let us denote an eigenvalue of A λ by µ and an eigenvector by u Thus we have proved the following proposition. Thus the eigenvalues of A ± λ are determined by equation (26) of degree 2m.In the special case when Q is an upper or lower triangular matrix the l.h.s. of the equation is a product of m second degree polynomials, hence the solutions can be computed explicitly [20]. Case of a single equation, m = 1 Now let U : R → R be a solution of where f : R → R is a continuously differentiable function, U − , U + ∈ R and it is assumed that c ≥ 0. 
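Before turning to the scalar case, the numerical Evans-function evaluation just outlined can be sketched as follows (an assumed implementation, not the authors' code): integrate the homogeneous system backwards from +l and forwards from -l with the scaled eigenvector initial data described above, and take the determinant of the resulting column vectors at t = 0. The determinant is square only on the domain where dim E_s^+(lambda) + dim E_u^-(lambda) = 2m.

```python
import numpy as np

def _rk4(A, x0, t0, t1, steps=4000):
    """Fixed-step RK4 integration of x' = A(t) x from t0 to t1 (complex-valued)."""
    h = (t1 - t0) / steps
    x, t = np.asarray(x0, dtype=complex), t0
    for _ in range(steps):
        k1 = A(t) @ x
        k2 = A(t + h / 2) @ (x + h / 2 * k1)
        k3 = A(t + h / 2) @ (x + h / 2 * k2)
        k4 = A(t + h) @ (x + h * k3)
        x, t = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return x

def evans(lam, A_of_t, A_plus, A_minus, ell=20.0):
    """Evans function D(lambda); A_of_t(t, lam) returns A_lambda(t), while A_plus(lam)
    and A_minus(lam) return the constant limit matrices."""
    A = lambda t: A_of_t(t, lam)
    cols = []
    mu, U = np.linalg.eig(A_plus(lam))                 # decaying directions at +infinity
    for i in np.where(mu.real < 0)[0]:
        cols.append(_rk4(A, U[:, i] * np.exp(mu[i] * ell), ell, 0.0))
    nu, V = np.linalg.eig(A_minus(lam))                # decaying directions at -infinity
    for j in np.where(nu.real > 0)[0]:
        cols.append(_rk4(A, V[:, j] * np.exp(-nu[j] * ell), -ell, 0.0))
    return np.linalg.det(np.column_stack(cols))
```

As noted in the text, for large l the column associated with the eigenvalue of largest real part dominates and the columns become numerically dependent; the wedge-product reformulation cited above [3] is the usual remedy.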
The stability of U is determined by the spectrum of the operator The function q(t) = f (U (t)) is continuous and has limits at ±∞, If U tends to the limits U + and U − exponentially, then the integrals in the assumptions of Theorem 3 are convergent.Now A ± λ are 2-by-2 matrices and according to Proposition 3 their eigenvalues (µ 1,2 ) are determined by the equation The essential spectrum can be determined by calculating the dimensions of E + s (λ) and E − u (λ).These dimensions can be easily determined from the sets where n + c (λ) ≥ 1 and n − c (λ) ≥ 1.These sets are formed by those values of λ to which µ = iω is a solution of (31).Hence the set of λ values where n + c (λ) ≥ 1 is the parabola It is easy to show that on the left hand side of the parabola dim E + s (λ) = 2, and on the right hand side dim E + s (λ) = 1, see Figure 1.Similarly, the set of λ values where n − c (λ) ≥ 1 is the parabola It is easy to show that on the left hand side of the parabola dim E − u (λ) = 0, and on the right hand side dim E − u (λ) = 1, see Figure 1.The parabolas given by ( 32) and (33) for q + > q − in (a) and for q − > q + in (b).The dimensions of the subspaces E + s (λ) and E − u (λ) are shown, the upper numbers correspond to the former and the lower numbers correspond to the latter.Now applying Theorem 3 and Corollary 2 we have the following results concerning the spectrum of L. • Both parabolas belong to the essential spectrum of L. • The domain lying on the left hand side of both parabolas consists of regular values of L. • The domain lying on the right hand side of both parabolas contains all the isolated eigenvalues of L the remaining points of this domain (which are not isolated eigenvalues) are regular values of L. • If q + > q − , then the domain between the two parabolas is filled with eigenvalues. • If q + < q − , then the domain between the two parabolas is filled with points belonging to the essential spectrum, but they are not eigenvalues. In this special case of m = 1 the location of the isolated eigenvalues with respect to the imaginary axis can also be determined.It can be shown that zero is a simple eigenvalue and all other isolated eigenvalues of L are negative (real) if and only if U is strictly monotone and f (U − ) < 0, f (U + ) < 0.Here a is the concentration of the fuel, w is the concentration of an inhibitor species and b is scaled temperature.The travelling wave solution describes a flame propagating with velocity c.The number and stability of travelling waves of this system was investigated in [19].We proved that the solutions a, w and b have limits at +∞, that are denoted by a + , w + and b + .If b + = 0, then we refer to the travelling wave solution as a pulse solution.If b + > 0 then we call it a front solution.In the latter case a + = 0 = w + .It was also shown that a saddle-node bifurcation may occur and there can be 1, 2 or 3 travelling wave solutions.The stability of these solutions can also change through Hopf bifurcation. Flame propagation in a three variable model The saddle-node and Hopf bifurcation curves were determined numerically.Here we only show how the method described in the previous Section works for this system to determine the essential spectrum of the corresponding linearizdimen ed operator.The results obtained by the Evans function method will be only cited from [19]. 
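For the scalar case just discussed, the parabolas (32)-(33) and the left-of-both-parabolas test are easy to code; the helper below is an assumed sketch with illustrative numbers only.

```python
import numpy as np

def essential_spectrum_parabola(q, d, c, omega):
    """Points lambda(omega) = q - d*omega**2 + i*c*omega traced by mu = i*omega."""
    omega = np.asarray(omega, dtype=float)
    return q - d * omega ** 2 + 1j * c * omega

def left_of_both_parabolas(lam, q_plus, q_minus, d, c):
    """True if lam lies strictly to the left of both parabolas (assumes c != 0)."""
    shift = d * (lam.imag / c) ** 2
    return lam.real < q_plus - shift and lam.real < q_minus - shift

# Illustrative values (not tied to a specific model): d = 1, c = 1, q_plus = -1, q_minus = -2
print(essential_spectrum_parabola(-1.0, 1.0, 1.0, np.linspace(-2, 2, 5)))
print(left_of_both_parabolas(-0.5 + 0.5j, -1.0, -2.0, 1.0, 1.0))
```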
Corollary 1. Let us assume n_c^+ = n_c^- = 0. The homogeneous equation (11) has a unique solution in C_0 (namely x ≡ 0) if and only if E_s^+ ∩ E_u^- = {0}. Definition 2. The Evans function belonging to the operator L is the map D : Ω → C defined as the determinant formed from the bases of E_s^+(λ) and E_u^-(λ) described above.
7,820.4
2003-10-19T00:00:00.000
[ "Mathematics" ]
Fabrication of Progesterone-Loaded Nanofibers for the Drug Delivery Applications in Bovine Progesterone is a potent drug for synchronization of the estrus and ovulation cycles in bovine. At present, the estrus cycle of bovine is controlled by the insertion of progesterone-embedded silicone bands. The disadvantage of nondegradable polymer inserts is to require for disposal of these bands after their use. The study currently focuses on preparation of biodegradable progesterone-incorporated nanofiber for estrus synchronization. Three different concentrations (1.2, 1.9, and 2.5 g) of progesterone-impregnated nanofibers were fabricated using electrospinning. The spun membrane were characterized by scanning electron microscopy, X-ray diffraction, differential scanning calorimetry, thermogravimetric analysis, and Fourier transform infrared spectroscopy. Uniform surface morphology, narrow size distribution, and interaction between progesterone and zein were confirmed by SEM. FTIR spectroscopy indicated miscibility and interaction between zein and progesterone. X-ray analysis indicated that the size of zein crystallites increased with progesterone content in nanofibers. Significant differences in thermal behavior of progesterone-impregnated nanofiber were observed by DSC. Cell viability studies of progesterone-loaded nanofiber were examined using MTT assay. In vitro release experiment is to identify the suitable progesterone concentration for estrus synchronization. This study confirms that progesterone-impregnated nanofibers are an ideal vehicle for progesterone delivery for estrus synchronization of bovines. Background Electrospinning is a technique used to form nanoscale fibers. It is quite versatile for fabricating nanofibers from various synthetic or natural polymers [1]. In literature [2], reported functional electrospun nanofibrous composite structures can also be produced by incorporating functional additives in the fiber matrix or on the fiber surface. The development of nanostructured systems for the delivery and sustained release of molecules towards specific targets represents a frontier area of nanoscience and nanotechnology, with the possibility of contributing substantially to advances in animal reproduction [3]. Improving delivery techniques that minimize toxicity of drug has a significant effect on its efficacy. Overall, nanosized delivery systems enhance the therapeutic efficacy of several bioactive molecules, including reproductive hormones, by simply improving their pharmacokinetic and pharmacodynamic properties [4]. These systems are able to carry a wide variety of molecules enhancing their sustained release, showing low systemic toxicity, allowing targeted treatment, and avoiding premature inactivation [5,6]. Electrospun polymer-based fibers have been investigated for providing different types of controlled drug release profiles, such as immediate, delayed, sustained, and biphasic releases [7,8]. Among them, sustained drug release is gaining considerable attention as a method of administering and maintaining desired drug concentrations in the blood within a specified therapeutic window [9][10][11]. Zein is a mixture of proteins with different molecular weights in corn gluten. Apart from biodegradability and biocompatibility, zein has low hydrophilicity, high elasticity, and film-forming capabilities, and it is considered a potential raw material for bioengineering application [12]. 
The artificial induction and synchronization of estrus in production animals is critical to ensure a positive balance of the cost-benefit equation of the artificial insemination related activities. The usual administration of hormones must be very precise. The controlled hormone release is a current technological challenge. One interesting agent to be tested in such delivery system is the progesterone, a steroid hormone naturally produced by the corpus luteum of the ovaries of mammals and involved in their pregnancy. In veterinary medicine, exogenous progesterone is used as a potent drug for suppression of estrus and ovulation, making possible the synchronization of the estrus and ovulation cycles in livestock animals [13]. In this sense, the present study aims to investigate the release characteristics of progesteroneimpregnated zein nanofiber obtained by electrospinning process. In addition, the ability of progesterone-loaded zein nanofibers to provide sustained drug release was studied. Materials Zein from corn and progesterone were purchased from Sigma-Aldrich (USA). Ethanol 99.7% purity was supplied by Merck Progesterone-Loaded Nanofiber Fabrication Zein was dissolved in ethanol and kept under vigorous stirring overnight at room temperature. Various concentrations (1.2, 1.9, and 2.5 g) of progesterone were dissolved in ethanol for an hour at room temperature. Both solutions were mixed for an hour. Progesterone-loaded zein fibers prepared by electrospinning were spun using a voltage of 24 kV, working distance of 12 cm, and feed rate of 2 μL min −1 . Electrospinning processes were carried out under ambient conditions (24 ± 3°C with relative humidity 57 ± 4%) [14]. Characterization of Progesterone Loaded Zein Nanofiber Scanning Electron Microscopy Scanning electron microscopy is used to check the surface morphology of three different concentrations (1.2, 1.9, and 2.5 g) of progesterone-incorporated nanofiber. The SEM characterization of electrospun nanofiber was performed using JEOL JSM-6480 V (accelerative voltage 20 kV) scanning electron microscopy at the Nanotechnology Department of SRM University, Chennai. The nanofiber samples collected on the aluminum foil was peeled out and then mounted on SEM sample holder using graphite-impregnated adhesive conductive black carbon tape, coated with platinum, and visualized under SEM at various magnifications. X-ray Diffraction XRD patterns were generated from nonwoven fibrous mat using a Rigaku D/Max ULTIMA 11 X-ray diffractometer (Japan). The X-rays are generated by a cathode ray tube filtered to produce monochromatic radiation directed towards the sample. The interaction of the incident rays with the sample produces constructive interference (and diffracted rays). The diffracted intensity were recorded from 0 to 1400 at 2θ angle and the pattern was recorded by Cu K radiation with 1.5418 Å and graphite monochromatic filtering wave at a tube voltage of 40 kV and tube current of 30 mA, and scanning in the region of 0 to 70 at 6 min −1 with incident beam. Differential Scanning Calorimetry Differential scanning calorimetry (DSC) measurements (Mettler Toledo DSC 821e, Schwerzenbach, Switzerland) were performed on samples of 5 mg in the range of −100 to 200°C at a heating rate of 10°C/min (N 2 atmosphere 80 L/min). The glass transition temperature (Tg) was evaluated with the Stare-software version 6.01 (Mettler Toledo, Schwerzenbach, Switzerland; calibration with indium and zinc). 
Zein films were stored over silica gel or at different relative humidities for 5 days prior to measurement to achieve different water contents. The relative humidity (r.h.) was controlled by saturated salt solutions (KCH 3C OO 22% r.h.; NaCl 75% r.h.; ZnSO 4 85% r.h.; pure water 100% r.h.) The predicted Tg values were calculated with the Gordon-Taylor-equation. Fourier Transform Infrared Spectroscopy Nanofiber functional groups were analyzed using FTIR spectroscopy. A pinch of the sample was placed into the sample holder and FT-IR spectra (Spectrum Rx1, Perkin Elmer) were recorded in the range 4000-400 cm −1a to a resolution of 4 cm −1 . MTT Viability Assay Vero cells from ATCC are used for the MTT assay. One hundred-microliter Vero cells at the concentration of 3 × 10 3 cells/well were seeded in 96-well plates containing DMEM and incubated in 5% CO 2 at 37°C for 24 h. The medium was changed after 1 h and 100 μL of different concentrations (20,000, 10,000, 1000, 500, 250, and 100 μg/ml) of the 1.2 g progesterone-loaded nanofiber dissolved with PBS was added to the wells and incubated for 24 h at 37°C in the CO 2 incubator. One hundred microliter of MTT (5 mg/mL) was added to the wells containing cells and nanofibers of different concentrations. It was incubated at 37°C for 4 h. The medium was then removed and 20 μL of DMSO was added to the wells. It was then shaken and incubated at 37°C for 15 min and the absorbance was measured at 575 nm. In Vitro Release of Progesterone from Nanofiber The in vitro release studies were performed at three different progesterone concentrations (1.2, 1.9, and 2.5 g) of nanofibers in a shaker at 37°C. A weighed quantity of the fibers (20 mg) was suspended in PBS of pH 7.4. Then, it was kept in a shaker for seven days at 37°C. The sample was withdrawn at regular one day intervals up to 7 days and replaced with the same volume of freshly prepared PBS pH 7.4. The withdrawn samples were used for OD measurement at 237 nm by a UVvisible spectrophotometer (Shimadzu, model UV-2601). Results and Discussion In this, the electrospinning of zein nanofibers was mostly carried out by using ethanol system which resulted in ribbon-like fiber morphology due to the rapid mat formation of the fiber core because of the very fast evaporation of the solvent [15,16]. Characterization of Progesterone Loaded Zein Nanofiber Scanning Electron Microscopy The fibers were spun from the same polymer solution and under the same spinning conditions with different concentrations of progesterone and were characterized by SEM. Zein nanofiber without progesterone had ribbon morphology with smooth surfaces and uniform structures (Fig. 1a) compared to the progesterone-impregnated nanofibers [17][18][19]. Progesterone was successfully impregnated on zein nanofiber and formed beads in the fiber mesh; it is clearly depicted in Fig. 1b. Among the three different concentrations, 1.2 g of progesterone-impregnated nanofiber entraps hormone both within their polymeric structures and within the minute interstitial spaces due to surface adsorption. When the concentration of progesterone increases, the nanofiber surface morphology was disrupted due to the increase in polymer solution viscosity creating difficulty in fiber formation; it is clearly depicted in Fig. 1c, d. However, 1.2 g of progesterone-interlocked fibers is suitable for sustained release of progesterone. The diameter of nanofiber without progesterone was around 170 nm. 
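As an aside on the MTT and release protocols described above, the raw absorbances are usually reduced to percent viability and cumulative percent release; the sketch below shows one common way to do this (assumed calculations, not taken from the paper; the calibration slope converting absorbance at 237 nm to concentration is a placeholder).

```python
import numpy as np

def percent_viability(abs_treated, abs_control, abs_blank=0.0):
    """MTT viability relative to the untreated control, in percent."""
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

def cumulative_release_percent(absorbances, calib_slope, v_total_ml, v_sample_ml, loaded_mg):
    """Cumulative % release, corrected for the drug removed with each withdrawn sample."""
    conc = np.asarray(absorbances, dtype=float) / calib_slope   # mg/mL via calibration curve
    released, removed = [], 0.0
    for c in conc:
        released.append(c * v_total_ml + removed)   # drug currently in medium + already removed
        removed += c * v_sample_ml
    return 100.0 * np.array(released) / loaded_mg
```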
The average size of fiber diameter ranged from 180 ± 12 to 278 ± 16 nm for 1.2 g progesteroneimpregnated zein nanofiber. X-ray Diffraction Method The XRD patterns of electrospun zein nanofibers and different concentration of progesterone loaded nanofibers are shown in Fig. 2. Nanofibers without progesterone have shown two broad peaks having the maximum at 2θ = 64.3441 (1.44787 Å) and at 77.3335 (1.23391 Å) (Fig. 2a). Various concentrations of progesterone-loaded nanofiber show similar diffraction patterns (Fig. 2b-d). Red arrows indicate the smooth and uniform surface morphology of nanofibers. Its shows not containing progesterone; b 1.2 g progesterone. Yellow arrows indicate the progesterone impregnated in the nanofibers; c 1.9 g progesterone, d 2.5 g progesterone. Blue arrows indicate the high concentration of progesterone disrupting the nanofiber structure concentrations did not affect the structural integrity of the nanofibers as no major shifts were seen. Formation of crystals is caused by the different extent of deformation of the polymer molecules during fiber formation by electrospinning [20][21][22]. The XRD pattern showed the character of zein and there is no new peak which can be confirmed that no chemical interaction between zein and progesterone in the formed nanofiber. Differential Scanning Calorimetry DSC was done on the electrospun nanofiber and progesterone incorporated nanofibers in order to determine the thermal behavior of the nanofibers. The glass transition temperature (Tg) of the different samples are presented in Fig. 3. The Tg of the nanofiber without progesterone was observed at around 151°C, which is in close agreement with the Tg value reported in the literature for zein [23]. Different concentration of progesterone loaded zein fiber exhibited a sharp endothermic peak corresponding to the melting point of 121°C for 1.2 g of progesterone loaded fiber, 121°C for 1.9 g of progesterone loaded fiber, and 118°C for 2.5 g progesterone loaded fiber (Fig. 3b-d). The addition of progesterone in the zein nanofibers caused a decrease in the Tg values which are possibly due to the plasticizing effect of the incorporated component [24][25][26][27][28]. Progesterone was integrated nicely with the zein molecules and displayed a plasticizing effect that increased the mobility of zein molecular chains. FTIR FTIR spectra of electrospun zein nanofibers and progesterone loaded nanofibers were shown in Fig. 4 (Fig. 4a). These bands correspond to the amide I and amide C-H deformation and bond vibration, trisubstituted aromatic ring, carboxylic acid, aromatic ring, C-O stretching, and acetylated lignin, respectively, for pure zein nanofibers. Different concentrations of progesterone-loaded nanofiber showed similar bands at 633.34, 1103.54, 1284.21, 1448.78, 1653.10, 2928.51, and 3402.89 cm −1 (Fig. 4b-d). These bands correspond to the amine N-H stretch, heterocyclic amine and NH stretch, alkenes, ketones, isocyanate aromatic functional group C-N stretch, and skeletal C-C vibration functional group [29]. Meanwhile, numerous peak sizes were reduced and some peaks totally disappeared from the spectra of progesterone loaded nanofibers compare to zein nanofiber spectra. These phenomena verify the speculation that hydrogen bonding had taken their roles in the formation of homogeneous composite fibers [30][31][32]. MTT Assay MTT assay was the most efficient, due to a less experimental error. The graph in Fig. 
5 depicts the results of an MTT assay for different concentrations (20,000, 10,000, 1000, 500, 250, and 100 μg/mL) of the 1.2 g progesterone-loaded nanofiber added into Vero cells. Percent viability in the control sample and nearly 80% viability in the 100 μg/mL fiber added cells. In each high concentration, there was less reduction in the percentage of viability of cells. Sixty percent of cells are viable in the higher concentration of 20,000 μg/mL. Zein is one of the best-understood biomacromolecules and classified as Generally Recognized as Safe (GRAS) by the US Food and Drug Administration [33]. In Vitro Release of Progesterone from Nanofiber The percentage release of progesterone was also estimated for different concentrations of progesterone loaded and expressed in Fig. 6. Nanofibers loaded with 1.2 g of progesterone could release 87.28% of progesterone by 7 days and 50% release was achieved by 2.8 days. As the amount of progesterone loaded into nanofibers increased, the half life also increased correspondingly. This study confirms that 1.2 g progesterone loaded zein nanofibers can be potentially used in controlled delivering of progesterone, in livestock animals for estrus synchronization. Conclusions The results of the current study confirm some miscibility of progesterone on hydrophobic biopolymers according to SEM, XRD, DSC, TGA, and FTIR. The electrospinning can be appropriately used to encapsulate active agents in biodegradable and biocompatible polymers, providing a hormone release sustainably. The increases in the concentration of progesterone affect the nanofiber size and morphology was confirmed by SEM. Progesterone at various concentrations did not affect the structural integrity of the nanofibers. Progesterone was found to have the effect of plasticizer when added to zein polymer. The electrospinning can be appropriated used to encapsulate active agents in biodegradable and biocompatible polymers, providing a hormone release sustainably. This study clearly indicated that 1.2 g progesterone-loaded zein nanofibers can be potentially used in controlled delivering of progesterone, in livestock animals for estrus synchronization. Authors' contributions CK carried out the experiments. MS participated in the sequence alignment and drafted the manuscript. GKJ performed the characterization work. RS and GJ conceived of the study and helped to draft the manuscript. GDR participated in the design of the study. All authors read and approved the final manuscript. Competing interests The authors declare that they have no competing interests.
3,378.2
2017-02-14T00:00:00.000
[ "Engineering", "Materials Science", "Medicine" ]
ADMM and Spectral Proximity Operators in Hyperspectral Broadband Phase Retrieval for Quantitative Phase Imaging A novel formulation of the hyperspectral broadband phase retrieval is developed for the scenario where both object and modulation phase masks are spectrally varying. The proposed algorithm is based on a complex domain version of the alternating direction method of multipliers (ADMM) and Spectral Proximity Operators (SPO) derived for Gaussian and Poissonian observations. Computations for these operators are reduced to the solution of sets of cubic (for Gaussian) and quadratic (for Poissonian) algebraic equations. These proximity operators resolve two problems. Firstly, the complex domain spectral components of signals are extracted from the total intensity observations calculated as sums of the signal spectral intensities. In this way, the spectral analysis of the total intensities is achieved. Secondly, the noisy observations are filtered, compromising noisy intensity observations and their predicted counterparts. The ability to resolve the hyperspectral broadband phase retrieval problem and to find the spectrum varying object are essentially defined by the spectral properties of object and image formation operators. The simulation tests demonstrate that the phase retrieval in this formulation can be successfully resolved. Introduction Multispectral (MS) and hyperspectral (HS) images differ in amount and width of spectral channels. The number of spectral channels varies from a few for MS imaging and goes from tens up to hundreds for HS imaging. We do not make a difference between these two scenarios and use the word 'hyperspectral' addressing to both MS and HS ones. HS imaging is much more informative as compared with conventional RGB imaging and indispensable in many applications such as remote sensing [1] , biology [2] , medicine [3] , quality material and food characterization [4] , control of ocean [5] and earth pollution [6] , etc. A flow of publications on computational methods developed for various applications of HS imaging is growing fast (e.g., [7][8][9] ). Conventionally, spectral information is extracted from observations of two different modes: (1) registered for each spectral channel separately, channel-by-channel, or (2) registered as the total power of entire spectral channels with subsequent spectral analysis. In this paper, we follow the latter scenario with a broadband illumination of an object of interest and a broadband sensor registering the total power of the impinging light beam. single [25] ) intensity measurements (squared linear projections) Y t = | A t U o | 2 , t = 1 , . . . , T , where Y t ∈ R M , and A t ∈ C M×N is an image formation operator. Here and in what follows t stays for experiment number, and T is a maximum number of experiments. The latter is used also as notation for a set of experiments. The heuristic and semi-heuristic iterative algorithms with alternative projections to the object and measurement planes are well known and studied starting from the works of Gerchberg and Saxton [26] and Fienup [27] . These techniques are proven to be efficient for various optical applications. In recent development, the setup for phase retrieval assumes a modulation of the object U o by random phase masks M t ∈ C N with the observation model where ' •' stands for the Hadamard, element-by-element, product of two vectors. In general, the phase retrieval is a non-convex inverse imaging problem. 
Theoretical studies formulating restrictions on object and image formation models leading to uniqueness and convergence of the iterative algorithms are of special interest. The model (1) with the operator A t given as the Fourier transform (FT) is common for many theoretical works [28][29][30][31] . Optimization of random phase coding modulation masks M t is a hot topic in phase retrieval. It is used in order to improve the algorithm's convergence and spatial resolution of phase estimates [14,32,33] . In this paper, random phase masks are essential instruments enabling hyperspectral phase retrieval. The extended review of the recent developments in the mathematical foundation of the phase retrieval problems can be seen in [34][35][36] . In this paper, we present a novel phase retrieval formulation that provides prospective powerful instrumentation for broadband HS complex domain (phase/amplitude) imaging. In what follows, we use a conventional vectorized representation of 2 D images as vectors. For the object of interest, it is the vector U o,k ∈ C N , N = nm , where n and m are width and height of 2 D image; integer k stays for a number of the spectral component of the hyperspectral object, k ∈ K , where K is a set of the spectral components. For the several-bands case, straight forward approach is to expand the model (1) for the number of used wavelengths by simply getting observations separately on each wavelength (e.g., [37] ) by the analogy of placing waveband filers after a light source. However, we consider a more complex task where spectral observations are superimposed. Therefore, we introduce the HS broadband phase retrieval as a reconstruction of a complex-valued object U o,k ∈ C N , k ∈ K, from the intensity measurements: For the noisy intensity observations with additive noise ε t , Y t is replaced by Z t : Here, A t,k ∈ C M×N are linear operators modeling a propagation of 2D object images from the object plane to the sensor, and t is the index of experiment. Total intensity measurements Y t ∈ R M + are calculated over the spectrum K as the sum of spectral intensities | A t,k U o,k | 2 . It is essential in this paper that the object U o,k and the operators A t,k are spectrum dependent, varying in k . In optics, it means that the reflective and transmissive properties of the object (specimen) as well as the light propagation operators depend on the wavelength of light. There is a small number of publications relevant to the HS broadband phase retrieval from the total intensity measurements considered in this paper. We briefly review these works. The HS phase retrieval in the paper [38] is formulated as object reconstruction from the intensity observa- where the object U o is spectrally invariant, i.e., does not depends on k . This assumption differs [38] essentially from the setup considered in this paper. With the introduced observation model (2) , the interferometric HS methods also can be interpreted as phase retrieval problems. Indeed, the HS digital holography uses the observation model where R t.k stays for the harmonic reference signal varying from experiment-to-experiment [39][40][41] . For the shearing HS digital holography, we can introduce the observations as where the reference beam R t,k is not additive to the object as in (4) but propagates through the object U o,k [42][43][44] . The harmonic modulation of the signals in (4) and (5) is a special feature in computational holography. 
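The total-intensity observation model (2)-(3) can be illustrated by a short simulation sketch. This is illustrative only: the spectrally varying operators A_{t,k} are stood in for by random phase masks followed by a 2D FFT, and the exposure factor chi is an assumed value; the operators used in the paper are wavelength-dependent propagation models.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, T = 64, 5, 12                              # image size, spectral channels, experiments

# Spectrally varying complex-valued object U_{o,k}: phase pattern scaled per channel
base_phase = rng.standard_normal((n, n))
U = np.stack([np.exp(1j * 0.5 * (1 + 0.1 * k) * base_phase) for k in range(K)])

# Random phase masks, different for every experiment t and channel k
masks = np.exp(2j * np.pi * rng.random((T, K, n, n)))

def total_intensities(U, masks):
    """Noise-free Y_t = sum_k |A_{t,k} U_{o,k}|^2 with a masked FFT standing in for A_{t,k}."""
    Y = np.zeros((masks.shape[0], n, n))
    for t in range(masks.shape[0]):
        for k in range(U.shape[0]):
            Y[t] += np.abs(np.fft.fft2(masks[t, k] * U[k]) / n) ** 2
    return Y

Y = total_intensities(U, masks)
chi = 1000.0                                     # assumed exposure (photon-flow) factor
Z = rng.poisson(chi * Y)                         # Poissonian counts as in (3)
```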
The solution for (4) can be given by FT of the observations combined with complex domain filtering [41] . The solution for (5) is obtained by FT of the observations included in the phase retrieval iterations [44] . Note that the measurements Y t in (2) with arbitrary A t,k are very different from those in (4) and (5) , where the harmonic modulation is used for spectral analysis of observations typical for interferometry and holography. The observations (2) -(3) do not include any spectral analysis signals or spectral analysis instruments. The ability to resolve the HS phase retrieval problem is completely defined by the developed algorithm and the image formation operators A t,k what includes a design of encoding phase masks. The main novelty of this paper is the formulation of the HS phase retrieval for a spectrally varying object from intensity observations (2) -(3) with spectrally varying image formation operators. The developed algorithm is derived from the variational formalization of the problem based on the alternating direction method of multipliers (ADMM) for complex-valued variables and the novel Spectral Proximity Operators obtained for Gaussian and Poissonian observations. Computations for these operators are reduced to the solution of the sets of cubic (for Gaussian observations) and quadratic (for Poissonian observations) algebraic equations. The model of the object U o,k is unknown and is not exploited in the algorithm iterations. Pragmatically, we target on the HS phase retrieval formulation and the algorithm's development. The numerical and physical tests prove that the HS phase retrieval in this general formulation can be resolved. The proposed setup allows a simple optical implementation, possibly lensless, which is much simpler than implementations used in interferometry and holography imaging. The rest of the paper is organized as follows. The algorithm development is a topic of Section 2 . From a simple variational formulation of the problem, one fidelity term, and one prior term, we go to ADMM considering the features concerning the formulation for complex-valued variables. The spectral proximity operators (SPO) are introduced as solutions of optimization problems with quadratic penalization. A complex-domain block matching filter is used as a regularization tool for the HS phase retrieval. The results of simulation tests are demonstrated in Section 3 . The setups and results of physical experiments are presented in Section 4 . Conclusions and discussions are given in Section 5 . Approach Let l({ Z t } , k ∈ K | U t,k | 2 ) be a neg-log-likelihood of the observed { Z t } , t ∈ T , and the complex-valued images at the sensor plane are Here and in what follows, the curved brackets are used as an indication of a set of variables. The neg-log-likelihood is a fidelity term measuring the misfit between the observations and the prediction of the intensities of U t,k summarized over the spectral interval. Various inverse imaging computational methods have been developed under variational optimization formulation using one fidelity term and one prior term. We start from this tradition and introduce an unconstrained maximum likelihood optimization to reconstruct the object 3D cube { U o,k } , k ∈ K, from the criterion of the form: where a second summand f reg is an object prior (regularization function). The straightforward optimization in (6) is too complex. 
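Before moving on, the data-fidelity part of criterion (6) is easy to state in code for the Gaussian case: compare the observed total intensities with the intensities predicted from a candidate object, summed over the spectral channels. The noise scale sigma below is an assumed parameter.

```python
import numpy as np

def gaussian_fidelity(Z, U_sensor, sigma=1.0):
    """Gaussian neg-log-likelihood term of (6), up to an additive constant.

    Z        : (T, M) observed total intensities Z_t
    U_sensor : (T, K, M) complex sensor-plane fields U_{t,k} = A_{t,k} U_{o,k}
    """
    Y_pred = np.sum(np.abs(U_sensor) ** 2, axis=1)     # sum over the spectral channels k
    return np.sum((Z - Y_pred) ** 2) / (2.0 * sigma ** 2)
```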
The alternating direction method of multipliers (ADMM) provides a valuable alternative for this sort of problem [45-48]. The following logic leads from (6) to ADMM. Let us reformulate (6) as a constrained optimization,
min l({Z_t}, {Σ_{k∈K} |U_{t,k}|²}) + f_reg({U_{o,k}}) subject to U_{t,k} = A_{t,k} U_{o,k}. (7)
The problems (6) and (7) are equivalent, with the advantage introduced by U_{t,k}, which can be treated as a splitting variable such that the optimization can be arranged sequentially over U_{t,k} and U_{o,k} for the two loss functions l and f_reg. The concept of splitting variables is at the root of many modern optimization methods (e.g., [46,47]). One popular way to resolve (7) is to replace it by the unconstrained formulation with a parameter γ > 0,
J_γ = l + f_reg + (1/γ) Σ_{k,t} ||A_{t,k} U_{o,k} − U_{t,k}||²₂. (8)
Here and in what follows, Σ_{k,t} means summation over k ∈ K and t ∈ T, and Σ_k means summation over k ∈ K. The second summand is the quadratic penalty for the difference between A_{t,k} U_{o,k} and the splitting variable U_{t,k}. It is recommended in these iterations to take γ varying and going to smaller values, γ → 0; the performance of the algorithm depends on this parameter. A valuable counterpart to (8), with decreased sensitivity to γ, is a reformulation of (7) with Lagrange multipliers Λ_{t,k} and the augmented Lagrangian loss function. For complex-valued variables, this counterpart takes the form given in [49,50], where the Lagrange multipliers are complex-valued, Λ_{t,k} ∈ C^M, and the superscript H stands for the Hermitian transpose. The complex-valued variables introduce specific features to this formulation, distinguishing it from the conventional real-valued one. The ADMM algorithm for this Lagrangian (10) is composed of the iterations (11), whose last equation updates the Lagrange multipliers. It can be verified that, as ||Λ_{t,k}||²₂ does not depend on U_{t,k} and U_{o,k}, the iterations (11) can be rewritten for (10) in the form (12). We exploit the ADMM algorithm in this form for the considered HS phase retrieval problem. The convergence of the complex-valued ADMM algorithm obtained recently in [49] can be treated as a valuable argument in favor of this type of algorithm, despite the fact that those results were shown for a different phase retrieval problem. In the following sections, we derive the solutions for the optimizations in (12).
Gaussian observations
The loss function J in the first row of (12) combines the Gaussian neg-log-likelihood with the quadratic penalty (13). For the minimization min_{U_{t,k}} J, we calculate the derivatives ∂J/∂U*_{t,k} and consider the necessary minimum conditions ∂J/∂U*_{t,k}(r) = 0. These calculations lead to a set of complex-valued cubic equations (14) with respect to U_{t,k}(r). Here, r stands for the spatial coordinates, used instead of (x, y) in the 2D image representation. Note that these equations are separated over r as well as over t. The following manipulations resolve the set (14). Calculating the squared absolute values of both sides of these equations and summing over l ∈ K, we arrive at (15). With the notations (16), we rewrite (15) in compact form as a cubic Cardano equation (17) with respect to x. Cardano's formulas give the solutions of (17). The coefficients of this equation are real-valued. It has three roots which, depending on the discriminant D, are all real-valued for D ≤ 0, while for D > 0 one root is real and the other two are complex-valued. We are looking for real-valued nonnegative solutions x ≥ 0. If there are several such solutions, we select the one which provides the smallest value of the criterion (13).
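Since the coefficient expressions (16) are not reproduced here, the sketch below only illustrates the root-selection step: given the real coefficients of the cubic (17) for one pixel, it finds all real nonnegative roots and keeps the one minimizing a caller-supplied per-pixel criterion (standing in for (13)/(19)). It is a minimal sketch, not the paper's MATLAB implementation.

```python
import numpy as np

def solve_cubic_nonneg(coeffs, criterion):
    """Solve a real cubic a*x^3 + b*x^2 + c*x + d = 0 (coeffs = [a, b, c, d])
    and return the real nonnegative root minimizing `criterion(x)`.

    `criterion` is a stand-in for the pixel-wise objective (19)."""
    roots = np.roots(coeffs)                       # all three (possibly complex) roots
    real = roots[np.abs(roots.imag) < 1e-9].real   # keep numerically real roots
    feasible = real[real >= 0.0]                   # nonnegativity constraint x >= 0
    if feasible.size == 0:
        return 0.0                                 # fallback: project onto the constraint
    return min(feasible, key=criterion)

# Example with a hypothetical pixel-wise objective:
x_hat = solve_cubic_nonneg([1.0, -2.0, -1.0, 2.0], criterion=lambda x: (x - 1.0) ** 2)
print(x_hat)  # roots are -1, 1, 2; the feasible root minimizing the criterion is 1.0
```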
If such an x̂ is selected, the solution for U_{t,k}(r) follows from (14) as given in (18). Equation (17) should be solved for each r and t separately. It follows that the criterion for the selection of the best x̂_t(r) and Û_{t,k}(r) should be pixel-wise, as in (19), where Û_{t,k}(r) is given by (18). Thus, the solution for Û_{t,k}(r) is obtained by the following three-stage procedure: (1) solution of (17); (2) analysis of the roots of the Cardano equation, selecting the one minimizing (19); (3) computation of Û_{t,k}(r) from (18). The solution Û_{t,k}(r) is a crucial point of the developed algorithm, as it allows extracting spectral information from the total observations summed over all spectral components. This solution can be treated as a synchronous detector, with x̂_t(r) an estimate of Z_t(r). The spectral resolution of this detector is restricted by the differences in the spectral properties of the image formation operators A_{t,k} and of the object U_{o,k}(r).
Poissonian observations
The observations take random integer values, which can be interpreted as the counted number of photons detected by the sensor. This discrete distribution has a single parameter μ and is defined by p(Z_t(r) = n) = μⁿ e^{−μ}/n!, the probability that the random Z_t(r) takes the value n, where n ≥ 0 is an integer. The parameter μ is the intensity flow of Poissonian random events; it is different for different experiments t and pixels r and is tied to the noiseless intensities Y_t(r). The probabilistic Poissonian observation model (20) introduces χ > 0 as a scaling parameter for the photon flow. According to the properties of the Poisson distribution, the mean value and the variance of the observed Z_t(r) are both equal to χY_t(r). In these formulas, χ defines the level of the random component in the signal and can be interpreted as an exposure time: a larger χ means a larger exposure time and a larger number of detected photons. The necessary minimum condition on U_{t,k} is calculated as in (22). To resolve the problem, we use the methodology applied to the Gaussian data. We calculate the sums over l of the squared magnitudes of the left and right sides of (22) and, using the notations (16), obtain the set of quadratic equations (24) for x. As b² − 4ac = q² + 4(1 + γχ)γZ_t(r) ≥ 0, the quadratic equations have two real roots. The estimate for U_{t,k}(r) corresponding to the solution x̂ follows from (22) as given in (25). As there are two real-valued roots, to select the proper root, which is nonnegative and the best minimizer of J, we use the pixel-wise summands of this criterion with fixed t and r. Both solutions for min_{U_{t,k}} J (Gaussian and Poissonian) can be interpreted as proximity operators, obtained by minimizing the likelihood terms (the first summand in (8)) regularized by the quadratic penalty (the second summand in (8)). With the standard compact notation for proximity operators [38,51], we denote these solutions as in (27), where f stands for the neg-log-likelihood part of J and γ > 0 is a parameter. These proximity operators resolve two problems. Firstly, the complex domain spectral components U_{t,k}(r) are extracted from observations in which all of them are measured jointly as the total power of the signal; thus, we obtain a spectral analysis of the observed signals.
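The Poissonian branch replaces the cubic with a quadratic per pixel. A minimal sketch, again with placeholder coefficients (the true a, b, c follow the paper's notations (16) and equations (24), which are not reproduced here):

```python
import numpy as np

def solve_quadratic_nonneg(a, b, c, criterion):
    """Solve a*x^2 + b*x + c = 0 for real x; the paper's discriminant
    b^2 - 4ac = q^2 + 4*(1 + gamma*chi)*gamma*Z >= 0 guarantees two real roots.
    Return the nonnegative root minimizing `criterion` (a stand-in for the
    pixel-wise summand of J with fixed t and r)."""
    disc = b * b - 4.0 * a * c
    sq = np.sqrt(max(disc, 0.0))                 # clip tiny negatives from round-off
    roots = [(-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)]
    feasible = [x for x in roots if x >= 0.0]
    if not feasible:
        return 0.0                               # fallback: project onto x >= 0
    return min(feasible, key=criterion)

# Hypothetical per-pixel coefficients:
x_hat = solve_quadratic_nonneg(1.0, -3.0, 2.0, criterion=lambda x: abs(x - 2.2))
print(x_hat)  # roots are 1.0 and 2.0; the criterion picks 2.0
```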
Secondly, the noisy observations are filtered, with the strength controlled by the parameter γ, compromising between the noisy intensity observations Z_t and the power of the predicted signal A_{t,k} U_{o,k} at the sensor plane. We introduce the term 'Spectral Proximity Operator' (SPO) for (27). Proximity operators have already been introduced for phase retrieval in its monochromatic version (single wavelength, K = 1) in [52,53]. A generalization of these proximity operators to a sum of intensity measurements is given in [52], where it was exploited for phase retrieval from sub-sampled data. However, this generalization is not sufficient to deal with the considered setup of HS phase retrieval. The proposed spectral proximity operator is new. Cubic and quadratic equations appear for Gaussian and Poissonian observations, respectively, also in monochromatic phase retrieval, but there they are scalar equations, while for the considered HS problem we need to resolve sets of cubic and quadratic equations. The spectral resolution properties of these proximity operators define their unique peculiarity. In our MATLAB implementation of SPO, the calculations for all pixels r are done in parallel, using analytical representations for the solutions of the quadratic and cubic equations (17) and (24).
Minimization on U_{o,k}
Minimizing (12) on U_{o,k}, we drop the regularization term and replace its effects by joint filtering of {U_{o,k}, k ∈ K}. This approach is in line with a recent tendency to use efficient filters for the regularization of ill-posed solutions without formalizing the regularization penalty in variational setups of imaging problems (e.g., [54]). As a replacement for the prior, we use the Complex Cube Filter (CCF) [41]. The solution for U_{o,k} is then a regularized least-squares estimate of the form U_{o,k} = (Σ_t A^H_{t,k} A_{t,k} + β_reg I)^{−1} Σ_t A^H_{t,k} Ũ_{t,k}, where Ũ_{t,k} denotes the current sensor-plane estimate corrected by the Lagrange multipliers, and the regularization parameter β_reg > 0 is included if Σ_t A^H_{t,k} A_{t,k} is singular or ill-conditioned.
HS complex domain phase retrieval (HSPhR) algorithm
Let us present the iterations of the algorithm developed from the above solutions. In outline, the steps are: (1) complex domain initialization; (2) forward propagation; (3) update of the wavefront at the sensor plane; (4) update of the Lagrange variables; (5) backward propagation and preliminary object estimation; (6) CCF filtering of the object estimate. The complex domain initialization (Step 1) is required for the considered spectral domain k ∈ K. In our experiments, we assume a 2D random white-noise Gaussian distribution for phase and a uniform 2D positive distribution on (0,1] for amplitude, independent for each k. The algorithm's robustness to the randomness of the initialization was tested on 100 different initializations, which resulted in successful reconstructions with a standard deviation of only 0.001 in the relative error value. The Lagrange multipliers are initialized with zero values, Λ_{t,k} = 0. The forward propagation is produced for all k ∈ K and t ∈ T (Step 2). The update of the wavefront at the sensor plane (Step 3) is produced by the proximity operators: for Gaussian observations this operator is defined by (17)-(18), and for Poissonian observations by (24)-(25). It requires solving the polynomial equations, cubic or quadratic for the Gaussian and Poissonian cases, respectively. The initial iterations are produced with zeroed Lagrange variables. In Step 4, the Lagrange variables are updated starting from s > s₁, where s₁ is given. This delay in involving the Lagrange variables can be used to shorten the calculation time and to improve the convergence of the algorithm. A similar delay, for similar reasons, is used in the application of the CCF filter in Step 6.
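To make the data flow concrete, here is a skeletal, heavily simplified iteration loop in the spirit of Steps 1-6. The operators `A`, `A_adj`, the sensor-plane update `spo`, and the filter `ccf` are placeholders for the quantities defined above; the γ handling, the delay s₁, and the diagonalized object update (which assumes near-unitary A so that Σ_t A^H A ≈ T·I) are schematic assumptions, not the authors' MATLAB code.

```python
import numpy as np

def hsphr(Z, A, A_adj, spo, ccf, K, T, shape, n_iter=300, s1=50, beta=1e-3):
    """Skeleton of the HSPhR iterations (Steps 1-6), heavily simplified.

    A(t, k, u)     -- forward operator A_{t,k} (assumed near-unitary here)
    A_adj(t, k, v) -- its adjoint A_{t,k}^H
    spo(t, Z_t, V) -- sensor-plane update via the spectral proximity operator (Step 3)
    ccf(U)         -- complex-domain cube filter standing in for the prior (Step 6)
    """
    rng = np.random.default_rng(0)
    # Step 1: random complex initialization; zero Lagrange multipliers.
    U_o = [rng.uniform(0, 1, shape) * np.exp(1j * rng.normal(size=shape)) for _ in range(K)]
    Lam = [[np.zeros(shape, complex) for _ in range(K)] for _ in range(T)]

    for s in range(n_iter):
        # Steps 2-3: forward propagation and sensor-plane update, per experiment.
        U_t = [spo(t, Z[t], [A(t, k, U_o[k]) + Lam[t][k] for k in range(K)])
               for t in range(T)]
        # Step 4: multiplier update, enabled only after the delay s1.
        if s > s1:
            for t in range(T):
                for k in range(K):
                    Lam[t][k] += A(t, k, U_o[k]) - U_t[t][k]
        # Step 5: backward propagation and (diagonalized) regularized LS object update.
        for k in range(K):
            acc = sum(A_adj(t, k, U_t[t][k] - Lam[t][k]) for t in range(T))
            U_o[k] = acc / (T + beta)
        # Step 6: joint filtering of the spectral cube, also delayed.
        if s > s1:
            U_o = ccf(U_o)
    return U_o
```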
The backward propagation of the wavefront from the sensor plane to the object plane is combined with an update of the spectral object estimate in Step 5. The sparsity-based regularization by the Complex Cube Filter (CCF) is relaxed by the weight parameter 0 < β_s < 1 in Step 6. The iteration number n is fixed in this algorithm implementation. However, the stop condition could instead be based on stagnation of the reconstruction, by calculating the relative error between the consecutive reconstructions U^s_{o,k} and U^{s−1}_{o,k} and stopping the iterations when it falls below a threshold. The CCF algorithm is designed to deal with 3D complex-valued cube data and is introduced in detail in [41]. Examples of its application and features of this algorithm can be seen in [44,55,56]. A few notes on this algorithm. The CCF algorithm is based on an SVD analysis of the HS complex-valued cube. It identifies an optimal subspace for the HS image representation, including both the dimension of the eigenspace and the eigenimages in this space. The Complex-Domain Block-Matching 3D (CDBM3D) algorithm [57,58] filters these eigenimages. Going from the eigenimage space back to the original image space, we obtain the reconstruction of the object cube. CDBM3D is developed as an extension to the complex domain of the popular BM3D algorithm [59] and is implemented in MATLAB as the Complex Domain Image Denoising (CDID) Toolbox [58]. Sparsity modeling in CDBM3D is based on patch-wise 3D/4D block-matching grouping, and 3D/4D Higher-Order Singular Value Decomposition (HOSVD) is exploited for block-wise spectrum design, analysis, and filtering (thresholding and Wiener filtering). Three types of sparsity are developed in CDID, corresponding to the following representations of complex-valued variables: I. complex variables (3D grouping); II. real and imaginary parts of complex variables (4D grouping); III. phase and amplitude (4D grouping). In the forthcoming simulation tests, we use the joint real/imaginary sparsity (Type II).
Numerical experiments
We present the results of numerical experiments in order to evaluate the empirical performance of the developed algorithms. Phase and amplitude imaging by a lensless computational microscopy system is demonstrated. The optical setup illustrating the image formation is shown in Fig. 1.
Fig. 1. Schematic optical setup corresponding to our tests and data formation model: 'Laser' is a broadband coherent light source; the object 'O' and the phase mask (implemented by SLM) are flat and located in the same plane; 'Sensor' is a registration camera, and d is the distance from the object/mask plane to the sensor.
A broadband coherent laser beam of uniform intensity distribution, wavelength range [400, 700] nm, impinges on a transparent thin object O. The phase coding modulation mask M_t is attached to the object. The phase encoding by the mask is implemented by a spatial light modulator (SLM), a pixel-wise programmable device that allows the mask phase-delay M_t (phase coding) to be changed from experiment to experiment. The laser beam propagates through the object, goes through the phase mask M_t, and freely propagates in the air to the sensor. After that, the intensity of the beam is registered by the sensor as a coded broadband diffraction pattern.
The free space propagation linking U_{o,k} and U_{t,k} is modeled by the angular spectrum (AS) operator (29), calculated using the forward F{·} and inverse F^{−1}{·} Fourier transforms, where the angular spectrum transfer function AS_{λk} in the 2D Fourier domain (f_x, f_y) is defined according to [60] in (30). The λ_k in Eq. (29) are the wavelengths corresponding to the spectral components from K, and d is the distance from the object/mask plane to the sensor plane (see Fig. 1); d = 2 mm in our tests. Eqs. (29) and (30) are written for 2D variables; thus, the coding phase mask M_{t,k} and the object U_{o,k} are 2D complex-valued functions. Comparing (29) with (2), the linear operator A_{t,k} as a function of k is defined by the spectrum-varying AS_{λk} and the spectrum-varying phase mask M_{t,k}. The variation of the operator A_{t,k} from experiment to experiment (index t) is governed by the masks M_t. The developed HS phase retrieval algorithm can be treated as a nonlinear comb-filter. The independent random phase masks M_{t,k} are crucial instruments enabling both the spatial and spectral resolution of this comb-filter. A larger number K of spectral channels requires a larger number of experiments T and, correspondingly, T different phase masks (see the tests in Section 3.1). It can be shown that the matrices Σ_t A^H_{t,k} A_{t,k} in Step 5 of HSPhR can be well conditioned, with a dominant main diagonal, provided T is large and the randomness of M_t is properly selected. In this case, a high spatial and spectral resolution is guaranteed for HSPhR. The HS wavefronts and measurements are modeled by propagating the broadband laser beam through the object and the phase modulation masks. Both are modeled by the complex-valued transfer function (31) of the form g = a·exp(jφ_λ) [60], where the amplitude a and the phase delay φ_λ are defined by the thickness h of the object/mask and the refractive index n_λ, which depends on λ; the phase delay of the wavefront propagating through g is given by the argument of the exponential function in (31). It is assumed in our tests that the amplitude a and the thickness h are spatially varying in (x, y) but invariant with respect to the spectral (wavelength) arguments k and λ. The model (31) shows that the properties of the object and the masks are spectrally varying due to λ in the argument of the exponential and to the dependence of the refractive index n_λ on λ. The set of wavelengths λ_k, k ∈ K, used in the algorithm defines a spectral sampling of the object spectrum and the wavelengths for which HS phase retrieval is produced. The spatial sampling corresponds to the camera pixel size, equal to 3.45 μm. We assume that the amplitude and phase of the object are 64 × 64 images: 'peppers' for amplitude and 'cameraman' for phase. We take these images to be quite different in order to test how far phase and amplitude can be separated by the proposed algorithm. The phase is scaled in such a way that the object phase delay for all used wavelengths lies in the range [0, π]. The amplitude a is scaled to the interval (0,1]. For the coding mask, a = 1, and h is pixel-wise random, taking one of five prescribed values with equal probabilities. For each of the T experiments, the masks are generated independently; thus, overall we have T different masks. The forward propagation (image formation) operator A_{t,k} in (2) is defined by the propagation model (29) and the modulation masks. It is clear from the model (29) that the propagation model is wavelength-varying. The same is true for the modulation mask: it is fixed for each experiment, but its spectral properties vary with λ according to (31).
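A compact sketch of the angular spectrum propagation (29)-(30), written from the standard form of the AS transfer function (our transcription, since the display equations are not reproduced here); the distance d and the wavelength interval follow the values quoted in the text, while the field itself is a toy example:

```python
import numpy as np

def angular_spectrum(u, lam, d, pixel=3.45e-6):
    """Propagate a 2D complex field u over distance d (standard AS method).

    Transfer function: H(fx, fy) = exp(i*2*pi*(d/lam)*sqrt(1 - (lam*fx)^2 - (lam*fy)^2)),
    with evanescent components suppressed."""
    n, m = u.shape
    fx = np.fft.fftfreq(m, d=pixel)
    fy = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (lam * FX) ** 2 - (lam * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * (d / lam) * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Wavelength grid covering [400, 700] nm and the d = 2 mm used in the tests:
lams = np.linspace(400e-9, 700e-9, 6)
u0 = np.ones((64, 64), complex)                   # toy flat wavefront
fields = [angular_spectrum(u0, lam, 2e-3) for lam in lams]
```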
As the phase mask is included in the operator A_{t,k}, the complex-valued image at the sensor plane is defined as U_{t,k} = A_{t,k} U_{o,k}, where the object is also spectrally varying according to (31). The measured intensities are Y_t = Σ_{k∈K} |U_{t,k}|². In our simulation, we assume that the refractive index in (31), as a function of λ, is known and calculated according to Cauchy's equation [61] with parameters taken for BK7 glass [62]. The input laser beam is uniform in both phase and amplitude, with the amplitude equal to 1 and the phase equal to 0. We formulate the phase retrieval as the reconstruction of U_{o,k}. The transfer function (31) is used for the calculation of the observations Y_t and of the phase delays in the masks, but it is not used in the algorithm iterations. We evaluate the accuracy of the complex-valued reconstruction by the relative error criterion introduced in [28], in our notation ERROR_rel = ||x̂ − x||₂/||x||₂, where x and x̂ are the true signal and its estimate, respectively. The noise level in the observations is characterized by the signal-to-noise ratio (SNR) in dB. For the Gaussian noise case, SNR = 10 log₁₀(Σ_t ||Y_t||²₂ / Σ_t ||Z_t − Y_t||²₂), where Y_t are the noiseless and Z_t the noisy observations. For the Poissonian observations, SNR_Poisson is defined analogously in dB, with the noise level controlled by the scaling parameter χ following (20). The efficiency of the developed algorithm is demonstrated in simulation tests with Gaussian and Poissonian observations. In these experiments, we present the results achieved by the developed algorithms using both the Lagrange multipliers (Step 4) and the CCF filtering (Step 6), as in the HSPhR algorithm of Subsection 2.4. In order to evaluate the contribution of these two key components of the algorithm, we also show the results obtained when these components are disabled.
Accuracy as a function of K and T
The dependence of the algorithm's accuracy on the number of wavelengths K and the number of observations T is of special interest. We calculate the accuracy criterion ERROR_rel for K = [2, 4, 8, 12] and T = [2, 6, 12, 24, 36]. The wavelengths for the varying K (spectral channels) are defined as uniformly covering the interval [400, 700] nm. The number of iterations is fixed at n = 300. The relative errors ERROR_rel obtained in these experiments are shown in Table 1. A larger value of T for a fixed K leads to a more accurate reconstruction with a smaller ERROR_rel. We found that visually good phase imaging is achieved provided ERROR_rel < 0.1. This threshold is overcome for K = 2 at T = 6; for K = 4, 8, 12, it happens at T = 6, 12, 24, respectively. We may conclude that T ≥ 3K is sufficient for good-accuracy HS phase imaging. The results in Table 1 are given for nearly noiseless data with SNR = 54 dB, but the conclusion that the inequality T ≥ 3K is sufficient for good-accuracy phase imaging holds for noisy data as well. This conclusion is one of the reasons to use T = 18 for K = 6 in the forthcoming tests.
Gaussian observations
The relative error maps for the HSPhR algorithm are shown in Fig. 2 for SNR taking values in [14, 55] dB. The left image, Fig. 2(a), demonstrates the accuracy as a function of SNR and λ, while the right image, Fig. 2(b), provides the accuracy as a function of SNR and the number of iterations. In the latter image, the relative errors are averaged over λ, K = 6. We may conclude from the left image that acceptable quality imaging, ERROR_rel < 0.1, is achieved for all wavelengths provided that the SNR is larger than 28 dB. The accuracy for smaller wavelengths is higher than for larger ones.
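These two accuracy and noise measures are easy to reproduce; the sketch below uses the formulas as transcribed above (any phase-alignment convention of [28] is omitted), with toy arrays standing in for the true data:

```python
import numpy as np

def error_rel(x_true, x_hat):
    """Relative reconstruction error ||x_hat - x||_2 / ||x||_2 for complex arrays."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

def snr_db(Y, Z):
    """Gaussian SNR in dB from noiseless observations Y and noisy observations Z."""
    num = sum(np.sum(np.abs(y) ** 2) for y in Y)
    den = sum(np.sum(np.abs(z - y) ** 2) for y, z in zip(Y, Z))
    return 10.0 * np.log10(num / den)

# Toy usage with hypothetical arrays:
rng = np.random.default_rng(1)
x = np.exp(1j * rng.uniform(0, np.pi, (64, 64)))
print(error_rel(x, x + 0.01 * rng.normal(size=x.shape)))
```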
The convergence rate can be evaluated from the right image. After 200 iterations and for SNR > 28 dB, the accuracy becomes acceptable, ERROR_rel < 0.1. The bend in the accuracy map in Fig. 2(b), clearly visible at iteration 50, is of special interest: the algorithm's iterations on CCF and the Lagrange variables are disabled in this experiment up to the 50th iteration. Thus, we can observe the dramatic improvement in the convergence rate due to CCF and the Lagrange variables. In Fig. 3, we show the relative error maps obtained when both the Lagrange variables and CCF are completely disabled in the HSPhR algorithm.
Fig. 3. Relative error maps for the HSPhR algorithm with disabled Lagrange variables and CCF filter, Gaussian noise: (a) ERROR_rel as a function of SNR and wavelength; (b) ERROR_rel as a function of SNR and iteration number, averaged over the wavelengths. The achieved accuracy is much worse than that of the HSPhR algorithm with enabled iterations on the Lagrange variables and the CCF filter shown in Fig. 2.
The degradation of the algorithm in terms of ERROR_rel is clear in this case; in particular, it is demonstrated by comparing the error maps in Fig. 2 and Fig. 3. The relative errors in Fig. 3 are always larger than 0.1, i.e., the imaging accuracy is unacceptable. Both the Lagrange variables and CCF can be activated from the first iterations, which may improve the convergence rate, but at the price of more expensive calculations. The accuracy of the ADMM algorithm as a function of s₁ is demonstrated in Fig. 4. This curve shows that the best accuracy for 100 iterations is achieved for s₁ = 50, which supports our choice of s₁ = 50 in the experiments.
Poissonian observations
For a scenario identical to that considered for Gaussian noise, we present the results obtained for Poissonian observations. The level of the noise is controlled by the parameter χ, which takes values corresponding to SNR in the interval [14, 44] dB. The maps of the relative errors are shown in Fig. 5. The left image, Fig. 5(a), demonstrates the accuracy as a function of SNR and λ, while the right image, Fig. 5(b), provides the accuracy as a function of SNR and the number of iterations. In the latter image, the relative errors are averaged over λ, K = 6. We may conclude from the left image that acceptable quality imaging, ERROR_rel < 0.1, is achieved for all wavelengths provided that the SNR is larger than 32 dB. The convergence rate is well seen from the right image: after 150 iterations and for SNR > 32 dB, the accuracy becomes acceptable. The visible bend in the accuracy map in Fig. 5(b) at the 50th iteration demonstrates the effects of CCF and the Lagrange variables, which are disabled up to the 50th iteration; we again observe the dramatic improvement in the convergence rate due to CCF and the Lagrange variables. In Fig. 6, we demonstrate the results obtained by the algorithm with the Lagrange iterations and CCF filtering completely disabled. Similarly to the Gaussian case, we note a serious degradation of the algorithm performance, with ERROR_rel > 0.2, which confirms the essential role of both the Lagrange iterations and the CCF filtering in the algorithm's performance.
Imaging
In this subsection, we evaluate the visual quality of the reconstructions. The algorithm provides HS broadband imaging, i.e., the reconstruction of 3D complex-valued cubes.
In Fig. 7, we show 2D images of amplitude and phase for the middle wavelength of the interval, λ = 640 nm, in a nearly noiseless case, SNR = 54 dB, for the Gaussian and Poissonian data. In all cases, the images of the amplitude and phase are of high visual quality. The relative errors are low, equal to 0.019 and 0.018 for the Gaussian and Poissonian data, respectively. The images are shown in square frames in order to emphasize that the size and location of the object support are assumed to be unknown and are reconstructed automatically by the HSPhR algorithm. The true image support is used only for the computation of the observations produced for the zero-padded object and is not exploited in the algorithm's iterations. For the amplitude images, the estimates in these frames take nearly ideal zero values, while the phase estimates take random values in the frame areas, as the phase cannot be defined where the amplitude equals zero. Practically, these variations of the phase do not influence the calculation of ERROR_rel, as the amplitude is estimated quite accurately in these areas and is close to zero. Nevertheless, the ERROR_rel values shown in the images are calculated for the central parts of the images corresponding to the true location of the image support. Fig. 8 shows the results for the noisy case of SNR = 34 dB. At this level of SNR, the noise effects are essential: the relative errors are much higher, with values of 0.058 and 0.08 for the Gaussian and Poissonian data, respectively. In Fig. 9, we show the images obtained by the HSPhR algorithm for the same noisy scenario, SNR = 34 dB, with the Lagrange iterations and CCF filtering disabled. The obtained amplitude/phase images are very noisy, and the visual quality of the imaging is low compared with the results in Fig. 8; the relative errors take higher values, equal to 0.39 and 0.36 for the Gaussian and Poissonian observations, respectively. These experiments provide a visual and numerical confirmation of the above-discussed role of the Lagrange iterations and CCF filtering as the key components of the HSPhR algorithm. The complexity of the algorithm is illustrated by the computational time per iteration required for the test images shown, K = 6, T = 18. This time is equal to 0.8 s for calculations without CCF and 21.5 s for calculations with CCF. All calculations are done in MATLAB R2019b on a computer with 32 GB RAM and a 3.40 GHz Intel Core i7-3770 CPU. The MATLAB demo code of the developed algorithm is publicly available as Supplementary Code 1.
Optical setup of hyperspectral QPI
In this section, we demonstrate the performance of the developed algorithms in a real-life scenario. The phase object and the modulation by the masks M_t are realized by a spatial light modulator (SLM). The optical setup implemented in the physical experiments is shown in Fig. 10. The SLM is a Holoeye GAEA-2 with a resolution of 4160 × 2464 pixels and a pixel size of 3.74 μm. A super-continuum laser (YSL Photonics CS-5), spectral range 550-2400 nm, is the illumination source. To work within the spectral range of the sensor, we limit the laser's spectral bandwidth to 550-1000 nm by a bandpass filter. The camera is a monochrome FLIR Blackfly S, model BFS-U3-200S6M, with a pixel size of 2.4 μm. In Fig. 10, the illumination wavefront, expanded by the lenses 'L1' and 'L2', propagates to the SLM through the beamsplitter (BS), where the SLM changes the wavefront phase distribution according to the object and modulation mask phases.
Next, the achromatic doublet lenses 'L3' and 'L4' (diameter 12.7 mm, focal length 50 mm) relay the wavefront to the 'Object' plane. From there it propagates freely to the registration camera 'CMOS'; the free propagation distance is d = 2 mm. The normalized laser spectrum, multiplied by the camera's sensor spectral response, is shown as an insert in Fig. 10.
Experiment organization
The object to be imaged is a phase object, i.e., its amplitude is invariant and its phase is spatially varying. For the distribution of the object phase variation, we selected the 'cameraman' test image. The phase variation is scaled to the interval [0, π] for λ = 520 nm, corresponding to our simulation tests. In a similar way, the encoding phase mask M_t is random, with pixel-wise phase values taking, with equal probabilities, the five values [0, 1, −1, 1/2, −1/2]·π/4 for λ = 520 nm. In each measurement t, the phase shift implemented by the SLM is the sum of the object phase and the encoding mask phase; thus, both the object and the encoding masks are implemented by the SLM. In the shown results, the number of measurements is T = 243 (intensity observations Z_t), with a reconstruction of K = 81 spectral channels. The ratio is T/K = 3, i.e., the number of spectral channels is three times smaller than the number of observations. The physical distance d between the 'Object' plane and the camera was fixed to 2 mm, as in the simulation tests. In order to improve the focusing sharpness of the back-propagation in the AS calculations, we tested various values of d. It was found that the best results are achieved with spectrally varying distances calculated as d_k = 2.3 − (0.5/80)(k − 1) mm, k = 1, …, K. This modification of the algorithm compensates the supercontinuum laser's phase curvature for the separate wavelengths. To take into account the irregularity of the wavefront from the laser, following [65], we performed preliminary experiments without the object and used these observations to reconstruct the corresponding HS wavefront amplitude and phase distributions at the object plane. Let us denote these complex-valued distributions, calculated for all spectral bands, by Û_k. We use these distributions to correct the results obtained from the observations with the object: the phase distributions of Û_k are subtracted from the object reconstructions U_{o,k}. The HS phase retrieval results that follow are obtained using these corrections.
Noise in intensity observations
Special tests were produced in order to evaluate the noise characteristics of the measured intensity observations. The results are obtained from 3000 experiments performed with a fixed random encoding phase mask programmed on the SLM. Fig. 11(a) shows the image of the pixel-wise mean value of the 3000 intensity images; the histogram of these mean values is given in Fig. 11(c). The pixel-wise image of the standard deviation (STD) of the intensity observations is shown in Fig. 11(b), and the histogram of this STD image is given in Fig. 11(d). We may conclude from the latter histogram that the STD is spatially nearly invariant, with its distribution highly concentrated at a value of 3 a.u. It follows that the additive Gaussian model for the noise in the intensity observations, with invariant STD, is quite a relevant model for our experiments. Based on this analysis, we use the Gaussian version of the HSPhR algorithm for the data processing in the physical tests.
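The noise characterization above amounts to per-pixel first and second moments over a stack of repeated frames; the spectrally varying distances d_k are a simple linear schedule. A brief sketch (the frame stack and its shape are hypothetical):

```python
import numpy as np

# Hypothetical stack of 3000 repeated intensity frames, shape (3000, H, W):
rng = np.random.default_rng(2)
frames = 100.0 + rng.normal(0.0, 3.0, (3000, 64, 64))

mean_img = frames.mean(axis=0)        # pixel-wise mean (cf. Fig. 11(a))
std_img = frames.std(axis=0, ddof=1)  # pixel-wise STD  (cf. Fig. 11(b))

# Spatial invariance check: the STD histogram should be tightly concentrated.
print(np.median(std_img), np.percentile(std_img, [1, 99]))

# Spectrally varying back-propagation distances d_k = 2.3 - (0.5/80)*(k-1) mm:
K = 81
d_k = 2.3e-3 - (0.5e-3 / 80.0) * np.arange(K)   # in meters, from 2.3 mm down to 1.8 mm
```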
Reconstruction results
The imaging results for the object phase and amplitude are shown in Fig. 12. The top row corresponds to the true amplitude and phase images as programmed on the SLM, and the bottom row shows the reconstructed images. Fig. 12(a,b,c) are obtained for λ = 600, 745, 900 nm, respectively. See all spectral reconstructions in Supplementary Video 1. For each wavelength, the reconstructed phase images correspond to the true phases in great detail. We would like to mention that the HSPhR algorithm does not use a support constraint, yet the support is reconstructed automatically with high accuracy. The reconstructed amplitudes are not spatially invariant as they are in the true images. These amplitude variations are caused by leakage of phase information into the amplitude, which can be attributed mainly to the non-ideality of the SLM performance as well as to misalignment effects in the optical setup. Although the quality of imaging is much lower in the physical experiments than in the simulation tests, the remarkable result is that the spectral analysis is produced for 81 spectral channels, and it is done by the numerical algorithm without traditional optical spectrometry. Recall that the ability to resolve the spectral channels is based on the spectral properties of the algorithm. The blue curve in Fig. 13 shows the normalized spectral distribution of the total image intensity (intensities summed over the spatial domain) as a function of wavelength; this intensity is calculated for the images reconstructed by the HSPhR algorithm. For comparison, we show in this figure the normalized spectral intensity of the laser beam (red curve) multiplied by the camera sensor response function, as in Fig. 10; this curve is obtained by intensity measurements using a commercial spectrometer. The red and blue curves are close to each other, which shows that the developed algorithm properly preserves the spectral intensity distribution of the illumination source.
Conclusion
A novel formulation of the HS broadband phase retrieval problem is proposed, where the object and image formation operators vary spatially and spectrally. The proposed algorithm is based on the complex domain version of the ADMM technique. The derived spectral proximity operators and the CCF noise suppression are important elements of this algorithm. The spectral proximity operators are defined for Gaussian and Poissonian observations and calculated by solving sets of cubic (for Gaussian) and quadratic (for Poissonian) algebraic equations. The ability to resolve the HS phase retrieval problem and to find the spectrally varying object components U_{o,k}, k ∈ K, depends critically on the spectral properties of the operators A_{t,k}. The object model is not used in the algorithm, and the modulation phase masks are calculated in advance and fixed in the algorithm iterations. The simulation and experimental tests demonstrate that HS phase retrieval in this formulation can be resolved with quite general modeling of the object and image formation operators. The MATLAB demo codes of the developed algorithm are made publicly available as Supplementary Code 1.
Declaration of Competing Interest
The authors declare no competing interests.
Data availability
Data will be made available on request.
Supplementary material
Supplementary material associated with this article can be found, in the online version, at 10.1016/j.sigpro.2023.109095
10,859.8
2021-05-14T00:00:00.000
[ "Physics" ]
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptography
The use of the logistic function as a random number generator in a cryptography algorithm is capable of accommodating the diffusion property of the Shannon principle. The problem that occurs is that the initialization x₀ was static and unaffected by changes in the key, so the algorithm would generate a random number sequence that is always the same. This study designs three schemes that provide flexibility of the input keys when conducting the examination of the domain of the logistic function. The results of the schemes show no pattern that is directly or inversely proportional to the value of x₀ and the relative error of x₀, and they successfully fulfill the butterfly effect property. Thus, the generation of chaos numbers by the logistic function can be accommodated based on key inputs. In addition, the resulting random numbers are distributed evenly over the chaos range, thus reinforcing the algorithm when used as a key in cryptography.
INTRODUCTION
The logistic function f(x) = rx(1 − x), or in iterative form x_{i+1} = r·x_i(1 − x_i), is usually used as a generator in the generation of chaos-based random numbers [1]. This function is able to accommodate the diffusion property of Shannon's principle in cryptography algorithms, since it is sensitive to initial values. The research in [2], [3], [4] uses static numbers as the initialization of the logistic function in cryptography algorithms. This means that changes to the key have no effect on the random numbers, so the algorithm will still produce a random number sequence that is always the same. The research in [5] uses numbers as input values filled in directly by the user; this, however, becomes inefficient, since the user needs more input beyond the key and plaintext. This research not only combines algorithms, as in [6], [7], [8], but designs a new algorithm to obtain a unique key. Additionally, the function will generate random numbers only if the initialization domain of r and of the seed x₀ is limited to certain values, 0 < x₀ < 1 and r = 4. The value of r is constant, so the strength of the algorithm rests on the value of x₀. Moreover, an examination process is needed using schemes that can increase the complexity space of x₀ while remaining in the domain of the logistic function. This study provides flexibility of key inputs that is efficient and can generate different random numbers from the logistic function.
PROPOSED EXAMINATION SCHEME
The examination process of the logistic function domain is conducted for a wide variety of inputs that may be used as a key. Suppose the key k₁, k₂, …, k₈ is the result of a conversion of eight ASCII characters. The key input is set to up to eight characters, considering the user's ability to recall the key and also the complexity of the key-guessing space when too few characters are used. For each key input of fewer than eight characters, a padding process is performed with the character '§', equivalent to 167 in (extended) ASCII. This study provides three algorithms used for the examination process of the initialization value in the domain 0 < x₀ < 1, as shown in the general scheme in Figure 1. The ratio r_a is designed as a comparison of two values to accommodate the domain of the initialization. Let r_a = p_a/q_a where q_a > p_a, for a = 1, 2, 3 and p_a, q_a ∈ R.
The First Scheme
Given each k_i ∈ Z₂₅₆ for i = 1, 2, …, 8, the decimal base numbers resulting from the ASCII conversion. To be able to make changes at every turn of the input, index values d_j ∈ Z₂₅₆ for j = 1, 2, …, 8 are given as constant values which are multiplied by the values k_i. Scheme-1 is determined using r₁ = p₁/q₁. The determination of the p₁ value is obtained from the sum of the products of k_i and d_i performed position by position, except that k₁ is crossed with d₅ and k₁ is squared. Multiplication over position differences is also used to obtain the q₁ value, but summed with the average value of the k_i. The multiplication combination is applied as a variation to gain a unique ratio value, taking into account the requirement q₁ > p₁.
Second scheme
Scheme-2 also uses the same decimal numbers and index values, where k_i, d_j ∈ Z₂₅₆ for i, j = 1, 2, …, 8. The ratio value is determined by the equation r₂ = p₂/q₂. The p₂ value is obtained using the average of the first six values of k_i, whereas the q₂ value is the sum of the products of the k_i and d_i values according to the order of each value.
Third scheme
Every k_i, d_j ∈ Z₂₅₆ for i, j = 1, 2, …, 8. Scheme-3 is obtained based on r₃ = p₃/q₃, where the determination of the numerator and denominator is given in Equations (5) and (6). The p₃ value is the average of the products of k_i with d_i only at the first, third, and sixth values with their respective indices. The value of q₃ is the sum of the k_i and d_i products based on the index, with constants selected by different increments.
Analysis of Examination Process
The domain examination of the logistic function is performed based on the three schemes given in the previous section. Referring to the general scheme in Figure 1, tests cover the variations of inputs that may be used as a key. Each possible key is an ASCII character whose decimal base is in the range 0 to 255. The problem is that not all numbers correspond to a printable character. Consequently, the test for the lowest number cannot start from decimal 0 but from decimal 32, which corresponds to the space character, whereas the test for the largest number is at decimal 255. In addition to character testing for the minimum and maximum decimals, keys are also tested with a one-bit difference, so that it can be seen how sensitive each scheme is in generating initialization values. Table 1 shows the simulation results of each scheme in obtaining the value of x₀. Testing with a one-bit difference (numbers 1 through 6) shows changes in values beginning at the 4th or 5th digit of the mantissa of the x₀ value in each scheme. Accordingly, the domain examination process for each scheme succeeds in generating different initializations. This condition corresponds to the requirements of the logistic function for obtaining chaos-based random numbers. Tests with the same eight-character input based on the smallest decimal, a medium decimal, and the largest decimal are given successively as numbers 7, 8, and 9 in Table 1. The initialization values in scheme-2 and scheme-3 take the same value even though the inputs are very different. This occurs because the determination of the ratios r₂ and r₃ uses an averaging process to get the numerator, while the denominator uses the addition and multiplication of characters with the index values at the same positions.
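A compact sketch of the overall pipeline: pad the key to eight characters with '§' (code 167), build a ratio r_a = p_a/q_a from the character codes and index constants, and use it as the seed x₀ for the logistic map with r = 4. The paper's exact p_a/q_a formulas (Equations (1)-(6)) are not reproduced here, so the combination below is a simplified, scheme-2-like stand-in, labeled as such; the index constants d are hypothetical.

```python
def pad_key(key: str) -> list[int]:
    """Pad/truncate the key to 8 characters using '§' (code 167) and return codes."""
    key = (key + "\u00a7" * 8)[:8]
    return [ord(c) % 256 for c in key]

def seed_from_key(key: str, d=(3, 5, 7, 11, 13, 17, 19, 23)) -> float:
    """Toy scheme-2-like seed: numerator from an average of character codes,
    denominator from a sum of code*index products (guaranteeing q > p > 0)."""
    k = pad_key(key)
    p = sum(k[:6]) / 6.0 + 1.0                        # average-based numerator
    q = sum(ki * di for ki, di in zip(k, d)) + p + 1  # ensures q > p
    return p / q                                      # x0 in (0, 1)

def logistic_stream(x0: float, n: int, r: float = 4.0):
    """Iterate x_{i+1} = r * x_i * (1 - x_i) and yield n chaos values."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        yield x

print(seed_from_key("secret"), list(logistic_stream(seed_from_key("secret"), 3)))
```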
On the other hand, scheme-1 uses a combination of multiplications over position differences, squaring, and averaging. In that case, scheme-1 keeps generating different initialization values.
Relative Error Test
Relative error testing [9] was conducted to see whether a linear character reduction would also yield linear results in the x₀ values, either proportionally or inversely proportionally. Using E_R = |c_a − p_a|/c_a · 100%, the key 'ÿÿÿÿÿÿÿÿ' is chosen as the reference value c_a (number 1 in Table 2), and the approximation value p_a is taken as a key input of fewer than eight characters 'ÿ' (iterated from number 2 to number 8). The relative error results in each scheme do not show a proportional or inversely proportional pattern. This will make it difficult for a cryptanalyst to see the pattern of input changes on the same character.
Linear Regression Test
A one-to-one correspondence between input and output is also important, so that it is difficult for a cryptanalyst to reconstruct the scheme and predict the key used as input. This relationship can be seen through a linear regression test based on the rate of change of the resulting x₀ values. Figure 2 shows the results of each scheme visualized using a scatter plot. The diagram of each scheme shows no linear relationship, because when the curve fitting process is applied, the coefficient of determination (R²) is close to zero. This test illustrates that patterned changes to the key characters will not produce initial values x₀ that are linearly patterned, either proportionally or inversely.
Butterfly Effect Test
The butterfly effect test is used to indicate whether a change of bits in the key input gives a large change in the output. Suppose that the comparator keys ZZZZZZ and ZZZZZY, which differ by one bit, are selected. The result of the two keys with scheme-1 is obtained as random numbers over the first 500 iterations, shown in Figure 3. Visually, the random numbers generated are very different, although the difference in the initialization value x₀ is only 0.0000151538 ≈ 1.5 × 10⁻⁵, or 0.063% in relative terms. A minor change in input and a major change in output proves that scheme-1 has successfully fulfilled the butterfly effect. The initial value difference of x₀ is 0.0487% in scheme-2, but the rate of change is very significant, as appears in the scatter diagram in Figure 4; so scheme-2 has also fulfilled the butterfly effect test. Significant changes also occur in the random values of scheme-3, as shown in Figure 5, although the difference in the input value x₀ is 0.0399%; so scheme-3 also meets the butterfly effect property. The three schemes have succeeded in fulfilling the butterfly effect property; thus, the generation of chaos numbers by the logistic function can be accommodated based on key inputs. Each scheme can be used as a complement to a cryptography algorithm to meet the diffusion property of Shannon's principle.
Analysis of the Algorithm Ability
Each scheme is tested for correlation [10] to detect the relatedness of the random numbers generated from the inputs, while MAPE [11] is used to find out how large the difference between random values is under a key change. Three variations of the input are used [12]: the first is an input of identical characters; the second is an input of alphabetic characters drawn from the 26-character alphabet; the last test uses alphabet characters, symbols, and numbers.
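The butterfly-effect check itself is straightforward to reproduce: iterate the map for two keys differing only in the last character and compare the streams. This reuses the stand-in `seed_from_key` and `logistic_stream` helpers from the previous sketch, so it inherits their simplified (non-paper) seed formula:

```python
import numpy as np

# The paper's comparator keys, differing only in the last character:
xa = seed_from_key("ZZZZZZ")
xb = seed_from_key("ZZZZZY")
print(f"seed difference: {abs(xa - xb):.3e}")

# Generate the first 500 chaos values for each key:
sa = np.fromiter(logistic_stream(xa, 500), dtype=float)
sb = np.fromiter(logistic_stream(xb, 500), dtype=float)

# A tiny seed difference should yield widely diverging, nearly uncorrelated streams:
print("correlation:", np.corrcoef(sa, sb)[0, 1])
print("mean abs divergence:", np.mean(np.abs(sa - sb)))
```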
The correlation calculations in Table 3 show that there are two values, in scheme-1 and scheme-3, whose correlation is negative, while the rest are positive. Cryptographically, the negative sign is not very influential; what matters is how close the value is to zero, indicating that the two generated random sequences are unrelated. In the context of these relations, the same analogy can be used to test the difference between two generated random sequences. Overall, the correlation values generated by each scheme are within the range 0.00-0.299. Based on [13], this interval indicates a very weak relationship. This condition provides the information that a one-bit difference in the key input can generate different random numbers in each scheme. In addition to the correlation and MAPE analyses, we also tested the distribution of the random number data using box-plot diagrams. Based on Figure 6, each box has almost the same size, with the upper and lower whisker lines varying slightly, but the maximum value is always close to one and the minimum is near zero. The distribution of the data over the chaos range will strengthen the cryptography algorithm if used as a key; this condition will certainly make it harder for a cryptanalyst to search among the many possible numbers, which, although finite, are very numerous.
CONCLUSION
Each designed scheme is capable of providing key input flexibility for the domain examination of the logistic function values. A one-bit difference in a key character affects every random number generation, so each key will generate a different random number sequence. Under key input conditions of the same eight characters, scheme-1 is better at generating different initialization values than scheme-2 and scheme-3. In addition, reducing the eight identical characters in the key input by one character does not show a proportional or inverse pattern in the initial x₀ values and the relative error of x₀. The resulting random numbers, distributed evenly over the chaos range, will strengthen the algorithm when used as a key in cryptography; this condition will certainly make it harder for a cryptanalyst to search among the many possible numbers, which, although finite, are very numerous. The three schemes have succeeded in fulfilling the butterfly effect property; thus, the generation of chaos numbers by the logistic function can be accommodated based on key inputs. Each scheme can be used as a complement to a cryptography algorithm to satisfy the diffusion property of the Shannon principle.
3,218.6
2018-12-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
The economic potential of regions and the development of the transport network. A case study of the regions located alongside the Oder Waterway
On the one hand, the social and economic development of regions is determined by the development of the transport network; on the other hand, the economic potential of a region may be a basis for the development of the transport system in its catchment area. The regions located alongside the Oder Waterway (OW) are the main subject of the study undertaken in this article. The objective of the article is to determine the economic potential of these regions in the context of the legitimacy of carrying out the investment actions planned by the government with regard to the improvement of the technological parameters of the OW and its ultimate adaptation to navigational class Va.
Introduction
A region is a polysemous and multi-dimensional term, and therefore its definitions differ in the scientific literature (e.g., Nazarczuk 2013; Simmie, Martin 2010; Chądzyński, Nowakowska, Przygodzki 2007; Bojar 2001; Pietrzyk 2007; Brodecki ed. 2005; Tomaszewski 2007; Domański 2006; and Agnew 2000). In economic terms, a region is an area of specific economic specialization resulting from the utilization of its own resources, as well as production factors and the flow of capital, people, information, and technology (Filipiak, Kogut, Szewczuk, Zioło 2005; Kosiedowski 2001). A region is not a scaled-down national economy, yet it is in the sphere of its influence, and it is an intermediate (meso) level between particular economic entities (micro) and the national economy (macro) (Nazarczuk 2013). A region is a dynamic structure which undergoes processes of permanent transformation. Internally, this transformation is observable in the arrangement of its growth hubs and surrounding areas. Externally, it is associated with the development of its competitive position in relation to other areas in terms of various economic categories (Nazarczuk 2013; Christopherson 2008; Pike, Rodríguez-Pose, Tomaney 2010).
A region's competitive position and its specialization result from its competitive potential (Porter 2000). The internal qualities of a region and, in particular, its external determinants determine the possibilities and directions of its social and economic development. A region may have its own small potential, but due to a convenient location in the regional system of influence, this potential may be significantly increased (Czyż 2002). Apart from the economic potential, demographic, social, and cultural determinants are of key importance to the development of a region's competitive position (Jaworska 2009). This influence is reciprocal. For example, the development of the transport infrastructure is determined by the economic development of regions (Rosik 2004; Komornicki et al. 2010), but, on the other hand, the economic potential of a region can be a basis for the development of the transport system in its catchment area (Oosterhaven, Knaap 2003).
The main subject of the study undertaken in this article are the regions located alongside the Oder Waterway (OW), which is an important component of the Baltic-Adriatic as well as the CETC-ROUTE65 transport corridors, meridionally integrating Scandinavia with the Central and Eastern European states and, farther down, with the south of the continent. Latitudinally, the OW also provides access to the western regions of Europe by means of a connection with the German waterway system (the Oder-Havel and Oder-Spree canals).
Following the example of the largest seaports in Western Europe, the OW may be an alternative to land transport and an important transport link for the hinterland of the Szczecin-Świnoujście port complex. Its main advantages include, above all, its environment-friendly character and competitiveness. As a result, in the states with broad networks of inland waterways, including the Benelux Union, Germany, and France, numerous investment projects are being carried out with the aim of improving the parameters of inland waterways, particularly in their relations with seaports. These initiatives follow directly from the sustainable development policy pursued at the EU level.
However, the long-standing degradation of the hydraulic structures in the river and the lack of modernization have resulted in the OW not being a coherent traffic route with operational parameters enabling regular navigation. The possibilities of making the OW navigable should be seen in the government's investment plans related to the development of transport infrastructure for 2016-2030 (Strategy for the Development of Inland Waterways 2016, Implementation Paper 2014, Transport Development Strategy 2013). It is the first time in many years that making the OW navigable has been included in the list of key projects, in plans under which it is designed to reach navigational class Va. The objective of the article is to determine the economic potential of the regions located alongside the Oder Waterway (OW) in the context of the legitimacy of carrying out the investment actions planned by the government with regard to the improvement of its technological parameters and its ultimate adaptation to navigational class Va.
Methodology
In order to assess the economic potential of a region, including comparative analyses related to different regions, various sources of data and research methods are applied. They help describe and assess phenomena as well as economic, social, and demographic processes taking place in a region (e.g., Teresa 2002, Strahl ed. 2006, Kompa 2009, Nazarczuk 2013, and Kubiczek 2014). Both static (indicators of structure) and descriptive methods were used in the article. They allowed the identification, analysis, and assessment of the primary economic categories, as well as the social and demographic ones, which are typical of the regions analysed in the study.
Having taken into account the OW catchment area, the spatial scope of the study included 9 European regions (NUTS 2) located alongside the OW, including:
- 6 of Poland's provinces, namely West Pomerania, Lubusz, Greater Poland, Lower Silesia, Opole, and Silesia, as well as 1 Czech region, the Moravian-Silesian Region, with its central part in Ostrava (studied meridionally); and
- 2 German lands, Berlin and Brandenburg (studied latitudinally).
The sources of the data were primarily official statistical data published by Eurostat, the Central Statistical Office of Poland, the Czech Statistical Office, and the Federal Statistical Office of Germany. In order to provide a general comparative description of the regions analysed in the study and their position in comparison to the national economies, the following indicators and metrics were applied:
- size of the area;
- GDP of the region, including GDP per capita expressed in purchasing power standard (PPS);
- gross value added, which expresses the value of goods and services provided by the national market and non-market entities, less the intermediate consumption related to its generation; this index is a primary income category determining the economic situation of a region and an important criterion for assessing the efficiency of expenditures related to production factors;
- total population numbers, including the number of births, deaths, and the migration balance, as well as population density;
- crude rate of total population change, which is an index of natural changes in the population number (number of live births decreased by the number of deaths) and the migration balance (number of immigrants decreased by the number of emigrants and increased by statistical adjustment); an increase in the number of people occurs only when the value of the migration balance, increased by the number of births and decreased by the number of deaths, is positive; and
- unemployment rate within the group of people of working age (15 and over).
On the basis of the information acquired primarily from official national and regional statistical data as well as from economic trade websites, a description of the structure of the economies in the regions was developed. A comparative analysis with regard to the number and trade structure of economic entities running their businesses in the analysed regions (including employment generated in particular sectors) was also performed. In order to obtain comparable data, the Statistical Classification of Economic Activities in the European Community, NACE 2008, was used. At the presented level of data aggregation, this classification is fully compatible with the national statistics (e.g., PKD 2007 for Poland), and it includes the following sections, marked in the charts with symbols:
- B - Mining and quarrying;
- C - Manufacturing;
- D - Electricity, gas, steam, and air conditioning supply;
- E - Water supply, sewerage, waste management, and remediation activities;
- F - Construction;
- G - Wholesale and retail trade, repair of motor vehicles and motorcycles;
- H - Transportation and storage;
- I - Accommodation and food service activities;
- J - Information and communication;
- L - Real estate activities;
- M - Professional, scientific, and technical activities;
- N - Administrative and support service activities; and
- S95 - Repair of computers and personal and household goods.
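For illustration, two of the derived indicators listed above (GDP per capita in PPS and the crude rate of total population change) reduce to simple arithmetic. The sketch below encodes them with hypothetical figures, not the study's data, and assumes the usual convention of expressing the crude rate per 1000 inhabitants:

```python
def gdp_per_capita_pps(gdp_pps_millions: float, population: float) -> float:
    """GDP per capita in purchasing power standard (PPS)."""
    return gdp_pps_millions * 1_000_000 / population

def crude_rate_of_total_population_change(births, deaths, immigrants, emigrants,
                                          population, adjustment=0.0):
    """Crude rate of total population change per 1000 inhabitants:
    natural change (births - deaths) plus net migration
    (immigrants - emigrants + statistical adjustment), related to the population."""
    total_change = (births - deaths) + (immigrants - emigrants + adjustment)
    return 1000.0 * total_change / population

# Hypothetical regional figures:
print(gdp_per_capita_pps(95_000, 3_400_000))                       # PPS per inhabitant
print(crude_rate_of_total_population_change(31_000, 36_500,
                                            12_000, 14_000, 3_400_000))
```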
Having taken into account the subject structure of the cargo which is traditionally transported by means of inland navigation, the next part of the study focused on assessing the extent of industrialisation of the regions under study. In order to do so, a detailed description of the industry and building sector was developed. Additionally, its share in the structure of the economies of the regions under study was determined. In the Eurostat statistical nomenclature as well as in national statistics, this sector includes activities B, C, D, and E. The time span of the analysed data covered the years 2010-2014, which was connected with the accessibility of comparable data for the regions under study relating to this period. General description of the regions under study The analysed regions located alongside the OW cover an area of over 144,000 km². The area is inhabited by over 21 million people (Table 1). Bearing in mind the level of economic development illustrated with GDP per capita in comparison to the EU average, all the regions under study, apart from Berlin, belong to the poorer EU areas. Among the regions analysed in the study, the highest GDP values are observed in the federal states of Berlin and Brandenburg. However, on the national level these are among the poorest regions in Germany, with GDP per capita well below the national average (especially in Brandenburg). The highest GDP per capita among the Polish regions is observed in the highly industrialized central and southern provinces of Silesia, Lower Silesia, and Greater Poland. The least developed OW regions include the provinces of West Pomerania, Lubusz, and Opole. In all the analysed regions, the unemployment rate is at a relatively high level, but it does not exceed the EU average. The lowest level of unemployment among the analysed regions is observed in Brandenburg, where it is nevertheless significantly above the German national average. A lower unemployment rate in comparison to other regions is also observed in Greater Poland and Opole. However, with regard to the latter, this is not only related to its good economic situation, but also to a strong emigration trend. The highest unemployment rate is in Berlin, where it is almost twice the national average. What is observed in most of the OW regions is a negative demographic trend expressed in a decrease in the total number of people (a falling number of births, an increase in the number of deaths, and a negative migration balance). Having taken into account the values of the index of overall demographic changes, the worst demographic situation is in Opole, Silesia, and the Moravian-Silesian Region. A positive (and the highest) population balance was observed only in Berlin, which is related to the metropolitan status of the capital, as well as in Greater Poland. 
Having taken into account the level of gross value added, the biggest increase in absolute terms in the value of produced goods amongst the regions in 2013 was observed in Germany, particularly in Berlin. However, in total this refers only to 6% of the gross value added generated on the national level. The Polish Oder regions account for approximately 40% of the total gross value added generated on the national level. The provinces which make the biggest contribution to its creation include Silesia (12.5% of the national value), Greater Poland, and Lower Silesia (9.7% and 8.5% of the national value, respectively). The Czech region generates gross value added at a level of 10% of the national value. The position of Opole, West Pomerania, and Lubusz is the worst; together they account for 8% of the total gross value added generated in Poland in 2013. Description of the structure of economies in the regions under study The structure of the analysed economies of the OW regions is varied. The Moravian-Silesian Region, with its centre in Ostrava, is one of the richest regions in the Czech Republic, traditionally shaped by mining and the heavy industries connected with it. The structural changes now taking place result in a gradual decrease in the share of the traditional sectors in favour of trade, services, and other branches of industry, including food processing and knowledge-based industrial processing such as the electronic and electrical industry (production of computers, electronic goods, and optical devices). Silesia is one of the most industrialized regions in Poland and one of the most industrialized areas in Europe, generating almost 13% of the national GDP. The industrial activity of the region is mainly concentrated in the Upper Silesian Industrial Region. The region has numerous natural resources, including hard coal, zinc and lead deposits, methane beds, natural gas, and deposits of marlstone, limestone, and natural aggregate, which results in a continued dominance of coal and metallurgy in the structure of the province. However, the restructuring processes, which have been going on for several years, lead to a shift toward the electrical machinery industry, steel industry, chemical industry, IT industry (including ICT), power industry (including renewable energy sources), as well as the automotive industry and food industry. Investments in the region are made mainly in the Upper Silesian Urban Area and in the former voivodeship towns, including Bielsko-Biała and Częstochowa. For example, the choice of locations for investments in the region is related to the functioning of the Katowice Special Economic Zone (KSEZ) and numerous industrial and technological parks. The province of Opole is a region with a varied industry structure. The dominant position is taken by the food, chemical, gas and fuel and energy, electro-engineering, lime-cement, mineral, metallurgic, steel, automotive, and furniture industries. It is also the sector of services, especially in the field of new technologies, that is becoming more and more important to the region's economy. The region has rich natural resources, including, in particular, marlstone and limestone deposits. The province of Opole also has a much larger area available for investments, especially alongside the A4 motorway. There are Special Economic Zones (SEZ) operating in the region, Wałbrzyska and Katowicka, and numerous technological and industrial parks. 
The Lower Silesia region is rich in mineral resources including brown coal, copper, non-ferrous metals, natural gas, hard coal, and aggregates for construction industry.Hence, the traditional industries include mining industry (brown coal and copper), copper and non-ferrous metallurgy.The following industries are also very well developed: textile and clothing, automotive, electro-engineering, power, electronic, construction, chemical, food, as well as wood and paper.The construction, IT, and BPO (Business Process Offshoring) share has been on a systematic increase for over several years.The province has a significant area of SEZ: Legnicka, Wałbrzyska, Kamiennogórska, Tarnobrzeska, and numerous industrial and technological parks. Apart from the traditional industries in the province of Greater Poland, including electrical, textile and clothing, farming and food, and metallurgy, an increase has been recently observed with regard to specialization in the automotive industry, transport and logistics, and BPO.Business activities are mainly performed in the conurbation of Poznań and subregional centres including: Kalisz and Ostrów Wlkp.(farming and food, textile, and electro-engineering industries), Leszno (farming and food), and Konin (gas and fuel and power industry based on brown coal).Operations in relation to brown coal are performed in the area of the Konin Industrial Region.Investments are primarily located in the areas operating in the region of SEZ: Wałbrzyska, Łódzka, Kamiennogórska, Kostrzyńsko-Słubicka, Pomorska, Słupska, and numerous industrial and technological parks. In the economy of Lubusz, the sector of services, trade, and construction plays an important role as well.In the structure of industrial processing, the greatest importance is assigned to the wood industry (49% of the province area is covered by woods, which is the biggest afforestation rate in Poland), electronic, as well as farming and food industry, and then production of chemicals, cellulose and paper, textile, and the construction industry.The automotive industry has also been developing dynamically over the recent years.In the region, there are special investment areas within the Kostrzyń-Słubice Special Economic Zone and Wałbrzych Special Economic Zone.Business activities are also developing in the areas of industrial and technological parks. In the economic structure of Western Pomerania, first of all the sectors of trade and construction, and then the industry and transport and logistics services are of the biggest importance.An important factor which determines the economic development of the region is maritime economy, including, in particular, the activities performed by the main Polish seaports of Szczecin and Świnoujście.In the sector of industrial processing, the dominant industries include farming and food industry (including brewing and fishing), steel industry, engineering industry, chemical industry, plastics processing, wood and paper industry, as well as shipbuilding (construction of yachts and other boats).The power industry, including, in particular, renewable energy sources (RES), is also developing.There are dedicated SEZ's in the region including: Pomorska, Kostrzyńsko-Słubicka, Słupska, and Euro-Park Mielec, and numerous industrial and technological parks. 
Brandenburg is one of the poorest regions in Germany.It is mainly the sector of trade and services that is of great importance to the economy of the region (60% of gross value added), including, especially, transport-forwarding and logistics as well as IT services.The metallurgic, steel, engineering, automotive, refining, farming and food, as well as wood and paper industries are dominant in the structure of industrial processing by type. The sector of services, especially financial, transport, and tourism services which generate over 60% of the gross value added of the region is dominant in the structure of the Berlin region economy.It is related to the metropolitan nature of Berlin which is a political centre of Germany.Nevertheless, Berlin is still an important industrial centre.This sector generates about 14% of the region's GDP.The electro-technical, chemical, pharmaceutical, automotive, and printing industries, as well as farming and foodprocessing, and the sector of high technologies, including, in particular, renewable energy, biotechnology, nanotechnology, as well as IT and medicine are of great importance to the structure of industry. Taking into account the number of enterprises existing in 2012 (Fig. 1) alongside the entire OW, the highest number run their businesses in the area of trade and services connected with vehicle repair (35%).The sector of construction, which accounts for 15% of the total number of operating enterprises, goes second.The industry involves 12% of the total number of enterprises and activities related to mining, and quarrying of raw materials accounts for 8% of the total number.The structure results from the fact that in the sector of trade, the sector of micro, small, and medium enterprises is dominant as compared to a small number of big and medium businesses operating in the field of industrial production and the quarrying industry. * Without Germany, because the data with regard to particular regions is incomplete.The biggest employment is generated in the sector of industry, and then trade, and construction (over 60% in total) (Fig. 2).The remaining business activity sectors' share in the overall employment differs in relation to particular regions.For example, in the region of Silesia, apart from industrial processing, construction, and trade, about 10% of the employment as a whole is generated by mining, but in comparison to other regions, in the region of West Pomerania a big share in the number of employees is in the transport and logistics sector (10%). * Because the data with regard to particular sectors without Germany is incomplete, and in the case of Greater Poland and Lubusz in sections D and B, the value of 0 was assumed.Taking into account the uniqueness of inland navigation, including, in particular, the dominant types of transported cargo, in the following part of the analysis of the economic potential of the OW, the focus was turned towards the sectors of industrial processing and mining, as well as quarrying. Description of industrial operations in the regions under study What may be distinguished within the group of the analysed regions are highly industrialized areas where the sectors of industry and construction generate over 40% of the gross value added.They include Silesia, Lower Silesia, Greater Poland, and the Moravian-Silesian Region. 
1 In the regions analysed in the study, except for Greater Poland, agricultural production is a small percentage of the national production.In the case of such regions as West Pomerania or Lubusz, which have a much larger area of arable lands, its efficiency is low.The most effective agricultural industry is in Greater Poland, whose share is at the level of almost 9% in the national agricultural production with a relatively small area of arable lands and the sector of agriculture, forestry, hunting, and fishing generates 5% of the region's gross value added (Table 2).For comparison, section A generates 3% of the whole state's gross value added (GUS, 2012).Industrial operations are performed by over 97,000 business entities in the OW area analysed in the study (2012).Most of industrial enterprises are located in Silesia, Greater Poland, the Moravian-Silesian Region, and Lower Silesia.These regions as a whole account for 70% of the business activities in the entirety of the OW regions (Fig. 3). Over the years of 2008-2012, excluding the period of growth in 2010/2011, the number of production enterprises decreased in most of the regions analysed in the study.The only exception was the German federal states of Berlin and Brandenburg.Although a temporary decrease was observed in the entire period of 2008-2012, the number of industrial enterprises increased by 17% (Berlin) and 19% (Brandenburg).At the end of the day, when it comes to the number of enterprises in particular sections of industrial processing, the most important companies involve businesses operating in the steel industry (finished steel structures), which accounts for approximately 22% of the entire number of enterprises, wood and paper industry (products made of wood and cork, and wood and cork-related products, as well as paper and stationery goods, no furniture included), which accounts for approximately 12.5%, and farming and food industry, which accounts for 11%.With regard to the particular regions, some developing specializations may be pointed out.For example, in the regions of Silesia and Greater Poland, a considerable number of entities deal with clothing production and processing and production of rubber products and other plastics.In comparison to other regions analysed in the study, the Moravian-Silesian Region is known for its big number of electrical companies.The region of Lower Silesia has also got a significant percentage of companies manufacturing goods from other non-metallic materials including glass, glass fibres, refractory products, ceramic construction materials, lime and gypsum, etc. Traditionally, the quarrying sector is of the biggest importance for the regions of Silesia, Lower Silesia, the Moravian-Silesian Region, and Greater Poland, where business activities are performed by several hundred enterprises in the area of hard coal and brown coal quarrying.The biggest share in the overall number of enterprises in the sector of mining and quarrying (70%) is related to the entities classified in the subsection "other mining and quarrying" (Fig. 4). 
This subsection includes business activities related to quarrying and processing stone, sand, gravel, and clay, including gemstones, stone for construction purposes, limestone, gypsum, chalk, and shales, as well as minerals for the chemical industry and the production of fertilizers, and the quarrying of peat, salt, and other minerals and materials. The dominant share of enterprises in the subsection "other mining and quarrying" within the entire quarrying sector is primarily related to the overwhelming number of small and medium entities in this group, compared to the sections of hard and brown coal (9% of the entities in total) or crude oil and natural gas (1%), which are dominated by a few big economic holdings. This is confirmed by the employment data: for example, in the region of Silesia, which is Poland's coal basin, the percentage of employees in the subsection "other mining and quarrying" within the whole group of those employed in mining accounts for only 2%, compared with 20% in Lower Silesia, 29% in Greater Poland, and as much as 99% in Opole Silesia. Conclusions The Oder Waterway, whose course, from the transport perspective, coincides with the general national directions of the main cargo mass flows, and which, via the latitudinal waterways, provides an opportunity for convenient connections with the waterway systems of the neighbouring states, has the greatest chance of development among all the Polish waterways. This statement is supported by the provisions of the Strategy for the Development of Inland Waterways in 2016-2020 with an outlook to the year 2030, which gave the investment actions to be performed on the OW the highest (first) priority. A reason for the development of the OW is also the significant economic potential of the regions located within its catchment area. The regions located alongside the OW have varied levels of economic development. The areas with the highest potential include the highly industrialized regions located in the upper and middle course of the river, i.e., the Polish regions of Greater Poland, Lower and Upper Silesia, and the Czech Moravian-Silesian Region. It is to be expected that the highest demand for OW transport will be generated from these areas. Transport will include both bulk cargo (e.g., coal, coke, ores, and scrap metal) and general cargo (e.g., semi-finished steel products, cellulose, and project cargo) generated by the traditional industries. In the future, when the OW is adapted to at least class IV, and ultimately to class Va, it will also include containerized general cargo, which is handled in the OW regions by the dynamically developing knowledge-based sector of industrial processing. 
To sum up, assuming that the investment projects on the Oder planned by the Polish government are fulfilled, the identified economic potential of the regions located alongside the OW may, at least partially, be transformed into realistic demand for transport by inland navigation. It is to be expected that transport performed in relation to the Szczecin-Świnoujście port complex, including land-and-water transport chains, will be of the highest importance. Apart from vessel-barge transport, direct domestic and international transport will also be developing, particularly with Germany. Furthermore, the OW's social and economic catchment area may be extended to the southern regions of Poland (Lesser Poland) if the construction of the Silesian Canal is carried out. Internationally, however, the strategic project is the idea of building the Oder-Danube Canal, which would connect the southern regions of Europe to the area of OW influence. Summary: The development of the transport network determines the social and economic development of regions; on the other hand, the economic potential of a region may constitute a basis for the development of the transport system within its catchment area. The main subject of the research undertaken in this article is the regions located along the course of the Oder Waterway (OW). The objective of the article is to determine the economic potential of these regions in the context of the legitimacy of the investment actions planned by the government with regard to improving the technical parameters of the OW and ultimately adapting it to navigational class Va.
Fig. 1. The structure of the OW economy* based on the dominant number of enterprises in particular sectors (2012). Source: own elaboration based on Eurostat.
Fig. 2. The structure of the OW economy* based on the employment level in particular sectors (2012). Source: own elaboration based on Eurostat.
1 Without mining and quarrying.
Fig. 3. Structure of the arrangement of industrial enterprises in the OW region (number of industrial enterprises in 2012). Source: own elaboration based on Eurostat.
Fig. 4. Structure of the quarrying industry in 2012 in particular regions of the OW (number of enterprises). Source: own elaboration based on Eurostat.
Table 1. Primary description of the regions located alongside the OW in comparison to the national and EU data. Source: own elaboration based on the Czech Statistical Office, Eurostat, the Federal Statistical Office of Germany, and the Statistical Office of Poland.
Table 2. Description of selected indicators for the sectors of industry and construction as well as agriculture, forestry, hunting, and fishing.
6,442.8
2016-01-01T00:00:00.000
[ "Economics", "Geography" ]
INFLUENCES OF DIFFERENT DRYING CLIMATES ON Eucalyptus camaldulensis WOOD PROPERTIES One of the most important disadvantages of the wood material, whose usage is becoming more and more widespread, is the dimensional instability that occurs in its interaction with water. Therefore, studies to improve these drawbacks of wood, remain always up to date. For the mentioned purpose in this study, some chemical, morphological, physical and mechanical properties of Eucalyptus camaldulensis woods, which were naturally dried in outdoor and indoor climate in Eastern Mediterranean (Kahramanmaraş province) atmosphere conditions of Turkey, were investigated. According to the results of the study, chemical properties of Eucalyptus woods dried indoor were measured as merely 0,23 % higher than dried ones outdoor. The results of morphological measurements indicated that the fiber dimensions of eucalypt wood dried in indoor were average 1,48 % lower than the ones dried out outdoor. Also, as a result of statistical analysis, it was found that there were significant differences (ρ < 0,000) between the physical properties of Eucalyptus wood samples indoor and outdoor according to the t-test. At the same time, as a result of the t-test applied to determine the effect of drying conditions on mechanical properties of Eucalyptus wood, modulus of elasticity, compression, tensile, dynamic bending and shear strength did not cause any significant difference between indoor and outdoor, while bending and Janka hardness strengths showed significant differences at ρ < 0,000 level. Finally, when the data obtained as a whole is considered, it can be said that testing of Eucalyptus wood which requires a very sensitive drying in different climates has important contributions on the subject. Regarding eucalypts, which has a high distribution area (20 million hectares) in the world, it is recommended to relevant institutions and organizations to expand and maintain such study in the future. Lastly, according to obtained data from this study, it can be said that the experiments of Eucalyptus woods which require a very delicate drying in different environments provide important contributions on the subject. INTRODUCTION One the most important disadvantages is the change in the dimensions of wood material during its interaction with water. As a result of the reduction of wood-based resources, demand for wood cannot be adequately met and price increases are higher than expected. For these and similar reasons, the studies on reducing the objectionable aspects of the wood material and increasing their positive properties are always keep up-to-date (Kocaefe et al. 2008, Aytin et al. 2015, Cetin and Gunduz 2016. As is known, wood is an anisotropic material and its properties differ in various directions (Dahl andMalo 2009, Dackermann et al. 2016). Knowing of the physical, mechanical and chemical properties of wood makes it easier to compare wood materials with others and gives ideas about processing and usage characteristics (Sancak 2010, Duzkale et al. 2014, Cetin and Gunduz 2017. Mechanical properties are defined as the degree and condition of the wood to resist external forces and various types of loads, such as mechanical and external forces, which lead to dimension and deformation, stress and rupture (Bozkurt and Goker 1996). Due to its hygroscopic properties, wood material or any wood-based product can reach certain equilibrium moisture content by adapting to weather conditions in various places of use. 
In dimensions of wood material, which is not dried sufficiently changes occurs such as shrinkage or swelling during utilization (Kantay 1993, Bergman 2021. Because of these changes, defects such as cracking, separation from joints, and deformation of wood material can take place (Ors and Keskin 2008). There are two main components in wood: lignin (18 % -35 %) and carbohydrates (cellulose and hemicellulose) (Ors and Keskin 2008). These components are complex and polymeric materials. There are minor extraneous materials in structure of wood, which are organic extractives and inorganic minerals (ash). Generally, wood has an elemental composition of about 50 % carbon, 6 % hydrogen, 44 % oxygen, and trace amounts of several metal ions (Pettersen 1984). Australia is the homeland of the genus Eucalyptus. Eucalyptus camaldulensis is cultivated for commercial purposes in Turkey and this plant of foreign origin has become native (Davis 1988). Eucalypts, which is grown especially in the southern regions of Turkey, is a type of tree that has an important role for forest industry in the world due to its large diameter arises in a short time and has smooth trunk. This plant, which has been grown in Turkey since 1939, obtains different forest products for various purposes, primarily for railroad tie and bridge production, followed by cellulose, which is the raw material of paper industry (Karsavuran 2008). Eucalyptus, which was considered as a packing case at first, is nowadays used in many areas such as building constructions, veneer, furniture, chest, turnery, agricultural tools, musical instruments, sports equipment, stull, trolley pole and fiber-chip wood (Yaltirik andEfe 1994, Korkut et al. 2008). The water present in new-cut trees should be eliminated from wood before using as end product. Hence, rough and fresh timber should be exposed to drying process. Depending on species, weather conditions, timber dimensions and the season when the wood piled, natural-drying times vary widely. Temperature, relative humidity and airflow effect the drying process of wood piles (Bektas et al. 2017). Different timber dimensions, bark content and piling specifications may strongly affect natural-drying times (Simpson andWang 2004, Bown andLasserre 2015). The drying process has several important advantages such as increases some strength properties and improves dimensional stability of wood (Forest Products Laboratory 1999). However, drying defects such as cracks, hardening, cell collapse, shape changes, color changes adversely affect the quality of the wood material (Kantay 1993). As far as can be researched, no studies have encountered effect of different drying climates on the properties of Eucalyptus wood. Closest to the subject of this study, mechanical properties of Eucalyptus urophylla wood (Lahr et al. 2017) and Eucalyptus grandis wood ) of different moisture (12 %, 30 %) levels were investigated. Eventually, Lahr et al. (2018) found that mechanical properties were significantly affected by moisture content, and the behavior pattern consisted in increasing the values of the properties with reduction of moisture content. Other researchers also investigated Eucalyptus saligna (Nogueira et al. 2019) and Eucalyptus maidenii wood's properties in abovementioned moisture contents . In another study, pine (Pinus sylvestris) and beech (Fagus sylvatica) woods were subjected to steam kiln-drying. 
Modulus of elasticity and bending strength for steam dried, air-steam mixture dried and air-dried samples as reference were measured. They concluded that steam and air-steam mixture drying cause changes of the mechanical properties of analyzed wood species (Baranski et al. 2014). In the light of the above explanations, the effects of natural drying climates on some chemical, morphological, physical and mechanical properties of Eucalyptus wood, one of the fast growing tree species with high distribution areas, was investigated in this study. Materials The freshly sawn lumbers were obtained from Mersin-Karabucak Forest Sub-district Directorate. Lumbers were cut into the dimensions of 6 cm x 15 cm x 300 cm and the ends of edged timbers were not sealed. When the timbers were brought to drying environments, their starting moisture content (MC) varied between 57 % to 72 %. Twenty timbers were dried in each environment in order to minimize random variations and the test samples were randomly selected from the piles. Drying process Air drying method was applied to the timbers stacked in Kahramanmaraş Province. Timbers were piled in two different environments: Outdoor (OD) and Indoor (ID). Drying process was began in summer period (effective drying) on June and followed during one year. The data were taken from meteorological station for OD and Geratech DT-172 was used to measure temperature and relative humidity of ID. Determination of physical and mechanical properties Test specimens were obtained from eucalypts (Eucalyptus camaldulensis Dehnh.) timbers supplied in Mersin-Karabucak territory. Test specimens were prepared on the basis of TS 2470 (1976). Some physical properties such as air-dry density, oven-dry density and basic density values were determined with the sample dimension of 20 mm × 20 mm × 30 mm, based on TS 2472 (1976). Additionally, volumetric shrinkage and swelling was calculated according to TS 4083 (1983), TS 4084 (1983), TS 4085 (1983), TS 4086 (1983 with the sample dimension of 20 mm × 20 mm × 30 mm. As for the mechanical properties, static bending strength and modulus of elasticity was designated on the sample dimensions of 20 mm × 20 mm × 300 mm based on TS 2474 (1976) The Independent Samples t-test was performed to determine if there were statistically significant differences between the properties of ID and OD dried wood. Determination of chemical and morphological properties Eucalyptus samples were chipped to matchstick size and in order to determine chemical components, chips were ground in a laboratory type Wiley mill according to TAPPI T257 om-85 standard. Samples passing through 40-mesh sieve and remaining over 60-mesh sieve were stored in closed containers for chemical analysis and moisture contents were determined. Holocellulose, cellulose, alpha cellulose, lignin, extractive and ash contents of the samples were determined according to Wise and Karl (1962), Kurschner and Hoffer (1969), TAPPI T203 cm-99 (1999), TAPPI T222 om-15 (2015), ASTM D1107-96 (2013) and TAPPI T211 om-16 (2016), respectively. Cold water, hot water and 1% NaOH solubilities of the samples were determined according to TAPPI T207 cm-08 (2008) and TAPPI T212 om-18 (2018) standards. The maceration process was carried out with chlorite method to make the woody fibers individual. Fiber slides were prepared for determination of fiber dimensions by using Nikon FS1 photo microscope; the average fiber width, length, lumen diameter, and cell wall thickness of 100 fibers were measured. 
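Since the indoor (ID) versus outdoor (OD) comparisons in the following results rely on the Independent Samples t-test, a minimal Python sketch of such a comparison is given below. The measurement values are invented placeholders, and SciPy is used here only as one readily available tool; it simply illustrates the kind of test reported, not the authors' actual analysis software or data.

```python
# Minimal sketch of an independent-samples t-test comparing a wood property
# (e.g. air-dry density, kg/m^3) between indoor- (ID) and outdoor- (OD) dried
# samples. The measurement values are invented placeholders, not study data.
import numpy as np
from scipy import stats

id_density = np.array([640, 655, 648, 662, 650, 645, 658, 652], dtype=float)
od_density = np.array([710, 725, 718, 705, 730, 722, 715, 720], dtype=float)

# equal_var=False gives Welch's t-test, which tolerates unequal variances.
t_stat, p_value = stats.ttest_ind(id_density, od_density, equal_var=False)

print(f"t = {t_stat:.3f}, p = {p_value:.5f}")
if p_value < 0.05:
    print("Significant difference between ID and OD at the 0.05 level.")
else:
    print("No significant difference between ID and OD at the 0.05 level.")
```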
RESULTS AND DISCUSSION The results obtained from the experiments on the chemical contents of Eucalyptus wood dried in indoor and outdoor conditions are given in Table 1. According to Table 1, toluene-acetone-ethanol solubilities and lignin contents of Eucalyptus wood dried ID and OD were found to be similar. Hot-water solubility of woods dried OD is lower than that of ID. The coldwater solubility of OD dried Eucalyptus wood was about 5,1 % lower than that of the ID. The holocellulose content of the OD dried wood was 0,5 units higher than that of the ID. In terms of cellulose and alpha cellulose contents, woods dried ID are richer than woods dried OD. When Table 1 is examined in general, there are significant differences between water solubility values of Eucalyptus wood dried ID and OD climates, but there is no significant difference between other chemical components and solubilities. Ayata (2008) determined holocellulose and lignin contents of Eucalyptus grandis wood as 81,2 % and 25,7 %, respectively. In the same study, ash and extractives contents were found to be 0,3 % and 2,4 %. It was found that holocellulose content was lower and lignin and extractive contents were higher than those of undried Eucalyptus wood. Table 2 indicates the differences revealed in fiber dimensions of Eucalyptus samples dried in ID and OD climates. When fiber dimensions of Eucalyptus woods dried in different climates are examined, it is seen in Table 2 that there are no differences between fiber lengths and cell wall thicknesses. The most significant difference is in fiber width, and wood dried OD is about 5,4 % wider than the one dried ID. Likewise, lumen diameter of wood dried OD was found to be wider 0,52 units than that of dried ID. It can be thought that these differences can be occurred due to relative humidity of OD climate being higher than that of ID. In the literature, no study has been found on the effects of drying climates on fiber morphological properties. However, it is seen that the fiber dimensions of the samples dried in both conditions are similar with the literature (Bhat et al. 1990, Gurboy and Ozden 1994, Trevisan et al. 2017. Statistical analysis data of air-dry, oven-dry and basic density values are given in Table 3. According to data in Table 3, differences of drying condition on air-dry density, oven-dry density and basic density values were found to be significant at ρ < 0,000 level. In same table, it can be seen that the air-dry density, oven-dry density and basic density values of the samples dried in OD were higher than those of dried in ID. These increases were determined as 10,88 % in air-dry density, 13,50 % in oven-dry density and 9,91 % in basic density. In a study conducted in the literature, air-dry density, oven-dry density and basic density values were found as 670 kg/m 3 , 620 kg/m 3 , 510 kg/m 3 , respectively (Aslan et al. 2008). In another study performed between two different types of Eucalyptus, these values were found to be 530 kg/m 3 , 520 kg/m 3 , 460 kg/m 3 for Eucalyptus grandis. and 700 kg/m 3 , 680 kg/m 3 , 570 kg/m 3 for Eucalyptus camaldulensis Dehn., respectively (Ayata 2008). The results obtained from this study are similar with the literature. Shrinkage values made to determine the dimensional stability of the Eucalyptus wood dried in ID and OD conditions are given in Table 4. 
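The shrinkage (and corresponding swelling) percentages reported in Tables 4 and 5 are derived from sample dimensions measured in the green (water-saturated) and oven-dry states. As a hedged illustration, the sketch below uses the commonly applied definitions (shrinkage referenced to the green dimension, swelling to the oven-dry dimension); the dimensions are invented and the formulas are generic textbook ones, not transcribed from the TS standards cited in the methods.

```python
# Generic shrinkage/swelling calculation for one wood sample (illustrative).
# Shrinkage is referenced to the green dimension, swelling to the oven-dry one.
def shrinkage_percent(dim_green, dim_ovendry):
    return 100.0 * (dim_green - dim_ovendry) / dim_green

def swelling_percent(dim_green, dim_ovendry):
    return 100.0 * (dim_green - dim_ovendry) / dim_ovendry

# Invented tangential dimensions (mm) of a 20 mm x 20 mm x 30 mm sample.
green_t, ovendry_t = 20.00, 18.70
print(f"tangential shrinkage: {shrinkage_percent(green_t, ovendry_t):.2f} %")
print(f"tangential swelling:  {swelling_percent(green_t, ovendry_t):.2f} %")

# Volumetric values follow analogously from the radial, tangential and
# longitudinal dimensions via the corresponding sample volumes.
```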
As it can be seen in Table 4, tangential shrinkage values constitute significant differences (ρ < 0,000) between Eucalyptus timber dried in ID and OD climates, however in radial, longitudinal and volumetric shrinkage values did not demonstrate same trend. When shrinkage percentages of the samples dried in ID and OD were evaluated, radial shrinkage showed a decrease of approximately 3 % in OD (5,08 %) measurements compared to ID (5,24 %). Tangential and longitudinal shrinkage values of samples dried ID were found to be lower about 19 % and 30,8 % than those of samples dried OD, respectively. Finally, volumetric swelling is less with a rate of 9,4 % compared in ID climate (10,89 %) to OD (11,91 %). All shrinkage and deformations in the wood, from cutting to usage area, are the main reasons why the need to be dried before use (Kilic Ak 2016). In this study, volumetric swelling results (ID: 10,89 %, OD: 11,91 %) different when compared from other species of Eucalyptus: lower than that mentioned by Lima et al. (2014) for Eucalyptus resinifera (16,67 %), by As et al. (2001) for Eucalyptus rostrata (12,7 %). Moreover, similar results were found for Eucalyptus camaldulensis as 11,4 % and 11,8 % (Aslan et al. 2008, Ay et al. 2008. Table 5 shows percentages of swelling calculated in Eucalyptus samples dried in ID and OD climates. According to the results of t-tests applied to swelling measurements presented in Table 5, tangential and longitudinal swelling values have significant differences in terms of drying climate, whereas the effect of climate is found to be insignificant on the radial and volumetric swelling percentages. When swelling values of the samples dried in OD compared with ID, a decrease of approximately 11,4 % was observed in the radial direction, whereas in increase was observed 26,1 % in the tangential, 72 % in the longitudinal direction and 8,6 % in the volumetric dimension. The values found for volumetric swelling were determined to be lower to those reported in the literature (Aslan et al. 2008, Ay et al. 2008. From these results, it can be said that a more protective drying was performed in ID climates. Mechanical properties and statistical analysis of the wood samples dried in ID and OD climates are presented in Table 6. The results of the t-test given in Table 6, which were applied to reveal effect of ID and OD drying climates showed differences on mechanical properties. Significant differences were found between the bending strength and modulus of elasticity of the samples dried in ID and OD climates according to t-test results at ρ < 0,050 significance level. Likewise, Janka hardness values measured in cross sectional, radial and tangential directions constituted significant differences at ρ < 0,000 level according to ID and OD drying climates. T-test results introduced that there is no significant effect of drying climate differences on compression, tensile, dynamic bending and shear strength samples. On the other hand, calculations based on data in Table 6 revealed that the compression strength and dynamic bending strength for the samples dried in OD were 2,3 % and 9,8 % lower than those dried in ID, respectively. However, the modulus of elasticity, static bending strength, tensile strength and shear strength values of the samples dried in ID were determined as 8,6 %, 11,3 %, 0,80 % and 2,11 % lower than those of dried in OD climates, respectively. 
As it can be seen from Table 6, effect of drying climates difference on the tensile strength parallel to the fibers was calculated as much lower (0,80 %) than OD climate, in contrast to the compressive and bending strengths. Another interesting data is also that the dynamic bending strength value of samples (0,51 MPa) dried in ID climate is approximately 10 % higher than the other one (0,46 MPa). Since dynamic bending strength is particularly prominent for using structural wood in the seismic zones, the effect of the drying environment difference in such usage areas should be take into account. Regarding Janka hardness values, it can be seen from the data in Table 6 that the difference between the samples dried in OD and ID is much higher than other mechanical properties. The Janka hardness values measured in ID all three directions (C rs , R, T) are lower than those measured in the OD. These decreases were calculated as 12,17 %, 25,74 % and 19,29 % according to C rs , R and T directions, respectively. There are many studies on the mechanical properties of Eucalyptus woods in the literature; however, there have been any research dealing with the differences in drying climates (Aslan et al. 2008, Ay et al. 2008, Bektas et al. 2008). On the other hand, temperature and relative humidity, which are two of main drying factors of environment where this research was carried out, were investigated by Kilic Ak (2016) and obtained data are illustrated in Figure 1. The most effective period for fast drying is summer period called "effective drying period". This period is between intersection points of curves showing the change of air temperature and relative humidity within one year at place of drying (Kantay 1993). The effective drying periods were indicated in Figure 1 with arrows. According to Figure 1, the temperature in summer is higher in OD than ID, but it is lower in winter. Relative humidity values are generally higher in ID than those of in OD. It is understood from Figure 1 that the fastest drying in both (ID, OD) climate took place between June and September, when the temperature was higher and the relative humidity was lower. Furthermore, these data indicate that the effective drying period in Kahramanmaraş province occurred between June and September. CONCLUSIONS The results of the tests to investigate effect of drying climate on the chemical, morphological, physical and mechanical properties of Eucalyptus woods can be summarized as follows; There was no significant difference between the chemical compositions of Eucalyptus woods dried in ID and OD. The fiber width and lumen diameters of the wood dried in OD were wider than those of dried in ID due to differences relative humidity. As a result of the physical tests, significant differences were found between the air-dry and oven-dry density, basic density, tangential shrinkage, tangential and longitudinal swelling values at the level of ρ < 0,05, while there was no significant difference between radial-, volumetric-swelling, radial-, longitudinal-, volumetric-shrinkage. Significant differences were found between the some mechanical properties such as static bending strength, modulus of elasticity and Janka hardness (cross section, radial, tangential) of the Eucalyptus samples dried in ID and OD climates. On the other hand, it was found that drying climate had no significant effect on compression, tensile, dynamic bending and shear strength values. 
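The "effective drying period" is described above as the stretch of the year lying between the intersection points of the air-temperature and relative-humidity curves. A rough way to locate such a period from monthly climate records is sketched below; the monthly values are invented placeholders (not measured Kahramanmaraş data), and reading the criterion as "the temperature curve lies above the relative-humidity curve" is an assumption based on the verbal description, not on any standard.

```python
# Locate months where the temperature curve lies above the relative-humidity
# curve (one rough reading of the "effective drying period" criterion).
# Monthly values are invented placeholders, not measured climate data.
import numpy as np

months = np.arange(1, 13)
temperature_c = np.array([6, 8, 12, 17, 22, 27, 30, 30, 26, 19, 12, 7], float)
rel_humidity = np.array([70, 65, 60, 55, 45, 25, 22, 24, 25, 50, 62, 70], float)

above = temperature_c > rel_humidity      # T curve above RH curve
effective_months = months[above]
print("Months in the effective drying period:", effective_months.tolist())
```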
Finally, in light of the data obtained in this study, it is suggested that the differences in the chemical, morphological, physical, and mechanical properties of Eucalyptus wood be taken into account both in the drying process and in solid-wood end uses.
4,673.2
2021-01-01T00:00:00.000
[ "Materials Science" ]
Time-Discrete Parameter Identification Algorithms for Two Deterministic Epidemiological Models applied to the Spread of COVID-19 The pandemic outbreak of COVID-19 threatens people worldwide. Politicians around the globe have to balance different interests. Accordingly, they make decisions which heavily impact our daily life or even curtail fundamental rights. They consequently base their judgements on advices from scientists. For these reasons, many epidemiological models have appeared and have been famous since the beginning of the twentieth century. However, scientists depend on rich data to evaluate them and their assumptions. As this pandemic outbreak has lead to large data collections by the John Hopkins University since its beginning, we take advantage of this situation and compare two deterministic epidemiological models heavily used - the Susceptible-Infectious-Recovered model (SIR model) and the Susceptible-Infectious-Recovered-Dead model (SIRD model). In contrast to other works which concentrate on future predictions, we aim to investigate the validity of assumptions with respect to available data. For that purpose, instead of using these models forward in time, we propose modified time-discrete versions of the continuous models with relaxing all transfer rates to be time-varying and develop algorithms to predict these coefficients based on the principle of “divide-and-conquer” for the inverse problems. Since Asia, Europe and North America are especially affected by this pandemic, we choose countries from these continents to illustrate our results. In the main part of this article, we escepially discuss our results for Hubei (China). As supplementary material, we illustrate our findings for the aforementioned choice of countries. We finally draw some conclusions for future applications of these models and data acquisition. Introduction Since its outbreak in Wuhan (China) in December 2019, the worldwide spread of COVID-19 has enormously impacted our daily life and governments around the globe are actually facing tremendous different challenges. Due to complexity of the current situation and extensive consequences of governments' decisions, politicians heavily rely on scientific advice. Thus, John Hopkins University collects epidemiological data from many countries during the last months 1 . Additionally, many biological and medical studies regarding different aspects of this new corona-virus have been rapidly appeared in scientific journals [2][3][4][5][6][7][8] . For example, Lauer et al. published one study of estimation of the incubation time for COVID-19 9 . Additionally, even more theoretical scientists like mathematicians have recently gained further interests in epidemiology. One of mathematical epidemiology's most groundbreaking work introduced the now well-known SIR model by Kermack and McKermack in 1927 10 . Kermack and McKendrick assumed a fixed population size and decomposed this population in three different homogeneous groups of people, namely susceptible people, infectious people and recovered people. They excluded births, deaths and deaths by disease from their model. Due to its success and simplicity, their works were reprinted in 1991 [11][12][13] . In upcoming decades, epidemiologists and mathematicians have proposed many variants and extensions of this basic model by, for example, adding age or spatial structures [14][15][16][17][18] . 
After the outbreak of COVID-19, many scientists are recently publishing articles with emphasis on epidemic forecasts which strongly relate to mathematical modelling. Many approaches, mainly focusing on stochastic arguments, with respect to predicting forecasts of the total number of infected people have been appeared during the last weeks [19][20][21][22][23][24] or in the past 25,26 . Recently, neural networks have been applied to forecasting 27 . Since these questions definitely deserve their current attention, current data acquisition by John Hopkins University allows us to certainly take a closer look on the parameter identification problem in such epidemiological models, so-called inverse problems 28 . Here, one aims to estimate different parameters appearing in our epidemiological models based on observed data. Different deterministic approaches by optimisation techniques 29,30 like least-squares methods 31,32 , variational imbedding 33 and Gaussian fitting 34 or stochastic models like time-series 35 or parameter identification by Bayesian methods 36 have been successfully used in inverse epidemiological problems. Evaluating the calculated constant or time-varying transfer rates on acquired data, we have reasonable foundations to make model assumptions plausible. In contrast to many of the aforementioned works, our main goal is the proposal of parameter identification algorithms for SIR and SIRD models which avoid optimisation techniques for the full problem and use principal techniques from applied linear algebra and applied statistics on a decoupled time-discrete problem formulation for both the SIR and the SIRD model. We start with continuous formulations of the time-continuous SIR model and the time-continuous SIRD model. Based on forward finite differences, we propose implicit time-discrete models for both model approaches. However, instead of using constant transfer coefficients as in the classical models, we allow all transfer rates in these models to be time-varying. With this proposed modification, we obtain implicit time-discrete models with time-varying transfer rates we want to identify. Given observed data for susceptible people, infectious people, recovered and dead people, we can now identify our time-varying parameters by simply solving one small linear system at each time step instead of application of optimisation techniques with respect to the full SIR or SIRD problem. Hence, we are able to avoid high-dimensional, possible non-convex problems with many local minima. If we further want the earlier parameters constant, we additionally propose an estimation based on arithmetic means, medians and standard deviations for uncertainty estimation which can readily be applied to the data. Due to current data acquisition of John Hopkins University, we test our developed parameter identification algorithms on data of the worldwide spread of COVID-19 and evaluate their performances. Finally, we discuss some future research possibilities. Methods In this section, we describe our continuous and time-discrete forward SIR and SIRD models. The ideas of these models heavily base on the pioneering works by Kermack and McKendrick [10][11][12][13] . Afterwards, we propose two algorithms for solving each inverse problem to compute time-varying transfer coefficients in these models. SIR Model We begin with describing the SIR model. Before stating equations, we give some information on assumptions of the model. After its formulation, we propose a time-discrete version of the model. 
In contrast to traditional works with constant transfer coefficients, we introduce time-varying coefficient for our parameter identification algorithm which we finally portray in this subsection. From a mathematical viewpoint, we can directly solve this inverse problem without application of regression algorithms. Instead, we apply optimisation techniques on smaller subproblems. Assumptions In its classical formulation [16][17][18] , the following assumptions should be fulfilled for application of the SIR model: • One neglects the population's age-structure, inhomogeneities with respect to spatial dependence and group behaviour. One assumes that the population consists of homogeneous pools like in simulations of different biological system 54 or chemical reactions 55 ; • In addition, one divides the population into the three homogeneous pools of susceptible people (S), infectious people (I) and recovered people (R); • Since the considered time period is small, one ignores natural births and natural and infection-related deaths within the population. Thus, the population size N is constant in time and it holds for all times t in the respective time interval; • Except from the transfer coefficient from susceptible people to infectious people which might depend on government's rulings in this classical formulation, all other transfer rate between these pools are constant. For further details on assumptions, we refer the interested reader to two papers by Brauer 56,57 . We illustrate a sketch of this model in Figure 1. Let further [t 1 ,t M ] be the respective time interval with start time t 1 and end time t M . Later, since our constant time data step amounts to one day, we consider a sequence t j M j=1 of time points regarding our time-discrete model versions. 2/14 Susceptible People (S) Figure 1. Sketch of the SIR model. We divide the people into three homogeneous pools of susceptible people (P), infectious people (I) and recovered people (R). The arrows portray flows between these pools with respective transfer rates. Continuous Formulation As an abbreviation for the time derivative of a function f , we simply write The classical continuous formulation of the SIR model reads . This means that solutions could be strongly differentiable in a classical or weakly differentiable in a Lebesgue-measurable manner 58 . Time-Discrete Formulation For abbreviation, we shortly write f j := f (t j ) for all j ∈ {1, . . . , M}. We apply finite differences such that the approximation . . , M − 1} since ∆t = 1 with respect to our data and for ease of notation. In addition, we apply an implicit scheme for forward modelling in contrast to Dehning et al. in their stochastic SIR model 19 . To state our modified time-discrete version of the SIR model (2), we relax our conditions and allow time-dependence of the transfer coefficient β . Consequently, application of finite differences leads to as our time-discrete version of (2) for all j ∈ {1, . . . , M − 1} with initial conditions S 1 > 0, I 1 > 0 and R 1 ≥ 0 as stated before. Time-Discrete Inverse Parameter Identification SIR Problem Why do we proceed with (3) although the transfer coefficient β is generally assumed to be constant in time? If β satisfied this condition, we must solve a regression problem due to for all j ∈ {1, . . . , M − 1} to obtain β and possibly use all data points. 
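The passage above motivates avoiding one large regression by identifying the transfer rates step by step from consecutive data points. Because the discretised equations are not legible in this extract, the following Python sketch assumes one plausible implicit discretisation of a standard SIR model with Δt = 1 (S_{j+1} − S_j = −β_j S_{j+1} I_{j+1}/N and R_{j+1} − R_j = γ_j I_{j+1}); the exact indexing and normalisation used by the authors may differ, so this illustrates the "divide-and-conquer" idea rather than reproducing the paper's scheme.

```python
# Per-time-step recovery of time-varying SIR rates from observed S, I, R.
# Assumed discretisation (Delta t = 1 day), stated as an illustration only:
#   S[j+1] - S[j] = -beta[j] * S[j+1] * I[j+1] / N
#   R[j+1] - R[j] =  gamma[j] * I[j+1]
import numpy as np

def identify_sir_rates(S, I, R, N):
    S, I, R = map(np.asarray, (S, I, R))
    beta = -N * (S[1:] - S[:-1]) / (S[1:] * I[1:])
    gamma = (R[1:] - R[:-1]) / I[1:]
    return beta, gamma

if __name__ == "__main__":
    # Tiny invented example: current (not cumulative) pool sizes, as produced
    # by the preprocessing step described in the text.
    N = 1_000_000
    I = np.array([100, 150, 210, 280, 350], float)
    R = np.array([0, 20, 55, 100, 160], float)
    S = N - I - R
    beta, gamma = identify_sir_rates(S, I, R, N)
    print("beta_j :", np.round(beta, 4))
    print("gamma_j:", np.round(gamma, 4))
```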
Depending on the cost function defined for this high-dimensional, probably non-convex minimization problem, this normally yields an optimisation problem with many local extrema if we take also add the other subproblems to our investigation. Since we want to avoid this, we use our transformed system (3). Preprocessing of Population Data Our input data consists of cumulative infected people for M time points. Hence, we necessarily must process these data first to develop an algorithm for our time-discrete inverse parameter identification SIR problem (6). Our procedure reads for all j ∈ {1, . . . , M}. Possible Parameter Reduction by Simple Statistical Analysis Since often the transfer coefficient β is assumed to be constant in time, we further need to process our time-varying transfer coefficient sequence β j M j=2 . For this purpose, we simply calculate the arithmetic mean and respectively the median β . With (8), we obtain the standard deviation Optimisation Procedure for Time-Varying Parametric Coefficients Since social behaviour even changes doe to severe epidemics before established rulings by governments, Chowell et al. 60 , Fisman et al. 61 , Althaus 62 and Brauer 63 suggested application of a time-varying contact rate (exponential decaying) with positive constants δ and ε which adapts to given data. For that purpose, we need to identify a proper cost function. Since we do not want to discuss optimisation techniques in details (we refer interested readers to the book of Nocedal and Wright 64 ), we simply use Nelder-Mead-algorithms 65 as implemented in GNU Octave's function FMINSEARCH 59 due to its simplicity and its ability to capture local extrema of non-differentiable functions. However, one still has to deal with problems of ending in local extrema 64 , but this is out of the scope of this work. Now, our scoring function for minimization of the desired parameters δ and ε, given data points (t j , α j ) for all j ∈ {2, . . . , M}, reads In case of Hubei(China), we additionally consider Gaussian recovering rates with real constants a, b and c. We proceed similarly to the case of exponentially decaying contact rates and we want to shortly note that an additional uncertainly quantification analysis by the Fisher-Information matrix (i.e. Hessian matrix) becomes possible by applying a two times continuously differentiable function like the l 2 -norm or mollified versions of the above used l 1 -norm. However, we relinquish it here. SIRD Model We start with a description of the SIRD model. Before giving its mathematical formulation, we report typical assumptions of the model. Afterwards, we present a time-discrete version of the model. Again, we use time-varying coefficients for our parameter identification algorithm in contrast to constant ones. Assumptions All assumptions of the SIR model apply. Additionally, we extend the pool of recovered people (R) by an additional homogeneous pool of dead people (D) and for ease of notion, we still count dead people (D) for total population size N because we estimate D N. Susceptible People (S) Infectious People (I) Recovered People (R) Dead People (D) γ · I (t) Figure 2. Sketch of the SIRD model. We divide the people into three homogeneous pools of susceptible people (P), infectious people (I), recovered people (R) and dead people (D). The arrows portray flows between these pools with respective transfer rates. A sketch of this model is given in Figure 2. 
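Returning to the optimisation procedure for the time-varying contact rate described above: the text specifies an exponentially decaying rate with positive constants δ and ε, fitted to the identified per-step rates with a Nelder–Mead search (FMINSEARCH in GNU Octave) on an l1-type score. The sketch below is a Python analogue using scipy.optimize.minimize with method="Nelder-Mead"; the parametric form delta·exp(−epsilon·t), the starting guess, and the synthetic data are assumptions chosen for illustration, not the authors' code.

```python
# Fit a decaying contact rate beta(t) ~ delta * exp(-epsilon * t) to per-step
# rates beta_j via a Nelder-Mead search on an l1 misfit (illustrative only).
import numpy as np
from scipy.optimize import minimize

def fit_decaying_rate(t, beta_j):
    def cost(params):
        delta, eps = params
        if delta <= 0 or eps <= 0:      # keep both constants positive
            return 1e9                  # large penalty instead of a hard bound
        return np.sum(np.abs(beta_j - delta * np.exp(-eps * t)))
    result = minimize(cost, x0=[beta_j[0], 0.05], method="Nelder-Mead")
    return result.x, result.fun

if __name__ == "__main__":
    # Invented synthetic rates with a known decay plus noise.
    t = np.arange(2, 60, dtype=float)
    rng = np.random.default_rng(0)
    beta_j = 0.5 * np.exp(-0.07 * t) + rng.normal(0.0, 0.01, t.size)
    (delta_hat, eps_hat), final_cost = fit_decaying_rate(t, beta_j)
    print(f"delta ~ {delta_hat:.3f}, epsilon ~ {eps_hat:.3f}, cost = {final_cost:.3f}")
```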
Continuous Formulation We have the classical continuous formulation Time-Discrete Formulation Equivalently to the case of the modified time-discrete SIR model, we generalize the constant transfer rates and allow them to be time-varying. Accordingly, this yields for our modified time-discrete SIRD model for all j ∈ {1, . . . , M − 1} with initial conditions S 1 > 0, I 1 > 0, R 1 ≥ 0 and D 1 ≥ 0. Time-Discrete Inverse Parameter Identification SIRD Problem We proceed similarly to the time-discrete inverse parameter identification SIR problem. At first, we notice for all j ∈ {1, . . . , M − 1}. We reduce (14) by (15) and get for all j ∈ {1, . . . , M − 1}. The validity of the same argument as in the time-discrete inverse parameter identification SIR problem holds. Preprocessing of Population Data Similarly to the case of the time-discrete inverse parameter identification SIR problem, we need to process the same given for all j ∈ {1, . . . , M}. SIRD Parameter Identification Algorithm We summarize our procedure in Table 2 and use GNU Octave to implement this algorithm 59 . Parameter Reduction by Simple Statistical Analysis We proceed as in the corresponding description of the time-discrete parameter identification SIR model. Optimisation Procedure for Time-Varying Parametric Coefficients Again, we apply the same procedure as for the time-discrete parameter identification SIR model. Results Here, we consider our results for Hubei (China) because the worldwide spread of COVID-19 started there. Hence, the longest timed data are available in this case 1 . In this case, the population size reads N = 59170000 as stated in our additional information. We consider a time period ranging from 22 January, 2020 (t = 1) to 30 April, 2020 (t = 100). In Figure 3, we plot cumulative cases of reported infected people, cumulative cases of reported recovered people and cumulative cases of reported dead people. Apparently, there are two jumps around day 35 and day 85 of the COVID-19 epidemic which possibly occur due to changes in Chinese reports. Especially, the last jump values are excluded for calculation of means and medians. For details, we refer to our code availability statement. In all different subplots of Figure 4, we depict our results of Hubei (China) for our proposed time-discrete parameter identification SIR algorithm. We relinquish integrating the standard deviations of arithmetic mean and median in our plots for our estimates of β . In all subplots of Figure 5, we illustrate our results of Hubei (China) for our time-discrete parameter identification SIRD algorithm. Especially, the last jump values are excluded for calculation of means and medians. For details, we refer to our code availability statement. Discussion In media, we normally see non-decreasing curves which report cumulative cases of infected, dead or recovered people. However, we note that we need those values at a specific time in our aforementioned SIR and SIRD models. Hence, the grouped plots of Figures 4 and 5 differ in general from the ones we notice in Figure 3 or in media. Regarding all transfer rates, we note that they should be non-negative for all times. However, due to changes in Chinese reports, we observe statistical outliers around day 85. Thus, we exclude those data from calculate which react sensitively on such phenomena like arithmetic means. If we closely look at the contact rates in Figures 4 and 5, we are able to detect an exponentially decaying behaviour of contact rates in time. 
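The SIRD results above rest on the same per-time-step identification, now with an additional rate governing the flow from infectious to dead people. Since the SIRD equations are likewise not legible in this extract, the sketch below assumes a direct analogue of the SIR case (recovered and dead increments each proportional to I_{j+1}, susceptible decrement proportional to S_{j+1} I_{j+1}/N, Δt = 1) and uses invented counts; the symbols beta_j, gamma_j, mu_j and the indexing are illustrative assumptions, not the paper's exact notation.

```python
# Per-time-step recovery of time-varying SIRD rates from observed S, I, R, D.
# Assumed discretisation (Delta t = 1 day), analogous to the SIR sketch above:
#   S[j+1] - S[j] = -beta[j] * S[j+1] * I[j+1] / N
#   R[j+1] - R[j] =  gamma[j] * I[j+1]
#   D[j+1] - D[j] =  mu[j]    * I[j+1]
import numpy as np

def identify_sird_rates(S, I, R, D, N):
    S, I, R, D = map(np.asarray, (S, I, R, D))
    beta = -N * (S[1:] - S[:-1]) / (S[1:] * I[1:])
    gamma = (R[1:] - R[:-1]) / I[1:]
    mu = (D[1:] - D[:-1]) / I[1:]
    return beta, gamma, mu

if __name__ == "__main__":
    N = 59_170_000                         # Hubei population size quoted above
    I = np.array([500, 900, 1600, 2800, 4500], float)   # invented counts
    R = np.array([30, 60, 110, 190, 300], float)
    D = np.array([20, 35, 60, 95, 140], float)
    S = N - I - R - D                      # N = S + I + R + D with D << N
    beta, gamma, mu = identify_sird_rates(S, I, R, D, N)
    print("beta_j :", np.round(beta, 4))
    print("gamma_j:", np.round(gamma, 4))
    print("mu_j   :", np.round(mu, 5))
```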
Hence, in accordance with the descriptions of Brauer 63 and references therein 60-62 , we can probably conclude that people change their social behaviour due to life-threatening diseases. This general trend does not only occur in data from Hubei (China), but is also observed in the analysed data in our supplementary material. However, we want to note that the contact rate heavily depends on social behaviour, and if caution eases up, this will generally lead to rising contact rates and to exponential growth of infected people again. Let us shortly remark on the recovery and death rates. Surprisingly, we only notice an approximately Gaussian behaviour in the recovery rates in Hubei (China) compared to all other countries in our supplementary material. Hence, we recommend staying with arithmetic means and medians for these transfer rates regarding their biological meaning 14,63 . For all data, we have to keep in mind that these are reported cases and nobody really knows the real numbers of infected, dead and recovered people due to COVID-19. Hence, we state that the application of such models and of our parameter identification algorithms relies on sound data, which need to be provided. Due to this short time period of observed data, it is reasonable to apply SIR or SIRD models. Furthermore, as already pointed out in our discussion, time-decreasing contact rates can also be assumed at the beginning of such epidemics in general. In addition to these observations, we stress that our proposed algorithms are not only applicable to the current COVID-19 epidemic, but to all epidemics which fulfill the assumptions of SIR and SIRD models. Therefore, it is a reasonable contribution to report. As future research projects, we might consider different adaptations such as statistical 66 or stochastic approaches based on basis functions 67 . Pollicott et al. applied a deterministic continuous approach through multiple differentiation and interpolation by splines and trigonometric functions for the time-varying contact rate function α (t) 68 . Although it is out of the scope of this work, it might be an interesting point to integrate time delays in our parameter identification approach without applying a full optimization process, as given by Chen et al. 69 . In addition to that, one could evaluate extensions to fractional differential operators in time [70][71][72] . As a concluding remark, one can apply PDE-ODE models to epidemiology 73 to introduce spatial or even further structures.
4,349.8
2020-05-11T00:00:00.000
[ "Mathematics" ]
COVID-19 Diagnosis: A Review of Rapid Antigen, RT-PCR and Artificial Intelligence Methods As of 27 December 2021, SARS-CoV-2 has infected over 278 million persons and caused 5.3 million deaths. Since the outbreak of COVID-19, different methods, from medical to artificial intelligence, have been used for its detection, diagnosis, and surveillance. Meanwhile, fast and efficient point-of-care (POC) testing and self-testing kits have become necessary in the fight against COVID-19 and to assist healthcare personnel and governments curb the spread of the virus. This paper presents a review of the various types of COVID-19 detection methods, diagnostic technologies, and surveillance approaches that have been used or proposed. The review provided in this article should be beneficial to researchers in this field and health policymakers at large. Introduction The recent incident of the novel coronavirus (SARS-CoV-2) in Wuhan, China, and its spread globally has impacted the world's economy. So far, the virus has claimed more than 5 million lives and infected over 278 million people worldwide as of 27 December 2021 [1]. The emergence of different variants shows that the fight against the deadly and infectious viruses is far from over [1]. It also shows how swiftly new infectious diseases might emerge and spread while wrecking global economic havoc. The viral aetiology of coronavirus disease 2019 (COVID-19) is SARS-CoV-2, which has unexpectedly increased the need for clinical knowledge and information, epidemiological investigations, and quick diagnostic technology. It is well known that quick, efficient, and ultrasensitive detection of SARS-CoV-2 is crucial for epidemic prevention and containment [2,3]. As a result, there has been worldwide demand for knowledge on SARS-CoV-2 diagnostic and surveillance technologies. With accurate diagnostics in place, health workers may decide where and how to allocate resources and efforts to effectively isolate and treat patients. This mechanism can slow the spread of infectious diseases and minimise mortality. Thus, all of the tools required for the rapid detection of SARS-CoV-2 are extremely useful to frontline healthcare workers and policymakers working together to alleviate the disease's devastation and limit its spread. Since the emergence of COVID-19, many methodologies have been used worldwide to identify and diagnose the virus. The approaches reported include whole-genome sequencing, electron microscopy, and different computed tomography (CT) imaging based methods, which were initially employed to screen for and detect SARS-CoV-2 [4,5]. Meanwhile, the availability of known diagnostic tools for SARS-CoV-1 (from the 2002 SARS outbreak) was helpful in the diagnosis of COVID-19. These tools are currently performing critical roles in detecting and controlling the spread of COVID-19. For example, the transmission electron microscopy (TEM) was employed to determine the morphology of the SARS-CoV-2 virus [6]. The virus's identification was confirmed via genome sequencing [7,8], and the sequencing data was used in the development of primers and probes for polymerase chain reaction (PCR) [9]. It is worth noting that whereas the SARS-CoV-1 identification took about five months in 2002, the same procedures were employed to identify SARS-CoV-2 within a few days [10]. Recently, artificial intelligence (AI) has shown great potential in detecting diverse diseases [11][12][13]. 
Some AI-based methods have been proposed for COVID-19 detection, tracking, and treatment [14][15][16][17]. Deep learning, a subfield of AI that is based on artificial neural networks [18], has been applied to learn and analyze the lung regions using CT images and chest radiographs (X-ray) in order to detect COVID-19 [19,20]. This study presents a detailed review of the various types of detection methods, diagnostic technologies, and surveillance approaches that have been used or recently proposed in the fight against COVID-19. Holistically, this will aid decision making by researchers and policymakers. The rest of this paper is structured as follows: The current and emerging COVID-19 diagnostic tests are presented in Section 2. Section 3 discusses real-time reverse transcriptase-polymerase chain reaction, and rapid antigen detection test is presented in Section 4. Section 5 presented a detailed application of AI techniques for COVID-19. Section 6 highlights some contributions of AI in the fight against COVID-19. Finally, the paper is concluded in Section 7. Current and Emerging COVID-19 Diagnostic Tests Nucleic acid tests and computer tomography (CT) scans were earlier used to diagnose COVID-19. At the start of the COVID-19 epidemic in China, syndromic testing using CT scans was predominantly employed to diagnose and examine the virus [21]. However, molecular technology is more suitable for detecting the virus than syndromic examination and CT scans because it can target and detect specific infections. However, due to the need for a more effective and real-time diagnosis of COVID-19, researchers and scientists have created several tools summarised in Table 1. These technologies are divided into three categories: nucleic acid testing, protein testing, and point-of-care (POC) testing. Figure 1 also depicts the stages of development of diagnostic tests used thus far. As can be observed, most of the approaches are still in the proof-of-concept stage. The majority of the proposed diagnostics technologies are likely to enter the commercial phase and be applied for disease detection in the future. Notably, it remains unpredictable if the vast knowledge gathered from the various variants seen so far can be generalised for investigating future variants of the COVID-19 virus. For example, to mitigate the widespread of the new variant (Omicron) that was first confirmed on 9 November 2021 [22], testing will be paramount. And to achieve this, less expensive, sensitive, user-friendly, and point-of-care kits will be required. Such kits will ultimately reduce a surge in cases as people can self-test and isolate themselves accordingly. Real-Time Reverse Transcriptase-Polymerase Chain Reaction In recent years, nucleic acid detection-based techniques have become a reliable and rapid approach for detecting viruses. Precisely, the polymerase chain reaction (PCR) has gained popularity and is considered the gold standard for diagnosing some variants of viruses and is characterised by rapid detection, high sensitivity and specificity [33]. Based on those above, real-time reverse transcriptase-PCR (RT-PCR) has gained much interest in detecting SARS-CoV-2 due to its specific and easy qualitative assay. To achieve its aim, the RT-PCR involves the reverse transcription of the virus RNA into complementary DNA (cDNA) strands, followed by the amplification of certain regions of the cDNA. The amplified cDNA of the virus is targeted, and a section of the SARS-CoV-2 genome is amplified via PCR [34]. 
The Center for Disease Control and Prevention (CDC) presented a recent study using the Fulgent COVID-19 RT-PCR test to detect COVID-19 [35]. A total of 2039 patients admitted to the emergency department in a California hospital between June and August of 2020 participated in the study, and the RT-PCR test obtained a specificity above 99%. Another study used the RT-PCR test to detect COVID-19 in samples from 323 patients, and metrics such as sensitivity and specificity, with confidence intervals (CI) that indicate the test results' statistical significance, were used in the study [36]. The RT-PCR obtained a sensitivity of 94.7% (95% CI 74.0 to 99.9%) and a specificity of 100% (95% CI 94.9 to 100%). To date, the RT-PCR test is the most commonly used test for detecting COVID-19, and several studies have stated its robustness and reliability compared to other testing methods [37][38][39][40]. The SARS-CoV-2 RNA is extracted and diluted with a master mix containing both forward and reverse primers, nuclease-free water, a fluorophore-quencher probe and a reaction mix (magnesium, transcriptase, nucleotides, polymerase, and additives). The extracted viral RNA and master mix are amplified in a PCR thermocycler, and the respective incubation parameters are set to run the assay. During the assay, the cleavage of the fluorophore-quencher probe results in a fluorescent signal detected by the thermocycler and amplification is recorded in real-time. The primary method of this diagnostic test involves the collection of samples from the upper respiratory of a person via nasal and oropharyngeal swabs. Notably, swabs from the nose and throat can produce false-negative results if not done correctly. Hence, the person performing the test must be familiar with the upper respiratory anatomy [41]. False-negative tests may also occur due to mutation in the real-time RT-PCR primer and probe-target segments of the virus genome [33]. Others may include but are not limited to sample cross-talk, and hitches with software may reduce the sensitivity of PCR reaction. More so, the commercially available RT-PCR kits are only operated in well-equipped laboratories due to the specialised tools and instruments used for the reactions coupled with safety reasons, thereby limiting the number of tests that can be performed daily [42]. Several nucleic acid amplification tests have been authorized by the U.S. Food and Drug Administration (FDA) [43]. These kits have been used to analyze samples from the nose, throat, bronchoalveolar lavage and sputum, and the sensitivity of some are higher than RT-PCR. For example, loop-mediated isothermal amplification (LAMP) can rapidly amplify the viral RNA at a single temperature whilst providing a reliable diagnosis [44]. Some of these tests can be self-administered at home or in laboratories. Indeed, antigen tests targeting various virus proteins could be vital to support the currently available RT-PCR kits and speed up detection. Rapid Antigen Detection Test A rapid antigen test (RAT), also known as an antigen rapid test (ART), or a rapid antigen detection test (RADT), or simply a rapid test, is a type of rapid diagnostic test that immediately identifies the absence or presence of an antigen for point-of-care testing. Rapid tests are a form of lateral flow device that detects antigens, as opposed to other medical tests that detect nucleic acid (nucleic acid tests) or antibodies (antibody tests), of either point-of-care types or laboratory. 
It is widely used to detect SARS-CoV-2, the virus that causes COVID-19. Rapid tests often have fast turnaround times of less than 5 to 30 min, involve minimal training or equipment, offer considerable cost benefits, and may be employed in decentralised testing; they therefore have the potential to boost testing procedures [45]. Meanwhile, Gremmels et al. [45] studied the potential of the PanbioTM COVID-19 antigen rapid test and its diagnostic performance, which is yet to be fully confirmed. According to their results, the PanbioTM COVID-19 Ag rapid test successfully detects SARS-CoV-2-infected people with a significant viral load in nasopharyngeal samples in a population of participants dwelling in a community with mild respiratory tract infection, and this test has a 100 percent specificity. Though its sensitivity is lower than that of RT-qPCR, all false-negative rapid test findings were attributed to low viral loads in nasopharyngeal samples. The study concluded that because the PanbioTM COVID-19 Ag rapid test has a lower sensitivity, RT-qPCR would be the appropriate diagnostic test for clinical applications in a hospital environment. However, for community monitoring of SARS-CoV-2, this rapid antigen test accurately and quickly identifies persons with a high risk of continued transmission. In the future, it might be an essential tool in our testing strategy to prevent SARS-CoV-2 transmission. In another study, Torres et al. [46] examined the performance of the PanbioTM COVID-19 Ag rapid test device to identify and detect asymptomatic SARS-CoV-2-infected people. The study discovered that the Panbio test has limited sensitivity in asymptomatic close contacts of COVID-19 patients, especially non-household contacts. Notably, the authors believe that determining the best time to collect upper respiratory tract samples in this group is critical for pinpointing test sensitivity [46]. However, a low-sensitivity test such as this is not suitable for detecting COVID-19, considering the impact a false-negative test result can cause. Furthermore, Mak et al. [47] studied the specificity and sensitivity of SARS-CoV-2 detection using a rapid antigen detection kit from the WHO Emergency Use List. The findings from the study revealed that the rapid antigen detection kit was 100 times less sensitive than RT-PCR. The clinical sensitivity of the rapid antigen detection test for identifying specimens from COVID-19 patients was 68.6%. Merino et al. [48] also conducted a multicenter study to evaluate the PanbioTM COVID-19 rapid antigen-detection test for SARS-CoV-2 infection diagnosis. Machine Learning Machine learning (ML) has shown high performance in several image processing applications such as image analysis [49], image segmentation and classification [50], and medical imaging [11,51]. With the recent outbreak of the COVID-19 pandemic, ML offers great potential for accurate and fast detection of COVID-19 from computed tomography (CT) and chest radiograph (CXR) images. As we learn more about the natural history of COVID-19, it has become apparent that the disease progresses in stages. The need to pre-empt deterioration and personalise preventative interventions has emerged as a priority [52]. Currently, imaging research has focused on diagnosis based on appearances once the disease has progressed.
Early detection of the infection, when the initiation of appropriate therapy is likely to be most effective, would be more helpful. CT also has a well-established role as a tool to detect several diseases, particularly when combined with clinical data. This finding is essential in COVID-19 detection; because the primary concern for healthcare providers is becoming overwhelmed by patients requiring intensive care and ventilatory support, accurate prognostication is a more pressing clinical problem than diagnosis [53]. For COVID-19, training a model to predict outcomes such as the need for mechanical ventilation, intensive care unit admission, and mortality could have a considerable clinical effect [54][55][56]. Since the pandemic, several studies have been conducted on how ML can be used to detect and diagnose COVID-19. The systematic review in [57] critically examined the methodologies used in 29 studies that focused on ML and COVID-19. Of these 29 studies, seven were based on conventional ML algorithms, 20 on deep learning techniques, and two used deep learning and traditional ML techniques together. Most of the studies, i.e., 23, address the detection of COVID-19, while six focused on building systems for prognostication. The findings in Roberts et al. [57] showed that none of the developed models in the 29 papers has any potential for clinical use because of their methodology's underlying biases and flaws. An ML model was developed in [58] for screening potential neutralising antibodies discovered for the COVID-19 virus. The model was developed to check synthetic antibodies to detect antibodies that potentially inhibit SARS-CoV-2. The study used 14 different types of viruses and developed models using graph featurisation with different ML algorithms such as support vector machine, random forest, and logistic regression. The model's out-of-class predictions for covid and influenza were 100% and 84.61%, respectively; this shows the model's robustness in neutralising predicted antibodies for the SARS-CoV-2. To understand the spread of the virus in the top five affected countries (USA, India, Brazil, Peru, and Russia) as of 10 July 2020, Hazarika and Gupta [59] proposed a new approach based on a random vector functional link (RVFL) network. RVFL was hybridised with a wavelet-coupled RVFL network and 1-D discrete wavelet transform. Hazarika and Gupta [59] concluded that the wavelet-based hybrid models can be useful in the fight against COVID-19. Elaziz et al. [60] introduced an ML technique that uses fractional multichannel exponent moments (FrMEMs) to extract features from X-ray images. The technique classified chest X-ray images into two classes, i.e., COVID 19 patients and non-COVID-19 patients. The proposed method achieved an accuracy of 96.09% in predicting the COVID 19 class and 98.09% in the non-COVID-19 class. Meanwhile, India is among the countries worse hit by the COVID-19 pandemic [61]. During the pandemic surge in that country, an ML forecasting model was proposed in [62] to assist in predicting the spread of the virus. The authors in Sujath et al. [62] used multilayer perceptron, linear regression, and vector autoregression on datasets obtained from Kaggle to understand the fast-spreading virus across the country, i.e., confirmed, death, and recovered cases across India. The review in Kushwaha et al. [63] discussed the importance of ML in the fight against the COVID-19 pandemic. 
The research examined several papers that addressed how different ML models could assist in detecting the virus. The review concluded that ML could be used for COVID-19 diagnosis, precise and personalised patient treatment, patient behaviour analysis, and future symptom prediction. A hybrid ML model based on a multilayer perceptron algorithm and an adaptive network-based fuzzy inference system was used to predict patient mortality rates [64]. The dataset used was collected in Hungary, and the model validation was performed for nine days with good results. ML-based CT analysis has been shown to be a promising screening medium for COVID-19 and has outperformed viral real-time PCR testing [65,66]. Deep Learning Machine learning powers many aspects of societal applications, from social media networks to consumer products and devices supported by cameras and smartphones [67]. However, conventional machine learning techniques are not well-suited for some modern societal applications [68]. The advent of deep learning, a subset of machine learning, has found applications in many areas, such as object and speech detection, drug discovery and genomics, and natural language processing systems [67,69]. Since the outbreak of COVID-19, deep learning algorithms have been widely used to understand and forecast the disease pattern [70,71]. Several attempts have been made to utilise these algorithms to estimate and forecast the future spread of the disease [72]. A review by [70] on six deep learning algorithms used for disease forecasting shows that popular algorithms such as the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Bi-directional LSTM (Bi-LSTM), Variational Auto Encoder (VAE), and Gated Recurrent Units (GRUs) have been used on time-series data for predicting newly affected and recovered cases of COVID-19 (see Figure 2). These models are based on time-sensitive (temporal) analysis and possess attractive modelling features. All these algorithms are based on the RNN [73,74]. Meanwhile, RNNs have limited memory and are not suited for long time dependencies in historical data. LSTMs were designed to mitigate many of the dependency issues of RNNs, and they contain three gates for controlling information flow: input, forget, and output gates. Bi-LSTM enhances the capabilities of the LSTM and retains the option of being reconstructed to a backward context [73]. The GRU models are an alternative to LSTM. GRUs were created to optimise the performance of LSTM models, and VAE models are based on generative modelling and extend the capabilities of RNNs. Other works in the literature have explored convolutional neural networks (CNNs) [75][76][77]. A CNN is a type of artificial neural network that has been widely used for medical imaging analysis. It has multiple layers to process data with less computational power and higher accuracy. A pre-trained CNN was used to classify X-ray images to uncover healthy and non-healthy chest scans [78]. These pre-trained models include ResNet18, ResNet50, ResNet101, VGG16, and VGG19. The ResNet50 model was trained alongside the support vector machine, which was found to have achieved higher accuracy. Other CNN variants have been created to join the fight against COVID-19, including CoVNet-19 [79], Res-CovNet [80], and MTU-COVNet [81]. COVID-19 Datasets Data is essential in all machine learning models and applications [82]. If data is made publicly available, ML algorithms can help in reducing the spread of COVID-19.
Therefore, the first step in designing a COVID-19 detection model is data collection. Having ML-based COVID-19 diagnosis from X-rays and CT scans will lower the challenge on the short supplies of reverse transcriptase-polymerase chain reaction (RT-PCR) test kits. Epidemiological and statistical analysis of reported covid cases can also be useful in understanding the relationship between virus transmission and human mobility. Social media data can also provide socio-economic and sentiment analysis for governments and policymakers in the current pandemic. Therefore, data is essential for the effective implementation of models to fight and reduce the spread of the virus. Most available datasets were private at the beginning of the pandemic due to privacy issues. However, several of these datasets were recently made open to researchers [83]. Meanwhile, most researchers have focused on using CT image datasets for their diagnosis due to the time consumed and low sensitivity associated with the standard COVID-19 diagnosis methods, such as the RT-PCR and CXR [84][85][86][87][88]. This has made both CT and chest X-ray diagnostic imaging modalities quickly produce large volumes of data on COVID-19, which has enabled the development of machine learning models for detecting and diagnosing the virus. At the start of the COVID-19 pandemic, radiologists were extremely busy with little bandwidth to read many CT scans timely. Also, radiologists in developing countries may not be well equipped to recognise COVID-19 from CT scans since the disease was relatively new at the time. Therefore, in order to accurately detect the virus, several deep learning methods were developed to screen COVID-19 from CTs [65,[89][90][91]. COVID-CT, an opensourced dataset, was introduced in [92]. The dataset comprises 349 positive COVID-19 CT images and 463 negative COVID-19 CTs. The dataset was extracted from reported CT images in 760 bioRxiv and medRxiv preprints about COVID-19. Experiments on the dataset showed that COVID-CT is efficient in developing an AI-enabled model to diagnose COVID-19. The authors [92] further used the dataset to build self-supervised and multi-task learning, which achieved an area under the receiver operating curve (AUC) of 0.98, F1-score of 0.90, and accuracy of 0.89. Table 2 summarises some notable COVID-19 datasets in the literature, the techniques used in building the ML model, and the obtained results. The metrics used in evaluating the performance includes accuracy (ACC), sensitivity (Sen), and specificity (Spe). Afshar et al. [93] proposed COVID-CT-MD, a new COVID-19 dataset. The dataset also consisted of healthy and participants infected by Community-Acquired Pneumonia (CAP). The COVID-CT-MD consists of 169 chest CT scans of positive patients, 60 patients with CAP, and 76 patients that do not have either covid or CAP. The results obtained from COVID-CT-MD showed that the dataset could advance the application of ML and DL in diagnosing COVID. Ref. [85] developed an open-source SARS-CoV-2 CT scan dataset to encourage the development of AI techniques for detecting COVID-19 through the analysis of their CT scan. SARS-CoV-2 CT contains 2482 CT scans, with 1252 positive and 1230 non-infected cases. The dataset is made up of actual patient CT scans collected in Sao Paulo, Brazil. Using the eXplainable Deep Learning technique (xDNN), Angelo and Soares [85] achieved an F1-score of 97.31%, which is promising. 
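Datasets such as COVID-CT, COVID-CT-MD and SARS-CoV-2 CT are typically consumed by transfer-learning pipelines of the kind reviewed in this section. The following PyTorch sketch is purely illustrative: the dummy tensors stand in for preprocessed CT slices, a recent torchvision weights API is assumed, and no claim is made about the exact architectures or training setups of the cited studies.

```python
# Illustrative transfer-learning sketch for binary COVID-19 / non-COVID-19 CT classification.
import torch
import torch.nn as nn
from torchvision import models

# Dummy batch standing in for preprocessed CT slices (3 x 224 x 224); a real pipeline would
# load labelled images from a dataset such as those described above via a DataLoader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model = models.resnet50(weights=None)           # ImageNet-pretrained weights are typically used
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the head with two output classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for step in range(3):                            # a real run iterates over the dataset for many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```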
Furthermore, in light of understanding how chest X-ray images can assist in diagnosing COVID-19, Hall et al. [94] obtained 320 chest X-rays of bacterial and viral pneumonia and 135 chest X-rays of positive COVID-19 patients. These datasets were used with a pre-trained deep convolutional neural network (DCNN). The model achieved an accuracy of 91.24% and an AUC of 0.94. Another notable application of ML in the COVID-19 era is contact tracing, which has yielded excellent results [95]. Notable Contributions of AI in the Fight against COVID-19 6.1. AI for COVID-19 Tracking and Dashboarding Since the outbreak of COVID-19, there have been concerted efforts towards tracking and predicting its debilitating effect across many nations [96][97][98]. These unified efforts have led AI researchers to utilise predictive modelling for forecasting actual and expected spread and reporting on open data dashboards, thereby supporting the efforts of healthcare professionals. One of the foremost real-time dashboards used for COVID-19 tracking was developed by the Center for Systems Science and Engineering (CSSE) at the John Hopkins University [99]. The CSSE platform has effectively tracked recoveries, death, and new cases worldwide. This platform aimed to provide stakeholders, such as public authorities, researchers, and the broader public, with an interactive interface to track the virus in real-time [100]. Similar platforms have emerged to unify monitoring and prediction efforts. These dashboards include the Center for Disease Control and Prevention (CDC), COVID-19 Data Tracker, Microsoft Bing's COVID-19 tracker dashboard, the BBC, New York Times, HealthMap, and Upcode. Other notable platforms include nCoV2019, 1point3arces, China's Baidu, South Korea's KSIC, Time's Coronavirus Map, NPR, and Worldmapper [101]. Countries within the Global South have intensified efforts to create dashboards for monitoring and predicting COVID-19 [102][103][104]. The African Union Centre for Disease Control, a public health agency for member states, created the African CDC Dashboard to provide updates on the COVID-19 crisis within the region [105]. Another popular example is the COVID-19 ZA South Africa Dashboard, developed at the University of Pretoria by the Data Science for Social Impact Research Group (DSFSI). In South America, especially in Panama, a dashboard termed the COVID-19 Open data was developed to track and predict cases [106]. Saudi Arabia launched its own dashboard in the Middle East to enable public authorities to monitor and combat COVID-19 cases. Table 3 shows the summary of COVID-19 dashboards discussed in this research, and they are classified according to the source, name of dashboard, country, its purpose, coverage, and accessible medium. However, this list is not exhaustive. AI for COVID-19 Diagnosis and Forecasting Recently, researchers have intensified efforts to combat the threats posed by COVID-19 using different techniques. Several AI initiatives are in continuous development to detect COVID-19 infections, assisting healthcare professionals. A study by [113] used the Random Forest (RF) algorithm to extract eleven key blood indices to accurately identify traces of the COVID-19 virus on patients' blood test data. The study showed that the RF algorithm extracted the features, enabling the predictive model to achieve an accuracy of 96.7%. Similarly, Tang et al. [114] applied the RF algorithm on chest CT images and identified features of COVID-19. The study reported an accuracy of 87.5% and an AUC of 91%. 
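A minimal sketch of this kind of feature-based pipeline is given below: a random forest is trained on a tabular feature matrix and evaluated with accuracy and ROC AUC, mirroring the metrics reported above. The eleven features and the labels are synthetic placeholders, not the blood indices or CT features used in the cited studies.

```python
# Illustrative random-forest classification on tabular features (e.g., blood indices or
# features extracted from CT); the data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 11))            # 11 features, echoing the "eleven key blood indices"
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("ROC AUC :", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```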
Yan et al. [115] proposed a method to detect COVID-19 using the extreme gradient boosting (XGBoost) classifier. The XGBoost classifier was trained with samples from 485 infected patients, and it achieved excellent performance. In another research, Bertsimas et al. [116] used the XGBoost classifier to predict the mortality of COVID-19 patients. Furthermore, Wang et al. [89] proposed a deep convolutional neural network approach called COVID-Net to diagnose COVID-19 from radiography images. Sun et al. [117] applied the Support Vector Machine to predict severe symptoms in COVID-19 patients using four clinical indicators. The study reported an accuracy of 77.5% and a specificity of 78.4%. Chimmula and Zhang [71] applied the Long Short Term Memory (LSTM) algorithm, a type of recurrent neural network, to forecast COVID-19 prevalence. The proposed method achieved an accuracy of 93.4% on the test set. Other studies have also focused on forecasting COVID-19 prevalence. For example, Chintalapudi et al. [118] and Gupta et al. [119] applied the Auto-Regressive Integrated Moving Average (ARIMA), a time-series method, to forecast the number of confirmed cases. Ref. [120] used a modified stacked auto-encoder method to forecast the prevalence of confirmed cases in China. A summary of articles reviewing COVID-19 diagnosis and forecasting can be found in Table 4. AI for the Treatment of COVID-19 AI is changing the landscape of many disciplines, particularly the pharmaceutical industry [121][122][123]. Through clinical trials, AI has found applications in the area of drug discovery even prior to the existence of COVID-19. Anecdotal evidence revealed that drug discovery and development are capital-intensive processes that typically cost billions of US Dollars and take about twelve years on a typical average [124][125][126]. In itself, drug discovery and development involve target selection and validation, screening of compounds identified from molecular libraries, preclinical studies, and drug candidates, which must pass clinical trials by being administered to patients (See Figure 3). Recently, this landscape has been changing with the exploration of AI to the voluminous data produced in the genomics field [121,125]. Arshadi et al. [127] outlined three approaches of AI to drug discovery: protein-based, RNA-based, and generative methods. The study showed that AI was useful in identifying unique drug and disease relationships. Furthermore, the study reviewed Benevolent AI, a UK-based organisation that integrated biomedical data from structured and unstructured sources through its AI-knowledge graph. Other organisations that have intensified efforts in using AI for COVID-19 drug discovery include Innoplexus through assessing the capability of Hydroxychloroquine and Remdesivir in the treatment of COVID-19 [128]. Similarly, Deargen and Gero used AI techniques to recommend atazanavir and niclosamidenitazoxanide, respectively, for treating the virus [128][129][130]. In a study that utilised AI for drug screening, Delijewski and Haneczok [131] applied a supervised machine learning model based on gradient-boosting, an ensemble learning technique to identify zafirlukast as the best-repurposed candidate drug in the fight against COVID-19. The study utilised the Food and Drug Administration (FDA) set of approved drugs as the dataset, consisting of approximately 290,000 negative and 405 active molecules of COVID-19 3CL pro inhibitors, and concluded that the ML algorithm was helpful in the drug identification. 
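As a rough illustration of such a gradient-boosting screen, the sketch below trains scikit-learn's GradientBoostingClassifier on simulated binary molecular fingerprints and reports cross-validated ROC AUC. The fingerprints, class balance and labels are placeholders (random bits give chance-level AUC); a real screen would derive the fingerprints from the structures of the screened compounds.

```python
# Rough sketch of a gradient-boosting screen over binary molecular fingerprints.
# Fingerprints and labels are simulated and carry no chemical meaning.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_active, n_inactive, n_bits = 100, 900, 128
X = rng.integers(0, 2, size=(n_active + n_inactive, n_bits))
y = np.r_[np.ones(n_active), np.zeros(n_inactive)]   # 1 = active inhibitor (toy label)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
scores = cross_val_score(model, X, y, cv=3, scoring="roc_auc")
print("cross-validated ROC AUC:", scores.round(3))
```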
Meanwhile, Pham et al. [132] proposed a graph-based neural network called DeepCE, a technique used for predicting chemical-induced gene expression profiles from chemical and biological objects. The study achieved excellent performance on the L1000 dataset and indicated that it could also be helpful for phenotype-based drug screening. In a pathogenesis study used to find possible drug candidates, Kabra and Singh [133] used ML algorithms in a high-dimensional nucleotide sequence dataset for selecting and identifying peptides against a strain of the COVID-19 virus. The dataset consists of 2765 sequences of COVID-19 patients from different countries. The study concluded that the ML algorithms obtained excellent performance and opened a new dimension in designing and generating peptides with desired targets. Meanwhile, Jin et al. [134] proposed a drug-target interaction using a deep learning architecture, ComboNet, to predict whether a drug is likely to bind to a biological target. ComboNet consists of a graph convolutional neural network that learns a molecule representation and a linear function that learns antiviral activity and synergies in biological targets. According to the study, ComboNet performed very well despite limited drug combination training data. AI for COVID-19 Surveillance Many AI techniques have been used to build surveillance tools that predict the impact of COVID-19 [135][136][137]. As represented in Figure 4, these tools can help towards pandemic combating strategies, which can be difficult if manual methods are applied. According to Arora et al. [138], these tools incorporate several features, such as location tracking, travel data, epidemiological and behavioural patterns, to build reliable surveillance toolkits. Although these tools have been effective in reducing the spread of COVID-19, they have been widely criticised for privacy infringements. Different machine learning models have been used to curb the spread of infection in several countries. For example, the Taiwan authorities implemented AI-based health checks for airline travellers who had visited Wuhan, China, after the outbreak was reported [140]. Taiwan attributed the low number of death cases to the AI surveillance system. Before the COVID-19 pandemic, many Chinese cities had surveillance security cameras linked to AI-based facial recognition systems [141]. When the pandemic started, these technologies were repurposed with thermal imaging for screening citizens with high temperatures. Furthermore, South Korean CDC deployed its contact-tracing system, known as COVID-19 Smart Management System, to trace the movement of individuals with COVID-19 [142]. The system incorporated security footage, credit card records, and global positioning system (GPS) data from cell phones. Meanwhile, a US-based firm, Swedish Health Services, developed a platform for healthcare professionals to track COVID-19 cases in hospitals [139]. The app aimed to use the tracking information in allocating healthcare resources and knowing the facilities' status. In India, a local government used geo-mapping of quarantine locations and CCTV recordings to track possible COVID-19 patients [143]. Similar technologies have been used in Israel, Argentina, and Morocco [143,144]. Discussion and Conclusions The health and economic impact of the COVID-19 pandemic has been severe worldwide. Numerous researchers have proposed several COVID-19 diagnosis techniques. 
Generally, these techniques are based on antibody tests that detect the presence of proteins the body produces in response to a previous infection, molecular tests used for detecting viral genomic material, and CT tests that examine a person's lungs. Among these diagnostic techniques, the RT-PCR test that detects various regions of the SARS-CoV-2 genome is the gold standard for diagnosing COVID-19. However, testing tools such as RT-PCR kits can become scarce during pandemic emergencies. Hence, it is vital to have several testing options. Also, the RT-PCR testing technique is expensive, and many low-income countries cannot afford sufficient testing of a large part of their population. Meanwhile, a comprehensive review of COVID-19 diagnostic methods has been conducted in this research. The study covered current and emerging diagnostic tests, including RT-PCR, rapid antigen detection, and AI-based methods. Also, this research discussed other areas where AI has been applied to curb the spread of COVID-19, such as surveillance, tracking and dashboarding, forecasting, and treatment. Furthermore, it is necessary to state that different methods are required to control and prevent the spread of COVID-19 effectively in addition to clinical diagnosis. While highly accurate and sensitive tests are needed in the fight against COVID-19, it is also necessary to have rapid point-of-care and easy-to-use self-testing kits. Data Availability Statement: The data presented in this study are available in the article. Conflicts of Interest: The authors declare that there are no conflicts of interest.
7,597.8
2022-04-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Survival of Radioresistant Bacteria on Europa's Surface after Pulse Ejection of Subsurface Ocean Water We briefly present preliminary results of our study of radioresistant bacteria in a low-temperature, low-pressure and high-radiation environment and hypothesize the ability of microorganisms to survive extraterrestrial high-radiation environments, such as the icy surface of Jupiter's moon, Europa. In this study, samples containing a strain of Deinococcus radiodurans VKM B-1422T embedded into a simulated version of Europa's ice were put under extreme environmental (−130 °C, 0.01 mbar) and radiation conditions using a specially designed experimental vacuum chamber. The samples were irradiated with 5, 10, 50, and 100 kGy doses and subsequently studied for residual viable cells. We estimate the limit of the accumulated dose that viable cells in those conditions could withstand at 50 kGy. Combining our numerical modelling of the accumulated dose in ice with observations of water eruption events on Europa, we hypothesize that in the case of such events, it is possible that putative extraterrestrial organisms might retain viability in a dormant state for up to 10,000 years, and could be sampled and studied by future probe missions. Introduction Jupiter's satellite, Europa, is believed to have a subsurface ocean that might be a potential environment for the existence of extraterrestrial life [1]. Astrobiological studies concerning environmental factors and potential habitats [2], and proposing sampling and experimental strategies [3,4], have been published recently. As the high radiation and low temperature in this case are believed to be limiting factors of biological activity, thus constraining possible habitats to the bottom of the ice sheet, the seafloor, and the ocean [2], it seems crucial to study the effect of these factors on living organisms, simulating the harsh environment of Europa. Observational data from the Hubble Space Telescope (HST) provide possible evidence for water plumes being ejected from the ocean to space and permanently renewing the icy surface of Europa [5,6]. The ocean is hypothesized to be habitable. In this case, hypothetical microorganisms could be ejected from the ocean and frozen in the surface ice layer during a water eruption event. These potential life forms might remain in a dormant state for an undetermined extended period of time at a very low temperature in the surface ice layer. The viability of microorganisms after millions of years of cryoconservation should not be challenging per se, because living cells are found in ancient permafrost rocks that have not melted for several Myr (e.g., [7][8][9]), but Europa's orbit being placed in the radiation belt of Jupiter means the moon's surface is subject to high doses of radiation [10]. This radiation is sufficient to sterilize the surface ice layer; however, our modelling results show that this effect sharply decreases with depth, mainly as a result of the highly effective energy loss of MeV energy range electrons and ions in the first centimeters of ice. The determination of the survival time and depth limits is the main aim of this study. We performed irradiation (with accelerated electrons) of Deinococcus radiodurans bacteria embedded into a model of Europa's ice under simulated temperature and pressure conditions of Europa. Vacuum Chamber A special vacuum chamber was developed for modelling the extreme irradiation of icy samples with bacteria under low temperature and low atmospheric pressure (−130 °C, 0.01 mbar).
The chamber is a cylindrical stainless steel tank covered with a thin Al film (100 µm thickness). High energy electrons pass through the film into the vacuum chamber and bombard samples placed on the bottom surface of the chamber. A number of cylinders were installed inside the chamber to support the film against external atmospheric pressure. The chamber was directly connected to a liquid nitrogen tank, which was placed below the chamber. The bottom part of the sample chamber was cooled during the irradiation run by several "cold copper fingers" loaded into liquid nitrogen. The chamber was permanently pumped down to 0.01 mbar during the experimental run. The temperature on the bottom surface of the chamber was monitored with a thermocouple. Samples were put on the bottom of the chamber and had a thickness of about 1 mm, so that the irradiation of the sample was close to uniform. The scheme of the chamber is presented in Figure 1. Accelerator Irradiation was produced by the electron accelerator with electron energy in the range of 0.4-1.0 MeV and beam intensity in the range of 0-3 mA. The electron beam exit window has a length of 450 mm and a width of 20 mm. The vacuum chamber on the mobile platform was placed under the exit window. The platform then moved forward and backward to obtain uniform irradiation of the whole area of the samples (see Figure 2).
Samples Preparation and Analysis Strain Deinococcus radiodurans VKM B-1422 T was the object of study. A solution of salts modeling the composition of Europa's ice was used for suspension of bacterial cells. There are no direct measurements of Europa's ice composition, but several studies assume Mg 2+ , Na + , and SO 4 2− to be the dominant ions within it [11][12][13]. We follow the composition described by the authors of [13] to prepare the model ice samples (artificial Europa ice with bacteria). A solution of magnesium sulfate, sodium sulfate, and sulfuric acid at a ratio of 50:40:10 of molar percent with 35 g/L total concentration was used. To avoid contamination, the samples of ice with bacteria embedded in it were hermetically packed in polyethylene film. Each packed sample had a diameter of about 20 mm and a thickness of about 1 mm. The samples were put on the bottom of the chamber, which was pre-cooled to −100 °C.
The film was pierced just prior to the chamber sealing and pumping so that the pressure inside the chamber and the sample containment would be equalized during the experiment. The irradiation run began after the stabilization of temperature and pressure in the vacuum chamber at around −130 °C and 0.01 mbar. Four runs of irradiation were executed with doses of 5, 10, 50, and 100 kGy. Irradiation rates were 0.28 kGy/s for the 5 and 10 kGy runs and 2.8 kGy/s for the 50 and 100 kGy runs. The time from placing the samples in the chamber to unloading them from the chamber (including the time of air pumping out, the onset of temperature equilibrium, the exit of personnel from the room, the starting and stopping of the accelerator, and the irradiation) was about 10 min. After the irradiation, the samples were immediately taken out of the chamber and put into sterile 15 mL polypropylene tubes installed in a cooled thermos. All transport and storage operations were carried out at −18 °C. The determination of the number of culturable bacteria in the samples was performed by plating on glucose-peptone-yeast agar (GPY) (peptone-2 g/L, glucose-1 g/L, yeast extract-1 g/L, casein hydrolysate-1 g/L, CaCO3-1 g/L, agar-agar-20 g/L) [14]. Suspensions of samples in different dilutions were plated in triplicate under sterile conditions with simultaneous control of the nutrient medium sterility, the sterility of the water used for preparation of dilutions, and control of the presence of foreign air microflora. The plates were incubated at a temperature of +28 °C for two weeks. Calculating Accumulated Dose after Water Release on the Surface of Europa The GEANT4 toolkit [15] was used first to calculate the energy absorbed in a unit water ice layer per incident particle with a certain total energy. GEANT4 was initially developed by CERN co-workers [16] for implementation in high-energy particle physics, and utilizes intranuclear and internuclear cascade models for particle interactions. It provides a wide set of tools for the simulation of particle interactions with matter, and the main advantage of using this toolkit is that it allows the consideration of not only the incident particle effects on the target, but also of contributions from secondary particles, with gamma-rays and X-rays included. The depth of the unit layer was considered to be 0.1 mm and its incident surface as 1 cm 2 , with the total depth of the ice column being from 0.05 to 5000 mm. These absorbed energy/depth E abs (E, d) profiles were calculated for electrons, protons, oxygen, and sulfur ions as incident particles. To apply these calculations to Europa's radiation environment, we used the differential particle spectra J (E) [counts per cm 2 s sr MeV] from Table 1 in the work of [10]. To obtain the total accumulated dose at each depth d, one has to numerically integrate J (E) × E abs (E, d) dE for every sort of incident particle, which results in a dose rate profile. Thus, we obtain a one-year dose profile. A simple model of Europa resurfacing was introduced for the following calculations: a layer of "fresh ice" with varying thickness is added atop the ice profile and the calculation is then continued for a further 1 million years. Ip et al. [17] proposed a static resurfacing rate of 12 m per 100 Myr, which was also taken into consideration. These computations were done with MATLAB software.
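For readers who want to reproduce the shape of this calculation, the NumPy sketch below illustrates only the numerical integration of J (E) × E abs (E, d) dE over a spectrum at each depth; the flux model and absorbed-energy function are toy placeholders rather than the GEANT4-derived tables and the spectra of [10], and the original computations were carried out in MATLAB.

```python
# Illustrative NumPy sketch of the dose-profile integration described above. The real
# calculation uses GEANT4-derived E_abs(E, d) tables for electrons, protons, O and S ions
# and measured spectra J(E); here both are toy placeholders, so the result is only a
# relative (unnormalised) one-year dose-vs-depth profile.
import numpy as np

E = np.logspace(-2, 2, 300)                  # incident particle energy grid, MeV
depths = np.linspace(0.05, 5000.0, 200)      # ice depth grid, mm

def J(E):
    """Differential particle flux [counts / (cm^2 s sr MeV)] - placeholder power law."""
    return E ** -2.0

def E_abs(E, d):
    """Energy absorbed in the unit layer at depth d per incident particle - toy model."""
    return E * np.exp(-d / (20.0 * E))

# Dose rate at each depth: numerically integrate J(E) * E_abs(E, d) dE over the spectrum;
# the full model repeats this for every particle species and sums the contributions.
dose_rate = np.array([np.trapz(J(E) * E_abs(E, d), E) for d in depths])

one_year_profile = dose_rate / dose_rate[0]  # relative to the near-surface value
print("relative dose after 1 year at ~1 cm, ~10 cm, ~1 m:",
      np.interp([10.0, 100.0, 1000.0], depths, one_year_profile).round(6))
```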
Bacteria Survival after Irradiation After irradiation with doses of 5, 10, and 50 kGy, the number of cultivated cells decreased by 2, 3, and 6 orders of magnitude, respectively (Figure 3). Cultivated cells were not found in samples that were irradiated with the 100 kGy dose. Under the conditions of our experiment, Deinococcus radiodurans VKM B-1422 T demonstrated radioresistance that significantly exceeded its radioresistance under normal conditions [18,19]. This is the result of the decrease of radiation damage under irradiation at a low temperature and low pressure. This effect was discussed by us previously in detail [14,20,21]. Our result is in general accordance with studies of the radioresistance of Deinococcus radiodurans carried out previously with high doses of γ-ray irradiation at −79 °C [22,23]. However, there is a difference in the decrease of bacterial survival with escalation of the radiation dose. In the γ-ray experiments, bacteria populations show signs of decrease only after 30 kGy dose accumulation, whereas in our high energy electron experiment, such a process begins at 5 kGy. There might be two possible causes for this. It might be that the MeV electrons produce deadly damage more effectively than γ-rays [20], or it could be the difference in composition of the ice samples used, because salts and minerals may be sources of free radicals at radiolysis (e.g., [24]). The authors of the abovementioned papers [22,23] used an ordinary nutrient solution, while we used a special solution simulating the composition of Europa's ice. We should note that we studied a pure bacterial culture, while in nature, microorganisms exist in communities, and inter- and intrapopulation interactions as well as interactions with the environment can significantly increase microbial radioresistance [20,25]. Thus, studies of the radioresistance of natural microbial communities would help to specify putative life survivability on Europa under the accumulation of radiation dose. Microbial communities of Earth glaciers could be proposed, as terrestrial analogs of Europa's ice, for such studies [4,26,27]. It should be noted that the irradiation intensity used in the experiment was several orders of magnitude higher than on Europa (see Section 3.2). The effect of such differences on the microorganisms' viability was discussed by us in detail earlier [20,21]. Briefly, the effectiveness of a sparsely ionizing radiation (including accelerated electrons) dose is reduced with decreasing irradiation intensity as a result of recovery phenomena [28]. Under the low-temperature conditions of Europa, these phenomena should be insignificant because of the low rate or full stop of metabolism [29]. Dose Dependence on Depth and Exposure Time The total of all dose accumulation rates of the considered incident particles is presented in Figure 4. In our calculations, the resulting dose rates for different particles (Figure A1) were close to those of Paranicas et al. [10] at depths less than 10 cm, but for depths of more than 10 cm, our modelling demonstrated higher dose accumulation values. Those differences increase with depth, up to 10 times at 1 m. This is probably the result of using modern models of the secondary particle cascade in our GEANT4 calculations. The high-energy proton and heavy ion profiles dramatically decrease with depth, thus their impact on the microorganisms' survival in the ice layer is significant only for depths less than 1 cm, even taking into account their higher biological efficiency compared with electrons. Despite the high total dose accumulation rate, the lethal dose does not accumulate in the ice layer at depths of 0.1-1 m until 1000-10,000 years after eruption. For lengths of time greater than 10,000 years, the dose is lethal in the region of interest. For exposure times from 1000 years onwards, we see a nearly constant (in comparison with the one-year profile) part emerging in the dose profile. This is the effect of considering a static resurfacing rate from the work of [17] in our model. This effect might not be the real case, but it has no significant impact on the main goals and conclusions of this work. At 0.01-0.1 m depths, the lethal dose is accumulated in 100 years, and for depths lower than 0.01 m, no viable cells would be present after only one year of radiation exposure.
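The survival-time estimates above follow from simple arithmetic once a dose-rate profile is available: the time to reach the ~50 kGy limit at a given depth is the limit divided by the modelled dose rate there. The small sketch below illustrates this step with hypothetical dose-rate values chosen only to be of the order implied by Figure 4, not the modelled values themselves.

```python
# Time to accumulate the ~50 kGy survival limit at a given depth, assuming a constant
# modelled dose rate there. The rates below are hypothetical, order-of-magnitude values.
dose_limit_gy = 50_000.0                  # ~50 kGy upper survival limit from the experiment

hypothetical_rates_gy_per_yr = {          # placeholder dose rates at three depth bands
    "0.01-0.1 m": 500.0,
    "0.1-1 m (upper)": 50.0,
    "0.1-1 m (lower)": 5.0,
}
for depth, rate in hypothetical_rates_gy_per_yr.items():
    print(f"{depth}: ~{dose_limit_gy / rate:,.0f} years to accumulate 50 kGy")
```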
Conclusions

Our study demonstrates the ability of terrestrial microorganisms to survive high-intensity radiation, in conditions modelling shallow subsurface environments of Europa, for a long time. This could be used in future missions to Europa aimed at finding hypothetical extraterrestrial life that has been ejected onto the surface by sporadic water release. Considering radiation dose accumulation as the limiting factor for microbial life survivability, we hypothesize that there is a chance to discover viable cells at an ice depth of 10-100 cm if a massive water release from the ocean of Europa occurred at a landing site 1000-10,000 years before. Evidence for such water eruptions (plumes) on Europa has been previously found and reported by the HST team and other researchers (the works of [5,6] and references therein), and from these observations, the possible locations for probing and sampling might be determined. The events observed by these authors occurred at the same location with a separation of two years, thus potentially marking the source of "fresh ice" subsurface sample candidates on Europa.

Figure 2. Climatic chamber and exit window of accelerator: (a) chamber on mobile platform under the exit window of accelerator; (b) exit window of accelerator. 1-chamber, 2-mobile platform, 3-exit window of accelerator, 4-direction of movement of the platform.

Figure 3. Impact of high-energy electron irradiation on the number of Deinococcus radiodurans VKM B-1422 T viable cells. Error bars are in accordance with the confidence interval for p < 0.05.

Figure 4. Total dose accumulation rate in dependence on exposure time and "fresh ice" depth. Red dashed line demonstrates the upper limit of the Deinococcus radiodurans VKM B-1422 T survival in our experiment.
5,063.8
2018-12-25T00:00:00.000
[ "Physics" ]
Gender and mathematical communication ability

Communication skills are among the abilities needed in the 21st century, but in practice students' mathematical communication skills are still low. The aim of this study was to analyze the mathematical communication ability of male and female students for the content of Space and Shape. The research method used was qualitative. The participants were 6 male and 6 female students of class VIII Junior High School, aged between 13 and 14 years old, with high, medium, and low ability in general mathematics. Data were collected using a test and interviews. Data analysis included data reduction, data collection, and conclusions. The results showed that both male and female students with high ability in general mathematics are able to express situations in the form of pictures or mathematical models and to analyse and evaluate mathematical ideas in other forms, but male and female students with medium and low ability in general mathematics still have difficulty expressing situations in the form of drawings or mathematical models and analysing and evaluating mathematical ideas in other forms.

Result

In general, male and female students with high ability in mathematics are able to solve all communication problems well. Although in the written test the female students' answers are more detailed and complete than the male students', they have the same ability orally. Male and female students with medium ability in mathematics have problems in communication due to lack of prior knowledge and calculation errors; their ability to evaluate mathematical ideas is also still low. Students with low mathematics ability have low mathematical communication skills too. The male students with low ability were unable to solve any of the problems because they could not understand the intent of the problems. These obstacles occur because of a lack of mastery of the concepts of the material and of prior knowledge. Female students with low ability could not solve the problems due to obstacles in prior knowledge and errors in calculations.

Discussion

In this study, students were asked to solve three mathematical communication problems related to space and shape (pyramid, cube, and cuboid) in the written test. The analysis of students' answers is presented as follows.

The first problem

The first problem can be seen in Figure 1. The first question concerns a pyramid whose dimensions are given. Students are asked to determine the amount of wire needed to create the pyramid frame and to draw the pyramid frame with its dimensions. In this first problem, the process of mathematical communication studied is expressing the situation in the form of an image or mathematical model. The male and female students with high ability can answer the first question, explain the elements that are known and asked, and give the reasons for their answers. Based on the interviews, the first question is a problem that is familiar to them; they have a good understanding of the pyramid material and good prior knowledge. Male and female students with medium and low ability could not answer the first question correctly.

A cafeteria room measures 10 m × 6 m with a height of 3 m, and a library room measures 12 m × 5 m with a height of 3 m. In each room there is a door right in the middle of the room with a window on it. The door size is 1.6 m × 2 m and the window size is 1.6 m × 0.5 m.
The wall sections in the cafeteria room and the library will be painted. Each 10 m² requires one can of paint, and the price of one can of paint is Rp100,000.00. Explain which room costs more to paint.

On the written test, male students with low ability could only answer by drawing frames and pyramid nets. During the oral test, they were unable to explain the known elements, namely the length of the wire needed to make the pyramid frame. Based on the interview, they do not understand the difference between the pyramid frame and the pyramid net, nor what is meant by the height of the pyramid and the length of its edges. Part of the interview follows.

Interviewer: Can you explain the meaning of the picture you made?
LR: The first picture is a picture of a pyramid, while the picture below is a picture of a pyramid frame.
Interviewer: Are you sure the image below is a pyramid frame?
LR: Um, actually I'm not sure. What I remember was the pyramid net, and I thought the pyramid net was the same as the pyramid frame.
Interviewer: Then why didn't the picture you made include the size of the pyramid?
LR: I don't know how to answer it.

From the students' answers, it was found that the difficulties were related to the students' lack of understanding of the pyramid material, and to prior knowledge of the prerequisite material needed to determine the lateral edge length using the Pythagorean formula. These mathematical communication difficulties are consistent with the theory that the factors affecting students' mathematical communication ability include prerequisites and mathematical understanding [11]. On the other hand, female students with low ability can draw a pyramid frame with a pyramid height of 8 m and a base edge of 12 cm, but they do not answer the length of wire needed. In the oral test, they can only mention what is known and asked. Based on the interviews, they cannot determine the amount of wire needed to make the pyramid frame because they forgot how to calculate the lateral edge of the pyramid. The following is part of the interview: female students with low ability give the same answers as male and female students with medium ability. Based on the interviews, students' difficulties were mainly due to weaknesses in the prerequisite material. The difficulties that occur are related to prior knowledge, and prior knowledge has an effect on mathematics achievement [12].

The second problem

The second problem can be seen in Figure 2. The government plans to build a water shelter in a village. The number of people in the area is 600. The need for clean water is assumed to be 120 liters per person. The water shelter measures 6 m × 6 m × 6 m. Water delivery is carried out once every three days, until it fills the shelter. Can the water shelter meet the needs of all the villagers every day? Explain why!

The second question concerns cuboids whose dimensions are given. Students are asked to compare the cost of painting two rooms. In the second problem, the process of mathematical communication studied is analysing mathematical ideas in other forms. In the second problem, only students with high ability could determine correctly which room costs more to paint. Students with medium and low ability could not answer it correctly.
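For reference, the two word problems quoted above can be checked with a few lines. This is a sketch of our reading of the problems: we assume the clean-water figure of 120 liters is per person per day, and that only the walls minus the door and window openings are painted, which matches the solution approach the students describe next.

```python
# Water-shelter problem: does a 6 m x 6 m x 6 m tank refilled every 3 days
# cover 600 people needing 120 L per person per day?
people, litres_per_person_per_day, refill_days = 600, 120, 3
shelter_volume_l = 6 * 6 * 6 * 1000                 # 1 m^3 = 1000 L
need_per_refill_l = people * litres_per_person_per_day * refill_days
print(shelter_volume_l, need_per_refill_l)          # 216000 vs 216000: exactly enough

# Painting problem: wall area of each room minus one door (1.6 m x 2 m)
# and one window (1.6 m x 0.5 m); one can of paint covers 10 m^2.
def paintable_area(length, width, height):
    return 2 * (length + width) * height - 1.6 * 2.0 - 1.6 * 0.5

cafeteria = paintable_area(10, 6, 3)                # 92 m^2
library = paintable_area(12, 5, 3)                  # 98 m^2
cost = lambda area: (area / 10) * 100_000           # rupiah, fractional cans
print(cafeteria, library, cost(cafeteria), cost(library))
```

Whether cans must be bought whole changes the exact costs, but not the fact that the library has the larger wall area.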
For instance, a male student answers that to determine the cost for the two rooms one calculates the surface area of the walls minus the area of the door and the window in each room. The incorrect result was due to a mistake in the calculation process. The same mistake is made by another male student with medium ability. Similarly, a female student with medium ability answers that to determine the cost for the two rooms one calculates the surface area of the walls minus the area of the door and the window in each room; the incorrect result was again mainly due to the calculation process. The same mistake is made by another female student with medium ability. Both male and female students with medium ability understand how to approach the second problem but make mistakes in the calculation. Orally, they can explain the elements that are known and asked, and the reasons for their answers, but they have difficulty with decimal calculations. Difficulties in solving the communication problems were mainly computational. This is consistent with research findings that computational ability affects written communication skills [13]. The male and female students with low ability cannot determine how to calculate the second problem at all; this problem is something new for them.

The third problem

The third problem can be seen in Figure 3. Based on the research results, female students with high ability have better written mathematical communication skills than male students with high ability. Male and female students with high ability have the same oral mathematical communication ability. This result contradicts research that found male subjects were better in writing [14], but is similar to the results of a study which found that female students responded better in writing than men [15]. This study also agrees with a previous study finding that male and female students have the same oral communication ability [16].

Conclusion

The study results show that only male and female students with high ability in mathematics are good at mathematical communication. They have the same ability in oral communication, but female students answer in more detail and more completely than male students in writing. Their high ability is acquired through experience and more practice. Male and female students who have medium and low ability in mathematics still have difficulty in mathematical communication. This can be seen in the aspects of expressing situations in the form of drawings or mathematical models, and in analysing and evaluating mathematical ideas in other forms. Based on this, we suggest that students who have difficulty in communication need to become accustomed to providing arguments for each answer and responding to answers given by others during learning, so that they learn mathematics meaningfully [17].
2,291.8
2020-04-01T00:00:00.000
[ "Mathematics", "Education" ]
Lunar Far-Side Radio Arrays: A Preliminary Site Survey

The origin and evolution of structure in the Universe could be studied in the Dark Ages. The highly redshifted HI signal between 30 < z < 80 is the only observable signal from this era. Human radio interference and ionospheric effects limit Earth-based radio astronomy to frequencies > 30 MHz. To observe this low-frequency window, with research targets including compact steep spectrum sources, pulsars, and solar activity, a 200 km baseline lunar far-side radio interferometer has been much discussed. This paper conducts a preliminary site survey of potential far-side craters, which are few in number on the mountainous lunar far-side. Based on LRO LOLA data, 200 m resolution topographic maps of eight far-side sites were produced, and slope and roughness maps were derived from them. A figure of merit was created to determine the optimum site. Three sites are identified as promising. There is a need to protect these sites for astronomy.

INTRODUCTION

The far-side of the Moon is sheltered from anthropogenic low-frequency radio emissions and is considered the best location for a low-frequency radio telescope to investigate a broad scope of research areas, including the cosmic Dark Ages, compact steep spectrum (CSS) sources and solar radio bursts. This paper seeks to establish the best sites on the far-side for an array that maximally extracts information from the highly redshifted 21 cm signal and surveys the low-frequency radio sky. The Astro2020 Decadal Survey of Astronomy and Astrophysics identified the Dark Ages as the discovery era for cosmology (see National Academies of Sciences, Engineering, and Medicine 2021). Studying the Universe during the so-called Dark Ages can deepen our understanding of the evolution of large-scale structure and the theorised epoch of inflation (Adams et al. (1993)). The Dark Ages can be probed using the signal from the highly redshifted 21 cm transition of neutral hydrogen. This forbidden hyperfine transition, a spin-flip arising from the spin-spin coupling of the hydrogen atom's proton and electron, emits a photon with rest frequency 1420 MHz (Muller & Oort (1951)). The 21 cm signal can be measured as a sky-averaged global signal or as spatial fluctuations. The 21 cm absorption trough traces the density field of the Universe during the Dark Ages epoch. Only one detection of the 21 cm absorption profile, in the Cosmic Dawn, in the sky-averaged spectrum, has been claimed (but this has been disputed (Singh et al. (2022))). This detection comes from the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) experiment and is centred at 78 ± 1 MHz, corresponding to a redshift ≈ 17 (Bowman et al. (2018)). The EDGES trough is ∼ 3 times deeper than the standard model predicts, and adiabatic cooling is insufficient to explain the result (see, e.g., Burns 2021; Bowman et al. 2018; Mebane et al. 2020). Spatial fluctuations in the highly redshifted 21 cm signal test the structure formation predictions of the standard cosmological model. Measuring the absorption of galaxy-precursor neutral hydrogen gas clouds against the CMB increases the number of observable modes enormously. The billions of galaxies in galaxy surveys were potentially constituted from hundreds of millions of cold hydrogen gas clouds; therefore, trillions of independent modes can be measured (Silk (2018)). They provide unprecedented constraints on the early Universe.
In particular, they constrain the matter power spectrum spectral index (Mao et al. (2008)), the non-Gaussianity of the inflationary era (Muñoz et al. (2015)), the mass of the neutrino and the curvature of the Universe (Mao et al. (2008)). 21 cm observations have two other major advantages over the CMB (Furlanetto et al. (2006)): (i) they are unaffected by Silk damping (the smoothing of primordial density fluctuations), so spatial fluctuations persist at much smaller mass scales than in the CMB (Silk (1967)); (ii) they can be used to construct the hydrogen structure in a 3D volume (Furlanetto et al. (2006)). An ultra-long wavelength radio interferometer is required to measure high-redshift spatial fluctuations. The 21 cm signal is observed by measuring the neutral hydrogen spin temperature against the CMB radiation temperature. The evolution of the transition spin temperature is explained by Loeb & Zaldarriaga (2004), limiting the redshift range to 30 < z < 80. For this redshift range, the 21 cm signal lies in the metre wavelength range (6.5-17.0 m) and the tens of MHz frequency range (17.6-46.1 MHz). Koopmans et al. (2021) specify the requirements for a lunar-based instrument to extract the faint 21 cm signal with S/N > 10 (refer to Section 7 of Koopmans et al. 2021, and Figure 8 for S/N predictions). The cosmological radio interferometer relies on a densely packed core (of order 10^4 dipoles) to reach the sensitivity required for the 21 cm signal, with a baseline of a few kilometres.
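The frequency and wavelength windows quoted above follow directly from redshifting the 1420 MHz rest frequency; a minimal check:

```python
c = 2.998e8            # m/s
nu_rest = 1420.4e6     # Hz, HI hyperfine transition rest frequency

for z in (30, 80):
    nu_obs = nu_rest / (1 + z)     # cosmological redshift of the line
    lam_obs = c / nu_obs
    print(f"z = {z:2d}: {nu_obs/1e6:5.1f} MHz, {lam_obs:5.1f} m")
# -> roughly 45.8 MHz / 6.5 m at z = 30 and 17.5 MHz / 17.1 m at z = 80,
#    close to the 17.6-46.1 MHz and 6.5-17.0 m ranges quoted above.
```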
Very low-frequency radio astronomy has been recognized to have several other potential uses, described below. Currently, the largest radio telescope, LOFAR (Low-Frequency Array, van Haarlem et al. (2013)), operates at the longest wavelengths observed from Earth. The LOFAR Low Band Antenna (LBA) Sky Survey (LoLSS, de Gasperin et al. (2021)) observes the frequency range 42-66 MHz with high sensitivity, ∼ 1 mJy beam⁻¹, and high resolution, ∼ 15 arcsec. To reach these exceptional parameters, the baseline of the LBA is ∼ 100 km. Baselines up to ∼ 200 km can be attained on the lunar far-side, and applying the same imaging capabilities as LOFAR would achieve, at 10 MHz, an angular resolution of ∼ 25 arcsec. The multi-research capabilities of a lunar far-side radio interferometer would extend the capabilities of LoLSS. There is a variety of low-frequency radio-loud active galactic nuclei (AGN) sources, from CSS sources a few hundred years old to radio galaxies on the scale of Mpc and aged ∼ 1 Gyr (e.g., Dabhade et al. 2020). Jets, driven by synchrotron processes, propagate beyond the host galaxy and are classed as Fanaroff and Riley radio galaxies (FRIs and FRIIs, Fanaroff & Riley (1974)) with signature double-lobed structures that can span up to hundreds of kiloparsecs. CSS sources, by contrast, have radio propagation extents ≲ 20 kpc, and some are thought to grow in strength and evolve into FRIs and FRIIs (e.g., Fanti et al. 1990). Alternatively, the phenomenon of "frustration" describes the inability of jets in CSS sources to penetrate the interstellar medium (ISM) due to a dense ISM or intrinsically weak jets (e.g., van Breugel et al. 1984). High-resolution and sensitive low-frequency measurements of compact objects could assess the extent of the physical processes responsible for the termination of jets (e.g., McKean et al. 2016). An ultra-long wavelength interferometer would also open a window for new low-frequency pulsar studies (Stappers et al. (2011)). Despite pulsars being intrinsically brightest at low radio frequencies (< 300 MHz), past surveys were conducted at ∼ 350 MHz or ∼ 1.4 GHz due to observational biases. Pilia et al. (2016) investigate the evolution of pulsar profiles over a frequency range < 200 MHz with respect to magnetospheric origins and dark matter-induced variations. A specific trend cannot be identified with decreasing radio frequency, and the distribution of the average spectral index is compatible with different pulsar behaviours. Solar activity, such as coronal mass ejections (CMEs), solar flares and sunspots, can be observed across the low-frequency radio spectrum, from Type III bursts extending down to tens of kHz to Type I bursts observed up to hundreds of MHz. Thermal or non-thermal, coherent or incoherent emission mechanisms produce solar radio bursts, which trace energetic phenomena in the interplanetary medium and solar corona. Type II radio bursts are emitted by highly energetic electrons accelerated at shock waves driven by supersonic CMEs (e.g., Kouloumvakos et al. 2021). Monitoring solar activity below 10 MHz would advance studies of CME release mechanisms and assess the impacts of CMEs on Earth. For a radio interferometer applicable to a wide range of research, this paper adopts 200 km as the minimum interferometer baseline distance. Earth-based low-frequency/long-wavelength radio astronomy has been constrained by the uncontrollable production of human-created radio frequency interference (RFI) over recent decades. Shortwave broadcasting, emitted by telecommunication and radio communication satellites in Earth orbit and by ground-based transmitters, floods radio telescope instruments with noise below 30 MHz (e.g., Maccone (2021)). Radio observations are further restricted by ionospheric effects, with frequencies below 10 MHz being absorbed and distorted, becoming effectively opaque (Kaiser & Weiler (2000)). A solution is constructing radio astronomy instruments on the far-side of the Moon. The 3,400 km thickness and tidal locking of the Moon shield the far-side surface from RFI by as much as 90 dB and create a Radio Quiet Zone (RQZ) (e.g., Kim (2021)). The location of this zone is determined by the attenuation of radio waves around the lunar limb. There are three factors to consider: (1) geometry; (2) diffraction; (3) libration. (1) Geometry refers to the RFI produced from the surface of the Earth or by a Geosynchronous Equatorial Orbit (GEO) satellite. Lunar longitude is measured from the sub-Earth point, so the limb as seen from Earth is at ±90°. The longitude at which RFI from such a satellite is attenuated by 90 dB is ∼ 6° beyond the lunar limb. (2) Diffraction further constrains the lunar RQZ. At 10 MHz with a −80 dB threshold, waves are attenuated to this level ∼ 4° beyond the geometric limit, reducing the RQZ to ∼ 160° on the lunar far-side (see Bassett et al. 2020). (3) Optical librations arise from a shift in viewing angle due to the non-circular and inclined orbit of the Moon and have three forms: latitude, parallactic, and longitude. Longitude is the most significant libration, by which the RQZ boundary varies by ≈ 8° (Ratkowski & Foster (2014)). Combining these three effects, the −80 dB RQZ in longitude begins at ±108°, stretching 4,366 km across the anti-Earth point; in latitude it begins at ±79°, centred on the anti-Earth point and spanning 4,791 km. Concepts for radio interferometer arrays on the Moon have been developed.
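The quoted extent of the quiet zone follows from simple spherical geometry using the three angular offsets and the lunar radius given in the text; a minimal check:

```python
import numpy as np

R_MOON_KM = 1737.4          # mean lunar radius used elsewhere in this survey

# Offsets beyond the Earth-facing limb (at +/-90 deg longitude) quoted above.
geometry_deg = 6.0          # RFI from Earth/GEO drops by 90 dB ~6 deg past the limb
diffraction_deg = 4.0       # additional attenuation region at 10 MHz, -80 dB
libration_deg = 8.0         # worst-case longitude libration

rqz_start_lon = 90.0 + geometry_deg + diffraction_deg + libration_deg   # = 108 deg

# Surface extent of the quiet zone along the equator, centred on the
# anti-Earth point (longitude 180 deg).
span_deg = 2 * (180.0 - rqz_start_lon)              # 144 deg
span_km = R_MOON_KM * np.deg2rad(span_deg)          # great-circle arc length
print(rqz_start_lon, span_deg, round(span_km))      # 108.0, 144.0, ~4366 km
```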
The FARSIDE (Farside Array for Radio Science Investigations of the Dark Ages and Exoplanets) mission concept is a recent example (see Burns et al. 2021). The development of radio interferometer arrays on the Moon will be an evolutionary process, starting with the few-kilometre scale, which has a large choice of locations. Instead, the 200 km-class arrays that are ultimately needed are much harder to find a site. Any such site must be traversable by a rover. Rovers for the Moon and Mars have limited capabilities to deal with steep or rough terrain (Table 2). For example, NASA's Perseverance is a 26 cm diameter wheeled Mars rover with a 30°safety incline limit. Similarly, NASA's Volatiles Investigating Polar Exploration Rover (VIPER), which will explore the South Pole, has ∼> 40% slip on slopes > 15°and can traverse objects only up to 10 cm in height. These considerations greatly reduce the suitable site selection as the lunar far-side is extremely mountainous, without the large, smooth maria of the near side. This paper examines the topography of eight promising sites for a multi-purpose 200 km-scale low-frequency radio telescope on the Moon. The contents of this paper are observations and data reduction in Section 2; Candidate selection in Section 3; Data analysis methods in Section 4; Map products in Section 5; Site comparisons in Section 6; Discussion in Section 7; Conclusions in Section 8. Lunar Elevation: Lunar Reconnaissance Orbiter The data used in this paper was taken by the NASA Lunar Reconnaissance Orbiter (LRO). LRO orbited the Moon in a nominal circular 50 km altitude polar orbit from September 15, 2009, until moving to a fuel-conserving, 1800 km semi-major axis, elliptical orbit on December 11, 2011(Chin et al. (2007). After this date, the change of LRO orbit led to the loss of much data from the spacecraft while LRO was at higher altitudes, limiting the data used for this study (Barker et al. (2021)). One of the LRO mission objectives is to find potential lunar landing sites for crewed missions. The mission also searches for potential resources and characterises the radiation environment. LRO directly produced topographic maps and stereo images from onboard instruments, which can be used to derive digital terrain maps (DTMs). Images taken by the Wide Angle Camera (WAC), part of the LRO Camera (LROC) experiment, were used to derive a global topographic map of the Moon at a 100 m image resolution (Robinson et al. (2010)). LRO is still operating. The Lunar Orbiter Laser Altimeter (LOLA) (Smith et al. (2010)) is one of the six scientific instruments on LRO. LOLA measures topographic elevations to metre elevation accuracy but is limited to 5 m spatial resolution in the best cases and, more often, to 57 m. The Lunar Orbiter Laser Altimeter (LOLA) LOLA provides one of the highest-resolution global surface elevation data sets for the Moon. LOLA uses LIDAR (Smith et al. (1997)) to do so. LIDAR measures the round trip Time Of Flight (TOF) for a laser shot reflected off the lunar surface. The LOLA LIDAR consists of a laser transmitter (Nd: YAG in the LOLA case), emitting a wavelength of 1064 nm and a receiver (an aluminium Cassegrain telescope). TOF halved gives the distance from LOLA to the lunar surface from the light travel time, in conjunction with the spacecraft's well-determined orbit. A similar instrument package was used onboard the earlier Clementine spacecraft (Council (1997)). 
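The altimetry itself rests on the relation mentioned above: half the round-trip time of flight, multiplied by the speed of light, gives the one-way range. A minimal sketch (the example TOF value is illustrative, chosen to correspond to the nominal 50 km mapping-orbit altitude):

```python
C = 299_792_458.0   # speed of light, m/s

def lidar_range_m(tof_seconds: float) -> float:
    """One-way distance from the round-trip time of flight of a laser pulse."""
    return 0.5 * C * tof_seconds

# A pulse returning after ~333.6 microseconds corresponds to ~50 km,
# i.e. the nominal LRO mapping-orbit altitude.
print(lidar_range_m(333.6e-6))   # ~50,000 m
```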
LOLA measured the slope, roughness and the 1064 nm reflectance of almost the entire lunar surface on scales of 500 m, 200 m, 50 m and 5 m. LOLA operates by propagating a single laser pulse (lasers 1 and 2 alternate monthly) through a Diffractive Optical Element (DOE), splitting the pulse into five separate beams rotated 26° to the down-track direction. LOLA's five-spot pulse pattern is illustrated in Figure 1. LOLA emits short pulses at a rate of 28 Hz (considering the five beams, this is 140 measurements per second) from two lasers (of energies 2.7 mJ and 3.2 mJ for laser 1 and laser 2, respectively). These five beams illuminate the lunar surface in a cross pattern in the far field. Each beam has a diameter of 5 m at the surface. Each beam backscatters off a spot on the lunar surface (yellow in Figure 1). The receiver telescope detects the returning pulses, and the five-spot pattern is imaged. The laser has a pulse width of < 10 ns, and the width of each returned pulse is recorded, measuring the height variation (roughness) within the 5 m footprint of the laser on the surface. Consecutive five-spot patterns are displaced by 57 m due to the orbital motion of LRO. The detector's field of view is shown in blue. The spot pattern allows slope and surface roughness to be calculated along a range of azimuths.

Figure 1. Two LOLA shots encircled in dashed grey with five labelled spots as blue circles. The pattern is at a 26° angle to the down-track direction. The distance between shots is 57 m, and the smallest spot-to-spot length is 25 m. Yellow circles display the five illuminated 5 m diameter spot footprints, and blue circles show the field of view of each detector. Orange lines show the six possible slope calculations between spot 3 and the surrounding spots.

LOLA Pulse Width Measure

LOLA also directly measures the roughness within the 5 m footprint of a single spot using the measured pulse width (Section 2). However, despite the LOLA data being extensively filtered, by selecting only pulses above the 0.15 fJ energy level and recorded at less than 5° off-nadir angle, the pulse width roughness maps suffered from erroneous values (Gläser (2014)). Pulse width roughness maps therefore do not contribute to the site assessment.

LOLA Data and Corrections

NASA's Planetary Data System (PDS) is a public online archive system where laboratory results, planetary mission data and observational data are stored with common descriptions. We used the calibrated, geolocated and aggregated, time-ordered Reduced Data Record (RDR) files (G.A. (2009)) to create LOLA DTMs for the lunar sites of interest. LOLA raw data include two instrumental signatures; their removal is required to make accurate elevation maps. The two signatures are that (1) the LIDAR pulses are not symmetric, and (2) there are biases in the TOF calculation. These biases vary for each of the five detectors. Both effects have been modelled by Riris & Cavanaugh (2009). They find (1) pulse asymmetry: a variable bias dependent on the received pulse strength arises from an induced range bias and asymmetry of the detector's electronic impulse response, leading to a distortion of the received pulse; and (2) a TOF bias: each channel has different cable lengths, leading to a fixed offset. The fixed and variable offset corrections are applied to the pulse width datasets; the fixed offsets can be found in Riris & Cavanaugh (2009). The receiver pulse width error, calculated by standard deviation, remains below 800 ps over all energies, equivalent to 0.24 m in elevation.
From 10 ns to 25 ns, the received pulse widths have residuals of ≈ 1 ns compared with the emitted pulse width, equivalent to a 0.3 m elevation error (Riris & Cavanaugh (2009)). The residuals are larger for pulse widths < 10 ns: ≈ 5 ns, or 1.5 m. The co-registration techniques of Gläser et al. (2013) were applied to the individual LOLA tracks that are incorporated in the DTMs. Co-/self-registration is an algorithm that identifies misaligned measurements and tracks from the inputs of LOLA data and the DTMs to which the laser altimeter profiles are registered (Scholten et al. (2012, 2011); Gläser et al. (2017)). A positional accuracy of 0.13 m to several metres (depending on the number of measured laser shots) and residual heights of 0.18 m can be achieved for the data sets. This can be compared with the pre-registration accuracy of the LOLA data: Araki et al. (2009) and Bussey et al. (2003) found positional errors of ∼ 4 m radially and ∼ 77 m horizontally. These errors are negligible for the ∼ 200 m scale being probed here.

Data Loss from a Thermal Anomaly

There is a thermally-induced anomaly in the LOLA instrumentation. All five channels work nominally when the spacecraft is over the lunar day side; however, only two of the five receiver channels acquire significant data when LRO observes during the cold lunar night. This 'thermal anomaly' was found in ground testing to be due to the thermal blanket's contraction in the cold. This contraction pulls the transmitter beam out of focus with the receiver (Smith et al. (2017)). Detectors 1, 2 and 5 in Figure 1 do not operate on the 'night side'. Detectors 3 and 4 align with emitted spots 2 and 5 by a fortunate coincidence, enabling continuous observation (Gläser (2014)). For instance, the effects of the LOLA thermal anomaly can be observed in the Mare Moscoviense region. The variation in the number of spots detected per shot means LOLA is less effective for this site. Only 47% of shots yield detections in all five spots; another 41% of the area is covered by two spots per shot; 9% by a single spot per shot; 2% by four spots per shot; and 1% by three spots per shot. The impact of this anomaly is important when constructing slope and roughness maps, because there are inconsistencies in the number of spots per fitted plane (Section 4), which limits the spatial resolution of these maps. The smallest lunar site used in this study is Daedalus. Figure 2 shows LOLA track coverage maps of this smaller site for elevation and pulse width roughness data. The larger region (Figure 2a), 181 × 196 km in size, consists of 4,614,796 data points. When zooming in on the site to a 91 × 106 km region (Figure 2b), the number of data points decreases to 1,138,030, covered by 203 LOLA tracks. The small 91 × 106 km region yields 431,842 pulse width roughness points and is covered by 179 LOLA tracks. The coverage maps show that the centre of Daedalus has data gaps with a maximum horizontal separation of ≈ 6 km. As a result, the pulse width roughness map has worse coverage, and alternative methods to derive roughness are discussed in Section 4.3.

CANDIDATE SITE SELECTION

The search for a suitable site began with the visual selection of eight large (> 100 km diameter) maria and craters within the RQZ (longitude > 108°), because they have a higher probability of having a smooth crater floor. The locations of the eight lunar far-side sites are shown in Figure 3. Their properties are listed in Table 1.
The third column of Table 1 gives the distance in degrees of the lunar sites past the RQZ boundary. Elevation maps of the eight sites of interest are shown in Figure 4 to give a visual impression of their topography. A 4° × 4° (122 × 122 km) grid is overlaid on each site map, showing their relative sizes. In addition, three comparison equatorial regions were studied on the lunar far-side, at longitudes +30°, 0° and -30° from the anti-Earth point. These equatorial regions are evidently rougher terrain than the eight candidate sites (Figure 4).

Digital Elevation Maps (DEM)

More quantitative topographic information is derived by constructing Digital Elevation Maps (DEMs) from the LOLA data. DEMs display surface topography by mapping the height variation of a region. DEMs were generated with the Generic Mapping Tools software (GMT) (Wessel et al. (2019)). GMT is a free and open-source code that allows data parameters to be manipulated to produce sophisticated illustrations for Earth, ocean and planetary sciences. The motivation for creating DEMs is to determine whether a site has approximately 200 km regions of smooth, low-slope terrain and to note any physical barriers, such as significant mountainous or crater features. The LOLA data sets (longitude, latitude, elevation) are projected onto a grid and interpolated to create a regular grid. Given a discrete data set, interpolation derives a polynomial function which passes through the provided data, enabling intermediate values to be estimated. The generated grid files can be read into Python and MATLAB to produce elevation maps. These tools enable interactive 3D elevation models to be created and used to visualise these sites from differing and exaggerated perspectives. Non-interpolated DEMs were also constructed, where elevation values are assigned to a colour scale and plotted on a longitude-latitude plane. These images are useful as they highlight the non-uniform track coverage of LOLA. These maps are also convenient for laser spot detection analysis, where the LOLA thermal anomaly can be observed for each site (Section 2.4). The DEMs can then be used to generate slope and roughness maps.

Slope

The slope is defined as the terrain height variation over a specified distance and so is a function of the distance over which the slope is measured, the length scale. Several measures of surface roughness have been defined in the literature (e.g., Kreslavsky & Head III 2000; Shepard et al. 2001). These include both one-dimensional and two-dimensional slopes.

One-Dimensional Slope

Discussed here are three one-dimensional methods to derive slopes from LOLA data: (1) RMS slope; (2) median absolute slope; (3) median differential slope, for a range of length scales. (1) The RMS slope is defined in one dimension from the Root Mean Square (RMS) difference in height, Δz, between two points (also called the deviation, ν) divided by the distance between the points, Δx: s_RMS = √⟨(Δz)²⟩ / Δx, where the angular brackets indicate the mean of the bracketed quantity (Rosenburg et al. (2011)). (2) The median absolute slope can be derived on the smallest scales, ∼ 25 m, from the slope between the central spot and one of the four edge spots shown in Figure 1 (Rosenburg et al. (2011)). (3) The median differential slope (Kreslavsky & Head III (2000)) is derived as follows. For a given baseline, L, through five points, find the difference in elevation between a point half the baseline ahead, z_{1/2}, and a point half the baseline behind, z_{-1/2}. Then calculate the elevation difference between points a full baseline ahead, z_1, and a full baseline behind, z_{-1}.
Subtract half the latter elevation difference from the former. The elevation difference that results, divided by the baseline, is the tangential (differential) slope. Each method has strengths and weaknesses: (1) RMS is an established method because it is also used to measure the scatter of radar reflections. However, the method is sensitive to outliers because of its dependence on the squared deviation. (1, 2) The RMS and median absolute slopes are one-dimensional measures derived in the down-track direction. Therefore, both underestimate the surface gradient if the steepest slope diverges from this direction. (3) The median differential slope removes small-scale and large-scale surface roughness features. Arguably, the median differential slope is an intuitive parameter because small-scale roughness is measured with respect to the long-wavelength roughness profile. The median differential slope is a better measure than the RMS slope or the median absolute slope because natural surface slope-frequency distributions are commonly non-Gaussian with long tails.

Two-Dimensional Slope

A two-dimensional slope is preferred because slopes that diverge from the down-track direction are included. The two-dimensional slope can be derived from multiple spot points within a LOLA shot, e.g., between spots 1-3-4 in Figure 1. A total of six slope measurements can be derived from spot 3 in Figure 1, in the directions shown in orange. Vector geometry computes the plane through three spots, identifying the slope magnitude and the azimuth of the slope. The baseline in this method is taken as the square root of the area of the triangle. As discussed in Section 2.4, LOLA experiences a thermal anomaly which reduces the number of spots from which signals are detected. An interpolated grid has to be used to provide a consistent distance over which slopes are calculated, and a vectorized method is the most computationally efficient.

Roughness

Roughness can be defined as the Root Mean Square (RMS) deviation from a specified plane (Gläser (2014)), i.e., the scatter about a one- or two-dimensional slope. Roughness, like slope, depends on the length scale and is calculated here on a 200 m scale, because smaller scales show missing LOLA data tracks and so slope and roughness cannot be reliably determined there. The plane-fitting method used in Section 4.2.2 measures roughness. The pulse width measurements from LOLA are not used in this study (Section 2); instead, the σ-z method was used. This method calculates the standard deviation, σ, of the LOLA elevation (z) spot measurements about a fitted plane. Alternatively, a method known as roving-window analysis can be used, in which a 3 × 3 kernel is scanned over the interpolated DEM maps and assigns to a given pixel the deviation in elevation of the surrounding pixels. This method is more flexibly applied to various scales.

Equatorial Regions

The three equatorial regions are mapped to show the typical mountainous surface of the lunar far-side. Figure 5 shows the slope maps for three 445 km square regions displaced by 45° along the lunar equator. The maps are overlaid by a 4° × 4° grid (122 × 122 km), which shows that within these grids a traversable surface hundreds of kilometres in length is not obtainable. The anti-Earth region (Figure 5b) shows no traversable path with inclines < 15°, and at inclines < 25°, two perpendicular paths on the order of hundreds of kilometres cannot be achieved. The East and West regions are similar to the central region, but both show an area of large impact craters with a smoother surface (dark purple) in Figures 5a and 5c. These areas could be accessible to a rover on inclines < 25° and on the scales of an interferometer. However, the surface is disrupted by many smaller craters with high slopes (> 25°), so deploying a full-sized 200 km array becomes challenging. Instead, sites for baselines a few kilometres in length are quite easy to find. The highly variable slope of the far-side demonstrates the need to identify large, smooth maria and craters.
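As an illustration of the plane-based slope and σ-z roughness measures described in Section 4, the sketch below fits a plane to a handful of LOLA-like spot elevations (the coordinates are made up) and reports the slope magnitude, the downhill azimuth, and the RMS scatter about the plane:

```python
import numpy as np

def plane_slope_and_roughness(xyz):
    """
    Fit z = a*x + b*y + c to spot coordinates (metres) by least squares.
    Returns the slope magnitude (deg), the downhill azimuth (deg, measured
    from the +y axis), and the RMS scatter of elevations about the plane.
    """
    xyz = np.asarray(xyz, dtype=float)
    A = np.column_stack([xyz[:, 0], xyz[:, 1], np.ones(len(xyz))])
    (a, b, c), *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))          # gradient magnitude
    azimuth_deg = np.degrees(np.arctan2(-a, -b)) % 360.0        # steepest-descent direction
    roughness = np.std(xyz[:, 2] - A @ np.array([a, b, c]))     # scatter about the plane
    return slope_deg, azimuth_deg, roughness

# Five spots of one LOLA-like shot, coordinates in metres (illustrative values).
spots = [(0.0, 0.0, 0.0), (25.0, 0.0, 1.2), (-25.0, 0.0, -1.0),
         (0.0, 25.0, 0.6), (0.0, -25.0, -0.8)]
print(plane_slope_and_roughness(spots))
```

With only three spots the plane is exact and the roughness is zero; the five spots of a full shot, or the pixels of a roving window, are needed for the scatter about the plane to be meaningful.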
The East and West regions are similar to the central region, but they both show an area of large impact craters which is a smoother surface (dark purple) in Figures 5a and 5c. These areas could be accessible to a rover on inclines < 25°and on the scales of an interferometer. However, the surface is disrupted by many smaller craters with high slopes (< 25°), so deploying a fullsized 200 km array becomes challenging. Instead, sites for baselines a few kilometres in length are quite easy to find. The highly variable slope of the far-side demonstrates a need for the identification of large smooth marina and craters. Digital Elevation Maps The results for the Mare Moscoviense are described in detail for each map type below. We have chosen to present the results of Mare Moscoviense because the site is a sizable candidate location for an interferometer for its lack of obscuring terrain. The corresponding maps for all eight sites are listed in Table 1, and presented the Appendix A. Mare Moscoviense has a mare floor 3 to 4 km below the lunar equatorial radius (1737.4 km, (Williams (2021))), spanning the longest length of 280 km. The mare edge becomes extremely mountainous, reaching up to 4 km above the lunar equatorial radius and forming a steep border to the mare floor. Figure 6 (left) shows a non-interpolated DEM of Mare Moscoviense. The maximum separation between tracks is 1.8 km. Figure 6 (middle) shows the interpolated DEM of Mare Moscoviense produced in GMT. The right of Figure 6 (generated using MAT-LAB) shows the interpolated DEM in an exaggerated perspective view showing highlights and shading for a given illumination angle (-45, 30) applied to all of the MATLAB figures. The vertical range is 11 km. Slope Maps Three visualisations of the slope maps were created for each site for the recognition of inaccessible terrain. All of the interesting information for rover deployment of the radio array is at low slopes, up to ∼ 25°, with < 15°being especially important. Figure 7 shows three slope maps of the site Mare Moscoviense with decreasing slope range scales. The mare floor does not have slopes greater than 20°. The middle map highlights the accessibility of the site, with all areas in black being less than 20°but identifies the smaller crater within the mare. The right map colours slope greater than 30°, inaccessible to all wheeled vehicles. Mare Moscoviense does not present challenges by a rather generous 30°criterion at the 200 m scale for wheeled vehicles. Even for a threshold of 15°, Mare Moscoviense is traversable except for the lower West region of the crater. Roughness Maps Maps on different scales were created for different topographic goals. 500 m scales measure hills and mountains and indicate possible flat terrain areas. 200 m and 100 m scales locate smaller site features, such as small impact craters but are subject to errors due to poor interpolation between tracks. Mare Moscoviense demonstrates the value of RMS roughness maps on these different scales. Figure 8 presents roughness maps of Mare Moscoviense on 500 m, 200 m and 100 m scales. A significant increase in low roughness (coloured green and blue in the maps) shows that smoother areas appear at smaller scales, as do erroneous tracks. The 500 m scale shows that Mare Moscoviense is a site to be studied in more detail because of a lower roughness area, though these still involved 50 m -scale roughness that would be impassable to a rover unless smaller scales show paths through. 
This terrain spans ≈ 280 km, but smaller patches of rough terrain within the area are present. The 200 m scale shows an increase in less rough terrain (less than 20 m elevation changes) with isolated rough crater features. The 100 m scale shows the prominent rough features at the mare walls. However, at this scale, track features due to interpolation are visible. The areas with low roughness (< 20 m roughness on the 200 m scale, blue in Figure 8) are more patchy than the slope < 20° areas. The low-roughness area in the lower left region of the crater is divided by a linear feature of high roughness (∼ 20-60 m). Whether this feature is traversable will require higher-resolution mapping. The Gini coefficient can quantify the concentration of roughness levels. The Gini coefficient depends on the mean of the absolute difference between pairs of individual measures and ranges between zero and one according to the clumping of roughness measurements. If the absolute difference between neighbouring measures of roughness is small (large), then the terrain is described as having constant (inconstant) roughness, and the Gini coefficient will be close to one (zero). Topographic features such as these imply that rovers deploying radio antennae will have to take more circuitous paths to reach the full extent of the array. As a result, the rovers will have a larger payload, and deployment time will increase.

Threshold Slope Maps

Maps tailored to characterise the suitability of the sites for a radio array were produced from the maps described in Section 5. A threshold was applied to the slope maps to highlight inaccessible and highly accessible areas, clarifying the differences between the sites for deploying an array. The slope from roughness measurements is calculated by projecting the roughness value across the scale. A series of slope thresholds is applied to the data sets, masking data exceeding the limit. The result is a slope map presenting the terrain likely traversable by a rover and indicating the accessibility of a site. The slope threshold boundaries were chosen from the maximum slope capability of successful rover missions listed in Table 2. The maximum incline wheeled vehicles are designed for is 30°; therefore, this is the maximum slope value on the scale. A conservative 25° is chosen as the inaccessible slope limit because it allows several sites to qualify. A threshold of 15° is a safe choice, as all rovers can handle inclines up to this value. VIPER is planned to land on the Moon in late 2024 and has demonstrated complications traversing slopes > 15° (Shirley et al. (2022)). Thus 15° is the lower limit of the threshold slope range. Figure 9 shows slope maps produced as in Section 4.2.2 for all eight lunar sites. A colour scale represents the slope in increments of 5° rather than masking the data. Orange terrain represents inaccessibility (slope > 25°); accessible and easily traversable terrain is shown in blue. All the sites have large areas with slopes < 15°. Several of the sites have areas of rough terrain that will have to be avoided. Mare Ingenii is a striking example, where the full extent of the site can only be accessed by passing through two narrow passes between ridges. Korolev appears less traversable, given its large areas of rough terrain. The varying difficulty of the terrain at each site is a factor in ranking the suitability of the sites.
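The roughness-concentration statistic referred to above, and used as a factor in the figure of merit below, can be computed from the mean absolute difference between pairs of roughness values. A minimal sketch follows; note that the standard convention gives 0 for identical values, whereas the text uses the opposite convention, i.e. one minus this quantity for perfectly uniform roughness.

```python
import numpy as np

def gini(values):
    """Gini coefficient from the mean absolute difference between all pairs.

    Standard convention: 0 for identical (non-negative) values, approaching 1
    for a very unequal distribution.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    # Equivalent to sum_i sum_j |v_i - v_j| / (2 * n**2 * mean(v))
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(1)
smooth = 10.0 + rng.normal(0.0, 0.5, 1000)     # nearly uniform roughness values
rough = rng.exponential(10.0, 1000)            # strongly varying roughness values
print(gini(smooth), gini(rough))               # small vs. large inequality
```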
Figure of Merit (FoM) An objective comparison of these lunar sites can be helped by creating a suitable FoM that combines four factors: (i) Slope constrained size; (ii) Point Spread Function (PSF); (iii) roughness; (iv) terrain. These are discussed in turn below. Factors Four factors were considered to assess the suitability of the sites of interest. (i) Slope-constrained site size. The size below a given slope threshold is the length of accessible terrain to a wheeled vehicle. A wheeled vehicle requires connected accessible terrain with routes around special features and the general roughness of a surface to deploy an array of dipoles. A larger site provides more flexibility in array design and a smaller PSF. (ii) PSF (angular resolution). Baseline lengths are the longest linear distances achievable on a site, representing the baselines of possible arrays. The length does not represent the true path length which considers avoiding objects and excessive terrain sloping, resulting in a greater distance traversed by the rover. At increasing slope thresholds, a site becomes more accessible and the achievable baseline increases. For the simplest array design, orthogonal baselines are required, but this is not attainable in some sites; bisecting baselines within 20°were used for this FoM. (iii) Roughness. If a surface is rough with rapidly varying elevation, then the Gini coefficient will be close to zero, and if the terrain is smooth with little elevation variation, the Gini measure is close to one. A highly ranked site will have a high Gini coefficient, indicating a more concentrated smooth surface. (iv) Terrain obstacles. Special features are identified by using interactive three-dimensional slope maps. Significant mountain regions, large obscuring craters, and the passageways around such features are special features. The questions asked are: Do these obstacles lie in the path of a dipole? Is extra distance traversed and extra cable (and greater mass) required to avoid these obstacles? Calculating Figures of Merit The FoM factors are listed in Table 3. For a site at a given slope threshold, a point for each factor, between one (worst) and five (best), is awarded contingent on the value of each factor, as in Table 3. Low slope and roughness on the meter scale are most important for a cosmology radio interferometer with a high concentration of dipoles in the central region, meaning the surface must be flat. Hence, the points for baseline (slope dependent) and roughness are doubled. A high overall score indicates a good site; the highest possible score is 30. Site size, baseline lengths and length ratios depend on the slope threshold, i.e. the awarded merit typically increases for increasing threshold slope. Gini is independent of the slope threshold, i.e. the awarded merit is the same value for all slope threshold maps. An example: Apollo, with an applied 20°slope threshold, has the longest length baseline > 200 km (compare Figure 10) and a bisecting baseline between the lengths 100 to 150 km, so the points awarded respectively are ten (factor awarded double points) and three from Table 3. The figure of merit is calculated for a given site by summing the points awarded for each factor for a given slope threshold. Two baseline points are awarded for the two baselines which bisect. The maximum number of points awarded by the figure of merit is 30. Below, each factor is analysed and followed by the figure of merit result in Table 5. 
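A tally of the point scheme just described might look like the sketch below. The weights are an assumption on our part, since the full point boundaries of Table 3 are not reproduced here, with the baseline and roughness factors doubled as stated; the Apollo scores are taken from the example above except where marked as placeholders.

```python
# One reading of the point scheme: each factor receives 1-5 points
# (from Table 3), and the baseline and roughness factors are doubled.
# These weights are an assumption, not the paper's exact tabulation.
WEIGHTS = {
    "slope_constrained_size": 1,
    "longest_baseline": 2,      # doubled, as stated in the text
    "bisecting_baseline": 1,
    "roughness_gini": 2,        # doubled, as stated in the text
    "terrain_obstacles": 1,
}

def figure_of_merit(points):
    """Weighted sum of the 1-5 point scores awarded to one site."""
    return sum(WEIGHTS[f] * p for f, p in points.items())

# Apollo at the 20 deg slope threshold: longest baseline > 200 km -> 5 points
# (doubled to 10), bisecting baseline 100-150 km -> 3 points; the remaining
# scores are placeholders, not values from the survey.
apollo_20deg = {"slope_constrained_size": 4, "longest_baseline": 5,
                "bisecting_baseline": 3, "roughness_gini": 4,
                "terrain_obstacles": 3}
print(figure_of_merit(apollo_20deg))
```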
Figure 10 shows how the maximum baseline (dotted line) and the maximum bisecting baseline (dashed line) change for increasing slope thresholds from 15° to 25°. For some sites, the longest achievable baselines depend strongly on the threshold slope, e.g., Apollo's second baseline. Sites are shown in different colours. At the 22° slope threshold only one site, Apollo, has a bisecting baseline longer than 200 km, but by increasing the slope threshold to 24°, four sites achieve the required 200 km length. The area between 22° and 24° is shaded in light orange to highlight the changes in baselines, and baselines > 200 km are shaded in green. The simplest radio interferometer design would involve two bisecting orthogonal arms of dipoles, each 200 km in length. For such a design, it is important for the arm lengths to be close to a 1:1 ratio. The site Korolev does not achieve a 1:1 ratio, whereas all other sites are close to a 1:1 ratio at the most tolerant slope limit, 25°. Roughness and slope maps of Korolev suggest by eye that the site is one of the roughest and least desirable for an interferometer. The Gini coefficient validates this assessment, being the minimum, 0.23. Similarly, the sites showing the most promise from inspection of their topographic maps are Apollo, Mare Ingenii, and Daedalus. These sites have the highest Gini coefficients, > 0.3, implying more consistent terrain. With the FoM characteristics considered, the weighted points for each characteristic are summed, and the total merit for each site is calculated. The highest-ranked site is the one with the greatest total points. Tables 4 and 5 show the figure of merit results by presenting the measured factor values and the sum of points awarded at each slope threshold, respectively. The most feasible locations for a 200 km array are Apollo, Mare Ingenii and Mare Moscoviense. The threshold slopes 15°, 20°, 22° and 24° were chosen. The lower boundary, 15°, is not sufficient to traverse a 200 km baseline, except in Apollo. The following thresholds show increases in the number of sites achieving a 200 km baseline. Figure 11 shows the 15° threshold maps of the top-ranked sites Apollo, Mare Ingenii and Mare Moscoviense. At this restrictive limit, one baseline can be traversed; however, a near-perpendicular bisecting baseline cannot.

DISCUSSION

We conducted a preliminary lunar far-side site survey to determine the optimum location for a 200 km-class, low-frequency (tens of MHz) radio interferometer. The lunar far-side has few smooth craters and mare regions where a 200 km radio interferometer could be situated. LOLA digital terrain maps, and slope and roughness maps derived from them, were used to examine eight potential sites. The surface of the Moon is a strenuous environment for wheeled vehicles. The engineering limits of existing and proposed vehicles influence the threshold slope of the terrain maps. Since we measured an average slope on a 200 m scale, slope thresholds of 15°, 22° and 24° were used. The 22° and 24° thresholds show an increase in sites with 200 km baselines. It should be noted that a 15° slope threshold is not sufficient to achieve the goal of a 200 km baseline interferometer. We examined four factors: slope-constrained size, PSF, roughness and terrain obstacles. From the DTMs, slope maps and roughness maps, measurements of these factors formed the FoM used to describe and rank the sites. Terrain features affect the ease of deployment of the antennas of the array.
Figure 9 shows the significant features of each site and their accessibility. Apollo is ranked top in the FoM because it is the largest site, having the largest baseline through terrain with minimal slope. However, the largest baseline is challenging because of the mountainous terrain bisecting the crater. The roughness Gini coefficient of Apollo is high because the crater floor is a smooth U-shaped region. Notably, within the U-shape of accessible terrain, a highly rough region in the lower right region of the crater poses a challenge to a rover (see also Figure 11). Mare Moscoviense has few obscuring objects within the mare floor but does show rough regions, though these appear avoidable. Mare Ingenii has an entirely flat, < 5°, surface separated by a steep mountain wall > 25°. Between crater walls are flat passageways, which create accessible routes for a rover. If the interferometer maximum baseline could be reduced whilst achieving the scientific goal, Tsiolkovsky and Daedalus become suitable locations for arrays smaller than 150 km. The two sites have the most accessible terrain within their crater floors and show one avoidable mountainous feature greater than the threshold slope. The sites Hertzsprung, Korolev and Mendeleev show very rough and disrupted terrain with features inclined greater than 25°. Routes through rougher sites are not identifiable and challenging because the 200 m scale on which the maps were interpolated implies that non-mapped features would be present. Limitations to this study arose from both computational resources and data anomalies. The topographic maps shown in this paper were produced with the best achievable scale of 200 m per pixel. The LOLA instrument has a nominal ranging precision of 10 cm and a vertical precision of < 1 m (Smith et al. (2017)). LOLA can calculate slope on the smallest scale ∼ 25 m. The anomaly affects the instrument's operation by the common case of two spots out of five emitted being detected. To overcome this challenge, interpolated data was created to estimate the surface between track coverage and the greater scale was chosen for computational efficiency. Using large scales means we do not have a detailed understanding of the lunar surface at these sites, and unobserved obstacles could make a site inaccessible. Fernandes & Mosegaard (2022) generated ∼ 1 m per pixel scale topographic maps of the site Mare Ingenii using images from the LRO Camera, which capture shadows cast from sunlight, and they relate this to the gradient. Future work includes using high-resolution topographic maps to determine realistic traverses and resultant cable length requirements. Improvements to the FoM can be made. For example, different weights can be assigned to each factor depending on how they affect mission design requirements (e.g., traverse length, available baseline given realistic slope capabilities). Site Protection This study has identified the importance of a few lunar far-side sites with the likelihood that these locations would be highly competitive in the next decades. The lack of smooth terrain on the far-side in- Table 3). Table 5 gives the sum of points awarded for each of the eight sites at each slope threshold. Figure 11. Left to the right, slope maps of the three highest ranked lunar sites Apollo (338 306 km), Mare Ingenii (306 213 ) and Mare Moscoviense (306 306 km). A colour scale with a 15°slope threshold represents the slope in degrees. White areas cannot be traversed and purple are the smoothest surfaces. 
In this case, preventing harmful interference at the sites will create disputes over entitlement to access them, regardless of the local resources (Elvis et al. (2021)). Currently, profitable lunar sites correspond to 'common-pool resources' (Edwards & Steins (1998)) in which 'no single nation has a generally recognized exclusive jurisdiction' (Wijkman (1982)). The impact of the SpaceX Starlink satellites has already been observed in Zwicky Transient Facility survey observations (Mróz et al. (2022)). Growing concerns over the impact of low Earth orbit satellite constellations on astronomical observations demonstrate the urgency of protecting future research from similar experiences (Lawrence (2021)). We expect to see developments within the field, as a recently established group within the International Academy of Astronautics (IAA), the Moon Farside Protection Permanent Committee, aims to call attention to lunar interference corruption. 14 CONCLUSIONS We have conducted a preliminary study of lunar far-side radio array sites. (i) Eight sites of dimension > 100 km were investigated by generating topographic maps at a scale of 200 m, with a ranging precision of 10 cm and a vertical precision of < 1 m; (ii) Only the site Apollo is traversable (meaning a linear 200 km baseline can be accessed by a rover) when a 15° slope threshold is applied to the region; (iii) Four sites (adding Mare Ingenii, Mare Moscoviense, and Hertzsprung) are traversable for a 24° slope threshold; (iv) The sites Tsiolkovsky and Daedalus would be good sites for smaller arrays of ∼ 100 km; (v) A figure of merit was created using size, slope, roughness, and topographic features to compare the sites objectively. Ongoing work to obtain higher-resolution topographic maps would provide a more rigorous site study. The rarity of good sites points to a need for their protection.
10,749.2
2023-07-12T00:00:00.000
[ "Physics" ]
Numerical Solutions of Quadratic Integral Equations by Using Variational Iteration and Homotopy Perturbation Methods In this paper, approximate solutions for quadratic integral equations (QIEs) are obtained by the variational iteration method (VIM) and the homotopy perturbation method (HPM). These methods produce the solutions in terms of convergent series without requiring restrictive assumptions; to illustrate the ability and credibility of the methods, we treat some examples that show their simplicity and effectiveness. Quadratic integral equations (QIEs) arise frequently in radiative transfer, neutron transport, the kinetic theory of gases and in traffic theory. QIEs have been studied in many papers and monographs (Bana's et al., 2005; Bana's et al., 1998; Bana's & Martinon, 2004; El-Sayed & Hashem, 2009a; El-Sayed & Hashem, 2009b). Recently, different analytical and numerical methods have been applied to obtain approximate solutions of QIEs. Since most QIEs have no exact solutions, much research has focused on qualitative properties of this class of equations, such as the existence, uniqueness, positivity and monotonicity of solutions (Argyros, 1985; Bana's et al., 1998; Bana's & Martinon, 2004; El-Sayed & Rzepka, 2006). Few papers have dealt with the numerical solution of QIEs: El-Sayed et al. (2010) used the classical Picard method of successive approximations and the Adomian decomposition method for solving QIEs, and Avazzadeh (2012) used radial basis functions to obtain approximate solutions of QIEs of Urysohn type. He (1999a; 1999b; 2000; 2003) was the first to propose the VIM and HPM for finding the solutions of linear and nonlinear problems. The VIM has been used widely in the literature in different scientific applications (Abdou & Soliman, 2005; Abulwafa et al., 2006; He & Wu, 2007). This method offers significant advantages over existing numerical and analytic techniques such as the perturbation, Adomian, Galerkin and finite difference methods. These methods handle ordinary and partial differential equations, integro-differential equations (IDEs) and integral equations in a direct way, without any specific restriction, and may give the closed form of the exact solution when one exists. The VIM imposes no specific restrictions on the nonlinear terms involved in the equation. The homotopy perturbation method deforms a difficult problem into a simple one that is easy to solve. Most perturbation methods assume the presence of a small parameter, but most nonlinear problems contain no small parameter at all; many new methods have been proposed to eliminate the small parameter (He, 1999b; Liao, 1995). The HPM has also been employed for solving several kinds of integral equations, such as Fredholm equations, nonlinear Volterra-Fredholm integral equations and Volterra integro-differential equations. The aim of the present paper is to extend the application of the HPM and VIM to give approximate solutions of a QIE of the form (1), where A(t) is a given function and F(s, x(s)) is a nonlinear function. We point out that this is the first time these methods have been applied to this kind of equation. The results reveal the effectiveness and simplicity of the two presented methods.
Variational Iteration Method Consider the differential equation L u + N u = g(x) (2), where L and N are linear and nonlinear operators, respectively, and g(x) is the inhomogeneous source term. The VIM introduces a correction functional for eq. (2) of the form u_{n+1}(x) = u_n(x) + ∫_0^x λ(ζ) [L u_n(ζ) + N ũ_n(ζ) − g(ζ)] dζ, where λ is a general Lagrange multiplier; in this method λ may be a constant or a function, and it can be identified optimally by the variational theory. The subscript n denotes the nth-order approximation, and ũ_n is considered a restricted variation, meaning that it behaves as a constant, i.e. δũ_n = 0. It was found in (Abdou & Soliman, 2005; Abulwafa et al., 2006; He & Wu, 2007) that the general formula for λ(ζ) for an nth-order differential equation has the form λ(ζ) = (−1)^n (ζ − x)^{n−1}/(n − 1)!. The solution is then given by u(x) = lim_{n→∞} u_n(x). Homotopy Perturbation Method Consider the differential equation (2) for x ∈ Ω, subject to the boundary conditions B(u, ∂u/∂n) = 0 on the boundary Γ of the domain Ω, where B is a boundary operator. He's homotopy perturbation technique (He, 1999a; He, 2000) defines the homotopy ν(x, p) : Ω × [0, 1] → ℜ which satisfies H(ν, p) = (1 − p)[L(ν) − L(u_0)] + p[L(ν) + N(ν) − g(x)] = 0 (7), or equivalently H(ν, p) = L(ν) − L(u_0) + p L(u_0) + p[N(ν) − g(x)] = 0 (8), where x ∈ Ω, p ∈ [0, 1] is an embedding parameter, and u_0 is an initial approximation which satisfies the boundary conditions. From eqs. (7) and (8) we have H(ν, 0) = L(ν) − L(u_0) = 0 and H(ν, 1) = L(ν) + N(ν) − g(x) = 0. The process of changing p from zero to unity is just that of ν(x, p) changing from u_0 to u(x); in topology this is called deformation, and L(ν) − L(u_0) and L(ν) + N(ν) − g(x) are said to be homotopic. The solution of eqs. (7) and (8) can be written as a power series in p, ν = ν_0 + p ν_1 + p^2 ν_2 + ⋯ (11), and as p → 1 the approximate solution becomes u = ν_0 + ν_1 + ν_2 + ⋯ (12); the convergence of the series (12) has been proved in (He, 1999a; He, 2000). Numerical Examples In this part we study some examples, applying both the VIM and the HPM for comparison. Example 1. Solve the QIE (13) (El-Sayed et al., 2010), whose exact solution is x(t) = t^2. We first convert the Volterra QIE into an equivalent Volterra IDE (14) by differentiating both sides of the QIE, using the Leibnitz rule to differentiate the integral on the right-hand side. Substituting t = 0 in eq. (13) gives the initial condition x(0) = 0. The correction functional for equation (14) is (15). We substitute the value λ(ζ) = −1, identified by the variational theory, into eq. (15) and use the initial value x(0) = 0 to define the zeroth approximation x_0(t); from equation (15) we then obtain the successive approximations x_0(t) = 0, x_1(t), x_2(t), ... (16), and so on, and the solution is given by their limit. Table 1 shows the approximate solution for n = 4; it is also clear that we can improve the accuracy of the solution by computing more terms of the approximation. According to the HPM we can construct the homotopy (20); substituting (11) into (20) and equating the terms with identical powers of p, we obtain the components successively, and so on, where H_i are He's polynomials of the nonlinear term x^2, and the solution is given by summing the components. Example 2. Solve the QIE (El-Sayed et al., 2010) with exact solution x(t) = t^3. As before, we convert the Volterra QIE into an equivalent Volterra IDE by differentiating both sides of the QIE, using the Leibnitz rule for the integral on the right-hand side.
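The correction functional with λ(ζ) = −1 can be iterated symbolically. The sketch below applies it to a toy first-order nonlinear problem of our own with the same exact solution x(t) = t^2 as Example 1; the toy equation is only an illustration of the recursion, not the QIE of eq. (13).

```python
import sympy as sp

t, z = sp.symbols("t zeta")

# Toy nonlinear IVP with exact solution x(t) = t**2 (illustrative, NOT eq. (13)):
#   x'(t) = 2*t - x(t)**2 + t**4,   x(0) = 0
f = lambda tau, x: 2 * tau - x**2 + tau**4

x_n = sp.Integer(0)                              # zeroth approximation from x(0) = 0
for _ in range(3):
    residual = sp.diff(x_n, t) - f(t, x_n)       # x_n'(t) - f(t, x_n(t))
    correction = sp.integrate(residual.subs(t, z), (z, 0, t))
    x_n = sp.expand(x_n - correction)            # correction functional with lambda = -1
    print(x_n)
```

The first iterate is t^2 + t^5/5 and subsequent iterates approach the exact solution t^2 for small t, which mirrors the behaviour described for the paper's examples.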
This gives the equivalent IDE (26). Substituting t = 0 in the QIE (25) gives the initial condition x(0) = 0. The correction functional for equation (26) is (27). We substitute the value λ(ζ) = −1, identified by the variational theory, into eq. (27) and use the initial value x(0) = 0 to obtain the zeroth approximation x_0(t); the solution is then given by the limit of the successive approximations. Table 3 shows the approximate solution for n = 3; it is also clear that we can improve the accuracy of the solution by computing more terms of the approximation. According to the HPM we can construct the homotopy (29); substituting (11) into (29) and equating the terms with identical powers of p, we obtain the components successively, and so on, where A_i and B_i are He's polynomials of the nonlinear terms x^2 and x^3, respectively, and the solution is given by summing the components. Table 4 shows the approximate solution for n = 3; again, the accuracy can be improved by computing more terms of the approximation. Example 3. Solve the QIE (34) (Bana's et al., 2005). According to the VIM, differentiating both sides of eq. (34) once with respect to t gives the IDE (35). The correction functional for eq. (35) is (36). We use the initial value x(0) = 0 to obtain the zeroth approximation x_0(t), and from eq. (36) we obtain the successive approximations x_0(t) = 0, x_1(t), x_2(t), ... (37), and so on; the solution is given by their limit. According to the HPM we can construct the homotopy (41); substituting (11) into (41) and equating the terms with identical powers of p we have p^0: u_0(t) = t^3 (42), and so on, where H_i are He's polynomials of the nonlinear term cos(x(s)/(1 + x^2(s))), and the solution is given by summing the components. For the QIE (47), the correction functional is constructed in the same way; the zeroth approximation x_0(t) can be selected by using the initial value x(0) = 1, and we can again construct the corresponding homotopy according to the HPM. Table 1. Comparison of the numerical results with the exact solution x(t). Figure 1. Comparison of the approximate solution by VIM with the exact solution. Table 2. Comparison of the numerical results with the exact solution x(t) (El-Sayed et al., 2010). Figure 2. Comparison of the approximate solution by HPM with the exact solution. Table 2 shows the approximate solution for n = 4; it is also clear that we can improve the accuracy of the solutions by computing more terms of the approximation. Table 3. Comparison of the numerical results with the exact solution x(t). Figure 3. Comparison of the approximate solution by VIM with the exact solution. Table 4. Comparison of the numerical results with the exact solution x(t). Figure 4. Comparison of the approximate solution by HPM with the exact solution. Table 5. Approximate solution x(t) by VIM and HPM for n = 1. Figure 5. Approximate solutions obtained by the VIM and HPM.
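For the HPM, the order-by-order equations obtained by collecting powers of p can likewise be generated symbolically. The sketch below applies the homotopy (1 − p)[L(v) − L(u_0)] + p[L(v) + N(v) − g] = 0 with L(u) = u′ and N(u) = u^2 to the same toy problem used above; it is an illustration of the recursion, not one of the paper's examples.

```python
import sympy as sp

t, p = sp.symbols("t p")
g = 2 * t + t**4                   # toy problem u' + u**2 = g(t), u(0) = 0 (exact solution t**2)
u0 = sp.Integer(0)                 # initial approximation satisfying u(0) = 0
n_terms = 4

v = [u0]                           # order p^0 balance gives v0 = u0
for i in range(1, n_terms):
    partial = sum(p**j * v[j] for j in range(i))
    # order p^i balance of  v' - u0' + p*u0' + p*(v**2 - g) = 0
    rhs = -sp.expand(partial**2).coeff(p, i - 1)
    if i == 1:
        rhs += g - sp.diff(u0, t)
    v.append(sp.integrate(rhs, (t, 0, t)))       # integrate with v_i(0) = 0

print(sp.expand(sum(v)))           # HPM approximation: sum of the components at p = 1
```

With these choices the summed components reproduce the same truncated series as the VIM sketch above, as expected for this simple problem.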
2,226
2017-04-17T00:00:00.000
[ "Mathematics" ]
The Interaction between Surface Acoustic Waves and Spin Waves: The Role of Anisotropy and Spatial Profiles of the Modes The interaction between different types of wave excitation in hybrid systems is usually anisotropic. Magnetoelastic coupling between surface acoustic waves and spin waves strongly depends on the direction of the external magnetic field. However, in the present study we observe that even if the orientation of the field is supportive for the coupling, the magnetoelastic interaction can be significantly reduced for surface acoustic waves with a particular profile in the direction normal to the surface at distances much smaller than the wavelength. We use Brillouin light scattering for the investigation of thermally excited phonons and magnons in a magnetostrictive CoFeB/Au multilayer deposited on a Si substrate. The experimental data are interpreted on the basis of a linearized model of interaction between surface acoustic waves and spin waves. Preparation of the sample and material parameters We used naturally oxidized (001) silicon as a substrate supporting the studied [Co 20 Fe 60 B 20 /Au] 20 multilayers deposited on top of a 4 nm titanium (Ti) and a 15 nm gold (Au) buffer layers. The multilayers were deposited by magnetron sputtering in argon atmosphere (P Ar = 1.4 × 10 −3 mbar) in a multi-chamber system with a base pressure below 2 × 10 −8 mbar. The thickness of each layer was controlled by the deposition time. 1 The multilayer structure ((Co 20 Fe 60 B 20 (2.1 nm)/Au(0.9 nm))×20) was designed to reduce the effective magnetization. According to Kittel's equation, by decreasing the effective magnetization it is possible to lower the resonance frequency in a given field. This means that the dispersion relation can be shifted by changing the effective magnetization. The effective magnetization is the sum of the shape anisotropy and the surface anisotropy. The shape anisotropy is defined by the saturation magnetization (M S = 1200 kA/m) of the ferromagnetic layers and cannot be tuned without changing the alloy composition. On the other hand, the contribution of the surface anisotropy is inversely proportional to the thickness of the ferromagnetic layer. The Co 20 Fe 60 B 20 /Au interface was found to have a surface anisotropy large enough to obtain an easy out-of-plane magnetization direction for films thinner than 1.1 nm. 1 By choosing a thickness of 2.1 nm of the Co 20 Fe 60 B 20 layer the surface anisotropy can be tuned to substantially reduce the effective magnetization and, as a result, lower the dispersion relation. We used a 0.9 nm thick Au spacer, which assured ferromagnetic coupling between layers. 2,3 So designed, the whole structure should behave like a single ferromagnetic film. Typically used for inducing surface anisotropy, heavy metals tend to increase the damping of the magnetic layers due to the spin pumping effect. Gold layers are an exception, since they are quite heavy and promote perpendicular anisotropy, but have low spin mixing conductance and long spin diffusion length. 1 Using Au as a spacer we lower the effective magnetization without significantly increasing the damping. By fitting the numerical data to the experimental results we determined the effective material parameters of the multilayer (treated as an effective medium). The magnetic parameters are: saturation magnetization M S = 950 kA/m and exchange stiffness constant A = 5 pJ/m. We assume the value of gyromagnetic ratio γ = 176 GHz/T. 
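As a rough illustration of the design argument above, the in-plane Kittel formula f = (γ/2π) sqrt(B (B + µ0 M_eff)) shows how lowering the effective magnetization lowers the resonance frequency at a fixed field. The sketch below uses the quoted gyromagnetic ratio and a few illustrative M_eff values; it is not a fit to the measured dispersion relations.

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability (T m / A)
GAMMA = 176e9           # gyromagnetic ratio quoted in the paper (rad s^-1 T^-1)

def kittel_inplane(B_ext, M_eff):
    """FMR frequency (Hz) of an in-plane magnetized film: f = (gamma/2pi) sqrt(B(B + mu0*Meff))."""
    return GAMMA / (2 * np.pi) * np.sqrt(B_ext * (B_ext + MU0 * M_eff))

for M_eff in (1200e3, 950e3, 600e3):   # A/m; reducing the effective magnetization lowers f
    print(int(M_eff / 1e3), "kA/m ->", round(kittel_inplane(30e-3, M_eff) / 1e9, 2), "GHz at 30 mT")
```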
The elastic parameters (the mass density ρ and the components of the stiffness tensor c) of the constituent materials and of the effective medium are specified in Tab. 1. The materials are assumed to be elastically isotropic. The magnetoelastic coupling constant b_1 is determined using the strain-modulated ferromagnetic resonance technique. 4 The value obtained for the considered system is b_1 = −2.5 MJ/m^3. We assume b_2 = b_1. Experimental setup We studied the dispersion relations of thermally excited SAWs and magnetostatic SWs using a six-pass tandem Brillouin spectrometer (Scientific Instruments TFP2-HC - see Ref. 5), which ensures a contrast of 10^15. A frequency-stabilized diode-pumped solid-state laser (Coherent VERDI V5) operating at λ_0 = 532 nm was used as the source of incident light. The measurements were performed in the 180° backscattering geometry with crossed (p-s) polarization of incident and scattered light for SWs and non-crossed (p-p) polarization for SAWs. 6,7 This BLS geometry allows spectra to be collected for various configurations of the wave vector and the magnetic field. The backscattered light was collected by a lens with an 84 mm focal length and a 0.26 numerical aperture. A detailed description of the experimental setup, represented schematically in Figure 1, can be found in Ref. 8. In this paper we present the experimental results obtained for two angles, 0° and 45°, between the wave vector and the magnetic field. Figure 1 (caption): The orientation of the sample holder (b) with respect to the direction of the incident beam determines the wave vector value |k| of the SWs and SAWs interacting with the light. Both the incident beam (of wave vector q_i) and the backscattered beam (q_s) are shaped by an objective lens (c). The frequency spectrum of the backscattered beam is analyzed in a tandem Fabry-Perot interferometer (Scientific Instruments TFP2-HC) (d). The refracted beam (q_r) is used to determine the orientation of the sample holder. A magnetic field H_0 is applied in the plane of the sample, parallel to k. The Brillouin frequency shifts of the laser beam inelastically scattered from phonons and magnons are shown in Figure 4. [9][10][11] In our experiment the wave vector value k was varied from 0.6 × 10^5 cm^-1 to 2.2 × 10^5 cm^-1. The minimal accumulation time for each spectrum was 6000 cycles. In the paper we present phonon and magnon dispersion relations obtained for two magnetic field values, 30 mT and 50 mT. By appropriately adjusting the magnitude of the applied magnetic field we can slightly shift the SW dispersion relations up in the frequency domain. This allows the magnetoelastic interactions in the dispersion relations of the F-SW, R-SAW and L-SAW modes to be observed in a wave vector range convenient for BLS measurements. The detailed behavior of magnons in the region of the F-SW/L-SAW interaction is presented in Figure 4. The F-SW/L-SAW interaction is detectable only through the splitting of the magnon peak, because the phonon peak of the L-SAW is barely measurable. The full width at half maximum (fwhm) of the magnon peaks is small enough to trace the splitting with changing wave vector and to plot the F-SW/L-SAW anticrossing - see Figure 3. The presented spectra include the peaks corresponding to the fundamental spin-wave (F-SW) mode, the first perpendicular standing spin-wave (PS-SW) mode, the Rayleigh SAW (R-SAW) and the Sezawa SAW (S-SAW).
The Love SAW (L-SAW), whose frequency lies just below that of the R-SAW, is practically invisible experimentally, except in the region of interaction with the F-SW (k ≈ 1.13 × 10^5 cm^-1), where it is detectable due to the splitting of the F-SW peak. The right column presents a larger number of BLS spectra aggregated into a dispersion relation, showing the F-SW/L-SAW anticrossing (i.e. interaction) and the F-SW/R-SAW crossing (i.e. lack of interaction). Theoretical model We decompose the magnetization M and the effective magnetic field H_eff into static and dynamic parts, M = M_0 + m and H_eff = H_0 + h + h_me, where h_me is the component of the dynamic field resulting from the magnetoelastic interaction with SAWs. The damping-free Landau-Lifshitz equation ∂M/∂t = −γ µ_0 M × H_eff can then be written in the form ∂m/∂t = −γ µ_0 (M_0 + m) × (H_0 + h + h_me). We assume that the external field H_0 is applied in-plane and, because of the homogeneity of the film, there are no static demagnetizing fields. Assuming further that the sample is saturated and neglecting the non-linear terms, m × (h + h_me) → 0, the linearized Landau-Lifshitz equations for the in-plane (∥) and out-of-plane (⊥) components of the dynamic magnetization are written in terms of H_0 = |H_0| and M_S ≈ M_0 = |M_0|, the saturation magnetization. In the coordinate system presented in Figure 1 of the main paper, m_x = −m_∥ sin φ, m_y = m_∥ cos φ, m_z = m_⊥, where φ is the deflection of the wave vector with respect to the direction of the external magnetic field. Table 1 (caption): In-plane and out-of-plane components of the dynamic effective field of magnetoelastic origin induced by the R/S-SAW or L-SAW for three selected angles φ between the in-plane applied field H_0 and the wave vector k = x̂ k. The dynamic magnetic field components h_∥ and h_⊥ are determined by the exchange and dipolar interactions between the dynamic components of the magnetization, whereas the fields h_me,∥ = (1/µ_0) ∂F_me/∂m_∥ and h_me,⊥ = (1/µ_0) ∂F_me/∂m_⊥ describe the impact of the strain generated by propagating SAWs on the magnetization dynamics: 12 µ_0 h_me,∥ = 2 b_2 (ε_xz cos φ + ε_yz sin φ), µ_0 h_me,⊥ = b_1 (ε_xx − ε_yy) sin 2φ − 2 b_2 ε_xy cos 2φ (4). These fields can be derived from the explicit form of the magnetoelastic energy density, written in terms of the strain tensor ε_ij and the Kronecker delta δ_ij, with the magnetization vector components expressed in the xyz coordinate system. In the considered geometry (with the external field H_0 applied in the x-y plane and SAWs propagating along the x-direction, k = x̂ k) the strain tensor components figuring in eq. (4) are: ε_xx = i k u_x, ε_yy = 0, ε_xy = (1/2)(∂_y u_x + ∂_x u_y), ε_xz = (1/2)(∂_z u_x + i k u_z), ε_yz = (1/2) ∂_z u_y. These formulas are further simplified for particular SAW types. For the R-SAW and S-SAW: ε_yy = ε_xy = ε_yz = 0; the non-zero components are ε_xx and ε_xz, with ε_xx dominant for larger k. The ε_xz component is reduced near the surface and therefore, in principle, has a smaller impact on the magnetoelastic coupling. For the L-SAW ε_xx = ε_yy = ε_xz = 0; the only non-zero components of the strain tensor are ε_xy and ε_yz, with ε_xy > ε_yz close to the surface and for larger k. Let us compare the µ_0 h_me,∥ and µ_0 h_me,⊥ values for the R/S-SAW and the L-SAW for three selected orientations of the in-plane applied magnetic field: φ = 0°, 45° and 90°.
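Equation (4) can be evaluated directly to see which SAW type drives which magnetization component at a given field angle. The sketch below uses the quoted b_1 = b_2 = −2.5 MJ/m^3 and purely illustrative (not measured) strain amplitudes for the two SAW families.

```python
import numpy as np

b1 = b2 = -2.5e6      # magnetoelastic coupling constants from the paper (J/m^3)

def h_me(eps, phi):
    """Return mu0*(h_par, h_perp) from eq. (4) for a strain dict eps and in-plane field angle phi (rad)."""
    h_par = 2 * b2 * (eps.get("xz", 0.0) * np.cos(phi) + eps.get("yz", 0.0) * np.sin(phi))
    h_perp = (b1 * (eps.get("xx", 0.0) - eps.get("yy", 0.0)) * np.sin(2 * phi)
              - 2 * b2 * eps.get("xy", 0.0) * np.cos(2 * phi))
    return h_par, h_perp

rayleigh = {"xx": 1e-5, "xz": 2e-6}   # illustrative amplitudes: epsilon_xx dominant for R/S-SAW
love = {"xy": 1e-5, "yz": 2e-6}       # illustrative amplitudes: epsilon_xy dominant for L-SAW
for phi_deg in (0, 45, 90):
    phi = np.radians(phi_deg)
    print(phi_deg, "deg | R/S-SAW:", h_me(rayleigh, phi), "| L-SAW:", h_me(love, phi))
```

Consistent with the discussion, the R/S-SAW contribution to the perpendicular field peaks at 45 degrees, while the L-SAW contribution peaks at 0 and 90 degrees.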
Table 1 presents the in-plane and out-of-plane components of the dynamic effective field of magnetoelastic origin, expressed by the respective strain tensor components, for these three orientations of the magnetic field. If we consider the dominant strain tensor components that are maximal at the surface, ε xy and ε xx , we notice that (i) the R/S-SAW drives the SW dynamics by the perpendicular component of the effective field, whereas the L-SAW affects SWs by the in-plane component of the effective field; (ii) the interaction is expected to be maximal with φ = 0 • or 90 • for the L-SAW, and with φ = 45 • for the R/S-SAW. Let us discuss the reverse process, i.e. the influence of the SW dynamics on the SAWs. The fundamental elastodynamic equation, derived from Newton's 2 nd law of motion, reads: with σ me,x i ,x k = ∂Fme ∂εx i ,x k related to the respective derivatives of the magnetoelastic energy density component F me . Equation (6) can be written in the explicit form: 13 In the linear approximation the spatial derivatives of magnetoelastic components of the stress tensor are equal and expressed by the formulas: The terms ∂ x k σ me,x,x k and ∂ x k σ me,x,x k affect the dynamics of the displacement components u x and u z . This allows for the determination of the impact of the R/S-SAW on SWs (described by m and m ⊥ components). The term ∂ x k σ me,y,x k modifies the dynamics of the displacement component u y , carrying the influence of the L-SAW on SWs. In Table 3 we summarize the formulas for ∂ x k σ me,x j ,x k for three selected angles φ = 0 • , 45 • and 90 • . If we consider waves with larger wave vectors (with 2π/k much greater than the thickness of the magnetostrictive layer) and take into account the ellipticity of the SW (due to dynamic demagnetizing effects), m > m ⊥ , then we can notice that: (i) for φ = 0 the SW is coupled with the R/S-SAW (by m ⊥ ) and L-SAW (by m ), but its impact on the L-SAW should be more significant, since m > m ⊥ ; (ii) for intermediate angles (φ = 45 • ) m carries the strongest influence of the SW on the R/S-SAW; its effect on the L-SAW is more difficult to analyze; it is definitely negligible for circular precession (m − m ⊥ = 0), observed in the absence of dipolar interaction, in which case the profile of the F-SW mode is uniform in amplitude (∂ z m ⊥ ), but for elliptical presession both terms (m − m ⊥ and ∂ z m ⊥ ) can cancel each other out, which is consistent with the lack of reverse coupling (i.e. the impact of the L-SAW on the SW, see Table 1); (iii) for φ = 90 • the SW cannot affect the R/S-SAW, but can impact on the L-SAW by m ⊥ alone. Table 3: Spatial derivatives of the magnetoelastic contributions to the stress tensor, ∂ x k σ me,x j ,x k (x j = {x, y, z}), responsible for changes in the displacement (u x and u z for R/S-SAW, u y for L-SAW) induced by a SW (described by m and m ⊥ components). We consider three angles φ between the in-plane applied external field H 0 and the direction of the wave vector k =xk. Note that ∂ x m ⊥ = ik m ⊥ , ∂ x m = ik m . Numerical simulations Using the finite element method in COMSOL Multiphysics c 14 we solve numerically the coupled equations of motion for mechanic displacement and magnetization. The CoFeB/Au multilayer is treated as a 60-nm-thick effective magnetic layer, and the CoFeB/Au multilayer together with the Ti/Au buffer as an 80-nm-thick effective acoustic layer on a Si substrate. 
This is a reasonable assumption, since every single-layer thickness is much smaller than the wavelengths considered in this study. The Landau-Lifshitz equation is implemented in the Mathematics module and solved (along with the Poisson equation for the magnetostatic potential) for the effective magnetic layer with the effective fields related to the exchange, dipolar, Zeeman and magnetoelastic interactions taken into account. The standard forms of effective fields are implemented. The magnetostrictive fields are obtained from the magnetostrictive energy density: The Landau-Lifshitz equation is coupled to the Solid Mechanics module solver for the acoustic waves by the magnetostrictive stress: The dispersion relations are obtained in the Eigenfrequency study. For this purpose the Floquet boundary conditions are implemented along the x direction to define the wavevector k in the model. No relaxation is necessary since the equilibrium static magnetization is always parallel to the external magnetic field in our system. To avoid noise in the solutions we set the scaling of all dependent variables to a fixed value. To investigate the F-SW/SAW interaction for different angles φ between the external magnetic field H 0 and the wave vector k we need to identify the F-SW mode in the rich magnetoelastic spectrum. We vary the angle φ from 0 • (backward volume configuration) to 90 • (Damon-Eshbach configuration); the dispersion relations calculated for φ = 0 • , 15 • , 30 • , 45 • , 60 • and 90 • are presented in Figure 4(a-f). As the angle φ increases, the fundamental mode (red points) shifts towards higher frequencies. As a result, the frequency and wave vector corresponding to the magnetoelastic coupling between the F-SW mode and the L-SAW rise as well. On the other hand, the strength of the F-SW/L-SAW magnetoelastic coupling reaches a global minimum at 45 • and a global maximum at 0 • in its angular dependence. It is worthy of notice that the identification of the F-SW mode requires the inspection of the profile of the SW modes (the F-SW mode should be uniform in phase across the magnetostrictive layer). This identification is vital in the vicinity of the F-SW/PS-SW (anti)crossings. At anticrossings the mixed character of the SW modes will affect the magnetoelastic interactions -see e.g. the kinks, resulting from the F-SW/1 st PS-SW anticrossing, in the angular dependence of the F-SW/L-SAW frequency splitting ∆f at 19 • in Figure 4(a) in the main paper.
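The frequency splitting ∆f extracted from such dispersion relations is often summarized with a two-mode (anticrossing) model in which the gap at the crossing equals twice the coupling strength. A minimal sketch, with an illustrative linear SAW branch and a flat SW branch standing in for the simulated dispersions, is given below.

```python
import numpy as np

def coupled_modes(f_a, f_b, kappa):
    """Eigenfrequencies of a 2x2 coupled-mode model; the gap at degeneracy is 2*kappa."""
    mean, half_diff = 0.5 * (f_a + f_b), 0.5 * (f_a - f_b)
    root = np.sqrt(half_diff**2 + kappa**2)
    return mean - root, mean + root

k = np.linspace(0.6e5, 2.2e5, 81)        # wave-vector range probed by BLS (cm^-1)
f_saw = 3.5e9 * k / 1.13e5               # illustrative linear SAW branch (Hz), not a fitted dispersion
f_sw = np.full_like(k, 3.5e9)            # illustrative flat SW branch near the crossing
lower, upper = coupled_modes(f_saw, f_sw, kappa=0.05e9)
print("minimum splitting (GHz):", round((upper - lower).min() / 1e9, 3))  # ~2*kappa near k = 1.13e5 cm^-1
```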
3,995.6
2020-11-24T00:00:00.000
[ "Physics" ]
Development of inertial piezoelectric linear actuator with asymmetric design. This article presents a numerical and experimental investigation of a piezoelectric linear actuator with an asymmetric design. The actuator is based on a square-shaped rod with an asymmetrical cut and a T-shaped clamping. The cylindrical rail, with slider, is on the side opposite the T-shaped clamping. The low volume and mass of the piezoelectric actuator allow it to be mounted directly on a printed circuit board (PCB). The actuator operation is based on the inertial stick-slip principle induced by the first longitudinal vibration mode, excited by two sawtooth signals with a phase difference of π. Furthermore, the longitudinal vibration mode induces bending deformations of the rod, increasing the displacements of the cylindrical rail. The results of the numerical and experimental investigations have validated the actuator operating principle and shown that the displacement amplitude of the guidance rail reaches 153.6 μm at a 200 V p-p excitation signal amplitude, while the linear motion speed reaches 45.6 mm/s for the same excitation signal. Introduction Today, modern mechatronic, aeronautic, and space systems rely on high-accuracy laser and optical devices driven by actuators and motors, and these drives must meet strict motion accuracy requirements [1]. However, during the development of such systems, the use of electromagnetic actuators and motors becomes one of the most challenging aspects. Electromagnetic fields, limits in motion accuracy and scalability, and high mass are the main drawbacks of electromagnetic actuators and motors [2]. The best option to replace these drives are piezoelectric drives, which surpass electromagnetic drives in size, mass, scalability, and motion accuracy. Moreover, piezoelectric actuators and motors provide magnetic-field-free operation, power-free self-locking, and direct drive of the load, and they can be designed to drive the load angularly or linearly. However, in most cases they are more complex in design and drive [3]. Therefore, the most common piezoelectric drives are linear motion actuators based on the inertial stick-slip operation principle. These types of actuators make it possible to utilize all the advantages of piezoelectric drives. However, there is still a demand for inertial linear piezoelectric actuators that are simple in design, have low mass, and offer flexible mounting and driving possibilities. Chen et al. reported an inertial piezoelectric linear actuator based on dual discs [4]. Operation of the actuator is based on the inertial stick-slip principle, obtained by simultaneously exciting piezoelectric bimorph discs with a single rectangular excitation signal whose frequency is equal or close to the first bending mode of the discs. This design ensures higher motion speeds and output forces at different amplitudes of the excitation signal. On the basis of the results of numerical and experimental investigations, the authors claimed that the proposed actuator is capable of providing a speed of 37.5 mm/s and an output force of 15.4 mN. Ko et al.
reported on a piezoelectric inertial linear actuator based on a bimorph piezoelectric disc whose operation is obtained via inertial slip -stick principle [5].The actuator is based on single bimorph piezoelectric disc with a cylindrical rod mounted at the center of it and driven by sawtooth signal in order to excite the first, out of plane bending mode of disc.Due to usage of sawtooth signal, the disc slowly bends to one direction and rapidly bends to opposite, and by this way inertial operation principle is obtained.The authors, on basis of numerical and experimental investigations, found that the actuator is able to provide up to 15 mm/s of speed and 140 mN of output force. This paper represents numerical and experimental investigations on piezoelectric linear motion actuator which operation is based on the inertial stick -slip principle.The actuator is based on a square-shaped rod with asymmetrical cut and T-shaped clamping.The operation principle of the actuator is based on the first longitudinal vibration mode of a square shaped rod excited by two sawtooth signals applied to eight piezo ceramic plates.The longitudinal vibrations of the rod are transferred to the rail through a thin wall, which is formed by an asymmetric cut.Moreover, during actuator operation in the first longitudinal mode, additional, in-plane, bending deformations occur due to asymmetric design of actuator.Therefore, in comparison to the state of art, the proposed actuator and its operation propose the possibility to obtain linear motion via excitation of the first longitudinal vibration mode in which displacements are amplified and transferred via thin wall, as well as due to the asymmetric design an additional bending mode of the rod is obtained which leads to higher motion speeds. Design and operation principle of actuator The piezoelectric linear actuator is made up of a square-shaped rod which contains asymmetrical cut and T-shaped clamping.The rod and clamp are made as one body from C17200 beryllium bronze.On the rod ends are glued plates made from NCE81 (CTS Corp, USA) hard piezo ceramic material.The polarization directions of piezo ceramic plates are pointed from a square-shaped rod.Asymmetrical cut, which is made in a square-shaped rod, forms a thin wall, while at the center of the wall a cylindrical guidance rail made of carbon fiber is placed, while on the rail, through friction force, a slider is placed.Finally, T-shaped clamping is used to mount the actuator to PCB via two M3 bolts.The design of the actuator is represented in Fig. 1.The operation of the actuator is based on the first longitudinal vibration mode of the squareshaped rod excited by two sawtooth signals with a phase difference in π.During vibrations, thin wall will be affected by two opposite forces inducted by the first longitudinal vibration mode and as a result will bend.Bending of the thin wall will ensure that longitudinal vibrations are transferred to the cylindrical guidance rail, and will amplify displacements of these displacements.In addition, the rod, during vibration in longitudinal direction, will generate bending, in plane, deformations which additionally improve displacement amplitudes of cylindrical guidance rail.Bending deformations are inducted due to uneven stiffness of the rod which is created by asymmetrical cut and clamping of the actuator.Therefore, the composition of excitation signal shape and structural design of actuator ensure the principle of inertial stick-slip operation. 
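For reference, the two drive waveforms just described can be generated numerically as phase-shifted sawtooth signals. The sketch below assumes the roughly 59.8 kHz operating frequency reported later and a 200 V p-p amplitude, and it ignores amplifier and piezo loading effects.

```python
import numpy as np
from scipy.signal import sawtooth

f_drive = 59.82e3                  # operating frequency reported by the simulations (Hz)
amplitude = 100.0                  # half of the 200 V p-p excitation amplitude (V)
t = np.linspace(0.0, 3.0 / f_drive, 1500)

u1 = amplitude * sawtooth(2 * np.pi * f_drive * t)            # signal for the first electrode group
u2 = amplitude * sawtooth(2 * np.pi * f_drive * t + np.pi)    # second group, phase shifted by pi

print(u1[:5])
print(u2[:5])
```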
Excitation of the actuator is based on two sawtooth signals which are applied to piezo ceramic plates.In order to obtain proper operation, piezo ceramic plates must be divided to three separate groups, which ensures excitation of the first longitudinal vibration mode as well as enlarges bending vibrations.The actuator excitation schematics is given in Fig. 2. The group of piezo ceramic plates dedicated to excite the first longitudinal vibration mode is composed from four piezo ceramic plates at the top and bottom of a square-shaped rod.The plates induct squeezing and stretching of the rod and by this way excites the first longitudinal vibration mode.The excitation signal for this group is not changed during the entire operation of the actuator.Groups dedicated to enlarge, in-plane, bending deformations are based on two piezo ceramic plates which are placed on sides of the rod.The excitation of these piezo ceramic plates is implemented by two sawtooth signals with phase difference in π in order to enlarge, in plane, the bending deformations of the rod, while one of the signals is the same as for the excitation of the longitudinal vibration mode. Numerical investigation of actuator Numerical investigations were performed to confirm operation principle, indicate the mechanical and electromechanical characteristics of the actuator.For this purpose, the finite element model (FEM) was built by COMSOL Multiphysics software.Boundary conditions were set as follows: the T-shaped clamping was fixed rigidly while electrical boundary conditions were set with respect to the schematic shown in Fig. 2 material characteristics were assigned as described in the previous section.The first stage of the numerical investigation was dedicated to modal analysis of the actuator.The goal of the investigation was to find the proper deformation mode and confirm its suitability for actuator operation.The results of the calculations are given in Fig. 3.As can be found in Fig. 3, suitable modal shape of the actuator was obtained at 59.72 kHz.Also, the modal shape shows that the longitudinal vibration mode inducts, in plane bending deformations as well as thin wall bends under the effect of longitudinal displacements of the rod.Therefore, it can be concluded that vibration mode obtained at 59.72 kHz is suitable for actuator operation.The next stage of numerical investigations was dedicated to calculations of the impedance and phase frequency characteristics of the actuator.For this purpose, frequency domain study was set in range from 59.7 to 60 kHz with increment step of 5 Hz.The results of the calculations are shown in Fig. 4. As can be found in Fig. 4, the actuator resonance frequency is at 59.82 kHz.The slight mismatch between the values obtained by the modal and frequency domain studies is influenced by minor differences in the mesh as well as in the discreet steps used by the frequency domain study.However, the values are in good agreement and confirm the operation frequency of the actuator.In addition, the electromechanical coupling of the actuator was calculated and reached value of 0.045. Finally, through a frequency domain study, displacement amplitudes were calculated at different amplitudes of the excitation signal in the direction.The goal of the investigation was to indicate the displacement amplitudes of the cylindrical guide rail inducted by vibrations of a square-shaped rod.The results of the calculations are given in Fig. 5. As can be found in Fig. 
5, the highest displacement amplitudes were obtained at 200 V p-p and reached 153.6 μm, or 768 nm/V p-p, while the lowest displacements were obtained at 80 V p-p and reached 61.2 μm, or 765 nm/V p-p. This shows that the actuator is able to provide stable output displacements at different excitation amplitudes. Experimental investigation of actuator The prototype was made to perform experimental investigations and confirm the results of the numerical calculations. The view of the prototype is shown in Fig. 6. Firstly, the impedance and phase-frequency characteristics were measured in order to confirm the operation frequency of the actuator; the results of the measurements are given in Fig. 7. As can be found, the resonance frequency of the actuator is at 59.45 kHz, which confirms the results of the numerical investigations. The minor difference in value is a result of a slight mismatch in material characteristics as well as clamping. Finally, the linear motion speed was measured at different excitation amplitudes, as well as with different loads. The results are given in Fig. 8. As can be found in Fig. 8, the highest linear motion speed was obtained at 200 V p-p with a load of 3.8 g and reached 45.6 mm/s, or 0.228 mm/s/V p-p, while the lowest motion speed at this load value was obtained at 80 V p-p and reached 22.1 mm/s, or 0.276 mm/s/V p-p. In general, the graph shows that the actuator is capable of providing stable and predictable motion speed at different load and excitation signal parameters. Conclusions A novel asymmetrical design of a piezoelectric inertial linear motion actuator has been introduced and investigated. The actuator generates linear motion via the inertial stick-slip operation principle induced by the first longitudinal vibration mode of the stator, while its asymmetric design increases the displacement amplitudes and, as a result, the linear motion speeds. Numerical investigations have shown that the asymmetrical design of the actuator induces additional bending deformations of the stator, which allows higher motion speeds to be obtained, while the thin wall formed by the asymmetrical cut acts as a vibration amplifier. Experimental investigations confirmed the results of the numerical investigations and showed that the linear motion speed generated by the actuator reaches 45.6 mm/s when an excitation signal amplitude of 200 V p-p and a load of 3.8 g are applied. Fig. 2. Excitation schematics of actuator: 1 - excitation signal source; 2 - group of piezo ceramic plates used to excite longitudinal vibrations; 3 - group of piezo ceramic plates used to enlarge, in plane, bending of rod; 4 - group of piezo ceramic plates used to enlarge, in plane, bending of rod; 5 - polarization of piezo ceramics. Fig. 4. Impedance and phase-frequency characteristics of the actuator. Fig. 5. Displacement amplitudes of the actuator in the y direction.
Fig. 6. The prototype of the actuator.
2,990.4
2023-09-21T00:00:00.000
[ "Engineering", "Physics" ]
Australian atmospheric pressure and sea level data during the 2022 Hunga-Tonga Hunga-Ha’apai volcano tsunami On January 15, 2022, an ongoing eruption at the Hunga-Tonga Hunga-Ha’apai volcano generated a large explosion which resulted in a globally observed tsunami and atmospheric pressure wave. This paper presents time series observations of the event from Australia including 503 mean sea level pressure (MSLP) sensors and 103 tide gauges. Data is provided in its original format, which varies between data providers, and a post-processed format with consistent file structure and time zone. High-pass filtered variants of the data are also provided to facilitate study of the pressure wave and tsunami. For a minority of tide gauges the raw sea level data cannot be provided, due to licence restrictions, but high-pass filtered data is always provided. The data provides an important historical record of the volcanic pressure wave and tsunami in Australia. It will be useful for research on atmospheric and ocean waves associated with large volcanic eruptions. Background & Summary On January 15, 2022, at approximately 04:15 UTC, an ongoing eruption at the Hunga-Tonga Hunga-Ha'apai volcano (184.615°E,20.55°S) in Tonga produced a large explosion with global-scale effects not seen since the 1883 Krakatau volcanic eruption.These included global atmospheric pressure waves with a particularly prominent Lamb wave, seismic and acoustic waves, and a volcanic plume reaching an unprecedented 55 km height [1][2][3][4] .The ocean was perturbed by a combination of mass movements at the volcano, the explosion, and pressure gradients induced by the atmospheric waves, generating a tsunami that was observed globally.Tsunami runup heights reached 20 m in nearby Tonga, while peak-to-trough wave heights reached at least 3.4 m in the eastern Pacific and 1 m in the Atlantic Ocean [5][6][7] . The Hunga-Tonga Hunga-Ha'apai volcano (henceforth HTHH volcano) is named after two small islands on the caldera's northern rim 5 .Its explosive eruption is particularly significant as the first global-scale volcanic tsunami to be well recorded on modern atmospheric pressure and sea level sensor networks.Study of this event will enable better understanding of volcanic tsunami source processes, the dynamics of atmospheric Lamb waves, and the behaviour of volcanic tsunamis, with application to tsunami hazard assessment and risk mitigation, among other fields.For this reason, it is important to archive and facilitate access to observations of the event. This study presents multi-site time series of mean sea level pressure (MSLP) and sea level around Australia during January 2022 8 .All the time series include data before and after the explosion, and the majority show evidence of the atmospheric wave (MSLP) or tsunami (sea level).After discarding some stations due to problematic data, as discussed below, the dataset includes (Table 1): • 503 MSLP sensors from throughout Australia and its offshore territories (Fig. 1a). • 103 tide gauges which have Australia-wide coverage (Fig. 1b) but with higher concentration in south-eastern Australia where the tsunami was larger overall. Compared with MSLP and sea level data for the HTHH volcanic tsunami that is already openly available 6,9,10 , the data herein 8 greatly increases the density of observations near Australia. The MSLP data was recorded at a network of barometers by the Bureau of Meteorology and the QLD Department of Environment and Science (Fig. 
1a, Table 1).All these time series record MSLP at 1-minute intervals with occasional larger gaps due to missing data.All but two of the MSLPBOM time series span 8 days, while the MSLPDES time series all span 31 days.The sea level data (Fig. 1b) was collected by a range of organisations (Table 1).Most tide gauges record sea level at 1-minute intervals (80 of 103) while the remainder use intervals between 2 and 10-minutes, with occasional larger gaps due to missing data.Most gauges include 20 or more days of data (73 of 103).They use a variety of vertical datums, often approximating either Australian Height Datum or a local lowest astronomical tide.No attempt was made to convert all the sea level data to a single vertical datum because the required information is not always available (although it is included in the original data for a high fraction of tide gauges).In some instances, the combination of data from multiple sources leads to two gauges being almost co-located, and in one case the same gauge is included twice but with different processing (smoothing and down-sampling) having been implemented by the data providers.In these cases, both datasets are included because they can provide insight into measurement uncertainties caused by instrumentation and processing (discussed below). The original MSLP and tide gauge time series are non-trivial to analyse because they employ multiple file formats and time zones, occasionally contain errors, and do not separate the effects of regular MSLP or sea level variations from the volcanic pressure wave or tsunami.Therefore, in addition to the original data, we provide post-processed datasets with a unified file format, consistent UTC time zone, and additional quality control.The post-processed data includes both raw time series and a high-pass filtered variant which better represents the atmospheric pressure wave (Fig. 2b,c) or tsunami (Fig. 3b,c).For a minority of tide gauges (8/103) licence constraints prevent redistribution of the original data, and only the high-pass filtered timeseries are provided (Table 1). The high-pass filtered time series include wave periods less than 2 hours (for MSLP) or 3 hours (for sea level).These short period waves are often dominated by the atmospheric Lamb wave (Fig. 2b,c) or tsunami (Fig. 3b,c) and studies typically use similar high-pass filters to study their effects 1,7,10 .The short period waves are also generated by other processes, so are not zero amplitude prior to the HTHH volcanic explosion, but their amplitude varies greatly from site to site.At many sites the tsunami arrives earlier than would be expected for a long wave travelling from the HTHH volcano through the ocean (Fig. 3c), reflecting tsunami generation by the atmospheric pressure waves 6,7,11 . The dataset 8 includes post-processed time series, original time series (except where licence constraints prohibit redistribution), code used for post-processing, and figures used for quality control.In practice quality control was implemented by iteratively post-processing and creating figures to identify problematic data, which was then fixed by modifying the post-processing code or obtaining updated original data (details below).Because the archived files 8 represent the final iteration, the post-processed time series should not evidence problems with the data.The aim is to provide an easy-to-use, transparent, and relatively comprehensive record of this rare volcanic pressure wave and tsunami in Australia. 
Methods MSLP and sea level metadata, and associated time series, were extracted from the original data provided for the study (Table 1).All time zones were converted to UTC.The original metadata files sometimes included stations without corresponding time series, which were skipped in post-processing.Stations were also skipped if the graphical checks (discussed below) showed that, in the days following the volcanic explosion, data was entirely missing or strongly obscured by errors.Station metadata tables were written separately for the MSLP and tide gauge data and imported into Geographic Information Systems (GIS) to visually check the station locations. MSLP data: Processing and high-pass filter.All MSLP time series were plotted for quality control purposes, along with a high-pass filtered variant including wave periods less than 2 hours (Fig. 2).The 2-hour threshold is consistent with the long-period limit used in other studies that focus on the pressure wave 1,10 .Visual inspection showed seven stations had data gaps that strongly obscured the post-explosion atmospheric pressure wave.These stations were skipped in post-processing.Finally, the original MSLP and high-pass filtered time series were written to post-processed files. The high-pass filter algorithm is as follows.Firstly, the data was limited to a 30-day period containing observations before and after the eruption (January 02-31 inclusive).This was transformed to a periodic series by subtracting a linear trend defined by the first and last data points, and then linearly interpolated to a uniform 15 second spacing.A discrete Fourier transform (DFT) was then applied, and spectra with periods shorter than the threshold were zeroed before applying an inverse DFT and adding back the linear trend.This defines the long-period component of the interpolated data.The latter was re-interpolated to the original data times, within the considered 30-day period, and finally subtracted from the original data to define the high-pass filtered time series.Examples are shown in Fig. 2b,c. Sea level data: Processing and high-pass filter.All tide gauge time series were plotted for quality control purposes, along with a high-pass filtered time series containing wave periods less than 3 hours (Fig. 3).The high-pass filter was identical to that used for the MSLP data but with a 3-hour threshold to match a previous study of tsunamis in Australia 12 .Visual inspection highlighted errors in seven time series, including localised spikes, spurious constant data, unrealistic high-frequency noise, or too many data gaps to usefully record the sea level.One record (Port Giles) was heavily contaminated by noise and completely removed.The others are partially edited within the post-processing scripts to remove spurious data.No interpolation was performed. 
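The archived post-processing scripts are written in R; the following Python sketch re-expresses the high-pass filter described above (linear detrend through the end points, resampling to a uniform 15 s grid, zeroing of short-period DFT components, and subtraction of the resulting long-period signal from the original observations). Gap handling and the exact 30-day windowing are omitted.

```python
import numpy as np

def highpass(times_s, values, cutoff_period_s, dt=15.0):
    """High-pass filter a time series (times in seconds, values as a NumPy array)."""
    t0, t1, v0, v1 = times_s[0], times_s[-1], values[0], values[-1]
    trend = lambda t: v0 + (v1 - v0) * (t - t0) / (t1 - t0)   # line through first/last points

    grid = np.arange(t0, t1 + dt, dt)
    resampled = np.interp(grid, times_s, values - trend(times_s))

    spec = np.fft.rfft(resampled)
    freqs = np.fft.rfftfreq(grid.size, d=dt)
    keep_long = freqs <= 1.0 / cutoff_period_s                # keep only periods >= cutoff
    long_period = np.fft.irfft(spec * keep_long, n=grid.size)

    long_at_obs = np.interp(times_s, grid, long_period) + trend(times_s)
    return values - long_at_obs                               # short-period (high-pass) component
```

For MSLP the cutoff would be 2 * 3600 s and for sea level 3 * 3600 s, matching the thresholds described in the text.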
Two different approaches were tested to extract short-period waves from the tide gauge data, which better represent the tsunami.The first approach was the simple 3-hour high-pass filter described above.The second approach was similar except that astronomical tide predictions from the tide model TPXO9v5a 13 were subtracted from the observed sea levels, before high-pass filtering to remove other long-period sea level variations.Graphical comparisons (available in the data repository) show negligible difference between these two approaches at all gauges.This is because the tidal predictions overwhelmingly contain wave periods longer than 3 hours, which are automatically removed by the high-pass filter.Thus, the simple 3-hour high-pass filter was written to the post-processed data files.This also ensures that users of post-processed data are not restricted by the tidal model's non-commercial licence. Sea level time series are included irrespective of whether they show a clear tsunami signal, because even the absence of a signal may give some insight into the tsunami dynamics.When the data licence does not permit release of raw tidal measurements, the post-processed sea level time series contain missing data, but the high-pass filtered time series are still included (Table 1). Data records The data record 8 contains three top-level subfolders.Most of these include README.mdfiles developed by the authors to further document the contents. The folder "original" contains the original MSLP and tide gauge data, and metadata, obtained for this study from the data providers (Table 1).The data formats and time zones vary.Some of the time series include artefacts (e.g.excessive noise) as discussed above.To make the processing transparent we did not alter the time series in "original", but instead, artefacts are treated in our scripts when producing the post-processed data.The tide gauge sub-folder includes a file README.md with some notes on the tide gauge configuration (where available).This often includes information on smoothing of sea levels by the tide gauges, which can affect the tsunami measurements (discussed in the Technical Validation section). The folder "post-processed" contains post-processed data and plots derived from the data in "original".The data has a consistent format and UTC time zone.The folder also includes all plots used for quality-control purposes.The key elements are: • A metadata table describing the tide gauges is in "01_tide_gauge_locations.csv". • A metadata table describing the MSLP sensors is in "02_mslp_sensor_locations.csv".• Post-processed sea level time series are in the subfolder "01_tide_gauges".• Post-processed MSLP time series are in the sub-folder "02_mslp_sensors". • The names of stations that were included in the original metadata but not processed further (due to missing data or other quality control issues) are listed in files 'ignored_mslp_sensors.txt'and 'ignored_tide_gauges.txt' The folder "scripts_to_postprocess" includes all scripts used to post-process the data in "original" and produce outputs in "post-processed".They are mostly written in R 14 . • They are provided for transparency but do not need to be run to use the data. • Because some tide gauge data was removed from the "original" folder for the final archive (due to licence restrictions), the scripts should not be re-run as-is. 
• Doing so will overwrite files in "post-processed" and remove gauges that could not be included in the archived "original" data folder.• To prevent this, change the variable OUTPUT_DIR in the script global_variables.R before running the code. technical Validation Validation of the MSLP data.The original MSLP data from the Bureau of Meteorology includes a quality flag for each observation.For the post-processed observations, the vast majority (99.3%) are classified as 'Quality Controlled and Acceptable' .The remaining observations are classified as either 'Quality Controlled and Suspect' (0.066%) or 'Inconsistent with other known information' (0.014%) but were not identified as problematic in the tests below.Plots of the MSLP data and its high-pass filtered variant (Fig. 2) were made for every station to help detect problematic data.For two stations all MSLP data was missing.Others had no data available during the period of the Lamb wave, or data gaps that substantially obscured the Lamb wave, and one had a large vertical discontinuity several days after the volcano.These were skipped by our post-processing scripts (noted in the "ignored_ mslp_sensors.txt"file) but are included in the original data. The high-pass filtered MSLP time series were also checked for consistency with expectations for the Lamb wave, considering both arrival times and amplitudes.To first approximation the initial Lamb wave should behave as a wave-front expanding radially from the HTHH volcano near the speed of sound, with form similar to Fig. 2c, and amplitude that decreases as the length of the wave-front increases 1,7,15 .By checking if the high-pass filtered time series match these expectations we can potentially detect outliers associated with errors in the station location, time zone, or the MSLP data itself.Our calculations assume a Lamb wave speed of 320 m/s along great circle paths, and an explosion time of 04:15 UTC, which is a simplification of the globally observed Lamb wave behaviour but adequate for technical validation 7 .The maximum high-pass filtered MSLP value was selected from a time window ± 2 hours of the theoretical arrival time.Although the Lamb wave should occur within a smaller time-window, the larger window is preferable for technical validation as it could enable spurious time-offsets in the data to be detected. The high-pass filtered MSLP maxima occurs at the expected time, soon after the theoretical Lamb wave arrival time (Fig. 4).For all but one station the time difference is less than 25 minutes.Visual inspection showed the single outlier (Dwellingup, time difference of 45 min) was due to unusually strong MSLP oscillations, observed after the leading Lamb wave peak, which our algorithm identified as the maxima (Fig. 4).Similar oscillations are observed at other stations with smaller magnitude (Fig. 2c) and we do not have evidence that this measurement is incorrect. The high-pass filtered MSLP maxima also show a general decrease with distance from the HTHH volcano, without obvious outliers (Fig. 5).This is expected near Australia where the circular initial Lamb wave front is lengthening with distance travelled, which causes spreading of the wave energy 1,15 . 
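The theoretical Lamb wave arrival time used in this check is simply the explosion time plus the great-circle distance divided by 320 m/s. A small sketch is given below; the station coordinates are approximate and purely illustrative.

```python
import numpy as np
from datetime import datetime, timedelta, timezone

VOLCANO = (184.615, -20.55)              # lon (deg E), lat of the HTHH volcano, as quoted
ERUPTION = datetime(2022, 1, 15, 4, 15, tzinfo=timezone.utc)
LAMB_SPEED = 320.0                       # m/s, the speed assumed for the validation

def great_circle_m(lon1, lat1, lon2, lat2, radius=6_371_000.0):
    """Haversine great-circle distance in metres."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius * np.arcsin(np.sqrt(a))

broome = (122.23, -17.95)                # approximate coordinates, for illustration only
d = great_circle_m(*VOLCANO, *broome)
print(ERUPTION + timedelta(seconds=d / LAMB_SPEED))   # theoretical Lamb wave arrival (UTC)
```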
Validation of the sea level data. Graphical checks of the sea level time series and high-pass filtered variants were undertaken for January 14 - January 18 inclusive. This highlighted major artefacts in some time series that were addressed as described in the Methods. Graphical checks of the station locations also led to some large errors in tide gauge locations being fixed: one due to an incorrect sign of latitude; one where a site location had been substituted for another. These corrections were reported to the data providers and then corrected in the original data (so there are no inconsistencies in the provided data archive). Errors of the order of 10 m likely remain in some station locations, due to coordinates being provided with limited precision, but we did not have independent information to correct these.

Measured sea levels were compared with predictions from the astronomical tide model TPXO9.v5a 13. This provides an opportunity to detect errors such as an incorrect time zone conversion or a large error in the gauge coordinates, although some differences are expected due to meteorological effects and hydrodynamic processes unresolved in the tidal model. At most sites the observations agree well with the TPXO9.v5a tidal predictions (all figures available in the data archive 8). At sites showing large differences we checked that the deviation seemed reasonable given the site location, allowing that the tidal model might not resolve sites in estuaries or near extensive shallow bathymetry. For example, the tidal model overestimates the tide range in Port Phillip Bay (Fig. 1), which is substantially attenuated compared to the nearby open coast and includes gauges at Fawkner Beacon, Hovell Pile, and Queenscliffe.

The high-pass filtered sea level time series are expected to show evidence of the tsunami on January 15-16 at many sites, particularly closer to the HTHH volcano. Figure 6 shows the maximum high-pass filtered observations are broadly consistent with expectations, without obvious outliers. The tsunami is largest overall in south-eastern Australia and nearby offshore islands. The relatively dense observations in south-eastern Australia show significant tsunami size variability, consistent with observations of other tsunamis on this coast 12, reflecting complex interactions of the tsunami with the variable coastal morphology.

Further technical validation was undertaken by comparing measurements at pairs of nearby tide gauges. Nearby gauges should exhibit similar sea levels in the absence of measurement biases. However, in practice there are biases due to the tide gauge configuration and/or post-processing by data providers. Our dataset includes one location (at Fort Denison in Sydney) where two time series were derived from a single gauge: a PANSW time series that records the tsunami at 1-min intervals (each an average of 12 measurements of 5-second average sea level), and a 6-min BOMPorts time series that was derived from the former by smoothing, as routinely implemented by the data providers. The dataset also includes five locations where two neighbouring instruments (<350 m spacing) record the sea level at 1-minute intervals but measure the sea level with different degrees of smoothing, either by design or choice of configuration options. Although details of the tide gauge configuration or post-processing are not available for every site, the potential for bias can be illustrated by comparing nearby gauges.
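To make the filtering and smoothing effects concrete before turning to Fig. 7, here is a small Python illustration (again, not the archived R code) of a 3-hour high-pass filter implemented as subtraction of a centred moving average, together with the attenuation caused by additional within-gauge averaging. Every parameter below is an assumed, illustrative choice:

```python
# Illustrative only: a 3-hour high-pass filter via rolling-mean subtraction, and
# the attenuation caused by extra within-gauge averaging. The archived processing
# scripts are written in R; all parameters here are assumptions for the demo.
import numpy as np
import pandas as pd

def highpass_3h(series: pd.Series, dt_minutes: float = 1.0) -> pd.Series:
    """Subtract a centred 3-hour moving average (one simple high-pass filter)."""
    w = int(round(180 / dt_minutes)) + 1            # samples spanning ~3 hours
    return series - series.rolling(w, center=True, min_periods=1).mean()

# Synthetic 1-min sea level: semidiurnal tide plus 20-min "tsunami-like" waves.
t = pd.date_range("2022-01-15", periods=2 * 24 * 60, freq="1min", tz="UTC")
x = np.arange(t.size)
sea = pd.Series(0.8 * np.sin(2 * np.pi * x / (12.42 * 60))
                + 0.3 * np.sin(2 * np.pi * x / 20) * (x > 900), index=t)

hp = highpass_3h(sea)
hp_smooth = highpass_3h(sea.rolling(4, center=True).mean())   # extra 4-min averaging
print(f"filtered max {hp.max():.3f} m vs {hp_smooth.max():.3f} m after averaging")
```

Even this mild averaging visibly clips the short-period maxima, which is the same mechanism behind the 56 vs 80 cm difference reported at the Gold Coast below.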
Figure 7 compares high-pass filtered sea levels at four pairs of nearby gauges (<350 m spacing) during the initial stages of the HTHH volcano tsunami. Panels A-C represent different yet nearby gauges, whereas panel D represents 6-min and 1-min data derived from a single gauge. We deliberately chose pairs of gauges with relatively large differences and a clear tsunami signal, to better illustrate the potential for artefacts in tsunami measurements.

To first order, all pairs of nearby gauges have similar high-pass filtered time series (Fig. 7). But significant differences occur at the Gold Coast site, which is located on a high-energy beach and exhibits prominent short-period waves (Fig. 7b). Here both gauges report data at 1-minute intervals, but the smoother gauge internally averages the measurements over a 4-minute interval before storing, leading to apparent differences in the tsunami maxima of 56 vs 80 cm. That difference is relatively extreme and not representative of most sites in our dataset, which are less influenced by short-period waves. However, at all sites there are differences between gauges, with the smoother gauge showing attenuation of shorter periods and some reduction in the tsunami maxima and minima (Fig. 7). Similar distortions of the tsunami signal are expected to be common, depending on the tide gauge configuration, and should be considered when using sea level measurements to study the tsunami.

Fig. 2. Example of MSLP data processing for Broome Airport (location in Fig. 1). (A) Original data from January 14-19 inclusive. (B) High-pass filtered MSLP, with spikes showing the effects of the volcano-generated atmospheric Lamb wave. Dashed vertical lines show the theoretical arrival times for a Lamb wave travelling from the HTHH volcano at constant speed (320 m/s) along a great-circle path. (C) Zoom of the middle panel showing only January 15, with the Lamb wave arrival well matching the theoretical arrival time.

Fig. 3. Example of sea level data processing for Crowdy Head (location in Fig. 1). (A) Original data from January 14-19 inclusive. (B) High-pass filtered sea level, with a clear tsunami signal following the main explosion. (C) Zoom of panel B showing only January 15, including the largest tsunami waves. The vertical line 'LW' shows the theoretical Lamb wave arrival time, while 'TTT' shows the tsunami travel time derived theoretically 11, assuming oceanic propagation from the HTHH volcano.

Fig. 4. Comparison of the theoretical Lamb wave arrival time and the observed time of the high-pass filtered MSLP maxima. The red line is y = x. The observed maxima occur shortly after the theoretical arrival time, consistent with expectations for the Lamb wave. The outlier (green) is at Dwellingup, with location in Fig. 1.

Fig. 5. Maximum of the high-pass filtered MSLP within ±2 h of the theoretical Lamb wave arrival time. It shows the expected tendency to decrease with distance from the HTHH volcano (red triangle).

Fig. 6. Maximum of the high-pass filtered sea level on January 15-16. (A) Large scale, with red triangle showing the HTHH volcano location. (B) Zoom on eastern Australia. The tendency for higher values around south-eastern Australia is consistent with expectations for the tsunami.

Fig. 7. High-pass filtered sea level observations for pairs of nearby tide gauges following the HTHH volcano explosion, with locations in Fig. 1.
(A) Gauges separated by 2 m at Rosslyn Bay; (B) Gauges separated by 2 m at the Gold Coast Sand Bypass Jetty; (C) Gauges separated by 320 m near Eden Cruise Wharf; (D) One gauge at Fort Denison, where the 6-min BOMPorts record is derived from the 1-min PANSW record by smoothing.

Table 1. Summary of MSLP and sea level datasets included in this study. The "# Stations" column counts stations in the post-processed dataset (details in Methods). The "Dataset ID" appears in the post-processed metadata tables and is used to refer to subsets of stations from different data providers.
https://www.antarctica.gov.au/about-us/contact/
5,059.6
2024-01-23T00:00:00.000
[ "Environmental Science", "Geology" ]
Elevated Level of Troponin but Not N-Terminal Probrain Natriuretic Peptide Is Associated with Increased Risk of Sudden Cardiac Death in Hypertrophic Cardiomyopathy Calculated According to the ESC Guidelines 2014

The aim of this study was to assess the relationship between biomarkers (high-sensitive troponin I [hs-TnI], N-terminal probrain natriuretic peptide [NT-proBNP]) and the calculated 5-year percentage risk score of sudden cardiac death (SCD) in hypertrophic cardiomyopathy (HCM). Methods. In 46 HCM patients (mean age 39 ± 7 years, 24 males and 22 females), echocardiographic examination, including stimulating maneuvers to provoke the maximal LVOT gradient, was performed, and ECG Holter monitoring was started immediately afterwards. After 24 hours, the ECG Holter was finished and hs-TnI and NT-proBNP were measured. Patients were divided according to (1) the value of each biomarker (hs-TnI-positive and hs-TnI-negative subgroups) and (2) the NT-proBNP concentration (lower and higher subgroups divided by the median). Results. In the comparison between 19 hs-TnI-positive patients and 27 hs-TnI-negative patients, the calculated 5-year percentage risk of SCD in HCM was significantly greater (6.38 ± 4.17% versus 3.81 ± 3.23%, P < 0.05). In the comparison between the higher and lower NT-proBNP subgroups, the calculated 5-year percentage risk of SCD in HCM was not significantly greater (5.18 ± 3.63% versus 4.14 ± 4.18%, P > 0.05). Conclusions. Patients with HCM and a positive hs-TnI test have a higher risk of SCD, estimated according to the SCD calculator recommended by the ESC Guidelines 2014, than patients with a negative hs-TnI test.

Introduction

The risk factors for sudden cardiac death (SCD) in hypertrophic cardiomyopathy (HCM) in the ESC Guidelines [1] include echocardiographic, electrocardiographic (ECG) Holter monitoring, and clinical variables. The calculator for sudden cardiac death risk [1] does not include any biomarker. Recently, Kehl et al. [2] reviewed the available data regarding the usefulness of natriuretic peptides and troponins in HCM. Concentrations of natriuretic peptides, and to a lesser extent of troponins, correlate with left ventricular thickness, symptom status, and left ventricular hemodynamics assessed by Doppler measurements (left ventricular filling pressure, left ventricular outflow tract gradient). Neither ischemic biomarkers nor signs and symptoms of myocardial ischemia are included in the calculator [1]. However, the ischemic response to stress revealed by echocardiographic methods is becoming an important prognostic factor [3,4]. The currently used high-sensitivity troponin I (hs-TnI) is a highly precise biomarker for the detection of myocardial ischemia. In previous HCM studies, hs-Tn was measured only at resting echocardiography (without stress, under unnatural conditions) and was not synchronized in time with maneuvers that provoke LVOTG by natural stimuli reflecting patients' common daily physical activity [5-8]. Moreover, hs-Tn measurements were also not synchronized in time with the Holter monitoring. So far, we have used the following protocol: a 24-hour cycle starting at 8 a.m. with echocardiography including LVOTG provocation by natural stimuli (orthostatic test and Valsalva test) [1, 9-13]; the observation was divided into two periods: a day phase of physical activity with probable episodes of provocable LVOTG (unmeasurable), and a night phase as a potential time for a rise of troponin, whose level was measured after the night at 8 a.m. on the next day.
Between echocardiography and biomarker sampling, 24-hour Holter electrocardiography (ECG) was recorded, followed by the measurement of hs-TnI (so the biomarker level has a close temporal relationship with the findings on Holter ECG). This protocol seems reasonable because the ischemia reflected by elevated hs-TnI levels is a potential cause of life-threatening ventricular arrhythmias occurring during the previous 24 hours. The aim of this study was to assess the relationship between biomarker concentrations (hs-TnI, NT-proBNP) and the calculated 5-year percentage risk score of SCD in HCM.

Methods

Consecutive patients with HCM were recruited to the study. Informed consent was obtained from each participant. All patients fulfilled conventional diagnostic criteria for HCM. The criterion for the diagnosis of HCM, according to the ESC Guidelines, was the presence of a left ventricular (LV) wall thickness of at least 15 mm without any other cause that could lead to ventricular hypertrophy [1,13]. The exclusion criteria were as follows: sport activity more than recreational, prior myocardial infarction, current symptoms suggestive of coronary artery disease, concomitant neoplasm, infection, or renal failure. Subjects with a history of alcohol septal ablation or septal myectomy were not included in the present study. Patients on current pharmacotherapy were studied according to the abovementioned protocol. Patients were asked to perform their common daily physical activity and to rest at night. This protocol seems reasonable because hs-TnI levels may be related to the labile, dynamic nature of LVOTG, with fluctuating peaks in the daytime (provoked LVOTG as a potential cause of myocardial ischemia).

First Model of Risk Calculation (Only the Current ECG Holter). For calculating the percentage value of the HCM SCD risk score, we assessed the following parameters: episodes of nonsustained ventricular tachycardia (nsVT) in the current Holter monitoring (defined as three or more consecutive ventricular beats at more than 120 beats per minute) and two-dimensional (2D) echocardiography with assessment of the maximal left ventricular wall thickness in diastole (MWT), the left atrial diameter (LAD), and the maximal provocable left ventricular outflow tract (LVOT) gradient [1]. From the disease history, we checked the following binary variables: syncope and family history of sudden death [1]. Finally, we entered the patients' age into the calculator [1].

Second Model of Risk Calculation (All ECG Holters - Current + Previous). Every patient had at least three 24-hour ECG Holter recordings during their lifetime. One ECG Holter is defined as current (simultaneous with biomarker sampling), and the remaining two or more recordings took place in the past. The presence/absence of nsVT was assessed by combining the data of all Holter recordings (previous and current). The remaining parameters used in the calculation were identical to those of the first model. The study protocol was approved by a local institutional review board (Komisja Bioetyki, Jagiellonian University).

Statistical Analysis. Normally distributed continuous variables are presented as mean ± SD. Differences between two groups were assessed using the independent t-test. Categorical variables were assessed using the Fisher exact test and expressed as numbers (percentages). A P value of less than 0.05 was considered significant.

Results

The baseline characteristics of the HCM patients are displayed in Table 1.
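Both risk models above rest on the same ESC 2014 HCM Risk-SCD equation (O'Mahony et al.). For concreteness, a sketch of the calculation is given below; the coefficients are those commonly quoted for the published model and should be verified against the original publication — this is an illustration, not a clinical tool:

```python
# Sketch of the ESC 2014 HCM Risk-SCD model (O'Mahony et al., Eur Heart J 2014).
# Coefficients as commonly published; verify against the original paper before
# any use -- this code is purely illustrative.
import math

def hcm_risk_scd(mwt_mm, lad_mm, lvotg_mmhg, fhx_scd, nsvt, syncope, age_yr):
    """5-year probability of sudden cardiac death in HCM.
    mwt_mm: maximal LV wall thickness (mm); lad_mm: left atrial diameter (mm);
    lvotg_mmhg: maximal (provocable) LVOT gradient (mmHg);
    fhx_scd, nsvt, syncope: binary 0/1; age_yr: age at evaluation (years)."""
    pi = (0.15939858 * mwt_mm
          - 0.00294271 * mwt_mm ** 2
          + 0.0259082 * lad_mm
          + 0.00446131 * lvotg_mmhg
          + 0.4583082 * fhx_scd
          + 0.82639195 * nsvt
          + 0.71650361 * syncope
          - 0.01799934 * age_yr)
    return 1.0 - 0.998 ** math.exp(pi)

# Hypothetical example: 39-year-old, MWT 21 mm, LAD 44 mm, LVOTG 60 mmHg, nsVT present.
print(f"{100 * hcm_risk_scd(21, 44, 60, 0, 1, 0, 39):.1f}% 5-year SCD risk")
```

The two models differ only in the nsVT input (current Holter only, versus any of the patient's Holter recordings), so the same function serves both.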
Hs-TnI was detectable in all HCM patients; patients with an abnormal level (>19.5 ng/L) were defined as the troponin-positive subgroup, and the remaining patients with nonelevated hs-TnI formed the troponin-negative subgroup. After NT-proBNP measurement, only 3 patients had a normal concentration (<125 pg/mL); thus, the subgroup division was made by the median. In the comparison between 19 hs-TnI-positive and 27 hs-TnI-negative patients, the calculated 5-year percentage risk of SCD in HCM was significantly greater, both in the first and in the second model (Table 2). In the second comparison, between the higher and lower NT-proBNP subgroups, the calculated 5-year percentage risk of SCD in HCM was not significantly greater, either in the first or in the second model (Table 2).

Discussion

In the current study, patients with HCM and a positive hs-TnI test have a higher risk of SCD, estimated according to the SCD calculator recommended by the ESC Guidelines 2014, than patients with a negative hs-TnI test. The level of NT-proBNP is not associated with the calculated 5-year risk of SCD (stratified by the calculator).

Clinical Application of Biomarkers in HCM. In a recent review paper, the authors asked the question: NT-proBNP versus troponin - is one better than the other [2]? Although both biomarkers correlate with indices of HCM disease progression, BNP may be a more sensitive indicator of left ventricular hypertrophy than troponin. It was documented that the wall thickness threshold was lower for BNP elevation than for TnI elevation [14]. Moreover, BNP is also a stronger predictor of hemodynamic parameters and clinical symptoms than troponin. Although a correlation between elevated troponin and elevated BNP has been demonstrated [14,15], it is not a consistent finding [8,14]. Before our study, a strategy for applying these biomarkers to risk stratification had not yet been investigated. These biomarkers may be most useful when risk markers of SCD indicate intermediate or indeterminate risk.

The impact of stress echocardiography in HCM is limited by a lack of standardization and outcome data. The ESC guidelines recommend stress echocardiography solely for evaluation of the LVOT [3]. However, large-scale registry data show that stress echocardiography positivity for ischemic criteria (such as new wall motion abnormalities and coronary flow velocity reserve), rather than inducible gradients, predicts adverse outcome in HCM [4]. In a large study [3], mortality was predicted using criteria for detecting ischemia on stress echocardiography. The investigators proposed that stress echocardiography has a significant prognostic role in patients with HCM, with ischemic endpoints showing a greater predictive accuracy than hemodynamic endpoints [3].

In a recent review by McCarthy et al. [16], the utility of hs-Tn assessment in arrhythmic disease is described as being only at the initial stage of investigation, but hs-Tn has been postulated as a valuable screening marker for patients with HCM at high risk of SCD. Regular training exercise (e.g., fitness activity) has recently been recommended for selected patients with HCM [17]. Based on the current observation of an association between tachycardia and elevated troponin levels in patients with HCM [18] and the phenomenon of exercise-induced troponin release [19], we suggest that any exercise stress test in HCM patients (performed either for training or diagnostic purposes) should be controlled by troponin level measurements before exercise and 6 and 12 hours after the exercise.
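The subgroup comparisons above reduce to an independent t-test on the calculated risk and Fisher's exact test for binary characteristics. A minimal sketch follows; the arrays and counts are placeholders, not the study data:

```python
# Minimal sketch of the reported statistics: independent t-test for the calculated
# 5-year SCD risk between hs-TnI subgroups, Fisher's exact test for binary traits.
# All numbers below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

risk_tn_pos = np.array([6.4, 5.1, 7.9, 4.4])   # % risk, hs-TnI-positive (placeholder)
risk_tn_neg = np.array([3.8, 2.9, 4.5, 3.1])   # % risk, hs-TnI-negative (placeholder)

t, p = stats.ttest_ind(risk_tn_pos, risk_tn_neg)
print(f"t = {t:.2f}, P = {p:.3f}")             # significant if P < 0.05

# 2x2 table, e.g. syncope yes/no in the two subgroups (placeholder counts):
odds, p_fisher = stats.fisher_exact([[5, 14], [3, 24]])
print(f"Fisher exact P = {p_fisher:.3f}")
```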
HCM women >50 years of age are especially predisposed to a high LVOT gradient, due to their smaller LV cavity size [20]. This subgroup of HCM patients may be particularly at risk of developing a high gradient at peak/post-exercise, resulting in an increased calculated risk score.

Our study has several limitations. First, our study group may appear too small to definitively rule out an association between NT-proBNP and the HCM SCD risk score. Secondly, the current pharmacological treatment was maintained; in particular, β-blockers were not withdrawn. In our pilot study, we aimed to observe the correlation between hs-TnI release and timely synchronized findings on ECG Holter and resting/stress echocardiography. Our preliminary study showed that β-blocker withdrawal might not be safe in the troponin-positive subgroup of patients. In future studies, we will attempt to increase the dose and use only one type of β-blocker to decrease the ischemia burden. We decided to measure hs-TnI levels only once because our pilot study was conducted in an outpatient setting. The optimal protocol, that is, a 48-hour profile of troponin measurement with echocardiographic examination every 8 hours, together with 48-hour Holter monitoring (recommended by the ESC Guidelines), would require an in-hospital setting and would be more costly. Moreover, only an outpatient-based study provides an opportunity to assess the heart rate during patients' usual daily activity.

Table 2: Comparison between the hs-TnI-positive and hs-TnI-negative subgroups, and also between the subgroups of lower versus higher NT-proBNP concentration (NS: nonsignificant).

(i) Kubo (iv) Kubo et al. did not analyze nsVT on ECG Holter (which is absolutely needed to measure the risk of SCD by the calculator); moreover, nsVT assessment is also required in the 2011 American guideline for risk stratification of SCD. Thus, in the paper by Kubo et al., the lack of ECG Holter analysis is a serious limitation. (v) Kubo et al. did not describe the time period between blood sampling for biomarkers and the measurement of echo parameters (nsVT on Holter was not studied). In our study, the time synchrony between the echo/Holter and the hs-TnI measurement was defined. (vi) Kubo et al. did not analyze NT-proBNP, but only hs-Tn. Our study provides more information by sampling two important biomarkers simultaneously with the echocardiographic and ECG Holter measurements.

Conclusions

Patients with HCM and a positive hs-TnI test have a higher risk of SCD, estimated according to the SCD calculator recommended by the ESC Guidelines 2014, than patients with a negative hs-TnI test.

Clinical Perspective. These findings suggest that hs-Tn may be useful as an additional biomarker for better risk stratification in HCM. Additionally, we have postulated monitoring biomarkers of endothelial dysfunction (impaired endothelium-dependent vasodilatation) as well [21].

Conflicts of Interest

The authors declare that there is no conflict of interest.

Authors' Contributions

Rafał Hładij contributed to the conception, contributed to acquisition and analysis, and gave final approval. Renata Rajtar-Salwa contributed to the conception and design; contributed to acquisition, analysis, and interpretation; critically revised the manuscript; and gave final approval. Paweł Petkow Dimitrow contributed to the conception and design; contributed to acquisition, analysis, and interpretation; drafted the manuscript; critically revised the manuscript; and gave final approval.
Renata Rajtar-Salwa and Rafał Hładij contributed equally to this work. Funding The authors received a Jagiellonian University Grant.
2,892.4
2017-11-20T00:00:00.000
[ "Medicine", "Biology" ]
Networks of ·/G/∞ queues with shot-noise-driven arrival intensities

We study infinite-server queues in which the arrival process is a Cox process (or doubly stochastic Poisson process), of which the arrival rate is given by a shot-noise process. A shot-noise rate emerges naturally in cases where the arrival rate tends to exhibit sudden increases (or shots) at random epochs, after which the rate is inclined to revert to lower values. Exponential decay of the shot noise is assumed, so that the queueing systems are amenable to analysis. In particular, we perform transient analysis on the number of jobs in the queue jointly with the value of the driving shot-noise process. Additionally, we derive heavy-traffic asymptotics for the number of jobs in the system by using a linear scaling of the shot intensity. First we focus on a one-dimensional setting in which there is a single infinite-server queue, which we then extend to a network setting.

Introduction

In the queueing literature, one has traditionally studied queues with Poisson input. The Poisson assumption typically facilitates explicit analysis, but it does not always align well with actual data; see, for example, [11] and references therein. More specifically, statistical studies show that in many practical situations, Poisson processes underestimate the variability of the queue's input stream. This observation has motivated research on queues fed by arrival processes that better capture the burstiness observed in practice. The extent to which burstiness takes place can be measured by the dispersion index, i.e., the ratio of the variance of the number of arrivals in a given interval to the corresponding expected value. In arrival streams that display burstiness, the dispersion index is larger than unity (as opposed to Poisson processes, for which it is equal to unity), a phenomenon that is usually referred to as overdispersion. It is desirable that the arrival process of the queueing model takes the observed overdispersion into account. One way to achieve this is to make use of Cox processes, which are Poisson processes, conditional on the stochastic time-dependent intensity. It is an immediate consequence of the law of total variance that Cox processes do have a dispersion index larger than unity. Therefore, this class of processes makes for a good candidate to model overdispersed input processes.

In this paper we contribute to the development of queueing models fed by input streams that exhibit overdispersion. We analyze infinite-server queues driven by a particular Cox process, in which the rate is a (stochastic) shot-noise process. The shot-noise process we use is one in which there are only upward jumps (or shots) that arrive according to a homogeneous Poisson process. Furthermore, we employ an exponential 'response' or 'decay' function, which encodes how quickly the process will decline after a jump. In this case, the shot-noise process is a Markov process; see [16, p. 393]. There are several variations on shot-noise processes; see, for example, [10] for a comprehensive overview. It is not a novel idea to use a shot-noise process as a stochastic intensity. For instance, in insurance mathematics, the authors of [5] use a shot-noise-driven Cox process to model the claim count.
They assume that disasters happen according to a Poisson process, and each disaster can induce a cluster of arriving claims. The disaster corresponds to a shot upwards in the claim intensity. As time passes, the claim intensity process decreases, as more and more claims are settled. Another example of shot-noise arrival processes is found in the famous paper [13], where it is used to model the occurrences of earthquakes. The arrival process considered in [13] has one crucial difference with the one used in this paper: it makes use of Hawkes processes [9], which do have a shot-noise structure, but have the special feature that they are self-exciting. More specifically, in Hawkes processes, an arrival induces a shot in the arrival rate, whereas in our shot-noise-driven Cox model these shots are merely exogenous. The Hawkes process is less tractable than the shot-noisedriven Cox process. A very recent effort to analyze ·/G/∞ queues that are driven by a Hawkes process has been made in [8], where a functional central limit theorem is derived for the number of jobs in the system. In this model, obtaining explicit results (in a non-asymptotic setting), as we are able to do in the shot-noise-driven Cox variant, is still an open problem. In order to successfully implement a theoretical model, it is crucial to have methods to estimate its parameters from data. The shot-noise-driven Cox process is attractive since it has this property. Statistical methods that filter the unobservable intensity process, based on Markov Chain Monte Carlo (MCMC) techniques, have been developed; see [3] and references therein. By filtering, they refer to the estimation of the intensity process in a given time interval, given a realized arrival process. Subsequently, given this 'filtered path' of the intensity process, the parameters of the shot-noise process can be estimated by a Monte Carlo version of the expectation maximization (EM) method. Furthermore, the shot-noise-driven Cox process can also be easily simulated; see, for example, the thinning procedure described in [12]. In this paper we study networks of infinite-server queues with shot-noise-driven Cox input. We assume that the service times at a given node are i.i.d. samples from a general distribution. The output of a queue is routed to a next queue, or leaves the network. Infinite-server queues have the inherent advantage that jobs do not interfere with one another, which considerably simplifies the analysis. Furthermore, infinite-server systems are frequently used to produce approximations for corresponding finite-server systems. In the network setting, we can model queueing systems that are driven by correlated shot-noise arrival processes. With regards to applications, such a system could, for example, represent the call centers of a fire department and police department in the same town. The contributions and organization of this paper are as follows. In this paper we derive exact and asymptotic results. The main result of the exact analysis is Theorem 4.6, where we find the joint Laplace transform of the numbers of jobs in the queues of a feedforward network, jointly with the shot-noise-driven arrival rates. We build up toward this result as follows. In Sect. 2 we introduce notation and we state the important Lemma 2.1 that we repeatedly rely on. Then we derive exact results for the single infinite-server queue with a shot-noise arrival rate in Sect. 3.1. Subsequently, in Sect. 
3.2, we show that after an appropriate scaling the number of jobs in the system satisfies a functional central limit theorem (Theorem 3.4); the limiting process is an Ornstein-Uhlenbeck (OU) process driven by a superposition of a Brownian motion and an integrated OU process. We then extend the theory to a network setting in Sect. 4. Before we consider full-blown networks, we first consider a tandem system consisting of an arbitrary number of infinite-server queues in Sect. 4.1. Then it is argued in Sect. 4.2 that a feedforward network can be seen as a number of tandem queues in parallel. We analyze two different ways in which dependency can enter the system through the arrival process. Firstly, in Model (M1), parallel service facilities are driven by a multidimensional shot-noise process in which the shots are simultaneous (which includes the possibility that all shot-noise processes are equal). Secondly, in Model (M2), we assume that there is one shot-noise arrival intensity that generates simultaneous arrivals in all tandems. In Sect. 5 we finish with some concluding remarks. Notation and preliminaries Let ( , F, {F t } t≥0 , P) be a probability space, in which the filtration {F t } t≥0 is such that (·) is adapted to it. A shot-noise process is a process that has random jumps at Poisson epochs, and a deterministic 'response' or 'decay' function, which governs the behavior of the process. See [16,Section 8.7] for a brief account of shot-noise processes. The shot noise that we use in this paper has the following representation: where the B i ≥ 0 are i.i.d. shots from a general distribution, the decay function is exponential with rate r > 0, P B is a homogeneous Poisson process with rate ν, and the epochs of the shots, that arrived before time t, are labeled t 1 , t 2 , . . . , t P B (t) . As explained in the introduction, the shot-noise process serves as a stochastic arrival rate to a queueing system. It is straightforward to simulate a shot-noise process; for an illustration of a sample path, consider Fig. 1. Using the thinning method for nonhomogeneous Poisson processes [12], and using the sample path of Fig. 1 as the arrival rate, one can generate a corresponding sample path for the arrival process, as is displayed in Fig. 2. Typically, most arrivals occur shortly after peaks in the shot-noise process in Fig. 1, as expected. We write (i.e., without argument) for a random variable with distribution equal to that of lim t→∞ (t). We now present well-known transient and stationary moments of the shot-noise process, see Appendix 2 and, for example [16]: with B distributed as B 1 , We remark that, for convenience, we throughout assume (0) = 0. The results can be readily extended to the case in which (0) is a nonnegative random variable, at the cost of a somewhat more cumbersome notation. In the one-dimensional case, we denote β(s) = E e −s B , and in the multidimensional case, where s = (s 1 , s 2 , . . . , s d ), for some integer d ≥ 2, now denotes a vector, we write The following lemma will be important for the derivation of the joint transform of (t) and the number of jobs in system, in both single-and multi-node cases. Proof See Appendix 1. A single infinite-server queue In this section we study the M S /G/∞ queue. This is a single infinite-server queue, of which the arrival process is a Cox process driven by the shot-noise process (·), as defined in Sect. 2. First we derive exact results in Sect. 
3.1, where we find the joint transform of the number of jobs in the system and the shot-noise rate, and derive expressions for the expected value and variance. Subsequently, in Sect. 3.2, we derive a functional central limit theorem for this model. Exact analysis We let J i be the service requirement of the i-th job, where J 1 , J 2 , . . . are assumed to be i.i.d.; in the sequel J denotes a random variable that is equal in distribution to J 1 . Our first objective is to find the distribution of the number of jobs in the system at time t, in the sequel denoted by N (t). This can be found in several ways; because of the appealing underlying intuition, we here provide an argument in which we approximate the arrival rate on intervals of length by a constant, and then let ↓ 0. This procedure works as follows. We let (t) = (ω, t) be an arbitrary sample path of the driving shot-noise process. Given (t), the number of jobs that arrived in the interval [k , (k + 1) ) and are still in the system at time t has a Poisson distribution . . are i.i.d. standard uniform random variables. Summing over k yields that the number of jobs in the system at time t has a Poisson distribution with parameter The argument above is not new: a similar observation was mentioned in, for example [6], for deterministic rate functions. Since (·) is actually a stochastic process, we conclude that the number of jobs has a mixed Poisson distribution, with the expression in Eq. (3) as random parameter. As a consequence, we find by conditioning on F t , We have found the following result. Theorem 3.1 Let (·) be a shot-noise process. Then Proof The result follows directly from Lemma 2.1 and Eq. (4). In Theorem 3.1 we found that N (t) has a Poisson distribution with the random parameter given in Eq. (3). This leads to the following expression for the expected value: In addition, by the law of total variance we find The latter expression we can further evaluate, using an approximation argument that resembles the one we used above. Using a Riemann sum approximation, we find . We thus have that (7) equals We can make this more explicit using the corresponding formulas in (2). Example 3.2 (Exponential case) Consider the case in which J is exponentially distributed with mean 1/μ and (0) = 0. Then we can calculate the mean and variance explicitly. For μ = r , where the function h r,μ (·) is defined by For the variance, we thus find for μ = r and for μ = r Asymptotic analysis This subsection focuses on deriving a functional central limit theorem (FCLT) for the model under study, after having appropriately scaled the shot rate of the shot-noise process. In the following, we assume that the service requirements are exponentially distributed with rate μ, and we point out how it can be generalized to a general distribution in Remark 3.6 below. We follow the standard approach to derive the FCLT for infinite-server queueing systems; we mimic the argumentation used in, for example [1,14]. As the proof has a relatively large number of standard elements, we restrict ourselves to the most important steps. We apply a linear scaling to the shot rate of the shot-noise process, i.e., ν → nν. It is readily checked that under this scaling, the steady-state level of the shot-noise process and the steady-state number of jobs in the queue blow up by a factor n. It is our objective to prove that, after appropriate centering and normalization, the process recording the number of jobs in the system converges to a Gaussian process. 
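Before developing the asymptotic regime further, it is worth noting that the exact results of Sect. 3.1 are easy to sanity-check numerically: simulate the shot-noise path, obtain the Cox arrivals by thinning a dominating Poisson process, attach i.i.d. service times, and count the jobs still present at time t. The Python sketch below does this for exponentially distributed shots and services; all parameter values are illustrative, and the mean used for comparison is the one implied by the random parameter in Eq. (3) together with the moments in (2):

```python
# Monte Carlo sanity check of the exact analysis: shot-noise intensity, Cox arrivals
# by thinning, i.i.d. exponential services, count of jobs present at time T.
# All parameter values are illustrative choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
nu, r, mB, mu, T = 2.0, 1.0, 1.5, 0.7, 10.0  # shot rate, decay, E[B], service rate, horizon

def lam(epochs, sizes, u):
    """Shot-noise intensity: sum of B_i exp(-r (u - t_i)) over shots t_i <= u."""
    m = epochs <= u
    return float(np.sum(sizes[m] * np.exp(-r * (u - epochs[m]))))

def jobs_at_T():
    n = rng.poisson(nu * T)
    epochs = np.sort(rng.uniform(0, T, n))
    sizes = rng.exponential(mB, n)
    # The path decreases between shots, so its maximum sits just after some shot.
    bound = max((lam(epochs, sizes, e) for e in epochs), default=0.0)
    cand = np.sort(rng.uniform(0, T, rng.poisson(bound * T)))          # dominating PP
    keep = rng.uniform(0, bound, cand.size) < np.array([lam(epochs, sizes, u) for u in cand])
    arr = cand[keep]
    return int(np.sum(arr + rng.exponential(1 / mu, arr.size) > T))    # still in system

sims = np.mean([jobs_at_T() for _ in range(2000)])
# E N(T) = int_0^T E Lambda(u) P(J > T - u) du, with E Lambda(u) = (nu E B / r)(1 - e^{-ru}).
u = np.linspace(0, T, 2001)
f = nu * mB / r * (1 - np.exp(-r * u)) * np.exp(-mu * (T - u))
exact = float(np.sum((f[:-1] + f[1:]) / 2) * (u[1] - u[0]))
print(f"simulated {sims:.3f} vs exact {exact:.3f}")
```

The thinning step is valid because, conditional on the shot epochs and sizes, the arrivals form an inhomogeneous Poisson process with rate Λ(·), which the piecewise-decreasing structure bounds from above between shots.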
In the n-th scaled model, the number of jobs in the system at time t, denoted by N n (t), has the following (obvious) representation: with A n (t) denoting the number of arrivals in [0, t], and D n (t) the number of departures, Here, A n (t) corresponds to a Cox process with a shot-noise-driven rate, and therefore we have, with n (s) the shot-noise in the scaled model at time s and S A (·) a unit-rate Poisson process, in line with our previous assumptions, we put n (0) = 0. For our infinite-server model the departures D n (t) can be written as, with S D (·) a unit-rate Poisson process (independent of S A (·)), We start by identifying the average behavior of the process N n (t). Following the reasoning of [1], assuming that N n (0)/n ⇒ ρ(0) (where '⇒' denotes weak convergence), N n (t)/n converges almost surely to the solution of Now we move to the derivation of the FCLT. Following the approach used in [1], we proceed by studying an FCLT for the input rate process. To this end, we first definê The following lemma states thatK n (·) converges to an integrated Ornstein-Uhlenbeck (OU) process, corresponding to an OU processˆ (·) with a speed of mean reversion equal to r , long-run equilibrium level 0, and variance σ 2 := ν E B 2 /(2r ). Lemma 3.3 Assume that for the shot sizes, distributed as B, it holds that in whichˆ satisfies, with W 1 (·) a standard Brownian motion, Proof This proof is standard; for instance from [2,Prop. 3], by putting the λ d in that paper to zero, it follows thatˆ n (·) ⇒ˆ (·). This impliesK n (·) ⇒K (·), using (10) together with the continuous mapping theorem. Interestingly, the above result entails that the arrival rate process displays meanreverting behavior. This also holds for the job count process in standard infinite-server queues. In other words, the job count process in the queueing system we are studying can be considered as the composition of two mean-reverting processes. We make this more precise in the following. From now on we consider the following centered and normalized version of the number of jobs in the system: We assume thatN n (0) ⇒N (0) as n → ∞. To prove the FCLT, we rewriteN n (t) in a convenient form. Mimicking the steps performed in [1] or [14], withS and using the relation (9), we eventually obtain Our next goal is to apply the martingale FCLT to the martingales R n (t)/ √ n; see, for background on the martingale FCLT, for instance [7] and [17]. The quadratic variation equals Appealing to the martingale FCLT, the following FCLT is obtained. with W 2 (·) a standard Brownian motion that is independent of the Brownian motion W 1 (·) we introduced in the definition ofK (·). Remark 3.5 In passing, we have proven that the arrival process as such obeys an FCLT. With we find that n (t) ⇒Â(t) as n → ∞, wherê the last equality follows from the fact that ρ(·) satisfies (9). Remark 3.6 The FCLT can be extended to non-exponential service requirements, by making use of [15,Thm. 3.2]. Their approach relies on two assumptions: • The arrival process should satisfy an FCLT; • The service times are i.i.d. nonnegative random variables with a general c.d.f. independent of the arrival process. As noted in Remark 3.5, the first assumption is satisfied for the model in this paper. The second assumption holds as well. In the non-exponential case, the results are less clean; in general, the limiting process can be expressed in terms of a Kiefer process, cf. for example [4]. 
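Since several displays in this section were lost in extraction, it may help to record the limit objects of Lemma 3.3 and Theorem 3.4 explicitly. The following is our reconstruction, consistent with the surrounding prose (an OU arrival-rate limit with stationary variance ν E B²/(2r), and a job-count limit driven by the integrated OU process plus an independent Brownian term); it is not a verbatim copy of the original equations:

```latex
% Our reconstruction of the Lemma 3.3 / Theorem 3.4 limits (not the original display).
\begin{align*}
  \mathrm{d}\hat{\Lambda}(t) &= -r\,\hat{\Lambda}(t)\,\mathrm{d}t
     + \sqrt{\nu\,\mathbb{E}B^{2}}\;\mathrm{d}W_{1}(t),
  & \hat{K}(t) &= \int_{0}^{t}\hat{\Lambda}(s)\,\mathrm{d}s, \\
  \hat{N}(t) &= \hat{N}(0) + \hat{K}(t)
     - \mu\int_{0}^{t}\hat{N}(s)\,\mathrm{d}s
     + \int_{0}^{t}\sqrt{\varrho(s)+\mu\,\rho(s)}\;\mathrm{d}W_{2}(s).
\end{align*}
% Here \varrho(s) = \nu\,\mathbb{E}B\,(1 - e^{-rs})/r is the fluid arrival rate,
% \rho solves \rho'(s) = \varrho(s) - \mu\rho(s), W_1 and W_2 are independent standard
% Brownian motions, and the stationary variance of \hat\Lambda is \nu\,\mathbb{E}B^2/(2r).
```

In particular, the reconstruction makes the composition explicit: the job-count limit is itself an OU-type process, driven by the superposition of the integrated OU process K̂ and a Brownian term, exactly as the prose above states.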
Networks Now that the reader is familiar with the one-dimensional setting, we extend this to networks. In this section, we focus on feedforward networks in which each node corresponds to an infinite-server queue. Feedforward networks are defined as follows. E) be a directed graph with nodes V and edges E. The nodes represent infinite-server queues and the directed edges between the facilities demonstrate how jobs move through the system. We suppose that there are no cycles in G, i.e., there is no sequence of nodes, starting and ending at the same node, with each two consecutive nodes adjacent to each other in the graph, consistent with the orientation of the edges. We focus on feedforward networks to keep the notation manageable. In Theorem 4.6, we derive the transform of the numbers of jobs in all nodes, jointly with the shot-noise process(es) for feedforward networks. Nonetheless, we provide Example 4.5, to show that analysis is in fact possible if there is a loop, but at the expense of more involved calculations. Since all nodes represent infinite-server queues, one can see that whenever a node has multiple input streams, it is equivalent to multiple infinite-server queues that work independently from each other, but have the same service speed and induce the same service requirement for arriving jobs. Consider Fig. 3 for an illustration. The reason why this holds is that different job streams move independently through the system, without creating waiting times for others. Therefore, merging streams do not increase the complexity of our network. The same holds for 'splits' in job streams. By this we mean that after jobs finished their service at a server, they move to server i with probability q i (with i q i = 1). Then, one can simply sample the entire path that the job will take through the system, at the arrival instance at its first server. If one recognizes the above, then all feedforward networks reduce to parallel tandem systems in which the first node in each tandem system is fed by external input. The procedure to decompose a network into parallel tandems consists of finding all paths between nodes in which jobs either enter or leave the system. Each of these paths will subsequently be considered as a tandem queue, which are then set in parallel. To build up to the main result, we first study tandem systems in Sect. 4.1. Subsequently, we put the tandem systems in parallel in Sect. 4.2 and finally we present the main theorem and some implications in Sect. 4.3. Tandem systems As announced, we proceed by studying tandem systems. In Sect. 4.2 below, we study d parallel tandem systems. In this subsection, we consider the i -th of these tandem systems, where i = 1, . . . , d. Suppose that tandem i has S i service facilities and the input process at the first node is Poisson, with a shot-noise arrival rate i (·). We assume that jobs enter node i1. When they finish service, they enter node i2, etc., until they enter node i S i after which they leave the system. We use i j as a subscript referring to node j in tandem system i and we refer to the node as node i j. Hence N i j (t) and J i j denote the number of jobs in node i j at time t, and a copy of a service requirement, respectively, where j = 1, . . . , S i . Fix some time t > 0. Again we derive results by splitting time into intervals of length . Denote by M i j (k, ) the number of jobs present in node i j at time t that have entered node i1 between time k and (k + 1) ; as we keep t fixed we suppress it in our notation. 
Because jobs are not interfering with each other in the infinite-server realm, we can decompose the transform of interest: Supposing that the arrival rate is a deterministic function of time λ i (·), by conditioning on the number of arrivals in the k-th interval, in which p i (u) ( p i j (u), respectively) denotes the probability that the job that entered tandem i at time u has already left the tandem (is in node j, respectively) at time t. Note that Recognizing a Riemann sum and letting ↓ 0, we conclude that Eq. (12) takes the following form: In case of a stochastic rate process i (·), we obtain Therefore, it holds that and we consequently find Parallel (tandem) systems Now that the tandem case has been analyzed, the next step is to put the tandem systems as described in Sect. 4.1 in parallel. We assume that there are d parallel tandems. There are different ways in which dependence between the parallel systems can be created. Two relevant models are listed below, and illustrated in Fig. 4. In Model (M1), correlation between the shot-noise arrival rates induces correlation between the numbers of jobs in the different queues. In Model (M2), correlation clearly appears because all tandem systems have the same input process. Of course, the tandem systems will not behave identically because the jobs may have different service requirements. In short, correlation across different tandems in Model (M1) is due to linked arrival rates, and correlation in Model (M2) is due to simultaneous arrival epochs. We feel that both versions are relevant, depending on the application, and hence we analyze both. Analysis of (M1)-Suppose that the dependency is of the type as in Model (M1). This means that the shots, in each component of , occur simultaneously. Recall the definition of f i as stated in Eq. (13). It holds that where the last equality holds due to (15). Analysis of (M2)-Now suppose that the dependency in this model is of type (M2), i.e., there is one shot-noise process that generates simultaneous arrivals in the parallel tandem systems. First we assume a deterministic arrival rate function λ(·). Let M i j (k, ) be the number of jobs present in tandem system i at node j at time t that have arrived in the system between k and (k + 1) . Note that To further evaluate the right-hand side of the previous display, we observe that we can write in this definition p 1 ,..., d ≡ p 1 ,..., d (u) equals the probability that a job that arrived at time u in tandem i is in node i at time t [cf. Eq. (14)]. The situation that i = S i + 1 means that the job left the tandem system; we define z i,S i +1 = 1. Remark 4.4 (Routing) Consider a feedforward network with routing. As argued in the beginning of this section, the network can be decomposed as a parallel tandem system. In case there is splitting at some point, then one decomposes the network as a parallel system, in which each tandem i receives the job with probability q i , such that q i = 1. This can be incorporated simply by adjusting the probabilities contained in f i in Eq. (16), which are given in Eq. (14), so that they include the event that the job joined the tandem under consideration. For instance, the expression for p i (u) in the left equation in (14) would become where Q is a random variable with a generalized Bernoulli (also called 'categorical') distribution, where P(job is assigned to tandem i) = P(Q = i) = q i , for i = 1, . . . d, with q i = 1; the right equation in (14) is adjusted similarly. 
Other than that the analysis is the same for the case of splits. Remark 4.5 (Networks with loops) So far we have only considered feedforward networks. Networks with loops can be analyzed as well, but the notation becomes quite cumbersome. To show the method in by which networks with loops and routing can be analyzed, we consider a specific example. Suppose that arrivals enter node one, after which they enter node two. After they have been served in node two, they go back to node one with probability η, or leave the system with probability 1 − η. In this case, with similar techniques as before, we can find in which job(u) is the job that arrived at time u and we are examining the system at time t. Now, if we denote service times in the j-th node by J ( j) , then, at a specific time t, Analogously, P( job(u)is in node 1) equals, by conditioning on the job having taken k loops, For example, in the case where all J ( j) i are independent and exponentially distributed with mean 1/μ, we can calculate those probabilities explicitly. Indeed, if we denote by Y a Poisson process with rate μ, then, for example A similar calculation can be done for the probability that the job is in node one. Recalling that a sum of exponentials has a Gamma distribution, we can write where F (2m+2,μ) denotes the distribution function of a -distributed random variable with rate μ and shape parameter 2m + 2. Main result In this subsection, we summarize and conclude with the following main result. and where g(s, v) is a vector-valued function in which component i is given by with f (·, ·) as defined in Eq. (17). Next we calculate covariances between nodes in tandem and parallel thereafter. Covariance in Tandem System-Consider a tandem system consisting of two nodes and we want to analyze the covariance between the number of jobs in the nodes. Dropping the index of the tandem system, denote by N 1 (·) and N 2 (·) the number of jobs in nodes 1 and 2, respectively. Using Eq. (16), differentiation yields cf. Eq. (2) for the last equality. Covariance parallel (M1)-Consider a parallel system consisting of two nodes only. In order to study covariance in the parallel (M1) case, we need a result about the covariance of the corresponding shot-noise process. Lemma 4.7 Let 1 (·), 2 (·) be shot-noise processes of which the jumps occur simultaneously according to a Poisson arrival process with rate ν. Let the decay be exponential with rate r 1 , r 2 , respectively. Then it holds that, for δ > 0, which, in the case 1 = 2 , reduces to corresponding to [16, p. 394] and Eq. (2). Proof See Appendix 2. By making use of Eq. (16), we find This implies where we made use of the fact that, for u ≥ v, cf. Lemma 4.7. Covariance parallel (M2)-Extracting the mixed moment from the transform in Eq. (18), we derive directly that This implies The following proposition compares the correlations present in Model (M1) and (M2). In the proposition, we refer to the number of jobs in queue j in Model (Mi) at time t as N (i) j (t), for i = 1, 2. We find the anticipated result that the correlation in Model (M2) is stronger than in Model (M1). Proof Because of the assumption 1 The expressions for the covariances, which are derived earlier in this section, imply that Cov N which is nonnegative, as desired. Concluding remarks We have considered networks of infinite-server queues with shot-noise-driven Coxian input processes. 
For the single queue, we found explicit expressions for the Laplace transform of the joint distribution of the number of jobs and the driving shot-noise arrival rate, as well as a functional central limit theorem of the number of jobs in the system under a particular scaling. The results were then extended to a network context: we derived an expression for the joint transform of the numbers of jobs in the individual queues, jointly with the values of the driving shot-noise processes. We included the functional central limit theorem for the single queue, but it is anticipated that a similar setup carries over to the network context, albeit at the expense of considerably more involved notation. Our future research will include the study of the departure process of a single queue; the output stream should remain Coxian, but of another type than the input process. recognizing a Riemann sum, With P B (t) a Poisson process with rate ν and the U i i.i.d. samples from a uniform distribution on [0, 1], it holds that We thus obtain that the expression in (20) equals (where the equality follows by interchanging the order of the summations) which behaves as Furthermore, we have the representation We conclude that E z N (t) e −s (t) equals Conditioning on the values of P B ( ) − P B (( −1) ), for = 1, . . . , t/ , and using that the B i are i.i.d., we find that the expression in Eq. (21)) is equal to the limit, as ↓ 0, of the following expression The lemma now follows from continuity of the exponent and the definition of the Riemann integral. Appendix 2: Proof of Lemma 4.7 Let P B (·) be the Poisson process with rate ν, corresponding to the occurrences of shots, and let E t,δ (n) be the event that P B (t + δ) − P B (t) = n. By conditioning on the number of shots in the interval (t, t + δ], we find We proceed by rewriting the conditional expectation as ..,t n ,δ (n)) dt 1 . . . dt n , denoting by F t 1 ,...,t n ,δ (n) the event E t,δ (n) and the arrival epochs are t 1 , . . . , t n . Note that we have, due to Eq. (1), conditional on F t 1 ,...,t n ,δ (n), the distributional equality 2 (t + δ) = 2 (t)e −r 2 δ + n i=1 B i2 e −r 2 (t+δ−t i ) , and consequently E 1 (t) 2 (t + δ) | F t 1 ,...,t n ,δ (n) Note that for all i = 1, . . . , n we have After unconditioning Eq. (23) with respect to the arrival epochs by integrating over all t i from t to t + δ and dividing by δ n , we thus obtain (1 − e −r 2 δ )n E B 12 and hence, denoting i := lim t→∞ i (t) for i = 1, 2, where we made use of E i = ν E B 1i /r i . It follows that Cov( 1 (t), 2 (t + δ)) = E 1 (t) 2 where the equality E 2 (t + δ) = E 2 (t)e −r 2 δ + (1 − e −r 2 δ ) E 2 is used, which can be directly checked using the expressions for the mean in Eq. (2). This proves the first equality in Eq. (19). The proof of the second equality follows from
7,631.4
2017-03-17T00:00:00.000
[ "Mathematics" ]
Petrographic Analysis of Rocks in Tanah Puteh and Pulai, Gua Musang, Kelantan.

Gua Musang, Kelantan is well known for its diversity of rock associations and is also rich in mineral resources such as gold and feldspar, as evidenced by the active mining sites in this area. The diverse rock associations in Pulai and Tanah Puteh, Gua Musang have been identified and sampled for this study. The objectives are to determine the mineral associations and their compositions in various rock types, as well as their distribution within these areas. Rock and soil samples were collected for further analysis using petrographic and geochemical methods, respectively. Limestone, tuff, shale, chert and phyllite are among the rock types sampled in this study area. The properties of the mineral associations in the thin sections (prepared from the rock samples) were observed in detail using an optical microscope. Meanwhile, the results of the X-ray fluorescence (XRF) analysis for selected samples show that SiO2 ranged between 66 and almost 80 wt.% and Al2O3 varied from 17 to nearly 26 wt.%. The mineral composition from the chemical analysis is found to be in accordance with the minerals identified in the petrographic studies of selected thin sections from Pulai and Tanah Puteh.

Introduction

Gua Musang, located in southern Kelantan, is the largest district in the state of Kelantan. It is quite well known for its diversity of rock associations and is also rich in mineral resources such as gold and feldspar, as evidenced by the active mining sites in this area. Most of the gold mines here are of the placer deposit type and contribute 10% of the annual gold production in Malaysia [1]. Katok Batu Mine is one example of gold mineralization occurring along the shear zone of an intrusive body (Senting Granite) and Permian metasedimentary rocks [1,2]. Aside from metallic minerals, this area is also rich in industrial minerals such as feldspar and clay, as indicated by the existence of a quarry in Tanah Puteh, Pulai, Gua Musang. Thus, it is important to study the petrography of the rock assemblages within this area in order to recognize the occurrence of mineral deposits. The characteristics of the mineral associations need further understanding for future exploration purposes. Geochemical analysis was also carried out, together with the petrographic studies, to determine the chemical characteristics. X-ray fluorescence (XRF) was applied to determine the major elements and validate the minerals' presence. In addition, scanning electron microscopy with energy dispersive spectroscopy (SEM-EDS) analysis and optical microscopic analysis were used to determine the relationship between the mineral associations and their compositions.

Geological setting

Kelantan is located within the Central Belt of Peninsular Malaysia; its central sediments and meta-sediments are bordered by the Main Range granite on the west and by granite on the east. The Main Range Granite dominates the western part of Gua Musang, stretching along the western state boundary with Perak and Pahang, while the Boundary Range granite is distributed in the eastern part of Gua Musang. The Gua Musang Formation consists of argillaceous and calcareous rocks interbedded with volcanics, along with minor arenaceous rocks. The argillaceous facies of the Gua Musang Formation dominantly consists of shale, siltstone, mudstone, slate and phyllite. Carbonate facies are found extensively in the Gua Musang Formation [3].
Sedimentary and metasedimentary rocks, found in the centre of Kelantan, are bordered by the Main Range granite on the west and the Boundary Range granite on the east. The Ulu Lalat (Senting) batholith, the Stong Igneous Complex and the Kemahang pluton are windows of the granitic intrusives within the central zone; they have a north-south trend and continue into northern Pahang. The Boundary Range granite in the east is overlain by the coastal alluvial flat of Sungai Kelantan [4,5,6]. The Lower Paleozoic rocks are the oldest, outcropping as a northerly-trending belt bordering the foothills of the Main Range and extending eastward up to Sungai Nenggiri. The rocks are mainly metapelites with minor volcanic fragments and slight arenaceous and calcareous intercalations. On the eastern side, Permian volcanic-sedimentary rocks occur extensively, overlying the Lower Paleozoic sequence in southwest Kelantan, while the Taku Schist (pre-Triassic) dominates central north Kelantan [6,7]. Some of the hydrothermal veins are controlled by structures caused by the granite intrusion. Around the intrusive bodies, low-temperature contact metamorphism formed in the shear zones. The quartz veins, low in sulphides, percolate through these sheared zones and cross-cut the volcanic-sedimentary rocks [8] and granitoids [1]. The fractures, cracks and bedding of the sedimentary rocks have been infilled by the quartz veins. The hydrothermal solutions travel upwards along the granitoid shear zones to the surface. The hydrothermal veins possibly originate from deeper levels. The intrusive rocks also show signs of gold mineralization, especially in the shear zones. This characteristic can also be observed at the Katok Batu and Panggong Besar mines in Gua Musang, Kelantan.

Sampling

Sampling was conducted around selected areas in Pulai and Tanah Puteh, Gua Musang, Kelantan (Figures 1 and 2). The geological map from the Mineral and Geoscience Department Malaysia (JMG) was used to aid the selection of the sampling area. Rocks and soils were targeted for sampling. Approximately 500-1000 g of soil was collected together with the rock samples, and the sampling locations are shown as yellow points in Figure 2.

Petrographic analysis

Six hand specimens representing the rock types in the study area were selected and sent to the Universiti Kebangsaan Malaysia (UKM) laboratory for thin section preparation for mineralogical and petrographic study. An optical polarised microscope was used to investigate the minerals present and their distribution in the samples.

Chemical analysis

X-ray fluorescence (XRF) was used to determine the major-element composition and to identify the minerals in five selected samples. The selected samples were sent for analysis at the Physics Laboratory in Universiti Sains Malaysia (USM), Penang. The variation in element composition for each sample and its correlation with the petrographic analyses can thus be further understood.
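As a small illustration of how the XRF major-element results reported below can be tabulated and summarized, the following Python sketch could be used; the file name and column layout are hypothetical, not the authors' actual workflow:

```python
# Hypothetical illustration: summarizing XRF major-element data (wt.%) per sample.
# The CSV name and column layout are assumptions, not the authors' actual files.
import pandas as pd

# Columns: sample ID plus oxide wt.% values, e.g. SiO2, Al2O3, Fe2O3, MgO, CaO, TiO2.
xrf = pd.read_csv("xrf_major_elements.csv", index_col="sample")

summary = xrf.agg(["min", "max", "mean"]).T.round(2)   # per-oxide ranges across samples
print(summary)

# Flag samples consistent with the quartz/feldspar-rich mineralogy seen in thin section:
felsic = xrf[(xrf["SiO2"] > 66) & (xrf["Al2O3"] > 13)]
print(felsic.index.tolist())
```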
Field observation and sample description Based on fieldwork observations, the limestone body is distributed in the northeast of the Gua Musang area and is mostly found as karst hills (Figure 3a). The karst limestone consists of calcium carbonate (CaCO3) and occurs abundantly as isolated, steep limestone hills. The structure is sometimes difficult to see because it is covered by vegetation. Alluvium or red soil covers the surface in most locations; however, several limestone boulders are found scattered within the alluvium or topsoil (Figure 3b). This can be seen clearly where there has been soil excavation or road construction, especially along the main road/highway in this area. Tuff also occurs in Pulai (Figure 3c); it is fine grained and white in colour. The tuff consists of quartz, feldspar, and calcite, with small amounts of sericite and chlorite, and potentially formed from chemical deposition of calcite or calcium carbonate. Feldspar and clay minerals are quite abundant and potentially resulted from the weathering of the rock associations around the karst (Figure 3d). Feldspar and kaolinite are found mainly distributed along the main road near Kampung Tanah Puteh. This locality also hosts an active feldspar quarry, which can be observed clearly as white areas in the Google Earth imagery (see Figure 1). Meanwhile, metasediments are present in the study area as phyllite, shale, and slate; they are the oldest rocks in the sequence, of Early Triassic age. Phyllite found in the study area is dark grey and fine grained (Figure 3e); phyllite outcrops break off easily into blocky shapes. The slate is fine grained and whitish in colour; in some areas it has been weathered but still shows its distinct cleavage. Shale was found in a weathered condition and is blackish in colour. Weathered mudstone with associated clay (Figure 3f) breaks easily into powder and fine particles. Small iron nodules are found in association in some alluvial areas. In the deeper alluvium or topsoil, limestone boulders are found scattered (Figure 3g). The limestone is fine grained and varies in colour from dark grey to white. Two types of limestone are found in the study area: carbonaceous limestone, which is dark grey (Figure 3h), and white limestone (Figure 3i). The white limestone commonly originates from chemical precipitation of calcium carbonate from marine and fresh water. The carbonaceous limestone, in contrast, is of biological origin, and its dark colour reflects a composition containing magnesium and iron. Petrography analysis Six thin sections from several rock types in the study area were examined under the optical microscope for a detailed study of the mineral associations. Based on the petrographic analysis, the main minerals are quartz and feldspar, followed by clay minerals, mica, calcite, and others.
Table 1 presents the analysis of selected thin sections from several rock types and their mineral distributions under microscope observation. The compositional percentages were estimated from the minerals present in each thin section. Quartz is colourless in PPL (plane-polarized light) but shows undulose extinction in XPL (crossed-polarized light). In the phyllite, quartz or silica occurs as microquartz and chalcedonic quartz replacing grains, cement, and matrix; associated biotite, muscovite, and chlorite are scattered within it (Figure 4a). Meanwhile, a phenocryst of quartz can be seen in the groundmass of the rhyolite (sample AN11, Figure 4b). Muscovite exhibits a flaky texture with tabular crystals in the tuff groundmass (Figure 4c); it normally shows cleavage in one direction. Calcite and quartz are sometimes mixed and blended together in the limestone. Polysynthetic twinning of calcite is quite common, especially in sample SS9 (Figure 4d). Rhombohedral cleavage appears as two sets of lines intersecting at oblique angles, and calcite is easily identified by its colourless appearance, moderate to low relief, and grey extinction colour under the microscope. Feldspar grains are generally large and tabular in shape. Feldspar, especially sanidine, can be identified as clasts within the matrix; a few phenocrysts of sanidine in the groundmass of the phyllite show the simple (Carlsbad) twinning of K-feldspar under crossed polars (Figure 4e). Sericite is also found interlocking with the feldspar, and some K-feldspar grains show replacement by sericite that pseudomorphs the earlier mineral (Figure 4f). Clay minerals are always found accompanying both feldspars and may result from alteration [9,10]. Mudstone and sandstone are also found within the argillite lithology. Sample AN6 represents the sandstone; the reddish colour of its matrix can be observed clearly under the optical microscope in both PPL and XPL, as shown in Figure 5. A variety of grain sizes can be seen within the assemblage, with large grains of quartz and feldspar; the matrix consists of a mixture of fine particles of clay, mica, sericite, and chlorite. XRF analysis Table 2 shows the major elements of five selected samples from the XRF analysis, representing Tanah Puteh and Pulai, Gua Musang, Kelantan. The SiO2 concentration lies between 66 and almost 80 wt.%, while Al2O3 ranges from 13 to 26 wt.%. The Fe-oxide content is very low, mostly below 1 wt.%, except for samples SS9 and SS17. Other components such as MgO, CaO, and TiO2 occur only as traces. (A small sketch of how such oxide totals can be normalised and compared is given after the figure captions below.) Conclusion In conclusion, the lithologies in Pulai and Tanah Puteh range across limestone, metasediment, tuff, argillite, shale, and phyllite. Petrographic analysis of the thin sections shows that the majority of the minerals present are quartz, calcite, mica, and feldspar. This agrees with the XRF analysis, in which SiO2 has the highest percentage, ranging from 66 to almost 80 wt.%. Clay appears as very fine particles mixed with other fine mineral grains in the matrix of the samples, especially the mudstone. The mineral composition from the geochemical analysis is likewise in line with the mineral associations identified in the petrographic study. Fig. 1. (Left) Map of Kelantan; the small box indicates the study area in Gua Musang. (Right) Box of the study area in Kg. Tanah Puteh and Kg. Pulai from Google Maps.
Fig. 3. Several rock associations found within the study area in Pulai, Gua Musang, Kelantan. (a) Karst lithology is quite abundant and typically consists of limestone. (b) Limestone boulders of various sizes are scattered within the alluvium as a result of weathering. (c) Tuff overlies the assemblage. (d) Sandstone and feldspar are found together in certain areas. (e) Outcrops of phyllite in the rubber plantation in Pulai. (f) Red mudstone together with slate, weathered and easily broken into small pieces. (g) Dark grey limestone scattered within the soils as boulders and outcrops. (h) Hand specimen of carbonaceous limestone (blackish). (i) Hand specimen of white limestone, found close to the river and at several places around the study area. Table 1. Mineral percentage/composition based on petrographic analysis for selected samples from Pulai (AN) and Tanah Puteh (SS).
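To make the XRF comparison concrete, a minimal Python sketch is given below. It is not part of the original study: it normalises major-element totals of the kind reported in Table 2 and computes a SiO2/Al2O3 ratio as a quick cross-check against the petrographic mineral estimates. The sample IDs reuse the paper's labels, but all oxide values are hypothetical placeholders within the reported ranges.

```python
# Illustrative sketch only: normalise XRF major-element totals to 100 wt.%
# and compute a SiO2/Al2O3 ratio per sample. Values are hypothetical
# placeholders chosen within the ranges reported in Table 2.
samples = {
    "SS9":  {"SiO2": 66.0, "Al2O3": 26.0, "Fe2O3": 2.1},
    "AN11": {"SiO2": 79.5, "Al2O3": 13.5, "Fe2O3": 0.4},
}

for name, oxides in samples.items():
    total = sum(oxides.values())
    normalised = {ox: 100.0 * wt / total for ox, wt in oxides.items()}
    ratio = normalised["SiO2"] / normalised["Al2O3"]
    print(f"{name}: total = {total:.1f} wt.%, SiO2/Al2O3 = {ratio:.2f}")
```

A higher SiO2/Al2O3 ratio is consistent with quartz-rich samples, while lower ratios point to more feldspar and clay, in line with the thin-section observations.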
2,938.6
2023-01-01T00:00:00.000
[ "Geology" ]
A Numerical Technique to Estimate Water Depths from Remotely Sensed Water Wave Characteristics. This paper describes a numerical technique to estimate water depths from remotely sensed water wave characteristics. Two depth inversion models have been developed, based on the linear and nonlinear dispersion relations respectively. A simplified technique to obtain the wave height distribution from remotely sensed water surface elevations is presented. Synthetic input data are generated using a refraction-diffraction numerical model. In intermediate water depths, there is good agreement between actual and estimated depths (relative errors are of order 10%). It is shown that depth inversion using the linear dispersion relation overestimates the water depth near the shoreline. The nonlinear model improves the inverted depth by about 10% and can retrieve the two-dimensional depth profile. Introduction As water waves propagate from deep water to the shoreline they undergo strong transformations due to shoaling, refraction, diffraction, reflection, bottom friction, breaking, and other processes. Most of these phenomena are caused by the bottom bathymetry acting on the waves, so it is very important to have accurate knowledge of the sea bottom topography, particularly in the nearshore area. As the bottom topography evolves continuously over time, methods for continuous monitoring of these changes are desirable. Early techniques used a heavy weight lowered over a ship's side. This technique measures the depth at only a single point at a time, and so it has been costly and labour intensive. Nowadays bathymetric measurements come from GPS devices and echosounders mounted beneath or over the side of a boat. These recent methods are also inefficient in terms of time and money when surveying large areas on a regular basis; only a relatively small area can typically be surveyed because of the laborious, time-consuming process associated with these practices. Due to the high cost associated with traditional methods, there has been a need to develop low-cost techniques. One approach is to solve the inverse problem, that is, to use the remotely sensed behaviour of surface gravity waves to extract water depths. The theoretical relation between bottom topography and surface wave characteristics was first modelled mathematically using linear wave theory (e.g., [1]); this relationship is given by the linear dispersion relation. In the last decade, several depth inversion techniques have been developed based on remotely sensed measurements of surface elevations and wave number distributions. Most methods still use the linear dispersion relationship to carry out the depth inversion, but for shallow enough water, nonlinear effects and their influence on wave characteristics cannot be neglected. Grilli et al. [2] use a nonlinear dispersion relation to predict the bottom topography for cases of monochromatic waves propagating over a depth varying in one direction only; they use the wave phase and wave height distribution as input to their model. Leu et al. [3] use the linear dispersion relation to predict the bottom topography in a study area of gentle slope. Misra et al. [4] developed a one-dimensional algorithm to estimate water depths from two synthetic time-lagged 1D profiles of the surface elevation or velocities, using the one-dimensional linear shallow-water wave equations and the one-dimensional form of the Boussinesq equations.
Leu and Chang [5] calculated the spatial distribution of wave number spectra using two images from the French satellite SPOT. Then, assuming that the wave frequency does not change and that the general dispersion relation between water depth and wavelength holds during wave propagation, they determined coastal water depths from the spatial variation of the wave spectra. Their method was limited to determining bottom topography over coastal zones where water depths are less than about 12 m. Catalan [6] performed experimental work along a flume with a fixed bathymetric profile containing a single bar. A hybrid data set was collected consisting of remotely sensed wave data combined with model-generated wave amplitude profiles. They used a nonlinear dispersion relation to retrieve the one-dimensional depth profile, although the barred portion of the bathymetry was not recovered. Kennedy et al. [7] presented a technique to reconstruct bathymetry using two snapshots of water surface elevation and velocities as input data. The first snapshot is used to initialize a Boussinesq model that computes the wave elevation over a suggested uniform-slope bathymetry; the bathymetry is then iterated until the best fit with the second snapshot is reached, the difference between computed and measured phase speeds being used as the basis for updating the bathymetry at each iteration. Flampouris et al. [8] used multibeam echo-sounder data to validate depth inversion using different shallow-water wave theories. Yoo et al. [9] computed the bathymetry in and near the surf zone from spatially varying celerity and breakpoint location data; they used oblique digital video as the initial data source and showed that water depths can be computed within a 15% normalized error. In this paper, two numerical depth inversion models based on the linear and nonlinear dispersion relations are presented. The proposed models can retrieve two-dimensional depth profiles. A comparison of the accuracy of both models in different depth regions is presented, and the validity ranges of both models are clarified. A simplified technique for processing two snapshots of remotely sensed surface elevation to obtain the wave amplitude distributions is also presented. Mathematical Formulation As waves propagate onshore, their wave numbers and wave heights change with the water depth in the coastal zone. Many forms of the dispersion relation governing the relation between local water depth and wave characteristics have been derived; the two-dimensional distribution of wave characteristics may therefore be used to derive the water depth. Most previous research used the linear dispersion relationship to carry out the depth inversion. The linear dispersion relation takes the form

c² = c₀² tanh(kh),    (1)

where c is the local phase speed, k is the wave number, c₀ = √(g/k) is the deep-water phase speed, and h is the local water depth. It can also be written as

ω² = gk tanh(kh),    (2)

where ω is the wave angular frequency. But for mildly sloping beaches, shallow-water regions, and/or finite wave amplitudes, nonlinear effects cannot be neglected. Some researchers have proposed empirical dispersion relations that model amplitude dispersion in shallow water, for example, Booij [10] and Hedges [11]; others derived nonlinear dispersion relations that are valid in intermediate and deep-water regions.
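As a preview of the linear inversion used below, the following minimal Python sketch solves (2) explicitly for the local depth h given the measured wave number and the constant angular frequency. This is an illustration under assumed values, not the author's code; the wave numbers are hypothetical.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def linear_depth(k, omega):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for h.
    Valid while omega**2 / (g*k) < 1, i.e. outside deep water."""
    return np.arctanh(omega**2 / (g * k)) / k

T = 4.0                              # wave period (s), as in the test cases
omega = 2.0 * np.pi / T
k = np.array([0.35, 0.50, 0.80])     # hypothetical measured wave numbers (1/m)
print(linear_depth(k, omega))        # estimated local depths (m)
```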
Kirby and Dalrymple [12] proposed an approximate composite dispersion relation. This dispersion relation is an empirical combination of different nonlinear theories depending on the relative water depth: the Hedges [11] approximation is used to model nonlinear effects in shallow water, while a third-order Stokes theory is used in intermediate and deep water. The composite relation can be written as

ω² = gk (1 + f₁ (ka)² D) tanh(kh + f₂ ka),    (3)

with

D = (8 + cosh 4kh − 2 tanh² kh) / (8 sinh⁴ kh),    (4)
f₁ = tanh⁵(kh),    (5)
f₂ = (kh / sinh kh)⁴,    (6)

where a is the wave amplitude. For small-amplitude water waves, (3) reduces to the well-known linear dispersion relation (2). Linear Inversion Algorithms. Two depth inversion models based on the linear and nonlinear dispersion relations are presented in this section. In both models I consider that the only unknown is the depth in the study area; all the required wave characteristics are assumed to be measured remotely. As a water wave propagates from deep to shallow water, its wave period does not change. The linear dispersion relation (2) can therefore be solved explicitly in terms of the measured wave number k and the constant angular frequency ω = 2π/T to calculate the local depth:

h = (1/k) tanh⁻¹(ω² / gk).    (7)

Nonlinear Inversion Algorithms. As nonlinearity cannot be neglected in practical cases, the nonlinear dispersion relation is used. Equations (3)-(6) cannot be solved explicitly for the predicted depth. Assuming that the spatial distributions of wave number k(x, y) and wave height H(x, y) are remotely sensed, the following algorithm is proposed to solve (3)-(6) implicitly for the predicted depth h(x, y); a code sketch of this iteration is given at the end of this section. (1) Calculate the wave period from the deep-water wave number k₀: in deep water kh > π, so tanh(kh) ≈ 1 and substituting into (2) gives ω² = gk₀. (2) Calculate a first guess for the depth distribution h₁(x, y) using the linear model (7). (3) Solve the nonlinear dispersion relation (3)-(6) at each point in the domain using the Newton-Raphson method with the initial guess h₁(x, y):

h_{n+1} = h_n − f(h_n) / f′(h_n),

where n is the iteration number and f(h) is the residual of (3). (4) Repeat steps 2 and 3 until a sufficiently accurate value is reached. Processing Remotely Sensed Data. Remotely sensed measurements of the water surface elevation at two or more time intervals are processed to obtain the wave height distribution and the wave period. Models Validation and Domains of Applicability The depth inversion models presented in the previous section need the spatial distributions of wave number and wave height as input. Because actual field data were unavailable, the inversion models were tested for different bathymetries using synthetic field data generated by numerical modelling. There are many models that can provide the required spatial distribution of wave number and wave amplitude; here I use the REF/DIF model [13], which is one of the best-known models in this area and is freely available. To validate the linear and nonlinear depth inversion models, I examine the bathymetry of the large-scale laboratory experiment performed in the Long Wave Flume (LWF) at Oregon's O. H. Hinsdale Wave Research Laboratory [14]. The basin is 104 m long, 3.7 m wide, and 4.6 m deep, with a programmable flap-type wavemaker equipped with active wave absorption. The bottom of the flume was configured into a piecewise continuous, barred profile designed to approximate the bar geometry of an observed field beach profile at a 1:3 reduction in scale, as shown in Figure 1.
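The nonlinear iteration in steps (1)-(4) can be sketched as below. This is a hedged illustration, assuming the composite relation (3)-(6) and a numerical central-difference derivative inside the Newton-Raphson update; the wave number and amplitude values are hypothetical, and the sketch is not the author's original implementation.

```python
import numpy as np

g = 9.81

def composite_omega2(h, k, a):
    """Composite dispersion relation (3)-(6) attributed to Kirby and Dalrymple [12]."""
    kh = k * h
    th = np.tanh(kh)
    f1 = th**5                                                     # equation (5)
    f2 = (kh / np.sinh(kh))**4                                     # equation (6)
    D = (8 + np.cosh(4 * kh) - 2 * th**2) / (8 * np.sinh(kh)**4)   # equation (4)
    return g * k * (1 + f1 * (k * a)**2 * D) * np.tanh(kh + f2 * k * a)

def invert_depth(k, a, omega, h0, tol=1e-6, max_iter=50):
    """Newton-Raphson on f(h) = composite_omega2(h) - omega^2, using
    h_{n+1} = h_n - f(h_n)/f'(h_n) with a numerical derivative."""
    h = h0
    for _ in range(max_iter):
        f = composite_omega2(h, k, a) - omega**2
        dh = 1e-4
        fp = (composite_omega2(h + dh, k, a)
              - composite_omega2(h - dh, k, a)) / (2 * dh)
        step = f / fp
        h -= step
        if abs(step) < tol:
            break
    return h

omega = 2.0 * np.pi / 4.0                    # T = 4 s
k, a = 0.50, 0.25                            # hypothetical k (1/m) and amplitude (m)
h_lin = np.arctanh(omega**2 / (g * k)) / k   # linear first guess, equation (7)
print(invert_depth(k, a, omega, h_lin))      # nonlinear depth estimate (m)
```

Consistent with the results reported below, the nonlinear estimate falls below the linear first guess, since amplitude dispersion raises the phase speed for a given depth.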
Three synthetic data sets (D1, D2, and D3) were generated from progressive monochromatic waves with wave period T = 4 s and deep-water wave heights H₀ of 0.5 m, 0.75 m, and 1 m, respectively. All three data sets were generated by running the REF/DIF model on the LWF bathymetry. These data sets were used in six case studies: L1, L2, and L3 for the linear model and NL1, NL2, and NL3 for the nonlinear model. In order to quantify the agreement between the estimated depth h_est(x) and the actual depth h(x), the percentage relative error ε_h(x) is defined as

ε_h(x) = 100 (h_est(x) − h(x)) / h(x),    (15)

and the root-mean-square error E_h is calculated over the entire domain as

E_h = √( (1/N) Σᵢ ε_h(i)² ),    (16)

where N is the total number of points in the study domain, i is the spatial index, and ε_h(i) is the relative error at location i computed as in (15). (A small sketch of these metrics is given after the figure captions below.) Linear Model Results. The model could not predict the depth correctly in deep water, as the wave characteristics become less sensitive to the bottom topography there. Figure 3 shows the percentage relative error for the three case studies. The root-mean-square errors are 18%, 21.6%, and 26.8% for case studies L1, L2, and L3, respectively. The error increases with increasing wave height, since the nonlinear effects can no longer be neglected. Nonlinear Model Results. Figure 4 shows a comparison between the actual and estimated depths for the nonlinear model test cases NL1, NL2, and NL3, and Figure 5 shows the percentage relative error for the three test cases. The root-mean-square errors are 8%, 10.5%, and 12.2% for case studies NL1, NL2, and NL3, respectively. Comparing the results of the linear and nonlinear models shows that the nonlinear model gives more accurate results in the region over and on the lee side of the bar, and also in shallow water. The RMS errors are reduced from O(20%) for the linear model to O(10%) for the nonlinear model. The nonlinear model also could not correctly predict depths in the deep-water region. The error in estimating the depth in the region over and on the lee side of the bar may be due to the change in wave height caused by wave reflection. In a further case study, the ability and accuracy of the depth inversion technique under wave transformation due to refraction, diffraction, and shoaling were checked. As in the first case, the input data were generated using the REF/DIF wave transformation model, with deep-water wave height H₀ = 1 m, wave period T = 4 s, and incidence angle θ₀ = 30°. Figure 6 shows a comparison between the actual and estimated depths at the cross-section y = 40 m; the root-mean-square error is 14%. Conclusions (1) The linear model gives good predictions (relative errors of order 10%) in intermediate water depths; however, the errors increase at shallower depths (relative errors of order 25%). (2) Estimated depths using the linear model are overpredicted in shallow water. (3) The linear model gives a much poorer depth estimate when the wave height is increased from 0.5 m to 3.5 m, showing the need for the nonlinear model. (4) The RMS errors are reduced from O(20%) for the linear model to O(10%) for the nonlinear model. (5) Neither the linear nor the nonlinear model can predict the depth correctly in deep water, as the wave characteristics become less sensitive to the bottom topography. Figure 2: Actual and estimated depths using the linear model. Figure 4: Actual and estimated depths using the nonlinear model. Figure 5: Percentage relative error for the three nonlinear test cases. Figure 6: Actual and estimated depths at the cross-section y = 40 m.
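For completeness, a small sketch of the error metrics (15)-(16) follows; the depth arrays are hypothetical placeholders used only to show the computation.

```python
import numpy as np

def relative_error(h_est, h_act):
    """Pointwise percentage relative error, equation (15)."""
    return 100.0 * (h_est - h_act) / h_act

def rms_error(h_est, h_act):
    """Root mean square of the relative errors over the domain, equation (16)."""
    eps = relative_error(h_est, h_act)
    return np.sqrt(np.mean(eps**2))

h_act = np.array([1.0, 1.5, 2.0, 3.0])   # hypothetical actual depths (m)
h_est = np.array([1.2, 1.6, 2.1, 2.9])   # hypothetical inverted depths (m)
print(rms_error(h_est, h_act))
```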
2,940
2013-06-23T00:00:00.000
[ "Environmental Science", "Physics" ]
Reproducing Musicality: Immediate Human-like Musicality Through Machine Learning and Passing the Turing Test. Musicology is a growing focus in computer science. Past research has succeeded in automatically generating music through learning-based agents that make use of neural networks and through model- and rule-based approaches. These methods require a significant amount of information, either in the form of a large dataset for learning or a comprehensive set of rules based on musical concepts. This paper explores a model in which a minimal amount of musical information is needed to compose a desired style of music, drawing on two concepts: objectness and evolutionary computation. The concept of objectness, an idea derived directly from imagery and pattern recognition, is used to extract specific musical objects from single musical inputs, which are then used as the foundation for algorithmically producing musical pieces similar in style to the original inputs. These musical pieces are the product of evolutionary algorithms that implement a sequential evolution approach, wherein a generated output may or may not yet be fully within the fitness thresholds of the input pieces. This method eliminates the need for a large amount of pre-provided data as well as the long processing times commonly associated with machine-learned art pieces. This study aims to provide a proof of concept of the described model. Introduction Artificial intelligence and machine learning have made great strides in recent years in terms of musicality, specifically in human-like algorithmic composition; so much progress has been made, in fact, that there have already been discussions of whether music can be used as a valid metric for satisfying the Turing Test [1,14,26]. This study concedes that these previous studies and techniques are able to emulate human musicality by studying a large corpus of music or rulesets. This study takes a different approach to musical composition, entirely eliminating the requirement for large datasets and focusing instead on a method dubbed "immediate learning," wherein a single, immediate musical input is used as the basis for composing human-like music. While it is natural for a composer or musician to study a specific style or a specific composer's music and produce a musical piece based on this study, it is also possible for musicians and composers to listen to a single piece of music, take elements from just this one piece, and then create a new piece of music inspired by it. Musicians can do this without copying the music directly, by taking specific stylistic elements from the original. This study serves as a proof of concept of a model in which this seemingly human-only skill of quickly composing music after hearing only one song is emulated in algorithmic composition. It investigates how the model can generate similarly styled musical pieces based on a single specific input piece. The study is heavily based on the concept of an instrument solo, improvisation, or cadenza. The author personally associates this with guitar solos, wherein a guitarist may start playing an instrumental solo and then cue or signal another guitarist to continue it. The second guitarist is then in a position where he has to continue the solo in a similar style without directly copying the first solo.
This study focuses primarily on improvised musicality, with a primary objective of algorithmic composition with real-time or instantaneous output. The study borrows technical concepts from imagery and pattern recognition, specifically the concept of objectness, which holds that, within a specific image, objects can be detected even without prior training of an image or pattern recognition system [2,3,27,28]. As this concept does not require training, and therefore does not require large datasets, its application becomes an essential component of this study's objective of music composition without large datasets. This study takes musical objects and uses them as the foundation for algorithmic music composition based on evolutionary algorithms. Extracted musical objects are treated as important style objects in the evolutionary algorithm: the algorithm is designed to consider specific detected musical objects as important stylistic choices made by the human composer and attempts to reapply these stylistic objects in its own computer-generated compositions. This helps ensure that the new piece stays inspired by the original input music. As an example, a certain guitarist may favour specific hammer-on and pull-off techniques, which other musicians aiming to emulate his style will incorporate into their own playing. To assess the similarity of the overall style of the output musical pieces, musical features are extracted from both the original piece and the algorithmically composed pieces, and the distance between the feature values is compared; this method has been used in prior research on musical composition [4]. This study serves as a proof of concept of a model in which the seemingly human-only skill of quickly composing music after hearing only one song is emulated in algorithmic composition. It does not claim to reproduce perfectly the musical style of a specific input piece, but rather to approximate the style quickly, producing output as soon as the input has ended, "with immediacy." Methodology Explained This study undertook the following four steps to achieve the end goal of immediate computer composition: 1. Construct and define the concept of musical objectness, and determine how to extract these musical objects from given melodies or compositions. 2. Define quantifiable musical distance or closeness; similarity and distance were the primary focus of this step. 3. Develop an automatic or algorithmic method that accepts musical objects as input and outputs musical compositions. 4. Conduct a test based on the Turing Test to determine whether human-like algorithmic composition has indeed been achieved. Musical Objectness Two types of musical objectness were defined in this study: Large Musical Objects and Micro Musical Objects. The concept of large musical objects derives directly from visual concepts of pattern recognition and objectness [2,3], while micro musical objectness derives from musicality and musical hooks [5,6]. A heuristic function was defined to extract large musical objects from a given musical input. Micro musical object extraction is based on string-finding and string-searching algorithms, specifically suffix trees as implemented by Ukkonen; this was used because it runs explicitly in linear time, an important requirement for minimizing processing time [7,8,23,24,25].
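As a rough illustration of micro-musical-object extraction: the study itself relies on Ukkonen's linear-time suffix tree, but the simpler (quadratic) n-gram scan below conveys the idea of pulling repeated note substrings ("hooks") out of a single melody. The melody string and the length bounds are hypothetical.

```python
from collections import Counter

def repeated_patterns(notes, n_min=3, n_max=6):
    """Return note n-grams that occur more than once, longest first.
    A stand-in for suffix-tree-based repeated-substring detection."""
    found = {}
    for n in range(n_min, n_max + 1):
        grams = Counter(tuple(notes[i:i + n]) for i in range(len(notes) - n + 1))
        for gram, count in grams.items():
            if count > 1:
                found[gram] = count
    return sorted(found.items(), key=lambda kv: (-len(kv[0]), -kv[1]))

melody = list("EDCDEEEDDDEGGEDCDEEEEDDEDC")   # toy single-line input
for pattern, count in repeated_patterns(melody)[:5]:
    print("".join(pattern), count)
```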
Figure 2 illustrates the suffix tree being constructed from a short string of musical notes. Musical Mathematical Distance A musical mathematical distance needed to be defined in order to confirm and assess the relative human-ness of computer-generated music. Statistical information extraction was applied to the input musical pieces using Cory McKay's jSymbolic, which extracts statistical features from input music [9,15]. The resulting statistical data were then used as input to the mathematical concepts of taxicab geometry and Manhattan distance. The Manhattan distance between two vectors p and q is

d(p, q) = Σᵢ |pᵢ − qᵢ|,

where p and q are treated as input vectors of features extracted by jSymbolic. The Manhattan distance was used as the basis of a normalized distance metric between any two musical pieces [10,18,19,20,21,22]; a code sketch of this metric is given at the end of this section. Another concept explored in determining musical distance was a novel use of Density Degree Theory. Its original author, Dr. Orlando Legname, asserts that there is a mathematical complexity proportional to the consonance or dissonance of two notes relative to each other [11]. This complexity can be represented by drawing the Lissajous curve computed from the two notes' frequencies. As illustrated below in Figures 3 and 4, there is a distinct difference in the mathematical complexity of the graphical representation of the relationship between two consonant notes, such as the root and the 5th, versus two dissonant notes, such as the root and its minor 2nd. The distance calculations based on the mathematical and statistical distance, together with the distance extrapolated from Density Degree Theory, were important in assessing how close the computer-generated pieces came to human-like ones. Algorithms and Methods The algorithmic approach to musical composition is taken directly from previous research on evolutionary algorithmic musical composition [4,16,17]. The implementation in this study required the elements discussed in the previous section and applied them directly to the musical composition system in [4]. Previous work [12] provides a proof of concept that this method is promising for achieving the goals of this study. For the purposes of this study, a simple application was constructed to serve as the interface between a human musician and the computer algorithmic composer; Figure 5 shows its basic interface. The process of data capture starts with a human musician playing a musical input, such as a melody of any length. As the human musician starts the input (detected through audio input), the implemented algorithms immediately iterate to calculate and construct the output musical composition. When the musician finishes playing, the system automatically detects this and immediately outputs the audio of the composition. This computer composition is intended to be the closest approximation to the style of the original input achievable in the timeframe before output is expected. The outputs of this application were part of the answer to the original question of whether a computer can generate human-like compositions. The outputs were promising, but a way to validate human-ness was needed to further demonstrate the success of the computational composition model.
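A minimal sketch of the distance metric follows, assuming per-feature normalisation to [0, 1] has already been applied so that no single feature dominates; the feature values are hypothetical, not actual jSymbolic output.

```python
import numpy as np

def manhattan(p, q):
    """Manhattan (taxicab) distance d(p, q) = sum_i |p_i - q_i|."""
    return float(np.sum(np.abs(p - q)))

original  = np.array([0.42, 0.10, 0.77, 0.33])   # features of the input piece
candidate = np.array([0.40, 0.15, 0.70, 0.35])   # features of a generated piece
print(manhattan(original, candidate))            # smaller = closer in style
```

In the evolutionary loop, this distance can serve as part of the fitness: candidates whose feature vectors drift too far from the input piece are penalised.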
Validation and Human Testing One of the goals of this study was to compose human-like algorithmic compositions. Music composition was shown to be possible, but an assessment was needed to confirm whether the compositions were indeed human-like. A qualification survey was constructed in three parts, all of which used the output of the musical system: several computer-generated pieces were produced using the music-composition methods described earlier and used as testing material. The three parts of the survey are as follows: Priming and Musicianship-Level Metric In the first part of the survey, respondents are tasked with listening to three pieces of music. The respondents are not informed that these pieces are composed by a computer; instead they are led to believe that the pieces are composed by humans of varying proficiency levels. The respondents are asked to estimate the probable number of years of musicianship of the composer of each piece. The purpose of this section was to determine the perceived proficiency level of the algorithmic composer; the question at the end determines how believably human the composed pieces were. Respondents were not informed of the computer compositions, as prescribed by previous research involving Turing-like indistinguishability tests [13]. Computer-Assisted Composition The second part of the survey tasked respondents with listening to three musical samples composed partly by a human and partly by a computer. Respondents assessed each composition to determine how much of it they perceived to be composed by a computer and how much by a human. The expected answers were a percentage between 0 and 100 of how much of the composition was constructed by a computer. All samples used in this part were in fact composed 50% by a computer and 50% by a human, and respondents were not informed of this equal split. This section was constructed to determine whether there is a noticeable bias towards either computer or human when all samples are equally composed by both. Human-Like Composition Believability In the third part of the survey, respondents are tasked with listening to musical samples composed by either a human or a computer. Respondents assessed whether each given piece was composed by a human or automatically generated by a computer. This part aims to gauge the general believability of the human-ness of the composed pieces by asking respondents whether each piece was composed entirely by a human or by a computer. Of all the musical samples presented, only one was composed by an actual human; all the rest were computer generated. Respondent Profile and Comments Finally, the respondents were asked to state their number of years of musicianship as well as their years of formal music education. They were also asked for comments, methods, and insights into how they determined whether a piece was composed by a human or a computer. Overall, the survey was intended as the validation method and the basis for the success metrics of this study. Results and Analysis Overall, the results of creating the computer composer were promising and yielded complete compositions. Figures 6 and 7 below show examples of computer compositions generated by the algorithms and systems developed throughout this study.
To validate the extent to which human-like composition had been achieved, a survey was administered to test the perceived human-likeness of the computer compositions. Table 1 lists the variables retrieved from the survey, and Table 2 shows the results for each variable together with relevant averages for some survey items. The survey variables are:

A1: respondent's years of musicianship; A2: respondent's years of formal music education
B: average years of perceived computer-composer musicianship (section 1 of survey)
C: human-ness believability of specimens in section 1
D: average perceived percentage of specimens composed by computer
E1: perceived human-ness of computer-composed piece 1
E2: perceived human-ness of computer-composed piece 2
E3: perceived human-ness of human-composed piece
E4: perceived human-ness of computer-composed piece 3
E5: average perceived human-ness of computer-composed pieces 1-3

(These sections and variables are described previously in section 2.5.) A one-way analysis of variance (ANOVA) found no significant statistical difference in the survey results between groups with more and fewer years of musicianship (variable A1), nor between groups with more and fewer years of formal music education (variable A2); a sketch of this test is given at the end of this section. This led to analysing the entire body of respondents as a whole, rather than the initially intended segmented analysis based on the proficiency and years of musical exposure of the respondents. Perceived Average Number of Years of Musicianship In this first part of the survey, respondents listened to three pieces of music without being informed that they were composed by a computer; they were instead led to believe the pieces were composed by humans of varying proficiency levels and asked to estimate the probable number of years of musicianship of each composer. After the questions in this part but before the second part, the respondents were informed that the compositions were composed by a computer and were then asked whether they had believed the pieces to be human-composed before finding this out. The general average of perceived years of musicianship was 3.16 years; that is, the participant group on average perceived the computer composer as a composer with around 3.16 years of musical experience. Considering that the algorithmic composition could study only the direct musical input, a smaller average was expected, and the 3.16-year result was a welcome surprise. It may be noted that 28 of the 155 participants rated the computer composer as having 0 years of musical experience. This section was deliberately designed without informing the perceivers of the existence of the computer composer, following the tests conducted by Colby, Hilf, and Weber [13], who assert that a Turing test should be conducted without informing the perceivers of the computer. The average of 3.16 indicates that the perceivers were unable to discern that the composer of these pieces was not human. Figure 8 shows the distribution of the perceived number of years of musical experience of the computer.
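A sketch of the group comparison is given below, assuming SciPy's one-way ANOVA; the response values are hypothetical placeholders, not the actual survey data.

```python
from scipy.stats import f_oneway

# Compare a survey response across two respondent groups (e.g. more vs.
# fewer years of musicianship, variable A1). Values are hypothetical.
more_experienced = [3.0, 4.0, 2.5, 3.5, 3.0]
less_experienced = [3.5, 2.5, 3.0, 4.0, 2.5]

stat, p = f_oneway(more_experienced, less_experienced)
print(f"F = {stat:.3f}, p = {p:.3f}")   # p > 0.05 -> no significant difference
```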
Perceived Percentage of Computer Composition The second part of the survey tasked respondents with listening to three musical samples composed partly by a human and partly by a computer. Respondents assessed each composition to determine how much of it they perceived to be composed by a computer and how much by a human. The expected answers were a percentage between 0 and 100 of how much of the composition was constructed by a computer. All samples used were in fact made 50% by a computer and 50% by a human, and respondents were not informed of this equal composition. This section was constructed to determine whether there is a noticeable bias towards either computer or human when all samples are equally composed by both. In summary, an average skewing towards 0 would mean a general perception that the pieces were composed more by a human, while a score skewing towards 100 would mean a perception that the pieces were composed more by a computer; 50 would mean a general perception that they were in fact composed 50-50 by computer and human. The actual result for this section was 57.51, which is close to the expected value of 50, with a slight skew towards the perception that the pieces were composed by a computer. Figure 9 illustrates the results and skew of this section of the survey. Human-like Composition Believability In the third part of the survey, respondents listened to musical samples composed by either a human or a computer and assessed whether each given piece was composed by a human or automatically generated. This part aims to gauge the general believability of the human-ness of the composed pieces. Of all the musical samples presented, only one was composed by an actual human; all the rest were computer generated. Item scores range from 1 to 5, where 1 is human and 5 is computer; individual item score averages are listed below, and it is important to note that the middle value is 3. Figure 10 illustrates the histogram and skew of the results of this section. This study interprets each result under the convention that a score of 1 represents full belief that the piece is human-made (equivalent to 0% perceived artificial), 5 represents full belief that the piece is computer-made (equivalent to 100% perceived artificial), and 3 represents complete uncertainty (equivalent to 50% perceived artificial). All individual scores of the computer-generated pieces were very similar and very near the uncertainty point of 3 (50%). This is noteworthy because it means that, statistically, for these three compositions there is uncertainty or vagueness in the perception of their human-ness. The general average of the computer-generated pieces was 3.2, equivalent to a 55% belief that they were artificially constructed. A clearly human piece was also presented, and even this human-composed piece did not achieve a score close to 1, landing nearer the middle ground between uncertainty and full certainty. This further supports the existence of a range in the perception of whether a piece is composed by a human. Given the statistical results of this section, it is interpreted that assessing whether these pieces of music were composed by a human or a computer is difficult.
This vagueness or indistinguishability is the primary requirement for satisfying the Turing Test, and it can therefore be said that, statistically and within this study's scope, the system has passed the Turing Test. Results Summary The results of the study and its several parts show that while composing human-like music requires a complex set of components, it is indeed possible with a consistent level of success. Borrowing concepts from imagery and pattern recognition to detect and extract large and micro musical objects proved a reliable way of finding the defining features and sections of musical pieces. The mathematical descriptor of musical similarity developed for this study showed that a similarity metric is useful in constructing composition algorithms that rely on comparison with a real human composition. Finally, the evolutionary approach to designing the algorithm proved effective given the many modules and requirements identified as necessary in developing a computer composer. Statistical evidence showed that the computer composer was deemed near human-like, with an ambiguous result when humans were asked to assess whether the composed pieces were human or not. This, together with the average perception that the computer composer had around 3 years of musical experience, shows that the study succeeded in producing a proof of concept of a computer composer that does not need a large dataset to learn human-like composition. Conclusion This study has shown that a computer is capable of composing musical pieces in perceived real time, immediately after musical input. It also takes away some of the perceived importance of large datasets for musical modelling through machine learning. Moreover, human-like composition was shown to be possible even in a context of limited composition time. The study has succeeded in creating a foundational model for algorithmically composing music using only direct learning, without the need for big datasets, that can generate pieces quickly without sacrificing the human-ness often associated with music composition. This was done by combining multiple pre-existing technologies and concepts with novel ideas and approaches created directly for this study. The study exhibits an approach that applies concepts from visual pattern recognition and objectness to sound and music, and it shows a novel approach to assessing musical distance and dissonance using geometry and mathematical formulae. The study accomplished its objectives of implementing a musical object extraction method and then using a similarity metric to compare the similarity of these objects. This was achieved by combining multiple methods and concepts: objectness and saliency methods based on visual pattern recognition were used to extract musical objects, while a musical similarity measure was developed using mathematical concepts such as taxicab geometry as well as the novel Density Degree Theory. The study also accomplished its objective of producing a proof-of-concept system that algorithmically composes music based on these extracted musical objects and approximates the style of human-composed pieces. An application was constructed that can accept live musical input and immediately respond with an algorithmic musical output.
Overall, all objectives were accomplished. The results showed that algorithmically generating music in real time can produce musical pieces that pass the Turing test of ambiguity. The original test designed by Alan Turing only required the computer system to fool judges 30% of the time, a value that was arbitrarily selected; more modern tests have set this requirement at 50%. This study's results come close to this 50% believability in two sections of testing. One test section was conducted with half-and-half pieces composed by a computer and a human, and scored a 57.51% average, a 7.51% skew towards computer-like. Another test section had participants gauge whether a piece was human- or computer-made and scored 55%, a 5% skew towards computer-made. The first section of testing also asked participants to rate the perceived number of years of musicianship of the composers of selected pieces; these pieces were composed by a computer, but participants were not informed of this until later in the test. This followed the pattern of the tests conducted by Colby, Hilf, and Weber [13] and yielded an average of 3.16 years of perceived musical experience for the computer composer. Only 28 (18%) of the 155 participants scored the computer composer as having 0 years of musical experience; in light of the Colby et al. study, it can be seen as a success that only 18 percent of participants considered the pieces to be composed by someone with no musical experience. Overall, the scientific development and results of this study show very promising outcomes for this model of algorithmic musical computation. However, this study does not claim perfect emulation of human-like composition, as more dimensions need to be explored before it can come close to perfect human-ness, and it concedes that models that make use of big datasets come closer to human emulation. It is important to state that the objective of this study is not to remove the requirement for big-data-based learning and composition, but rather to complement and supplement these models by providing a different perspective on and method for musical composition that is independent of big data. Lastly, it is hoped that the methods, results, and insights discussed in this study provide knowledge that will be useful in, and serve as a basis for, future algorithmic music generation studies.
6,024.2
2021-05-26T00:00:00.000
[ "Computer Science", "Art" ]
Application of Biodegradable Polyhydroxyalkanoates as Surgical Films for Ventral Hernia Repair in Mice The cytotoxicity and biosafety of poly-(3-hydroxybutyrate) (P3HB) and poly-(3-hydroxybutyrate-co-3-hydroxyvalerate) (P3HBV) films were investigated in vitro using 3T3 fibroblast cells and in vivo through subcutaneous implantation of the films in mice. The in vitro tests revealed that endotoxin-free P3HB and P3HBV films allowed cell attachment and growth, and conditioned media soaked with the films showed no significant inhibitory or cytotoxic effects on cell proliferation. The in vivo absorption test showed that both the P3HB and P3HBV films degraded slowly, with P3HB degrading more slowly than P3HBV. Applying a P3HB film in hernia repair demonstrated a favourable outcome: the film corrected the abdominal ventral hernia by inducing connective tissue and fat ingrowth and exhibited an extremely slow rate of degradation. Furthermore, the P3HB film demonstrated the advantage of lower intestinal adhesion at the ventral hernia site compared with the P3HBV and commercial PP films. Introduction Various types of synthetic and biologic films have been developed as meshes for hernia repair [1,2]. The key characteristics of an ideal mesh include favourable repair ability and excellent biocompatibility [1,3]. Currently, the most widely used materials are polypropylene- (PP-) based meshes of various weights, filament sizes, pore sizes, and weaving structures [4][5][6]. Such PP meshes offer the advantage of high burst strength; they are thus highly rigid, and mesh migration and shrinkage seldom occur during repair. Although hernia recurrence is an uncommon complication when PP meshes are used, they tend to induce a chronic inflammatory response to fibrous and tissue ingrowth into the mesh architecture. This provides an opportunity for the adhesion of enteric loops onto the mesh implant, resulting in progressive hardness, abdominal pain, or severe consequences depending on the extent of adhesion [1,3,5,7]. Certain biologic meshes that have been introduced into clinical use for hernia repair are produced from human- or animal-derived tissue grafts [3,8]; they are manufactured from collagen-rich tissues such as ligament and dermal grafts from which the cellular contents have been completely removed. The resulting acellular collagen matrix provides an alternative device for hernia repair. Compared with synthetic meshes, biologic meshes are softer, and the matrix scaffold can gradually be replaced by the patient's own tissue [8,9]. This solves the foreign-body sensation and reaction problems that arise from using PP meshes. Nevertheless, the burst strength of biologic meshes is relatively low and they are more fragile, especially under a sudden increase in abdominal pressure. This lack of integrity and the problem of recurrence have been reported as clinical complications in the use of these biological materials [3,8]. An ideal mesh that meets all of the requirements for permanent hernia repair, namely, high biocompatibility, low foreign-body sensation, and long-term biomechanical support, remains to be found.
Polyhydroxyalkanoates (PHAs) are emerging materials for producing medical devices [10][11][12][13]. PHAs are natural products of microorganisms and serve as carbon and energy storage materials under conditions of limited nitrogen and phosphorus sources. Most PHAs exhibit the structures of aliphatic polyesters composed of carbon, oxygen, and hydrogen. With various carbon backbone lengths and a broad range of functional groups, PHA polymers comprise more than 150 constituents featuring diverse characteristics [14]. Among the various PHA polymers, poly-(3-hydroxybutyrate) (P3HB) and poly-(3-hydroxybutyrate-co-3-hydroxyvalerate) (P3HBV) are the best-known biomaterials, characterized by low bioreactivity and a slow biodegradation rate [12,[14][15][16]]. Although most efforts in manufacturing these PHA polymers have been undertaken in the plastics industry, emerging needs in tissue engineering have prompted numerous studies on the medical application of PHA polymers. The medical use of P3HB and P3HBV has been extensively investigated in the development of various types of surgical material. A P3HB-based patch was tested for preventing adhesion between the heart and sternum in heart surgery [17][18][19]; in both animal and human studies, P3HB patches greatly lowered the incidence of postoperative adhesions observed over short and long terms [17,18]. P3HB and P3HBV sheets have been used as bridging and guiding materials for regenerating tissues such as nerve fibres [20] and bone [21,22]. A P3HB patch was used as a scaffold material in repairing atrial septal defects in calves, demonstrating evident regeneration of the atrial septal wall with gradual degradation of the patch by macrophages [23]. Other tested applications of P3HB- and P3HBV-based materials include cardiovascular stents, barrier films for dental treatment, and microparticulate carriers for drug delivery [11][12][13][14]. However, no study has investigated P3HB or P3HBV films for hernia repair. In this study, we assessed the application of P3HB and P3HBV films in ventral hernia repair. The biocompatibility of the P3HB and P3HBV films was tested in vitro, and their bioabsorption and hernia repair abilities were evaluated in vivo over a duration of 9 months. Materials and Methods 2.1. Polymer Films. P3HB (98%) was produced using recombinant Escherichia coli XL1 Blue in our laboratory [24]. P3HBV (5 wt% 3-hydroxyvalerate) was purchased from Sigma-Aldrich (St. Louis, MO, USA). The film production procedure was described in our previous study [24]. In brief, the polymer films were prepared by a chloroform casting method: the polymer solution (1.63 wt%) was poured into a Φ10 cm glass petri dish and dried in a fume hood to obtain thin polymer films, which were then dried at ambient temperature in the hood for 3 d to remove residual chloroform. The physicochemical properties of the P3HB and P3HBV polymer films are listed in Table 1. To ensure that the films were endotoxin-free, they were further treated with 35% hydrogen peroxide (H2O2) at 80 °C for 1 h [25]. The endotoxin concentration was determined using a ToxinSensor Chromogenic LAL Endotoxin Assay Kit (GenScript, Piscataway, NJ, USA). The films were washed with sterilized water and dried in a laminar flow cabinet for experimental use. The commercially available PP film (BARD Soft Mesh) was purchased from Davel Inc.
(Warwick, RI, USA) and had a pore size of 1-1.5 mm. Direct Contact Cytotoxicity Test. The growth of 3T3 cells on the films was visualized using crystal violet staining and quantified from microscopic images; the final result was averaged from 3 independent experiments. Indirect Contact Cytotoxicity Test. The films were soaked in culture medium and gently shaken in an orbital shaker at 37 °C. Simultaneously, cells were seeded in a 6-well plate at a density of 4 × 10^5 per well in culture medium. Twenty-four hours later, the conditioned medium was harvested and centrifuged at 200 ×g for 3 min, and the supernatant was preserved. The culture medium of the 3T3 cells was aspirated, and the cells were washed twice with PBS buffer. The conditioned medium was then added to the indicated wells, and the cells were incubated in a humidified 37 °C CO2 incubator for another 48 h. To observe cell growth, the cells in each well were fixed with a 1% formalin solution and stained with crystal violet. Cell density was quantified from microscopic images, and the final result was averaged from 3 independent experiments. Animals. The animal experiments in this study were approved by the Committee on Laboratory Animal Research of the Far Eastern Memorial Hospital, Taiwan, and conducted according to the guidelines of the Laboratory Animal Center of the Far Eastern Memorial Hospital. Five-week-old male balb/c mice weighing 15-20 g were used for the experiments. The mice were provided food and water ad libitum on a 12:12 h day-night cycle (lights on from 0600 to 1800), with room temperature maintained at around 20 °C. Bioabsorption Test. Five-week-old male balb/c mice were purchased from the National Laboratory Animal Center, Taipei, Taiwan. The mice were weighed and anesthetized with ketamine/xylazine (100 mg/kg and 10 mg/kg, respectively). The polymer films were cut into 1 cm × 1 cm pieces of approximately 2.5 mg each and implanted into the subcutaneous region of the abdominal wall. At various time points (0.5, 1, 3, and 9 months), the implanted films together with the adjacent skin and muscular tissue were excised and fixed in 10% formalin for 24 h. The fixed tissue was then embedded in paraffin and sliced into 5 μm thick sections. After hematoxylin and eosin (H&E) staining, the sections were used to evaluate histologically the tissue response to the various films. The remaining film thickness was measured using ImageJ software (developed by the United States National Institutes of Health); a small sketch of this read-out follows below. At each time point, 6 mice were used in each group.
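The bioabsorption read-out reduces to a simple percentage; the sketch below shows the computation with hypothetical thickness measurements (the study's actual values appear in the Results).

```python
# Remaining film thickness (measured on H&E sections with ImageJ) as a
# percentage of the initial thickness. All values are hypothetical.
initial_um = 60.0                                      # assumed initial thickness (um)
measured_um = {0.5: 59.0, 1: 57.5, 3: 54.0, 9: 48.0}   # months -> mean thickness (um)

for month, thickness in measured_um.items():
    print(f"{month} mo: {100.0 * thickness / initial_um:.1f}% remaining")
```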
Five-week-old male balb/c mice were purchased from the National Laboratory Animal Center, Taipei, Taiwan. The mice were weighed and anesthetized with Ketamine/Xylazine (100 mg/kg and 10 mg/kg, resp.). The polymer films were cut into 1 cm × 1 cm pieces, approximately 2.5 mg each. Ventral hernias were introduced by creating a 0.5 cm × 0.5 cm puncture on the muscle layer of the ventral abdominal wall. The film was then used to cover the puncture, and the 4 corners of the film were sutured to the muscle tissue around the hernia region. At various time points (0.5, 1, 3, and 9 mo), the mice were laparotomized through the midline, and the adhesion with adjacent organs was observed. The repair of the abdominal hernia was photographed, and the implanted films with the adjacent skin and abdominal muscle layer were excised and fixed in 10% formalin for 24 h. The fixed tissue was then embedded in paraffin and sliced into 5 μm thick sections, which were stained with H&E to evaluate histologically the residual film areas and nearby tissue response. At each time point, 6 mice were used in each group.

Immunohistochemistry for Macrophages. The tissue sections were deparaffinized and rehydrated, followed by autoclaving in a pH 6.0 citrate buffer (121 °C, 15 min) for antigen retrieval. Endogenous peroxidase activity was blocked using the DAKO peroxidase-blocking reagent (DAKO, Denmark). The primary anti-CD68 monoclonal antibody (ab31630, Abcam, Cambridge, UK) was used at a dilution of 1:50 and incubated with the sections at 4 °C overnight. A rabbit anti-mouse secondary antibody (DAKO) was then added, and the slides were incubated at room temperature for 1 h. The color-developing agent 3,3′-diaminobenzidine (DAB) (Abcam) was added and incubated with the sections for 10 min. The sections were then counterstained with hematoxylin to promote visualization of the tissue.

Statistical Analysis. Student's t-test was used to evaluate the differences among the various groups, and statistical significance was accepted only when p < 0.05.

Results

The physical properties and thermal behavior of the P3HB and P3HBV films used in this study are listed in Table 1, as reported in our previous work [24].

3.1. Preparation of Endotoxin-Free P3HB and P3HBV Films. Bacterial endotoxins, which are found in the outer membranes of gram-negative bacteria such as E. coli, are members of a class of phospholipids called lipopolysaccharides. Because P3HB is produced using E. coli, the endotoxin must be removed before the films can be used in medical applications. The endotoxin concentration permitted in medical devices approved by the FDA is 0.005 EU/g [26]. The original P3HB powder can contain an endotoxin concentration as high as 21,100 EU/g. Once a chloroform casting procedure is used to produce a membranous form, the endotoxin concentration within the P3HB film can be reduced to 1790 EU/g. To achieve FDA-approved standards, we further eliminated traces of endotoxin in the films by using a traditional H2O2-soaking method [25], successfully reducing endotoxin levels to less than 0.001 EU/g. Similarly, the endotoxin within the P3HBV film (460 EU/g) was removed to a level under the detection threshold (less than 0.005 EU/g).

In Vitro Cytotoxicity Assays for P3HB and P3HBV Films.
To investigate the biocompatibility of the P3HB film, we performed the in vitro cytotoxicity assays specified in the ISO 10993-5 standards, including direct and indirect contact tests [27]. The mouse fibroblast cell line 3T3 was used in both tests. The P3HB and P3HBV films before and after endotoxin removal were tested for comparison.

For the direct contact test, the films were placed in the bottom of the well, and 3T3 cells were directly seeded onto the films with the culture medium. Forty-eight h later, the cells on the films and in the control well (no film) were fixed and stained with crystal violet. Cell growth on each film was photographed (Figure 1(a)), and the relative cell density was calculated by dividing the number of nuclei on each film by that in the control well. Few cells were able to grow on the endotoxin-containing P3HB and P3HBV films; the relative cell densities on these films were 0.72% ± 0.35% and 8.86% ± 3.06%, respectively (Figure 1(b)). By contrast, the growth of 3T3 cells was much more abundant on the endotoxin-free P3HB and P3HBV films; the relative cell densities on the endotoxin-free P3HB and P3HBV films were 37.84% ± 2.99% and 60.60% ± 7.76%, respectively, both of which were significantly higher than those on the endotoxin-containing P3HB and P3HBV films (p = 0.00003 in the P3HB group and 0.0004 in the P3HBV group) (Figure 1(b)). This result demonstrated that the 3T3 cells were able to attach and grow on both the P3HB and P3HBV films that underwent endotoxin removal.

For the indirect contact test, 3T3 cells were incubated with the conditioned media, which had previously been soaked with the indicated films for 48 h. After another 48 h of incubation, cell growth in each well was photographed (Figure 2(a)), and the relative cell density was calculated by dividing the number of nuclei in each well by that in the control well. Either no cells or only infrequent viable cells were observed growing in the medium conditioned on the endotoxin-containing P3HB and P3HBV films (0% and 29% ± 9.68% relative cell density, resp.) (Figure 2(b)). By contrast, comparable growth rates were observed in the conditioned medium from the endotoxin-removed P3HB and P3HBV films (92.27% ± 8.38% and 96.78% ± 8.16% relative cell density, resp., both of which were significantly higher than those from the endotoxin-containing P3HB and P3HBV films (p = 0.00004 in the P3HB group and 0.0007 in the P3HBV group)) (Figure 2(b)). This implied that neither the endotoxin-free P3HB nor the P3HBV film released toxic factors significantly harmful to 3T3 cells.

Figure 2: Indirect contact assay using P3HB, P3HBV, and polypropylene materials with mouse 3T3 cells. (a) The film was soaked in a culture medium and gently shaken in an orbital shaker at 37 °C for 24 h. Twenty-four hours later, the conditioned medium was harvested and used to culture 3T3 cells, which were incubated in a humidified 37 °C CO2 incubator for another 48 h. To observe the cell growth, cells in each well were fixed with 1% formalin solution and stained with 1% (w/v) crystal violet solution. (b) Six images in different fields were photographed from each sample, and the nuclei were counted using ImageJ software. The cell number in the control well was set as 100%. The bars represent the mean values of three independent experiments, and the error bars represent the standard deviation. *Statistically significant difference between endotoxin-containing and endotoxin-free films.

Collectively, these in vitro contact assays demonstrated that, after endotoxin removal, the P3HB and P3HBV films
exhibited low cytotoxicity and high biosafety in facilitating the attachment and growth of fibroblast cells.

In Vivo Tissue Response and Bioabsorption of P3HB and P3HBV Films. To investigate the in vivo cellular interaction of the P3HB and P3HBV films, the endotoxin-free P3HB film, P3HBV film, and PP mesh were implanted into the subcutaneous region of mice. The implant size of the film was 1 cm × 1 cm. At various time points (0.5, 1, 3, and 9 mo) after implantation, the implants, adjacent skin, and abdominal wall regions were excised and fixed. Using tissue sectioning and H&E staining, we observed the tissue response to each film and the change in film thickness over time. The P3HB and P3HBV implants showed lamina structures with a lining of neutrophils surrounding the film at 0.5 to 1 mo. At 3 mo, the P3HB and P3HBV films were already enveloped by layers of connective tissue, and the tissue layer had thickly accumulated, especially in the P3HBV films. At 6 and 9 mo, neutrophil infiltration was significantly reduced (Figure 3(a)). The PP film filaments were observed as circles in cross-section and were soon completely covered by compact layers of connective tissue after implantation (0.5 mo). The neutrophil infiltration surrounding the fibers was evident from 0.5 to 6 mo and only slightly lessened at 9 mo (Figure 3(a)). These observations demonstrated that the P3HB and P3HBV films elicited a tissue response similar to that of the PP film but to a lesser extent and for a shorter duration. The P3HBV film began dividing into smaller fragments at early time points (from 1 mo), and the number of fragments significantly increased afterwards. By contrast, the P3HB film remained structurally intact and broke into only a few pieces after 3 mo (Figure 3(a)). To compare the in vivo bioabsorption of the P3HB and P3HBV films, the relative percentages of film remnants at various time points were calculated by measuring the film thickness in cross-section. As shown in Figure 3(b), the thicknesses of both films slowly decreased over time. The relative thicknesses of the P3HB film were 81.82% ± 5.95% at 3 mo and 74.14% ± 9.82% at 9 mo. The relative thicknesses of the P3HBV film were 72.37% ± 3.94% at 3 mo and 53.18% ± 4.46% at 9 mo (Figure 3(b)). The absorption of the P3HBV film was relatively faster than that of the P3HB film (p = 0.0475). According to these results, we concluded that both the P3HB and P3HBV films are bioabsorbable and differ in degradation speed and pattern.
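The thickness comparison above can be illustrated with a short script. The sketch below is illustrative only: the thickness values and the assumed initial thickness are hypothetical placeholders, not the study's ImageJ measurements, and the two-sample Student's t-test mirrors the statistical analysis described in the methods.

```python
# Minimal sketch of the film-thickness comparison (assumed workflow,
# not the authors' actual ImageJ/statistics pipeline).
from scipy import stats

# Hypothetical per-mouse remaining film thickness at 9 mo (um), n = 6 per group
p3hb_9mo = [152.0, 139.5, 148.2, 131.0, 160.4, 143.8]
p3hbv_9mo = [108.3, 99.7, 112.5, 95.1, 104.9, 101.2]

initial_thickness = 200.0  # assumed thickness at implantation (um)

# Express each measurement as a percentage of the initial thickness,
# mirroring the relative thicknesses reported in Figure 3(b)
p3hb_rel = [100.0 * t / initial_thickness for t in p3hb_9mo]
p3hbv_rel = [100.0 * t / initial_thickness for t in p3hbv_9mo]

# Two-sample Student's t-test, as in the paper's statistical analysis;
# significance is accepted at p < 0.05
t_stat, p_value = stats.ttest_ind(p3hb_rel, p3hbv_rel)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```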
In Vivo Ventral Hernia Repair Ability of the P3HB and P3HBV Films. To investigate the ventral hernia repair ability of the P3HB and P3HBV films in vivo, we designed a ventral hernia model in mice by excising a 0.5 cm × 0.5 cm region from the abdominal muscle wall to create a perforated hernia. The endotoxin-free films were used to cover the perforated region and were sutured using the point-fixed method. All of the animals survived the surgical operation and exhibited no signs of infection or rejection. Most crucially, no hernia protrusion was observed during the 9 mo experimental period. At various time points (0, 0.5, 1, 3, and 9 mo) after surgery, mice from each group were euthanized, and the status of hernia fixation and tissue adhesion was recorded. Tissues from the repair site, including the film, adjacent skin, and abdominal wall, were harvested for fixation, tissue sectioning, and H&E staining. At 0.5 mo, we observed that both the P3HB and P3HBV films became transparent and that blood vessels grew in the hernia-covered region (Figure 4(a)). At 1 and 3 mo, the films were covered with more tissue ingrowth, and the transparent windows of the films were significantly smaller. At 9 mo, the whole films were completely embedded in the growing tissue and vessels (Figure 4(a)).

Postoperative adhesion is one of the critical factors in determining ideal hernia repair materials. Each film was graded according to adhesion strength from 0 to 3, where 0 = "no adhesions", 1 = "adhesions that can be freed easily with gentle tension", 2 = "adhesions that can be freed with blunt dissection", and 3 = "adhesions that require sharp dissection to separate" [28]. As shown in Figure 4(b), the average adhesion strength of the different films at various time points was compared. The results indicated that the P3HB film induced less adhesion than the P3HBV film or PP mesh at the studied time points (p < 0.05), whereas P3HBV had an adhesion grade similar to that of the PP mesh (p > 0.05).

Histological Examination of the Hernia Repair Site. We observed microscopic changes at the hernia repair site based on H&E staining and assessed the extent of the chronic inflammatory response by using CD68-positive macrophage staining at 9 mo (Figure 5(a)). Consistent with the bioabsorption test (Figure 3), the P3HBV film had a higher degradation rate than the P3HB film. The inflammatory response to the P3HB film was limited, and only a thin lining of connective tissue was under the film, with a thick layer of fat cells facing the peritoneal cavity. In contrast to what the bioabsorption test at the same time point indicated, the inflammatory response induced by the P3HBV film was neither absent nor reduced. Highly degraded fragments of the P3HBV film were surrounded by lymphocytes and a thick connective tissue layer at the repair site. The number of macrophages was low, and they were located only near the breaking fragments (Figure 5(a)). The PP film also resulted in a heavy inflammatory response and even exhibited a granuloma formed at the repair site (Figure 5(a)).

We compared tissue thicknesses at the repair site for the various material groups. The P3HB, P3HBV, and PP groups exhibited thicknesses of 316.85 ± 119.87 μm, 843.10 ± 173.61 μm, and 929.13 ± 163.74 μm, respectively, at 9 mo (Figure 5(b)). These data showed that hernia repair using the P3HB film caused less tissue thickening compared with using the P3HBV film (p = 0.0001) or PP mesh (p = 0.00002).
Discussion

Adhesion-related complications after ventral hernia repair surgery have been a key problem in the use of PP meshes. Biologic meshes do not demonstrate consequential immunological interaction with tissue but have exhibited relatively weaker repair strength compared with synthetic meshes [3,7,9]. The search for next-generation materials offering the advantages of both biologic and synthetic meshes is ongoing. New polymers and mixed-type synthetic materials have been tested and evaluated for their antiadhesive qualities [29][30][31][32][33]. In this research, we demonstrated that PHA-derived materials are a potential choice for hernia repair. PHA materials have previously been tested for medical use in surgical repair, particularly regarding adhesion prevention [34]. This is the first study to use PHA materials as films for hernia repair. Our results demonstrate that the 2 types of PHA film, P3HB in particular, are qualified for use in hernia repair.

Figure 3: In vivo bioabsorption of endotoxin-free P3HB and P3HBV films. The polymer films were cut into 1 cm × 1 cm pieces and were implanted into the subcutaneous region of the abdominal wall. At different time points (0.5, 1, 3, and 9 mo), the implanted films with the adjacent skin and muscular tissue were excised and subjected to tissue sectioning. (a) Through H&E staining, the sections were used for histologically evaluating the tissue response to the different films. The scale bar represents 400 μm. (b) The remaining thickness of the indicated endotoxin-free film was measured using ImageJ software. Three slides from each tissue sample were observed and photographed, and the film thickness was calculated using ImageJ software. The film thickness at implantation was set at 100%. The plots represent the mean values of three independent experiments, and the error bars represent the standard deviation. *Statistically significant difference between P3HB and P3HBV groups.

The biocompatibility of P3HB and P3HBV has been extensively studied both in vitro and in vivo. Korsatko et al. reported no significant impact of P3HB on the cell growth of mouse fibroblasts [35]. Saito et al. used P3HB film to conduct an inflammatory test on the chorioallantoic membrane of an egg and reported that the polymer film did not cause significant inflammation [36]. Chaput et al. evaluated the cellular response to P3HBV by using a direct contact assay and reported that the solid polymers had a mild effect on cells [15]. Dang tested the cytotoxic effects of an extract of P3HBV by using a mouse fibroblast cell culture and reported that the extract only slightly suppressed cell activity [37]. For in vivo tests, Doyle et al. demonstrated that P3HB scaffolds did not provoke a chronic inflammatory response after implantation in rabbits for up to 12 mo. Chaput et al. observed the tissue response to P3HBV film in sheep for up to 90 wk, revealing an acute inflammatory reaction 1 wk after implantation that was lessened at 11 wk. Furthermore, the films were eventually encapsulated with oriented fiber tissue and a large number of fatty cells [15,38]. Similarly, Gogolewski et al. (1993) monitored the tissue response to P3HB and reported that the fibrous capsule around the polymer appeared thickest at 1 mo and became gradually thinner by 6 mo [39]. These studies have revealed that both P3HB and P3HBV are biocompatible, nontoxic materials that can be considered potential candidates for use in medical devices.
The degradation of P3HB was evaluated in vitro in earlier research [14], revealing that P3HB films degraded extremely slowly both in a phosphate buffer and in human serum at 37 °C and that the film sustained a weight loss of only 5% in the first 6 mo. In our study, the in vivo biodegradation of the P3HB film was faster, and its thickness decreased by 19% and 26% by 3 and 9 mo, respectively (Figure 3(b)). Loss of thickness and structural breakdown occurred earlier in the P3HBV film than in the P3HB film (Figure 3(b)).

The P3HB film exhibited superior performance in eliciting low levels of tissue response and had a low adhesion rate compared with P3HBV and PP (Figures 3(a) and 4(b)). At 9 mo after hernia repair, a thin layer of connective tissue with a layer of abundant fat cells was observed at the repair site. The reason for the high accumulation of fat around the P3HB film was unclear. However, these fat cells may have been a crucial factor contributing to the extremely low adhesion rate of the P3HB film (Figures 4 and 5). Compared with the P3HB film, the P3HBV film demonstrated relatively higher levels of tissue response, possibly because of its higher degradation rate and greater number of fragments. Consequently, P3HBV repair induced a compact and thicker layer of connective tissue. The interface between the connective tissue and the peritoneal cavity was considerably smooth and exhibited fat cells (Figure 5). The adhesion rate of the P3HBV film was slightly lower than that of the PP mesh (Figure 4(b)). The repair site of the PP mesh was irregularly shaped, and the mesh fibers were blended with immune cells, connective tissue, and some membranous structures in the abdominal cavity (Figure 5(a)). The immune response toward all 3 films lasted longer in the hernia repair experiment than in the bioabsorption experiment. This may have resulted from the hernia wound and the interaction with the peritoneal cavity instead of the abdominal wall. Nevertheless, the strongest foreign body reaction, especially that against the PP mesh, resulted in progressive tissue growth. This could be the main reason for its high level of postoperative complications.

Conclusions

The results of this study demonstrate that PHA films, particularly P3HB, are potential materials for hernia fixation. Such films can be modified further, for example by using P3HBV with various percentages of 3-hydroxyvalerate. Alternatively, a dual-layer film could be used in which P3HB faces the peritoneal cavity to lower the adhesion grade, while P3HBV faces the dermis side to promote tissue growth at the repair site. In conclusion, PHA-based films are potent and promising materials for future hernia repair.

Figure 1: Direct contact assay using P3HB, P3HBV, and polypropylene materials with mouse 3T3 cells. (a) The film was placed at the bottom of the indicated well, followed by seeding 3T3 cells into the well with culture medium. Forty-eight hours later, the culture medium was discarded and the cells growing in the empty well (no film control) or on the different films were fixed with 1% formalin solution. The cells were stained with 1% (w/v) crystal violet solution. (b) Six images in different fields were photographed from each sample, and the nuclei were counted using ImageJ software. The cell number in the control well was set at 100%. The bars represent the means of three independent experiments, and the error bars represent the standard deviation. *Statistically significant difference between endotoxin-containing and endotoxin-free films.
Figure 4: P3HB and P3HBV films at the hernia repair site. (a) Images of P3HB and P3HBV films and surrounding tissue at different time points after hernia repair. (b) The adhesion grade of P3HB, P3HBV, and polypropylene (PP) films for hernia repair was evaluated in the different material groups. The bars represent the mean values of 6 mice, and the error bars represent the standard deviation. *Statistically significant difference between the PP and P3HB groups. #Statistically significant difference between the P3HBV and P3HB groups.

Figure 5: Tissue reaction to the P3HB and P3HBV films at the hernia repair site. (a) Histological images of P3HB, P3HBV, and polypropylene (PP) films and surrounding tissue at different time points after hernia repair. The slides were stained using H&E and immunohistochemistry for CD68-positive macrophages. S = skin; P = peritoneal cavity. The scale bars represent different magnification levels indicated below the bottom row. (b) The thickness of the tissue below the film was calculated using ImageJ software. The bars represent the mean values of 6 mice, and the error bars represent the standard deviation. *Statistically significant difference between the PP and P3HB groups. #Statistically significant difference between the P3HBV and P3HB groups.
Kidney Transplant Outcomes in Recipients Over the Age of 70

Background: Patients older than 70 years are the fastest-growing age group of patients requiring renal replacement therapy. This has resulted in a corresponding increase in the number of elderly transplant recipients. We hypothesized that graft survival in this population would be comparable to that seen in the literature on kidney transplant recipients under 70 years of age. Methods: This was a retrospective, single-center review of outcomes of kidney transplant recipients aged ≥70 years. Patients were dichotomized based on whether their allograft originated from a living or deceased donor. Results: A total of 59 recipients aged ≥70 years underwent kidney transplantation. Of these, five (8.5%) were lost to follow-up within the first year post transplant and excluded from the analysis. History of cerebrovascular accident (p = 0.003), coronary artery disease (p = 0.03), postoperative return to the operating room (p = 0.03), and readmission within one year of transplant were predictive of graft loss (p = 0.003). Overall graft survival in our cohort declined from 92.6% at one year to 53.8% at five years. Death-censored graft survival was 100% at one year and decreased to 80.8% at five years. There were no differences seen in patient, graft, or death-censored graft survival based on donor type. Conclusions: Kidney transplant patients over 70 years, as seen in our cohort, had good short-term outcomes. Graft survival is similar to rates seen in younger cohorts, but the decline in this rate over time is steeper in the older age group, possibly due to decreased patient survival. These findings could be validated further in larger multi-center studies.

Introduction

The scientific and technological advances of the last century, together with increasing public health awareness, have increased longevity. This has slowly led to a rising population of older adults who are living significantly longer than their predecessors [1]. It is estimated that by 2060, the median age of the United States population will be 43 (up from 38 today) and that the percentage of the population above 65 years will surpass the percentage that is below 18 [2]. This will significantly affect the health system, as it will bring with it an increased burden of the disease processes that accompany the aging body. One such age-associated burden is a decline in renal function. Chronic kidney disease (CKD) is prevalent in 44% of people above the age of 70 [3], and this number has been increasing due to an increased incidence of type 2 diabetes mellitus (DM) and hypertension (HTN), which lend themselves to the development of CKD. Together, the growing number of elderly patients requiring renal replacement therapy has resulted in their increasing representation on kidney transplant waitlists [3,4]. About half of these patients die while on the waitlist in the United States [5]. Previous studies have shown improved quality of life and a 41% reduction in mortality risk in elderly patients who received a kidney transplant when compared to dialysis-dependent wait-listed patients [3][4][5][6]. Most studies in the past that have addressed outcomes in elderly kidney transplant recipients have pooled all patients above the age of 65 into a single group [6][7][8][9]. A significant decline in kidney function is noted between the ages of 65 and 70 [10].
Thus, there remains a need to address the clinical outcomes in 70+-year-old recipients separately to ascertain the transplant benefit to this sub-population. In this study, we investigated the short- and long-term clinical outcomes of kidney transplant recipients aged 70 years or greater at the time of their transplant.

Materials and Methods

This was a single-center, retrospective cohort study approved by the institutional review board (IRB) of Tulane University, New Orleans, Louisiana, United States (approval number: 2022-1518), and a waiver of informed consent was granted by the IRB. All kidney transplant recipients who were aged 70 years or more at the time of their transplant at Tulane Medical Center, New Orleans, Louisiana between January 1, 2006, and December 31, 2019, were identified using data submitted to the United Network for Organ Sharing (UNOS) [11]. Patients were excluded if lost to follow-up within the first year post transplant (Figure 1).

Data collection and study elements

Donor and recipient characteristics were identified using the site-specific UNOS database [11]. Recipient data were collected from hospital electronic medical records. Data collection was independently done by two authors (JM and ON) and rechecked by a senior author (AV). The variables examined included donor characteristics such as age, sex, race, blood type, BMI, kidney donor profile index (KDPI) [12], terminal creatinine of the donor prior to donation, preservation/cold ischemic time (the time when the organ is preserved in a hypothermic state prior to transplantation into the recipient), and donor status (donation after brain death (DBD)/donation after cardiac death (DCD)). Demographic and clinical descriptors of the transplant recipients, such as age, sex, race, BMI, blood type, education level, waiting list time (waiting time is calculated from the date when the patient starts dialysis or the date the transplant center listed the patient, whichever comes first), comorbidities, polypharmacy (defined as the use of five or more drugs before transplant), functional status at listing (defined as the performance of activities of daily living), anticoagulation/antiplatelet therapy use before transplant, and need for midodrine pre-transplant for hypotension (midodrine is used in dialysis patients to treat hypotension), were surveyed. Operative and perioperative variables collected included warm ischemic time (WIT), hospital length of stay (LOS), 30-day hospital readmission, delayed graft function (DGF, defined as requiring dialysis within seven days post transplant), and postoperative complications including atrial fibrillation, hypotension, myocardial infarction, and return to the operating room. Immunologic characteristics included ABO incompatibility, human leukocyte antigen (HLA) mismatches, HLA-DR mismatch, and the induction and maintenance immunosuppression regimen used. The primary outcomes of interest were patient survival, graft survival, and death-censored graft survival at one, three, and five years following the kidney transplant. Graft failure was defined as a return to dialysis, repeat kidney transplant, or death with a functioning graft. Death-censored graft survival was used to estimate the probability of graft loss only. In the event of death with a functioning graft, the date of death is taken as the date of the last follow-up. The fatal event is thereby handled as a case lost to follow-up, not as a graft failure.
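The censoring convention defined above can be made concrete with a short sketch. The example below uses the lifelines Python library as an assumed tool (the study's analyses were run in SPSS), and the follow-up records are hypothetical, not the study's data.

```python
# Sketch of death-censored graft survival as defined above: death with a
# functioning graft is treated as censoring, not as a graft-failure event.
from lifelines import KaplanMeierFitter

# Hypothetical records: (months of follow-up, graft failed?, died with functioning graft?)
records = [
    (60, False, False),  # alive with a functioning graft at 5 years
    (38, True,  False),  # returned to dialysis at 38 mo -> graft failure event
    (24, False, True),   # died with a functioning graft -> censored at death
    (12, False, False),
    (54, True,  False),
]

durations = [r[0] for r in records]
# Event = graft failure only; death with a functioning graft counts as censoring
events = [1 if failed else 0 for _, failed, _ in records]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="death-censored graft survival")
print(kmf.survival_function_)
```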
Secondary outcomes of interest included perioperative complications, hospital LOS, readmissions, and the incidence of rejection, infection, or cancer in the follow-up period.

Statistical analysis

Patients were dichotomized by donor type (living vs deceased). Descriptive statistics were used to examine the cohorts. An examination of factors affecting one-year and three-year graft survival was also performed, with patients categorized by graft survival or loss at the specified time frame. Categorical values were reported as frequencies, represented as n (%), and examined by Chi-square analysis. Continuous variables were reported as median (interquartile range, IQR25-IQR75) and examined by the Mann-Whitney U test. The level of statistical significance was set at p < 0.05. All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 28.0 (Released 2021; IBM Corp., Armonk, New York, United States).

Results

A total of 54 kidney transplant recipients were included in the analysis, of which 17 (31.5%) had living donors and 37 (68.5%) deceased donors. The overall cohort had a mean age of 72 years and a BMI of 27.1, was predominantly of white race (64.8%), and had a high school education or lower (64.8%) (Table 1). There were no differences between groups in any of these demographic factors. There were more male recipients than female recipients, and a higher proportion of male patients received living donor than deceased donor organs (p = 0.03). The majority of the recipients had blood group A or O, with the remainder being either AB or B type, but these proportions did not differ between cohorts. The most common comorbidity present at the time of listing was HTN, followed by DM, coronary artery disease (CAD), and a history of cerebrovascular accident (CVA); these proportions did not differ between the cohorts. Most (87%) recipients were taking five or more pharmacological agents at the time of transplant, and half of the total cohort was on anticoagulation/antiplatelet therapy. Most recipients (46.3%) were documented as having a functional status at listing of 90%, while only 3.7% had a functional status below 70%. The distribution of the degree of functional ability did not differ between cohorts. Living donor recipients had a significantly shorter waiting list time compared to deceased donor recipients (p < 0.001). Donor characteristics are detailed in Table 2.

Table 3 details patient outcomes and immunosuppression regimens. Basiliximab was the preferred induction immunosuppressive agent in the cohort and was utilized in 50% of patients, followed by alemtuzumab (37%) and Thymoglobulin (7.4%). The proportions of patients receiving these regimens did not differ between groups. Most (81.5%) were on calcineurin inhibitors and mycophenolate for maintenance immunosuppression. Mycophenolate was discontinued in 18.5% of recipients owing to mycophenolate intolerance/side effects. There were no differences in human leukocyte antigen-DR (HLA-DR) mismatch based on donor type, with 50% overall having a mismatch of 1, 24.1% a mismatch of 2, and 25.9% a mismatch of 0. Patient outcomes based on donor type were compared (Table 3). Of the postoperative complications examined, atrial fibrillation was seen in 8% of the cohort, hypotension in 4%, and myocardial infarction in 2%.
A total of seven patients (14% of cases) required a return to the operating room within that hospital stay, 26.9% were readmitted to a hospital within 30 days of the transplant, and 57.7% had at least one hospital readmission after 30 days but before one year after transplant. There were no differences in readmission rates based on donor organ type. DGF was observed at a higher rate in the deceased donor group (24.3%) than in the living donor group (11.8%), but this difference was not significant (p = 0.29). The median LOS postoperatively was six days and was comparable between the living donor and deceased donor groups.

The incidence of infection in the first year post transplant was 51.9%, and this decreased to 39.1% after one year. There were no differences in the incidence of infectious complications in the first year post transplant between the deceased and living donor groups. The most frequently occurring pathogen types were bacterial, seen in 59.3% of patients (commonly urinary tract infections), and viral, seen in 48.2% of patients (cytomegalovirus (CMV), BK virus (BKV)). Cancer was seen in 3.8% of the total cohort within one year of transplant, and in 33.3% after one year. The incidence of rejection in the first year post transplant was 9.6%. Acute cellular rejection (ACR) was the most common type of rejection (three patients), followed by combined ACR and antibody-mediated rejection (AMR) in one patient.

Rates of one-, three-, and five-year graft survival, death-censored graft survival, and patient survival are presented in Figures 2-4. The one-, three-, and five-year rates of patient survival, graft survival, and death-censored graft survival decreased over each time period assessed.

Factors impacting graft survival were then examined. Specifically, recipient characteristics, comorbidities, perioperative complications, and the incidence of rejection, cancer, and infections were correlated with one- and three-year graft survival (Table 4). Patients with a history of cerebrovascular disease were observed to have a higher (50% vs 6%) incidence of graft loss at one year (p = 0.003) and at three years post transplant (p = 0.04). History of CAD also affected graft function, with 46.7% of those with graft loss at three years having a history of CAD (p = 0.03). Of those with graft failure at one year, 75% had at least one episode of infection and 33.3% had a diagnosis of cancer within the first year post transplant. Cancer within one year after transplant adversely affected one- and three-year graft survival, with an incidence of 33.3% (p = 0.006) and 15.4% (p = 0.02) in patients who incurred graft loss. Postoperative incidence of atrial fibrillation, hypotension, or myocardial infarction and the length of the initial hospital stay did not adversely affect one-year or three-year graft survival. Patients who required a return to the operating room after their transplant fared worse than those who did not: 50% of patients who had graft failure at one year had been taken back to the operating room postoperatively, compared with 10.9% of those with a functioning graft (p = 0.03). Readmission after 30 days but within the first year post transplant was seen in 86.7% of patients with a failed graft at three years, whereas only 40.6% of those with surviving grafts were readmitted (p = 0.003). DGF was seen in 75% of those with a failed graft within the first year (p = 0.005).
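For illustration, the categorical comparisons reported above can be reproduced with a Chi-square test on a 2 × 2 frequency table, per the statistical analysis section. The counts below are hypothetical placeholders constructed for the sketch, not the study's actual data.

```python
# Sketch of a Chi-square comparison of a categorical risk factor against
# one-year graft status (hypothetical 2x2 counts).
from scipy.stats import chi2_contingency

#                  graft lost   graft surviving (at one year)
table = [[2, 2],   # history of CVA: yes
         [2, 48]]  # history of CVA: no

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```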
Discussion

Kidney transplantation is the preferred modality for the treatment of end-stage renal disease (ESRD) owing to improved survival benefits and quality of life [13,14]. However, this benefit diminishes with increasing recipient age [13]. Owing to the variability in the age cut-offs used in studies that have examined outcomes in elderly kidney transplant recipients, it remains unclear at what age the predicted transplant advantage is lost. The present study aimed to describe outcomes after kidney transplantation in recipients 70 years and older.

Elderly waitlisted dialysis-dependent candidates have a 50% mortality rate while waiting for a kidney transplant, and this has been on the rise since 2019 [15,16]. There have been only a handful of studies that have looked at outcomes in transplant patients over the age of 70 years [14,[17][18][19]. Rao et al. in 2007 reported a 41% lower overall risk of death in transplanted candidates when compared to their counterparts who remained on the waitlist [20]. Heldal et al. in 2010 reported lower mortality in transplanted 70+-year-old dialysis-dependent recipients in comparison to those that stayed on dialysis [18]. In our study cohort, we witnessed excellent one-year graft survival of 92.6% and patient survival of 94.3%, which is comparable to other reported literature [13,[16][17][18].

There is limited data regarding transplant outcomes in this subpopulation of elderly patients compared to their younger counterparts. In 2004, Fabrizii et al. reported a higher five-year patient survival amongst kidney transplant recipients in the age group of 50-59 years when compared to patients older than 60 [21]. Huang and colleagues at UCLA in 2010 stratified over 31,000 patients by age and reported two-year graft survival at 85% for the 60-69 years group, 81% for the 70-79 years group, and 69% for the 80 years and older group [22]. More recently, in 2021, Doucet et al. compared outcomes in transplant recipients between the ages of 18 and 69 years and those 70 years and above. They found that patients in the 18-69 years group who received a deceased donor organ had a graft survival of 93% at one year, 88% at three years, and 82% at five years, whereas patients aged 70 years and above who received a deceased donor organ had a graft survival of 93% at one year, 80% at three years, and 74% at five years [23]. Lemoine et al., in 2019, reported patient survival of 68.1% at five years [19]. Boesmueller and colleagues in 2011 reported a similar five-year patient survival of 67%; in that study, the above-70 group had a 27% lower graft survival rate at five years in comparison to the younger age group [18]. Our study witnessed the same trend of decreased graft and patient survival at the three-year and five-year follow-ups. Three-year and five-year patient survivals in our cohort were 76.6% and 62.9%, respectively, and three-year and five-year graft survivals were 69.4% and 53.8%, respectively. One-year, three-year, and five-year patient and graft survivals in living donor recipients were comparable to those in deceased donor recipients in our study cohort.

As of 2021, there are over 90,000 patients in the United States waiting for a kidney transplant, with 13 patients dying every day on the waitlist [16]. The one-, three-, and five-year death-censored graft survival in our study was 100%, 91.9%, and 80.8%, respectively (Figure 4).
Death with a functioning allograft in the older recipient transplant population may explain the stark difference between the long-term graft survival and death-censored graft survival rates. This observation raises the concern of whether elderly transplant recipients are denying the transplantation chances of younger recipients who are expected to have a greater life expectancy. Aging can be a reliable proxy for stronger predictors such as functional status, physiologic organ reserve, or comorbidity; however, an age cut-off as the rationale not to transplant can raise several ethical concerns.

Studies have reported higher perioperative surgical complications in the elderly [24]. In our cohort, 14% needed to be taken back to the operating room and 26.9% had a readmission within 30 days of the procedure; 57.7% of recipients had a hospital readmission in the first year post transplant. The post-transplant incidence of infection in our study, measured as any infection within one year after transplant, was 51.9%. Similar studies have reported widely varying infection rates, depending on whether they accounted for all infections or severe infections only [18,19,23,25]. Other comparative studies, done mostly in the 2000s, looked at infectious death rates and found that older transplant patients tended to do significantly worse than younger recipients [17,26], with deaths being more often due to opportunistic agents.

The incidence of malignancy at one year and beyond one year post transplant was 3.8% and 33.3%, respectively, in our study. This is comparable to prior studies, where the incidence of malignancy increases with time after transplant [19,27]. The post-transplant incidence of rejection in our study was 9.6%, with no differences based on donor status. Huang et al., in 2010, compared outcomes in three cohorts of ages 60-69, 70-79, and above 80 years and reported no difference in rates of rejection (4.7% versus 4.2% versus 4.5%, respectively) between the three groups [22]. Similarly, Boesmueller and colleagues found a comparable rate of acute rejection when comparing recipients older than 70 years with younger recipients (15.8% versus 17.8%) [18], as did Bharagava et al., who compared kidney transplant recipients older than 60 years with younger recipients (11.3% versus 10.2%) [24]. Owing to the increased infectious complications noted in earlier studies, centers may opt to wean immunosuppression in elderly recipients, and this may lead to variable incidences of reported rejection in different study cohorts [18,22,24]. In our study group, the only recipient comorbidities predictive of early graft loss were a history of a cerebrovascular event and CAD. DGF and return to the operating room were the only perioperative factors contributing to early graft loss.

There are several limitations to the study, which must be addressed. First, the retrospective study design has well-known limitations. Second, this was a single-center study, which allowed consistency in the way patients were evaluated, waitlisted, transplanted, and managed; however, generalizability is limited and the present findings may not be applicable to other institutions or patient populations. The small sample size introduces the possibility of a Type II error. There is a need for other similar studies that look at outcomes in kidney transplant patients solely above 70 years of age in order to confirm that our results are not limited to our institution.
Owing to the retrospective nature of the study, post-transplant changes in functional independence could not be ascertained. Lastly, there are other potential factors, such as frailty syndrome, dementia, delirium, and malnutrition, that may impact graft survival and that were not accounted for in this analysis.

Conclusions

Our study reports acceptable short-term outcomes in a sample of kidney transplant recipients over the age of 70 years. Graft survival is similar to the rates seen in studies that looked at younger cohorts, but the decline in this rate over time is steeper in the older age group and may possibly be due to decreased patient survival. Thus, the expected transplant advantage in terms of long-term patient survival was not realized in the elderly patients included in our study. Considering the shortage of kidney donors, anticipated life expectancy has to be weighed against expected outcomes. More studies on quality-of-life benefits and post-transplant changes in functional independence are needed to validate appropriate resource utilization among older recipients.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Tulane University, New Orleans, Louisiana, United States issued approval 2022-1518. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Effect of National Senior Certificate maths on the pass rate of first-year Engineering physics

How to cite this article: Govender J, Moodley M. Effect of National Senior Certificate maths on the pass rate of first-year Engineering physics. S Afr J Sci. 2012;108(7/8), Art. #830, 5 pages. http://dx.doi.org/10.4102/sajs.v108i7/8.830

There has been much controversy about the mathematics results of the 2008 National Senior Certificate examinations – the first to be written by pupils following the outcomes-based curriculum. This article examines the impact of the new high school mathematics curriculum on the performance in physics by first-year Engineering students at the University of KwaZulu-Natal. The first-year physics results of the Engineering students who wrote the 2008 National Senior Certificate (NSC) examinations were compared with the physics results of the Engineering students of the previous 4 years who wrote the Senior Certificate Examinations (SCE). Analysis of variance was used to compare the average physics marks of the NSC and SCE groups. Correlation analysis was performed to determine the relationship between performance in high school mathematics and performance in first-year physics in Engineering for both the 2008 NSC group and the 2007 SCE group. The results showed a lower physics pass rate for the NSC students compared with that of the SCE students. There was also a significant difference in the average marks obtained in physics between the NSC students and the SCE students. The new high school mathematics curriculum has fallen short in providing essential skills and techniques for students who wish to study physics at university. Furthermore, the high school mathematics results of the NSC students are an indication of considerable grade inflation.
Introduction

Prior to 2008, the Senior Certificate Examination (SCE) was the culmination of South African high school education, the results of which were used for entrance into tertiary institutions. In 1998, a new outcomes-based curriculum was introduced and the first National Senior Certificate (NSC) examination set on this curriculum was written in 2008. Unlike in the past, when subjects were offered at both higher and standard grades, all the subjects in the NSC are offered at one level. Even though the Department of Education has put considerable effort into its implementation, the outcomes-based curriculum has had its fair share of criticism. A major concern amongst tertiary educators in the mathematics and science fields has been the relegation of Euclidean geometry to an optional section of the NSC mathematics syllabus. When the results of the first 2008 NSC examination were released, they were, not surprisingly, greeted with much scepticism. A newspaper article titled 'New maths curriculum does not add up'1 laments the fact that the new mathematics curriculum denies pupils a satisfactory grounding to enable them to pursue post-matriculation studies in mathematics-dominated degrees such as Engineering and Natural Sciences. Taylor, in a newspaper article titled 'It's OBE, but not as it should be',2 argues that although contextualisation (such as calculating the height of a tree) is useful, in order for learners to learn enough trigonometry to study for an Engineering degree, they need to focus on the concepts, equations and graphs that make up the discipline. Smetherham3, in a newspaper article titled 'Varsity students lack essential skills', mentions that of the students who wrote the mathematics tests of the National Benchmarks Tests Project in February 2009, only 7% were found to be academically proficient.

The impact of the NSC curriculum on student performance at universities has been the feature of some recent research articles. Wolmarans et al.4 studied the effect of NSC mathematics on student performance in mathematics in first-year Engineering programmes, whilst Nel and Kistner5 researched the implications of the NSC on access to higher education. In this study, we investigated the influence of the 2008 outcomes-based mathematics curriculum on the physics pass rates of first-year Engineering students at the University of KwaZulu-Natal (UKZN). We first outline the reason for choosing mathematics as a predictor for success in first-year university physics.
The minimum requirements for entry into the BSc Engineering Programme (except Chemical Engineering) offered by UKZN are a C-symbol in both higher-grade mathematics and higher-grade physical science with a total of 33 matriculation points for SCE students, and a Level 6 pass in both mathematics and physical science with a total of 35 matriculation points for NSC students. Physics is a compulsory module in each semester of the first-year curriculum of the BSc Engineering degrees. All Engineering students, except those registered for Chemical Engineering, register for PHYS151, a 16-credit calculus-based physics module, in the first semester. The topics studied in this module are: motion in one and two dimensions, Newton's Laws, work and energy, momentum, rotation of rigid bodies, elasticity, fluid mechanics, periodic motion, mechanical waves, sound, temperature and heat, and the thermal properties of matter. As with all modules in physics, a good grasp of mathematics is essential for students to succeed in this module. Landau, the Nobel prize-winning physicist, often expressed the following sentiment when he advised students wishing to study physics: [acquire good] 'mathematical techniques, that is, the ability to solve concrete mathematical problems'6.

All South African tertiary institutions require a pass in high school mathematics as a prerequisite for entrance into their Science and Engineering faculties. Various studies have been made on the use of mathematics as a predictor of success in first-year university science courses. Eiselen et al.7 undertook a study amongst a set of bridging programme students at the University of Johannesburg to determine how basic mathematical skills acquired at high school can serve as predictors of success in first-semester mathematics. They found that the probability of being successful in first-semester mathematics increased with increasing performance in high school mathematics. Leopold and Edgar8 designed a calculator-free mathematics assessment for second-semester chemistry students at the University of Minnesota. This test consisted of 20 multiple choice questions on logarithms, scientific notation, graphs and algebra, and was administered as a surprise test. The chemistry course grades obtained by the students showed significant correlation with the scores obtained in a subset of the mathematics assessment test. Hudson and Liberman9 used a pretest of computational skills in algebra and trigonometry in an algebra-based introductory physics course at the University of Houston, Texas. This test, together with an instrument to measure abstract reasoning, was used to predict more than 25% of the variance in the final physics grade. Cohen et al.10 randomly chose students from four introductory physics courses at the University of Vermont to correlate their verbal and mathematics scores in the Scholastic Aptitude Tests and their performance in Piagetian tasks with their final course grades. They found that the mathematics score was the most successful in predicting success.

We will show in this article that the mark for mathematics obtained by the 2009 cohort of NSC students enrolled for first-year Engineering at UKZN is not a true representation of their mathematical skills. Whether the NSC students' high mathematics marks are a result of an overly simplified school mathematics syllabus or 'grade inflation' will also be addressed.
Method

Students' performances in the first-semester Engineering physics module, PHYS151, were analysed for the period 2005 to 2009. Only students who had registered for this module for the first time and had written the final examination for this module were included in the sample. In other words, students repeating the module and those coming from access programmes were excluded. The sample of the 2009 cohort of new students included only those who had written their matriculation examination in 2008 (the NSC students). The annual sample size ranged between 250 and 300 students. A comparison was made of the pass rates of these students in the PHYS151 module for the years 2005 to 2009. The average physics marks for the years 2005 to 2009 were compared using analysis of variance (ANOVA, Microsoft Excel). The mathematics marks obtained in the matriculation examinations (hereafter referred to as matric maths marks) by the students in the 2009 sample were correlated with their physics marks obtained in PHYS151.

The 2008 and 2009 cohorts of students were further categorised into four bands according to the quality of their matric maths marks. The number of students in each band for both 2008 and 2009 are shown in Table 1. It must be noted that Band 4 did not apply to 2009 students, because the entrance qualification for entry into the Engineering Faculty is now a minimum of a Level 6 pass in maths (70% to 79%) compared to a minimum of a C-symbol (60% to 69%) for the years prior to 2009. A comparison of the pass rates for each of these bands was made for the years 2009 and 2008. The average marks obtained by each band were compared using ANOVA. Finally, the results of the students from the 2009 sample who had written the second-semester physics module, PHYS152, were then analysed and compared with their matric maths marks. The numbers of students in each band for PHYS152 were: 82 students in Band 1, 74 students in Band 2 and 8 students in Band 3.

Comparison of physics pass rates for the period 2005-2009

The pass rates for first-time students who qualified to write the final examination for the Engineering physics module PHYS151 for the years 2005 to 2009 are illustrated in Figure 1. These pass rates include students who passed after writing the supplementary examinations. Students qualify for the supplementary examination if they obtain a mark between 39% and 50% in the main examination. The average physics pass rate for the years 2005 to 2008 was 67%, whereas the pass rate for 2009 was 55%. It is evident that the physics pass rate of the NSC students (2009) was dramatically lower than those of the SCE students in 2005 to 2008.

Comparison of average physics marks for the period 2005-2009

Data used in the ANOVA of the average mark (expressed as a percentage) in PHYS151 obtained by the cohorts of the years from 2005 to 2008 (SCE students) and 2009 (NSC students) are shown in Table 2. The statistical results of these analyses are shown in Table 3. Because the F-value (2.51205) is less than the critical value Fcrit (2.612641), we concluded that there was no significant difference in the average marks obtained by PHYS151 students during the years 2005 to 2008 (Table 3a). This finding implies that the quality of the SCE students did not change over this period.

When the cohort of 2009 (NSC) students was included in the analysis, the results were somewhat different, as shown in Table 3b. Because the F-value (7.832996) is greater than the critical value Fcrit (2.37833), we concluded that there was a significant difference between the marks obtained by PHYS151 students during 2009 and those obtained in the previous years, 2005 to 2008.
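The ANOVA comparison described above was performed in Microsoft Excel; a scripted equivalent is sketched below using scipy. The mark lists are hypothetical placeholders, not the actual student marks.

```python
# Minimal sketch of the one-way ANOVA comparing average PHYS151 marks
# across cohorts (illustrative data only).
from scipy.stats import f_oneway

marks_2005 = [62, 55, 71, 48, 66]   # hypothetical PHYS151 marks per cohort
marks_2006 = [58, 64, 70, 52, 61]
marks_2007 = [60, 57, 68, 50, 63]
marks_2008 = [65, 59, 72, 54, 60]
marks_2009 = [45, 52, 40, 58, 47]   # NSC cohort

# F > Fcrit (equivalently p < 0.05) indicates a significant difference
F, p = f_oneway(marks_2005, marks_2006, marks_2007, marks_2008, marks_2009)
print(f"F = {F:.3f}, p = {p:.4f}")
```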
First-semester pass rates compared with matric maths mark

The scatter plot in Figure 2 shows the correlation between the matric maths mark and the marks obtained in PHYS151 for the 2009 cohort of students. For illustration purposes, the graph was drawn by binning the average of the PHYS151 marks of all the students who had the same matric maths mark (ranging from 2 to 16 students per point).

The analysis shows that there is a strong statistical correlation between the matric maths mark and the average physics marks obtained by the students (r = 0.875). When a similar analysis was done for the 2008 cohort, the correlation coefficient was 0.865. The first-semester (PHYS151) pass rates for each band (banded according to matric maths mark) for the years 2008 and 2009 are illustrated in Figure 3. It is evident that the most successful students in the 2009 physics examinations, with a pass rate of 85%, were the ones who had obtained matric maths passes of 90% and above. Only 42% of the students who had matric maths passes of between 80% and 89% passed physics in 2009, whilst the physics pass rate for those with matric maths passes between 70% and 79% was a very low 19%. This trend is also evident with the 2008 students, but with significantly higher pass rates in each band. The 2008 students in Bands 1 and 2 had pass rates above 90%, whilst those in Bands 3 and 4 had pass rates above 60%.

Second-semester physics pass rates compared with matric maths mark

Of the 242 NSC students who wrote PHYS151 in the first semester, 168 sat for the examination for the second-semester physics module (PHYS152). These 168 students included those who had failed PHYS151, but who had achieved above 40%, which is the minimum requirement for entry into the second-semester module. The pass rate for the PHYS152 module was 86%. Figure 4 shows the pass rates in PHYS152 for the different bands of matric maths for the 2009 cohort.

The disparities in the pass rates for the three bands in the PHYS152 module were not as great as in the PHYS151 module. This trend is consistent with that seen historically for students doing a second semester of physics after successfully passing the first semester of physics. The numbers of students in each band who passed first-semester (PHYS151) or second-semester (PHYS152) physics are summarised in Table 4. Band 1 (consisting of 82 students) had 10 students who failed PHYS151 in June, and of these 7 passed PHYS152. This result means that 70% of the students failing in June passed at the end of the year. Band 1 also included three students who passed PHYS151 in June but who failed PHYS152. Band 2 (74 students) included 25 students who failed PHYS151 in June, of whom 14 passed at the end of the year (a pass rate of 56%). Band 2 also included four students who passed PHYS151 in June but who failed the second physics module. Band 3 (12 students) included two students who failed PHYS151 in June, one of whom passed at the end of the year (a pass rate of 50%). Band 3 also included one student who passed PHYS151 in June but who failed the second physics module.

Discussion and concluding remarks

In the 4-year period studied, our results show that the pass rate for PHYS151 prior to 2009 averaged 67%, with the average mark being fairly consistent. However, in 2009 - the first year to be undertaken by students who wrote the new NSC examination - there was a dramatic decrease in both the pass rate and the average module mark for PHYS151.
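As an aside on the analysis above, the binning used to construct Figure 2 (averaging the PHYS151 marks of all students sharing the same matric maths mark before correlating) can be sketched as follows; the data and column names are hypothetical placeholders, not the study's records.

```python
# Sketch of the binned correlation between matric maths and PHYS151 marks.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "matric_maths": [70, 70, 75, 75, 80, 80, 85, 90, 90, 95],
    "phys151":      [30, 42, 45, 50, 48, 60, 62, 75, 80, 85],
})

# Average the PHYS151 marks of all students with the same maths mark
binned = df.groupby("matric_maths")["phys151"].mean().reset_index()

r, p = pearsonr(binned["matric_maths"], binned["phys151"])
print(f"r = {r:.3f} (binned), p = {p:.4f}")
```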
As the core of the lecturing staff of PHYS151 has remained fairly stable for the period 2005 to 2009, and the teaching and examining of this module has been reasonably consistent, we can conclude that the new NSC curriculum produced students less prepared for university study than the previous SCE curriculum. This result is also supported by the studies made by Nel and Kistner [5] and Wolmarans et al. [4]. Despite the noble intentions of outcomes-based education to make knowledge more accessible and more relevant to the lives of learners, the NSC mathematics curriculum does not seem to have equipped students with the fundamental skills and techniques necessary for success in post-matriculation studies in the sciences.

The 2008 NSC mathematics results also reinforce the notion of grade inflation, in which the matric marks obtained by the students are not matched by their actual performance. Grade inflation was particularly noticeable in the performance of the 2009 cohort of students who had obtained between 69% and 89% in their NSC mathematics. Only 38% of these students passed PHYS151, whereas 74% of the students of the 2008 cohort in this range passed PHYS151.

Umalusi's analysis found that the 2008 NSC mathematics examination was 72% based on factual knowledge and routine calculations and 28% based on performing complex procedures and solving problems, whereas the earlier SCE mathematics examinations were 52% based on factual knowledge and routine calculations and 48% based on performing complex procedures and solving problems. Based on this finding, a student writing the 2008 NSC mathematics examination would get 80% by correctly answering all the questions based on factual knowledge and routine calculations and only correctly answering 8% of the questions based on complex procedures and solving problems (72% + 8% = 80%). Students who wrote the 2005, 2006 or 2007 SCE mathematics examinations would have needed to get 28% of the questions based on complex procedures and solving problems correct, in addition to answering all the factual knowledge and routine calculation questions correctly, to get 80% (52% + 28% = 80%).

Grade inflation is not unique to this country, and is a phenomenon that is under much discussion in the USA and UK. In 2003, the Programme for International Student Assessment found that 15-year-old students from the USA ranked 23rd out of 43 countries in mathematics [12], although they were performing quite well in their own national standardised tests. In the UK, Smithers [13] found that the percentage of A-symbols in A-level examinations had increased from under 10% in 1995 to over 20% in 2005. Grade inflation is normally associated with falling standards, but can also be explained by any number of factors such as a change in curriculum, improvements in the manner of examining (for example, greater structuring of questions in examinations) and increased use of continuous assessment.

Our results showed a strong correlation between the matric maths mark and the physics mark obtained in PHYS151. This correlation has not diminished in the changeover from SCE to NSC, as can be seen from the per-student correlation coefficients of 0.58 in 2008 and 0.55 in 2009. The matric maths mark continues to be a fairly good predictor of success in first-year Engineering physics.
The students' performance in the second-semester physics module PHYS152 in 2009 showed an improvement from that of the first semester. Although the PHYS152 pass rate for the NSC students was 86%, because only 168 of the original 242 wrote this module, the pass rate based on the initial enrolment was 60% (compared to 55% for PHYS151). Although students generally perform better in the second semester of their first year, having adapted to the rigours of university life, the 2009 cohort's improved performance in physics can be partly attributed to their increased mathematics fluency. The deficiencies in their mathematics knowledge would have been eliminated by their study of the first-semester and second-semester mathematics modules, which are taken as part of the Engineering curriculum.

The findings of this study do not necessarily mean that the new NSC mathematics curriculum is inherently flawed and should be completely overhauled. It is evident that in 2008, the year of the first NSC examinations and the first year in which mathematics was examined as a single grade, the examiners set a much easier paper than those of previous years. However, another factor which could have led to the inflated marks was the exemplar papers. These sample papers were set by the Department of Education to prepare teachers and learners for the new examinations of 2008, and Umalusi's 2008 Maintaining Standards Report [11] found that the cognitive levels of the exemplar mathematics papers were similar to those of the final papers. Further studies over a longer period of time need to be conducted to determine the validity and reliability of the NSC mathematics curriculum. It must also be noted that, at the time of this publication, the NSC mathematics curriculum was undergoing some changes, with some topics such as Euclidean geometry, exponents and logarithms being reintroduced into the curriculum.

FIGURE 1: Pass rates in PHYS151 for the period 2005 to 2009.
FIGURE 2: Average marks in PHYS151 correlated with matric maths mark for 2009.
FIGURE 3: Pass rates in first-semester physics in 2008 and 2009 according to matric maths mark.
FIGURE 4: Pass rates in second-semester physics in 2009 according to matric maths mark.
TABLE 1: The number of students in each band, divided according to their matric maths mark, in 2008 and 2009.
TABLE 2: Average marks obtained in PHYS151 for the period 2005 to 2009 and data calculated in the analysis of variance of these marks.
TABLE 3a: Statistical results of the analysis of variance for the average marks obtained in PHYS151 in 2005 to 2008.
TABLE 3b: Statistical results of the analysis of variance for the average marks obtained in PHYS151 in 2005 to 2009.
TABLE 4: The number of students in each band, divided according to their matric maths marks, who passed first-semester (PHYS151) or second-semester (PHYS152) physics.
4,946.2
2012-07-02T00:00:00.000
[ "Engineering", "Physics", "Education" ]
CRNDE mediated hnRNPA2B1 stability facilitates nuclear export and translation of KRAS in colorectal cancer

Development of colorectal cancer (CRC) involves activation of Kirsten rat sarcoma viral oncogene homolog (KRAS) signaling. However, the post-transcriptional regulation of KRAS has yet to be fully characterized. Here, we found that the colorectal neoplasia differentially expressed (CRNDE)/heterogeneous nuclear ribonucleoprotein A2/B1 (hnRNPA2B1) axis was notably elevated in CRC and was strongly associated with poor prognosis of patients, while also significantly promoting CRC cell proliferation and metastasis both in vitro and in vivo. Furthermore, CRNDE maintained the stability of hnRNPA2B1 protein by inhibiting E3 ubiquitin ligase TRIM21-mediated, K63 ubiquitination-dependent protein degradation. The CRNDE/hnRNPA2B1 axis facilitated the nuclear export and translation of KRAS mRNA, which specifically activated the MAPK signaling pathway, eventually accelerating the malignant progression of CRC. Our findings provide insight into the regulatory network for stable hnRNPA2B1 protein expression and the molecular mechanisms by which the CRNDE/hnRNPA2B1 axis mediates KRAS nucleocytoplasmic transport and translation, underscoring the potential of hnRNPA2B1 as a promising biomarker and therapeutic target for CRC. By hindering hnRNPA2B1 from binding to the E3 ubiquitin ligase TRIM21, whose mediated ubiquitin-dependent degradation was thereby inhibited, CRNDE protected the stability of hnRNPA2B1's high protein expression in CRC. Supported by the high level of the oncogenic molecule CRNDE, hnRNPA2B1 bound to KRAS mRNA and promoted KRAS mRNA nuclear export to enter the ribosomal translation program, subsequently activating the MAPK signaling pathway and ultimately accelerating the malignant progression of CRC.

INTRODUCTION

Colorectal cancer (CRC) is one of the most common solid malignancies and the second leading cause of cancer-related mortality worldwide [1]. Due to the delay in diagnosis and the lack of effective therapeutic targets [2], chemotherapy and targeted therapies have not fundamentally led to the desired 5-year survival outcomes for CRC patients [3]. Therefore, it is imperative to further study the molecular mechanisms involved in the development of CRC to find more reliable therapeutic strategies for CRC.
Heterogeneous nuclear ribonucleoproteins (hnRNPs), a principal class of classical RNA-binding proteins (RBPs), occupy a considerable proportion in eukaryotic cells and serve as key participants mediating the entire cycle of nucleic acid metabolism [4,5]. The long non-coding RNA (lncRNA)/hnRNP interaction network has been known to play significant roles in diverse biological processes involved in cancer initiation and development [6,7]. Recent emerging insights into the interplay between lncRNAs and RBPs in cancer have also led to a clearer understanding of the value of lncRNA/RBP interactions in exploring novel ideas for cancer therapy [8,9]. As a core member of the hnRNPs, heterogeneous nuclear ribonucleoprotein A2/B1 (hnRNPA2B1) has been reported to be involved in RNA transcription, splicing, trafficking, stability and translation through lncRNA/hnRNPA2B1 interactions [10]. For example, Linc01232/hnRNPA2B1 promotes pancreatic cancer metastasis via regulating the alternative splicing of A-Raf [11], miR503HG/hnRNPA2B1 inhibits hepatocellular carcinoma progression by reducing the stability of p52 and p65 mRNA [12], and NEAT1/hnRNPA2B1 facilitates fatty acid metabolism through increasing RPRD1B mRNA stability in gastric cancer [13]. Meanwhile, some studies have noted the biological contribution of lncRNA/hnRNPA2B1 to CRC progression, such as MIR100HG/hnRNPA2B1 [14], H19/hnRNPA2B1 [15] and RP11/hnRNPA2B1 [16]. However, the role of the colorectal neoplasia differentially expressed (CRNDE)/hnRNPA2B1 axis in CRC and how CRNDE modulates hnRNPA2B1 are less well understood.

Kirsten rat sarcoma viral oncogene homolog (KRAS) is a well-known small GTPase. Approximately 40% of CRC patients harbor activating missense mutations in KRAS, and they tend to have a poorer prognosis [17,18]. Unfortunately, KRAS has been deemed a challenging therapeutic target and was once even considered "undruggable" [19], so the treatment of KRAS-mutant CRC patients remains fraught with difficulties. Additionally, it is worth mentioning that aberrant activation of KRAS is responsible for dysregulation of the RAS/MAPK pathway, which is a pivotal and classical cascade response and contributes to the malignant biological behavior of cancer cells [20]. CRNDE and hnRNPA2B1 have both been reported as activators of the MAPK signaling pathway in cancer progression [11,21,22]. Nevertheless, how the MAPK signaling pathway is regulated by the CRNDE/hnRNPA2B1 axis remains to be elucidated.

In this paper, we elucidated the critical oncogenic roles of the CRNDE/hnRNPA2B1 axis in CRC. CRNDE protected hnRNPA2B1 from TRIM21-mediated K63 ubiquitin-dependent degradation. Subsequently, CRNDE cooperated with hnRNPA2B1 to induce nuclear export and translation of KRAS mRNA, which then activated the MAPK signaling cascade. Our study provides strong support for CRNDE/hnRNPA2B1 as a tumor driver in CRC and a promising therapeutic target for CRC treatment in the future.

Patient specimen

Human CRC tissues and corresponding adjacent noncancerous tissues were collected from the surgical resection of CRC patients at the Affiliated Cancer Hospital of Nanjing Medical University. The specimens were quickly frozen and stored in liquid nitrogen. The diagnosis of CRC was confirmed based on clinical manifestation and pathological examination. None of the patients had received neoadjuvant therapy.
RNA immunoprecipitation (RIP) assay

RIP assays were carried out with the Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore, MA, USA) following the manufacturer's manual. Retrieved RNAs were purified and detected by qPCR to confirm the signals. The relative levels of immunoprecipitated RNAs were normalized to the corresponding input RNA level.

SUnSET assay

To study the effect of hnRNPA2B1 on mRNA translation, a nonradioactive method based on puromycin, called the SUnSET assay, was applied according to the protocols reported previously [25]. In detail, the cells to be tested were incubated with puromycin at a final concentration of 10 μg/mL for 10-30 min, after which the total protein lysates were extracted and aligned, and subsequently western blot was performed using an anti-puromycin monoclonal antibody to measure relative changes in protein synthesis.

Statistical analysis

The quantitative data independently collected in triplicate were presented as mean ± standard deviation (SD). Appropriate statistical methods were applied with GraphPad Prism 7.0 software (GraphPad Software, Inc.). The Student's t-test and one-way analysis of variance were used to calculate differences between groups, the chi-square test was used to evaluate the relationship between hnRNPA2B1 and the clinicopathological characteristics in CRC cases, and the log-rank test was used for survival analysis. Values of P < 0.05 were considered statistically significant.

RESULTS

hnRNPA2B1 as an oncogene is closely associated with poor outcomes in CRC

To fully understand the clinical significance of hnRNPA2B1 in CRC, the expression level of hnRNPA2B1 was obtained from measurements of CRC specimens (Fig. S1A, B), cell lines (Fig. S1C, D) and the TCGA database (Fig. S1E), which showed that hnRNPA2B1 was significantly upregulated in CRC. Moreover, patients with deeper tumor invasion presented higher hnRNPA2B1 expression (P = 0.015) (Table S1). Furthermore, increased expression of hnRNPA2B1 was also detected in a CRC TMA by IHC analysis (Fig. S1F, I). According to our TMA data, higher hnRNPA2B1 expression was confirmed to be strongly correlated with more malignant pathological grade (Fig. 1A, P = 0.0099), deeper tumor invasion (Fig. S1G, P = 0.0440), more severe lymph node metastasis (Fig. S1H, P < 0.0001), more advanced TNM stage (Fig. 1B, P < 0.0001) (Table 1), and a worse survival burden of CRC (Fig. 1C, P = 0.0403). Thus, we demonstrated that upregulated hnRNPA2B1 was closely associated with clinicopathological characteristics and survival prognosis in CRC.

To investigate the biological functions of hnRNPA2B1 in CRC, stable hnRNPA2B1-knockdown or -overexpression cell lines were constructed (Fig. S2A, H). Knockdown of hnRNPA2B1 significantly inhibited CRC cell proliferation, invasion and migration (Fig. S2B-G), while ectopic expression of hnRNPA2B1 dramatically enhanced these malignant behaviors (Fig. S2I-N). Considering that hnRNPA2 and hnRNPB1 differ by 12 amino acids in the N-terminal region, they were suspected of possibly having distinct biological features [26]. We observed that the reintroduction of hnRNPA2 or hnRNPB1 restored the malignant biological behaviors of hnRNPA2B1 knockout (KO) cells, but no statistical difference was found between the two isoforms (Fig. 1D, S3A-C). The similar oncogenic functions of hnRNPA2 and hnRNPB1 in CRC were thus demonstrated for the first time. Consistent with the findings in vitro, we observed that the growth of subcutaneous tumors of nude mice in the sh-hnRNPA2B1 group was significantly slower (Fig. 1E, F), and the tumors were lighter (Fig. 1G) and smaller (Fig. 1H) than those in the control group.
Meanwhile, the nude mice injected with KO cells showed a decrease in lung metastasis (Fig. 1I-K). However, no significant differences were captured in the weight of the nude mice (Fig. S3D, E). Taken together, these results provided comprehensive evidence that hnRNPA2B1 could significantly contribute to the tumor growth and metastasis of CRC cells in vitro and in vivo, which suggested that hnRNPA2B1 was a clinically crucial oncogene in CRC.

CRNDE promotes hnRNPA2B1 protein stability

The lncRNA/hnRNP interaction pattern is a key form of regulation of various biological programs involved in cancer progression that cannot be ignored. We thus speculated whether the dysregulation of hnRNPA2B1 in CRC was controlled by lncRNAs. Therefore, the differentially expressed lncRNAs in CRC were analyzed with the GEO [27] and TCGA databases [28], and the hnRNPA2B1-interacting lncRNAs were identified with ENCORI [29] and POSTAR3 [30]. We turned our attention to CRNDE, the only enriched and significantly upregulated lncRNA in several databases (Fig. 2A, B), which has been named colorectal neoplasia differentially expressed since its discovery and is a well-acknowledged carcinogenic lncRNA in CRC [31]. Moreover, the hnRNPA2B1 protein level was positively correlated with the level of CRNDE (P = 0.024) (Fig. 2C). The results of RNA pull-down (Fig. 2D) and RIP assays (Fig. 2E) unveiled that hnRNPA2B1 indeed bound to CRNDE. Next, based on the predicted secondary structure of CRNDE, a series of CRNDE truncations were constructed (Fig. S4A), and the 401-800 nt region of CRNDE was identified as the major interacting domain (Fig. S4B). These results showed that hnRNPA2B1 was a bona fide interacting partner of CRNDE in CRC cells.

Notably, CRNDE was found to positively affect the protein expression of hnRNPA2B1 (Fig. 2F, S4C) rather than its mRNA level (Fig. 2G), while hnRNPA2B1 did not cause changes in CRNDE expression (Fig. 2H). We also observed that the protein stability and half-life of hnRNPA2B1 were significantly repressed by CRNDE knockdown (Fig. 2I, S4D). In addition, the decreased hnRNPA2B1 protein was slightly re-upregulated by treatment with the proteasome inhibitor MG132 (Fig. 2J). The ubiquitination of hnRNPA2B1 was significantly enriched after CRNDE knockdown (Fig. 2K). To ascertain that hnRNPA2B1 was modified with polyubiquitination, a set of ubiquitin mutants was used in which either only the K63 or K48 Lys residue was retained, or only the K63 or K48 Lys residue was mutated, and eventually K63 rather than K48 polyubiquitin was clearly detected (Fig. 2L). Moreover, hnRNPA2B1 was selectively polyubiquitinated with K63 but not K48 ubiquitin (Fig. S4E). Together, these data indicated that CRNDE protected hnRNPA2B1 from K63 ubiquitin-dependent protein degradation. K63-linked polyubiquitin usually functions as a regulator of proteins or signaling pathways, instead of mediating proteasome-dependent degradation, but is connected to some self-degradative mechanisms, such as autophagy [32]. In order to explore the specific mechanisms involved, we additionally introduced the specific proteasome inhibitor bortezomib, as well as the autophagy inhibitors chloroquine and ammonium chloride (NH4Cl). Our results revealed that bortezomib did not affect the K63-linked polyubiquitination of hnRNPA2B1 (Fig. S4F).
However, chloroquine and NH4Cl obviously blocked the ability of CRNDE knockdown to induce the degradation of hnRNPA2B1, whereas bortezomib and MG132 performed only weakly (Fig. S4G). These findings suggested that CRNDE was likely to stabilize the hnRNPA2B1 protein level primarily by obstructing the K63 ubiquitin-dependent autophagic degradation pathway.

CRNDE stabilizes hnRNPA2B1 by preventing TRIM21-mediated ubiquitination

To further investigate the molecular mechanism by which CRNDE stabilized hnRNPA2B1 via K63 polyubiquitination, hnRNPA2B1 immunoprecipitation (IP) and mass spectrometry (MS) were performed (Fig. S5A). The results indicated that TRIM21 was one of the top-ranked E3 ubiquitin ligases (Table S2). We then demonstrated the interactions between hnRNPA2B1 and TRIM21 using endogenous and exogenous co-IP immunoblots (Fig. 3A, B, S5B). We found that TRIM21 knockdown markedly increased the abundance of hnRNPA2B1 (Fig. 3C), enhanced the interaction of hnRNPA2B1 with CRNDE (Fig. 3D, E), and dramatically decreased the ubiquitination level of hnRNPA2B1 (Fig. 3F). However, TRIM21 neither bound to CRNDE (Fig. 3D, S5C) nor influenced the level of CRNDE (Fig. 3G). Meanwhile, neither hnRNPA2B1 nor CRNDE conversely affected TRIM21 expression (Fig. 3H, I). Moreover, knockdown of TRIM21 attenuated the elevation of K63 polyubiquitination of hnRNPA2B1 induced by silencing CRNDE (Fig. 3J), and overexpression of TRIM21 remarkably increased K63-linked polyubiquitination of hnRNPA2B1 and decreased the hnRNPA2B1 protein level (Fig. S5D), which indicated that TRIM21 was a key mediator of CRNDE's protection of hnRNPA2B1 from K63 ubiquitin-dependent degradation. Additionally, it was worth noting that the enhancement of the K63 ubiquitination level of hnRNPA2B1 by TRIM21 was independent of the proteasome system (Fig. S5D). Meanwhile, chloroquine and NH4Cl treatments could reverse the effect of TRIM21 on the degradation of hnRNPA2B1 protein, which suggested that the TRIM21-mediated K63 ubiquitin-dependent autophagy pathway might indeed be the primary process regulating hnRNPA2B1 (Fig. S5E). Intriguingly, silencing CRNDE restored the binding between hnRNPA2B1 and TRIM21 (Fig. 3J), while exogenous overexpression of CRNDE attenuated their interaction (Fig. 3K). Therefore, these results strongly demonstrated that CRNDE maintained the protein stability of hnRNPA2B1 by preventing its interaction with TRIM21 and inhibiting TRIM21-mediated K63 polyubiquitination, thereby impeding the process of ubiquitin-dependent protein degradation.

hnRNPA2B1 is required for CRNDE regulation of CRC cell proliferation and metastasis

To further elucidate the effect of CRNDE/hnRNPA2B1 on CRC progression, we first confirmed the specific upregulation of CRNDE in CRC (Fig. S5F) and its negative correlation with favorable prognosis (Fig. S5G). Subsequently, we found that the inhibition of CRC cell proliferation and metastasis resulting from CRNDE knockdown was reversed by hnRNPA2B1 upregulation (Fig. 4A-D), and the depletion of hnRNPA2B1 weakened the promotion of CRC malignant biological properties caused by increased CRNDE (Fig. 4E-H). These data suggested that hnRNPA2B1 was a critical link in the function of CRNDE and that the CRNDE/hnRNPA2B1 axis was an important driver of CRC progression.
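The group comparisons in these experiments rely on the tests named under Statistical analysis (two-tailed t-test, one-way ANOVA, chi-square and log-rank), which the study ran in GraphPad Prism. Below is a minimal SciPy sketch of two of those tests; every count and measurement in it is a hypothetical placeholder, not data from this paper.

```python
# Minimal sketch of two of the tests named in the Methods: a chi-square
# association test and a two-tailed t-test. All values are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: hnRNPA2B1 expression (rows: low/high) against a
# clinicopathological feature (columns: shallow/deep invasion).
table = np.array([[30, 12],
                  [18, 27]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Hypothetical triplicate relative-expression values for two groups.
control = np.array([1.00, 1.05, 0.95])
knockdown = np.array([0.42, 0.50, 0.47])
t, p_t = stats.ttest_ind(control, knockdown)

print(f"chi-square: p = {p_chi2:.4f}; t-test: p = {p_t:.4f}")
```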
CRNDE/hnRNPA2B1 axis regulates the MAPK pathway by targeting KRAS

Previous studies have reported that hnRNPA2B1 is inextricably associated with the MAPK signaling pathway in cancer development [15,22]. Moreover, KRAS, the core upstream regulator of the MAPK pathway [33], has recently been found to interact with hnRNPA2B1 in PDAC cells, but the exact mechanisms underlying hnRNPA2B1-mediated KRAS-MAPK signaling remain to be elucidated. We found that the protein levels of KRAS, p-ERK1/2 and p-P38 were decreased after inhibition of CRNDE and hnRNPA2B1 (Fig. 5A), and conversely increased with their elevation (Fig. 5B), which indicated that the KRAS/MAPK pathway was the downstream signaling cascade of the CRNDE/hnRNPA2B1 axis. However, it is noteworthy that neither hnRNPA2B1 nor CRNDE altered KRAS mRNA levels (Fig. 5C, D), and KRAS knockdown did not affect the expression of hnRNPA2B1 and CRNDE in turn (Fig. 5E-G). These results demonstrated that hnRNPA2B1 regulated KRAS at the post-transcriptional level.

Furthermore, the ectopic expression of KRAS impaired the inhibitory effect of hnRNPA2B1 depletion on proliferation, invasion and migration of CRC cells (Fig. 5H-L). To clarify whether the role of the CRNDE/hnRNPA2B1/KRAS axis in CRC is KRAS mutation-dependent, we introduced the KRAS wild-type CRC cell lines RKO and HT29. The results showed that silencing hnRNPA2B1 and CRNDE similarly downregulated KRAS protein levels in RKO and HT29 (Fig. S6A, D) and inhibited cell proliferation and metastasis (Fig. S6B, C, S6E, F). These results suggested that the CRNDE/hnRNPA2B1 axis controlled the MAPK pathway by regulating KRAS protein expression, in a manner not dependent on KRAS mutations.

CRNDE cooperates with hnRNPA2B1 to facilitate KRAS mRNA nuclear export and translation

We next aimed to clarify the post-transcriptional mechanisms by which hnRNPA2B1 affected KRAS. Previously, hnRNPA2B1 was reported to interact directly with KRAS proteins as a partner, without regulating KRAS expression, in pancreatic ductal adenocarcinoma (PDAC) [33]. However, we did not detect any KRAS protein signal in the immunoprecipitated samples of Flag-hnRNPA2B1 (Fig. 6A), implying that hnRNPA2B1 might carry distinct functional missions in different cancer types. Since hnRNPA2B1 is a well-recognized classical RBP that is a key driver switch for target molecule splicing, translocation and translation, we sought to investigate whether hnRNPA2B1 regulated the translation of bound KRAS mRNA.

Firstly, we confirmed the protein-RNA binding relationship between hnRNPA2B1 and KRAS (Fig. 6B, C). Subsequently, knockdown or overexpression of CRNDE revealed that the binding between hnRNPA2B1 and KRAS was respectively weakened (Fig. 6D, F) or strengthened (Fig. 6E). Moreover, hnRNPA2B1 and CRNDE did not regulate KRAS splicing (Fig. 6G, S7A). Meanwhile, we also found that there was no binding between CRNDE and KRAS RNA (Fig. 6H, I). Based on these results, we hypothesized that the CRNDE/hnRNPA2B1 axis promoted KRAS protein expression by translational control of KRAS mRNA.

To gain further insight into this mechanism, SUnSET assays were employed, and the results unveiled that hnRNPA2B1 strongly influenced global protein synthesis (Fig. 6J, S7B). Next, puromycin-labeling assays were utilized to monitor the synthesis of nascent KRAS protein and, as expected, a significant reduction in the puromycin labeling of KRAS was shown following hnRNPA2B1 deletion (Fig. 6K).
Furthermore, we used polysome profiling and qRT-PCR to examine the distribution of endogenous hnRNPA2B1-targeted KRAS mRNA in the ribosome fractions to quantify the proportion that was translated. We found that the relative distribution of KRAS mRNA shifted from the polysome to the sub-polysome fraction in hnRNPA2B1-silenced cells (Fig. 6L). These results demonstrated that hnRNPA2B1 facilitated the translation of KRAS mRNA.

Intriguingly, we unexpectedly detected that CRNDE promoted hnRNPA2B1 nucleocytoplasmic translocation (Fig. 6M). We thus speculated that hnRNPA2B1 regulated nuclear transport of KRAS mRNA and subsequently provided translational control of KRAS. The mRNA distribution of KRAS following hnRNPA2B1 depletion or elevation was then examined, and the results showed that knockdown of hnRNPA2B1 reduced cytoplasmic aggregation of KRAS mRNA (Fig. 6N), while conversely overexpression of hnRNPA2B1 boosted KRAS mRNA nuclear export (Fig. 6O). Concurrently, we observed that overexpression of CRNDE expedited hnRNPA2B1-KRAS complex translocation to the cytoplasm (Fig. 6P, Q). Collectively, our data indicated that the CRNDE/hnRNPA2B1 axis enhanced nuclear export and the subsequent translational control of KRAS mRNA in CRC cells.

DISCUSSION

CRC involves aberrant expression of numerous gene regulatory networks, making it difficult to understand and attack this most common and deadly of human malignancies [34,35]. However, the critical molecules modulating CRC progression are still largely unknown. Here, we revealed a model for nuclear export and translational control of KRAS expression mediated by the CRNDE/hnRNPA2B1 oncogenic axis in CRC cells, and identified a novel E3 ligase, TRIM21, regulating K63 ubiquitin-dependent degradation of hnRNPA2B1. hnRNPA2B1 plays a vital role in a variety of diseases including amyotrophic lateral sclerosis [36,37], pulmonary arterial hypertension [38], obesity [39] and innate immune response [40], and particularly in cancer, where hnRNPA2B1 has been recommended as a candidate marker for cancer screening [7]. hnRNPA2B1 was identified to distinguish early-stage lung cancer with a sensitivity of 84.8% in brushings, 80.8% in biopsies and 72.2% in serum [41,42]. Interestingly, the intracellular localization of hnRNPA2B1 was found to alter during the progression of hepatitis virus infection.
Fig. 2 CRNDE binds to hnRNPA2B1 and protects its protein stability in CRC. A LncRNAs differentially expressed in CRC were screened in the GSE9348 dataset and the TCGA database at P < 0.01 and with a fold change > 2, and were then intersected with hnRNPA2B1-bound ncRNAs obtained from the online databases ENCORI and POSTAR3. CRNDE and LINC00294 were enriched. B Comparison of the differential expression of CRNDE and LINC00294 in CRC in TCGA and various GEO databases, as well as their binding scores to hnRNPA2B1 in ENCORI and POSTAR3. C Correlation analysis between expression levels of hnRNPA2B1 and CRNDE in CRC patients (R = 0.46, P = 0.024, n = 24). D RNA pull-down assays were performed using biotin-labeled CRNDE and its antisense, followed by western blot analysis. E RIP assays were performed using an hnRNPA2B1-specific antibody and IgG, and the co-precipitated RNA was subjected to qRT-PCR to measure the enrichment of CRNDE. n = 3 independent biological replicates. F Western blot detected the expression changes of hnRNPA2B1 protein in SW480 and DLD1 cells with stable CRNDE knockdown or overexpression. The effects of CRNDE (G) and hnRNPA2B1 (H) alterations on each other's RNA levels were examined by qPCR. I Western blot detection of hnRNPA2B1 in CRNDE-silenced cells treated with CHX (20 μg/mL) for the indicated times. n = 3 independent biological replicates. J CRNDE knockdown cells were treated with MG132 (25 μmol/L) for 6 h and then the hnRNPA2B1 levels were tested by western blot. K Immunoprecipitated hnRNPA2B1 in extracts of CRC cells with stable CRNDE deletion, DUB inhibitor WP1130 (5 μM for 6 h) treatment, or USP33 knockdown was immunoblotted with an anti-ubiquitin antibody. WP1130 and si-USP33 were employed as the positive controls. L Ubiquitin plasmids with either only K48 or K63 retained, or with only K48 or K63 mutated, along with their wild-type ubiquitin controls, were transfected into CRC cells. Ubiquitin levels in hnRNPA2B1-IP samples were analyzed by immunoblotting. A two-tailed unpaired Student's t-test and Pearson correlation analysis were used for statistical analysis, respectively. **P < 0.01, ***P < 0.001, n.s., not significant. Data represent mean ± SD.

The lncRNA/hnRNPA2B1 interaction network has been proposed to play a non-negligible role in CRC, but none of these studies illuminated the cause of the upregulation of hnRNPA2B1. Some studies have reported the transcriptional and post-transcriptional regulation of hnRNPA2B1 in cancer: c-Myc [50] and BRCA1 [51] promoted hnRNPA2B1 transcription, while activation of Fyn [52] and Nm23-H1 [53] stabilized hnRNPA2B1 protein expression.
Fig. 3 CRNDE stabilizes hnRNPA2B1 by preventing TRIM21-mediated ubiquitination. Co-IPs for hnRNPA2B1 (A) and TRIM21 (B) were performed in the indicated CRC cells. IgG was used as a negative control. The co-IPs were analyzed by western blot using anti-hnRNPA2B1 or anti-TRIM21 antibodies. C The influence of knocking down TRIM21 on the expression levels of hnRNPA2B1 was examined by western blot. The binding of hnRNPA2B1 to CRNDE in protein extracts from the indicated cells transiently transfected with si-TRIM21 or si-NC was measured by CRNDE pull-down (D) and hnRNPA2B1-RIP (E). n = 3 independent biological replicates. F Immunoprecipitated endogenous hnRNPA2B1 from cells with TRIM21 depletion was immunoblotted with an anti-ubiquitin antibody. G The levels of CRNDE were detected by qPCR after knockdown of TRIM21. n = 3 independent biological replicates. Protein expression of TRIM21 was detected by western blot after downregulation of hnRNPA2B1 (H) or CRNDE (I). J TRIM21 was attenuated in CRNDE-silenced cells and IP of hnRNPA2B1 was carried out. Protein changes of total ubiquitin, K63 and K48 ubiquitin, and TRIM21 in the immunoprecipitates were identified by western blot, respectively. K IP assays were conducted by immobilizing antibodies against hnRNPA2B1 or IgG on Protein A magnetic beads, and the precipitated TRIM21 and hnRNPA2B1 proteins from cell lysates with or without CRNDE overexpression were detected by western blot. A two-tailed unpaired Student's t-test was used for statistical analysis. **P < 0.01, ***P < 0.001, n.s., not significant. Data represent mean ± SD.

Several lncRNAs were also identified to modulate the ubiquitin-dependent degradation of hnRNPA2B1, such as uc002mbe.2 [54] and miR503HG [12], which accelerate its degradation, whereas Linc01232 [11] protects hnRNPA2B1 from ubiquitin-mediated proteasomal degradation. VHL is currently the only known E3 ligase to mediate ubiquitination of hnRNPA2B1 in renal cancer cells [55,56], but we did not find an interaction between them in CRC cells (data not shown). Therefore, there are no convincing answers to explain the mechanism of hnRNPA2B1 protein upregulation. In this study, we recognized that CRNDE was the only candidate lncRNA that stabilized hnRNPA2B1 protein. CRNDE, an oncogenic lncRNA originally identified in CRC, possesses a pivotal role in the biological processes of CRC [57]. Moreover, we also identified a new E3 ubiquitin ligase of hnRNPA2B1, TRIM21, by co-IP-MS, which promoted K63-linked ubiquitin-dependent degradation of hnRNPA2B1. K63-linked ubiquitination also functions in autophagic degradation [32]. Additionally, TRIM21 was discovered to interact with autophagy regulators and effectors, which then promoted autophagic degradation of targeted proteins [58]. Moreover, TRIM21 was decreased in colitis-associated cancer and inhibited intestinal epithelial carcinogenesis [59]. Meanwhile, hnRNPA2B1 was detected as colocalized with the selective autophagy receptor SQSTM1 [59,60]. Therefore, based on our experimental findings, we hypothesized that CRNDE might have a partial impact on the ubiquitin-proteasome system acting on hnRNPA2B1, but that hnRNPA2B1 expression was primarily regulated by the TRIM21-mediated K63 ubiquitin-dependent autophagic degradation pathway. Nevertheless, further investigation is required to fully elucidate this mechanism.
KRAS/MAPK oncogenic signaling plays a central role in CRC progression. A protein interaction between hnRNPA2B1 and KRAS has been reported in PDAC cells [33], but we did not recapitulate this pattern in CRC cells. Furthermore, both CRNDE and hnRNPA2B1 have been suggested to be activators of the MAPK signaling pathway. However, the mechanism by which the MAPK signaling pathway is regulated by the CRNDE/hnRNPA2B1 axis remained inconclusive. Here, we demonstrated that the CRNDE/hnRNPA2B1 axis drove CRC development through activation of KRAS/MAPK signaling. Intriguingly, we found that the CRNDE/hnRNPA2B1 axis controlled KRAS protein but did not affect KRAS mRNA expression or splicing, suggesting that hnRNPA2B1 regulated KRAS protein expression at the post-transcriptional level.

hnRNPA2B1 can promote cap-dependent translation of the RNA trafficking signal sequence from myelin basic protein mRNA [61], and can also bind Sp1 [53], ABCC2 [62], nmMYLK [63], HIF-1α [64] and VHLα [56] mRNAs to enhance their translation, respectively. The translation of KRAS mRNA is also influenced by eIF4A [65], RNA G-quadruplexes [66] or the eIF5A-PEAK1 loop [67]. We therefore sought to investigate whether hnRNPA2B1 governed the translation of bound KRAS mRNA. Consistent with our hypothesis, our results confirmed that hnRNPA2B1 protein directly bound to KRAS mRNA and contributed to its translation. In addition, upregulation of CRNDE enhanced the binding capacity between them, but there was no physical interaction of CRNDE with KRAS RNA. Unexpectedly, we also found that overexpression of CRNDE facilitated nucleocytoplasmic transport of hnRNPA2B1. Given the importance of aberrant expression and localization of hnRNPA2B1 in digestive cancers [43,45], and the powerful ability of hnRNPA2B1 to regulate RNA trafficking [68,69], we subsequently demonstrated that hnRNPA2B1 enhanced KRAS nuclear export and that upregulated CRNDE potently encouraged the cytoplasmic accumulation of the hnRNPA2B1-KRAS complex. Collectively, our findings unveiled that the CRNDE/hnRNPA2B1 axis drove the coupled mRNA transport-translation processes in CRC cells. However, the partners of hnRNPA2B1 in regulating mRNA nucleocytoplasmic transport and translation need further investigation.

In summary, our paper elucidated that upregulation of CRNDE enhanced hnRNPA2B1 protein stability via inhibiting TRIM21-mediated K63 ubiquitin-dependent degradation, and then promoted nuclear export and translation of KRAS mRNA, which subsequently activated MAPK oncogenic signaling in CRC cells. Thus, our study revealed that the CRNDE/hnRNPA2B1/KRAS axis played a critical driving role in CRC malignant progression, providing a promising prognostic biomarker and potential therapeutic target for CRC.
Fig. 6 CRNDE cooperates with hnRNPA2B1 to promote KRAS mRNA nuclear export and translation. A Flag-IP assays were performed in CRC cells transfected with Flag-tagged hnRNPA2B1 overexpression plasmids. B KRAS pull-down assays were conducted and then analyzed by western blot. C hnRNPA2B1-RIP-qPCR assays were undertaken to examine the enrichment of KRAS mRNA. KRAS pull-down assays were performed in cells stably silencing (D) or overexpressing (E) CRNDE to evaluate the altered binding efficiency of hnRNPA2B1. F The hnRNPA2B1-RIP assays were executed after knockdown of CRNDE. G qPCR detection of the KRAS variants KRAS4A and KRAS4B. H, I The pull-down products of CRNDE and KRAS were identified by qPCR. J SUnSET assays were employed to detect the puromycin-labeled proteins in CRC cells with decreased or increased hnRNPA2B1, followed by treatment with 10 μg/mL puromycin for 15 min. K hnRNPA2B1-deficient cells treated with Biotin-dC-puromycin for 24 h were collected for precipitation with streptavidin beads, and the products were measured by western blot. L The ribosomal fractions were separated on a sucrose gradient and their RNA was extracted. The amount of KRAS mRNA was analyzed by qRT-PCR. M Western blot detection of hnRNPA2B1 levels in the nuclear and cytoplasmic fractions of CRC cells stably overexpressing CRNDE. N, O The cytoplasm and nuclei of CRC cells with stable hnRNPA2B1 knockdown and overexpression were isolated, and KRAS mRNA levels were subsequently detected via qPCR. P, Q The cytoplasm and nuclei of CRC cells with CRNDE upregulation were isolated and extracted, and then hnRNPA2B1-RIP-qPCR analysis for KRAS enrichment was performed to determine the effect of CRNDE on the distribution and efficiency of the hnRNPA2B1-KRAS mRNA interaction. n = 3 independent biological replicates. A two-tailed unpaired Student's t-test and one-way ANOVA were used for statistical analysis, respectively. *P < 0.05, **P < 0.01, ***P < 0.001. n.s., not significant. Data represent mean ± SD.

Fig. 1 Upregulation of hnRNPA2B1 is associated with CRC poor outcomes. Correlation analysis of hnRNPA2B1 expression with pathological grade (A) and clinical TNM stage (B) in the TMA cohort (n = 87). C Kaplan-Meier survival curves of CRC patients on the TMA grouped by hnRNPA2B1 expression (P = 0.0403). D EdU assays examined the difference in the roles of the hnRNPA2 and hnRNPB1 variants on the proliferative capacity of CRC cells. n = 3 independent biological replicates. E Representative images of mice bearing tumors derived from the xenograft nude mouse model injected subcutaneously with sh-Ctrl or sh-hnRNPA2B1 DLD1 cells. Growth curve (F), weight (G) and size (H) of subcutaneous xenograft tumors are indicated. n = 6 biologically independent samples. I Representative images of metastatic loci in the lung derived from the mouse model tail-vein injected with WT or KO HCT116 cells. Black arrows indicate the tumor nodules on lung surfaces. J Representative HE staining of lung metastatic lesions. Scale bar, 500 μm (left, ×2 panel); 20 μm (right, ×40 panel). K Quantification of lung metastatic foci from nude mice. n = 5 biologically independent samples. A two-tailed Student's t-test and one-way ANOVA were used for statistical analysis, respectively.
Fig. 5 CRNDE/hnRNPA2B1 axis targets KRAS and the MAPK pathway in CRC. Western blot analysis of the expression changes of the MAPK signaling pathway-related markers P38, ERK1/2 and JNK and their corresponding phosphorylated proteins in SW480 and DLD1 cells stably silencing or overexpressing hnRNPA2B1 (A) or CRNDE (B). C, D qPCR measurement of changes in KRAS mRNA levels following alterations in hnRNPA2B1 or CRNDE. n = 3 independent biological replicates. E Detection of hnRNPA2B1 protein expression by western blot in CRC cells after knockdown of KRAS. F, G Quantification of hnRNPA2B1 (F) and CRNDE (G) RNA levels in KRAS-silenced CRC cells by qPCR, respectively. n = 3 independent biological replicates. H-L Transfection of KRAS overexpression plasmids and their controls into stably silenced hnRNPA2B1 cells for functional rescue experiments. n = 3 independent biological replicates. Scale bar, 100 μm (I); 200 μm (K). A two-tailed unpaired Student's t-test and one-way ANOVA were used for statistical analysis, respectively. *P < 0.05, **P < 0.01, ***P < 0.001. n.s., not significant. Data represent mean ± SD.

Table 1. Correlation between hnRNPA2B1 expression and clinicopathological characteristics of CRC patients.
6,771.8
2023-09-01T00:00:00.000
[ "Medicine", "Biology" ]
What effect do substorms have on the content of the radiation belts?

Abstract

Substorms are fundamental and dynamic processes in the magnetosphere, converting captured solar wind magnetic energy into plasma energy. These substorms have been suggested to be a key driver of energetic electron enhancements in the outer radiation belts. Substorms inject a keV "seed" population into the inner magnetosphere which is subsequently energized through wave-particle interactions up to relativistic energies; however, the extent to which substorms enhance the radiation belts, either directly or indirectly, has never before been quantified. In this study, we examine increases and decreases in the total radiation belt electron content (TRBEC) following substorms and geomagnetically quiet intervals. Our results show that the radiation belts are inherently lossy, shown by a negative median change in TRBEC at all intervals following substorms and quiet intervals. However, there are up to 3 times as many increases in TRBEC following substorm intervals. There is a lag of 1-3 days between the substorm or quiet intervals and their greatest effect on radiation belt content, shown in the difference between the occurrence of increases and losses in TRBEC following substorms and quiet intervals, the mean change in TRBEC following substorms or quiet intervals, and the cross correlation between SuperMAG AL (SML) and TRBEC. However, there is a statistically significant effect on the occurrence of increases and decreases in TRBEC up to a lag of 6 days. Increases in radiation belt content show a significant correlation with SML and SYM-H, but decreases in the radiation belt show no apparent link with magnetospheric activity levels.

Introduction

Earth's radiation belts consist of trapped electrons and protons at MeV energies, drifting and bouncing around the Earth at radial distances between 1000 km and 6 RE. The flux of particles in the radiation belts is the result of competing enhancement and loss mechanisms and can vary by orders of magnitude. Enhancements in the radiation belt population occur through direct injection of energized particles, radial diffusion and energization through conservation of the adiabatic invariants, or wave-particle interactions, whereas losses from the radiation belts generally occur via pitch angle scattering, adiabatic diffusion, or particle drift paths intersecting the magnetopause (see reviews by Millan and Thorne [2007], Ebihara and Miyoshi [2011], and Ukhorskiy and Sitnov [2012]). While the loss and acceleration mechanisms have been long studied, the phenomenological processes which lead to radiation belt increases and losses remain unclear [e.g., Reeves et al., 2003] and thus are a key target for the Van Allen Probes mission [Mauk et al., 2012].

One mechanism that is thought to be a major contributor to increases in the radiation belt is substorm particle injections. Rather than directly injecting MeV energy particles into the radiation belts, substorms are thought to provide a low-energy population of keV electrons which are subsequently accelerated to higher energies [e.g., Baker et al., 1998; Horne and Thorne, 1998; Fok et al., 2001; Meredith et al., 2002, 2003a, 2003b, 2003c], supplying enough energy to significantly enhance the relativistic particle population.
Although the injection of MeV particles directly into the outer radiation belts by substorm dipolarizations has been reported [e.g., Dai et al., 2014], Baker et al. [1979] found that more than 80% of substorms did not result in an injection of >0.3 MeV protons at geosynchronous orbit. Instead, the substorm injection is considered to provide a "seed" population of keV particles [Baker et al., 1998] to the outer radiation belt. This seed population is anisotropic and unstable to the generation of whistler mode chorus waves [Li et al., 2010, and references therein]. The seed population is subsequently locally accelerated through wave-particle interactions with these whistler mode chorus waves [Horne and Thorne, 1998; Summers et al., 1998; Horne et al., 2005a, 2005b; Li et al., 2007; Jaynes et al., 2015] up to relativistic energies. Hence, substorms are thought to be the source of the electron seed population and the source of the wave growth that provides the acceleration of these particles to relativistic energies.

Reeves et al. [2003] have shown that the radiation belts do not show a consistent response to storm activity, with the outer belt relativistic electron fluxes increasing, decreasing or remaining invariant for storms with a similar Dst profile. In this paper we ask whether a similar result can be found for substorms. To that end, we assess the extent to which substorms enhance the content of the electron radiation belts by comparing times of increases and decreases in the radiation belts with substorm activity. We also examine how the increases and decreases of the radiation belt content compare with a measure of the size of the substorm. Our observations show that 50% of substorm intervals are followed by an increase in radiation belt content and 50% by a decrease. To fully understand variations in the radiation belt fluxes, any phenomenological framework or physical model must explain both the enhancements and reductions of the radiation belts following substorms.

Instrumentation and Methodology

The Van Allen Probes [Mauk et al., 2012] are a pair of identical spacecraft in 500 × 30,600 km near-equatorial orbits of the Earth. The orbits of the two spacecraft are slightly different, such that the separation between the spacecraft changes with time. Each spacecraft has an identical suite of five instruments designed to measure the radiation belt plasma and electromagnetic fields. We use data from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) [Blake et al., 2013] instrument in the Energetic Particle, Composition, and Thermal Plasma (ECT) suite [Spence et al., 2013] from 1 January to 31 December 2013. From these data, and building on the ideas of Baker et al. [2004], the total radiation belt electron content (TRBEC) has been calculated. The calculation is detailed in Boyd [2016] and Huang et al. (Spatial, temporal and energy dependence of total radiation belt electron content, GRL, manuscript in preparation, 2016). In summary, TRBEC is calculated using the Jacobian determinant formed from the three action integrals of the electrons' three quasiperiodic motions, J1, J2, and J3, associated with gyration, bounce motion, and drift motion [Schulz, 1991]. From this, the number of electrons can be calculated as

N = ∭ f |∂(J1, J2, J3)/∂(μ, K, L*)| dμ dK dL*,

where f is the electron phase space density. By integrating across an appropriate range of the first adiabatic invariant (μ), pitch angle (K), and L* along each half orbit of the spacecraft, the TRBEC for different energies of electrons can be calculated.
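A minimal numerical sketch of the integral above follows. It reproduces only the shape of the computation, a triple trapezoidal integration of phase space density times a Jacobian over μ, K and L*; the phase space density and Jacobian used here are arbitrary placeholder functions, whereas the real quantities come from the MagEIS measurements and the field model detailed in Boyd [2016].

```python
# Illustrative sketch of the TRBEC integral
#   N = triple integral of f(mu, K, L*) |d(J1,J2,J3)/d(mu,K,L*)| dmu dK dL*
# using placeholder phase space density and Jacobian functions.
import numpy as np

mu = np.linspace(1000.0, 2000.0, 41)   # first invariant (MeV/G)
K = np.linspace(0.01, 1.0, 30)         # second-invariant coordinate
Lstar = np.linspace(3.0, 6.0, 31)      # drift shell L*

MU, KK, LL = np.meshgrid(mu, K, Lstar, indexing="ij")
f = np.exp(-MU / 500.0) * np.exp(-KK) / LL**3   # placeholder PSD
jac = LL**2                                      # placeholder Jacobian

integrand = f * jac
# Trapezoidal integration over L*, then K, then mu.
n = np.trapz(np.trapz(np.trapz(integrand, Lstar, axis=2), K, axis=1), mu, axis=0)
print(f"TRBEC (arbitrary units): {n:.3e}")
```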
In this case, the distributions are integrated between μ = 1000 and 2000 MeV/G to give an estimate of the number of particles in the "core" radiation belt population. This corresponds to particles in the energy range 0.3-6 MeV at L = 3-6. This provides an estimate of the radiation belt content approximately every 3 h. These data are then interpolated onto a regular 3 h timeline.

In order to determine changes in TRBEC as a function of substorm occurrence, we require a reliable estimate of substorm expansion and recovery phase times that is continuous over the period covered by the Van Allen Probes TRBEC data. We define substorm intervals using the SOPHIE technique [Forsyth et al., 2015]. This technique provides the onset times of expansion and recovery phases, as well as substorm intensifications (expansion phases directly following recovery phases), based on the SuperMAG AL (SML) index [Newell and Gjerloev, 2011]. In summary, the technique identifies the following: (1) expansion phases as a negative rate of change in smoothed SML below a determined threshold, (2) recovery phases as a positive rate of change in smoothed SML above a different determined threshold, and (3) possible growth phases at any other time. The SOPHIE technique also determines those intervals in which SML shows substorm-like characteristics but in which the SML variations are mirrored by the SuperMAG AU (SMU) index. These are interpreted as being intervals of enhanced magnetospheric convection and not substorm events. The thresholds for the expansion and recovery phase identifications are calculated from a specified percentile of the rates of change of SML. Forsyth et al. [2015] found that the expansion phase onsets determined using the 75th percentile gave good agreement with existing auroral-based [Frey and Mende, 2006; Liou et al., 2001] and magnetometer-based [Newell and Gjerloev, 2011] substorm onset lists. Thus, we use this percentile for all phase identifications in this study.

In order to compare substorm activity with TRBEC and changes in TRBEC, we identify whether one or more expansion or recovery phase onsets occurred within the 3 h window of the TRBEC data. In the following, these intervals are described as "substorm intervals." If there was no expansion or recovery phase onset and no evidence of an enhanced convection event within the time window, the interval is considered to be a "quiet interval." We also compare TRBEC and changes in TRBEC with SML and SYM-H. SML provides a measure of substorm activity and strength. Similarly, SYM-H provides a measure of storm activity and strength. In order to compare these data with the TRBEC data, we downsample SML and SYM-H by calculating the mean of the data over the 3 h corresponding to each TRBEC data point.

Comparison Between Radiation Belt Content and Substorms

Figure 1 provides an overview of the data used in this study and shows (a) the total radiation belt electron content, (b) the 3 h mean of the SML index, and (c) the 3 h mean of SYM-H between 1 January and 31 December 2013 inclusive. It is these data that we will use throughout this paper. Figure 1d shows the cross correlation of these data using the square of the Pearson's correlation coefficient (r²) plotted against lag times. The correlation between TRBEC and SML is shown in black, the correlation between TRBEC and SYM-H in red, and the correlation between SYM-H and SML in blue.
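The lagged cross correlation shown in Figure 1d amounts to computing the Pearson r² between two 3 h resolution series as a function of lag. A minimal sketch follows; the series are synthetic placeholders constructed so that one lags the other by 45 h, not the mission data.

```python
# Minimal sketch of the lagged cross correlation (r^2 vs lag) in Figure 1d.
# The TRBEC and SML series below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2920                                            # one year of 3 h samples
sml = rng.normal(-200, 150, n)
trbec = np.roll(-sml, 15) + rng.normal(0, 100, n)   # lags SML by 15 x 3 h

def lagged_r2(x, y, lag):
    """r^2 between x(t) and y(t + lag); lag in samples of 3 h."""
    if lag > 0:
        r, _ = stats.pearsonr(x[:-lag], y[lag:])
    else:
        r, _ = stats.pearsonr(x, y)
    return r * r

lags = range(0, 33)                                 # 0 to 96 h
r2 = [lagged_r2(sml, trbec, k) for k in lags]
best = max(lags, key=lambda k: r2[k])
print(f"max r^2 = {r2[best]:.2f} at a lag of {3 * best} h")
```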
Positive lags indicate that changes in TRBEC follow changes in SML or SYM-H and that changes in SYM-H follow changes in SML. The time of the maximum cross correlation is given in the figure. Figure 1b shows that SML tends to vary on shorter time scales, as we would expect for substorms that occur, on average, four times per day, and Figure 1c shows that SYM-H varies on a slightly longer period, as expected for storm activity. Figure 1d shows that there is a weak (<15%) anticorrelation between TRBEC and both SML and SYM-H, with the cross correlations maximizing for TRBEC lagging SML by 45 h and lagging SYM-H by 33 h. This is consistent with the framework in which particles injected during a substorm take some time to be energized to relativistic energies, but the time scales are longer than the 24 h time scale predicted by Horne et al. [2005a]. Figure 1d also shows that the 3 h mean SYM-H and SML are 45% correlated, with SYM-H lagging SML by 3 h. It is therefore difficult to deconvolve substorm effects from storm effects on a 3 h time scale. In the analysis presented later in this paper, we account for this by comparing changes in TRBEC with SML and SYM-H in parallel.

In order to determine the extent to which substorms influence the electron content of the radiation belts, we determine the change in TRBEC following a substorm or quiet interval. The change in TRBEC is calculated as ΔTRBEC = TRBEC(t0 + T) − TRBEC(t0), where T is the differencing time lag and t0 is the reference time (the time of the substorm or quiet window from which we wish to know the change in TRBEC). This change is thus the net, or time-integrated, change in the radiation belt content from the reference event.

The data at each time step form a 2 × 2 contingency table. An example contingency table is shown in Table 1, showing the number of increases (column 1) and decreases (column 2) in TRBEC 24 h following substorms (row 1) or quiet intervals (row 2). By dividing each row element by that row's total, we are able to determine the proportion of increases or decreases following substorm or quiet intervals. In this case, approximately 50% of substorms are followed by an increase in TRBEC, whereas only 30% of quiet intervals are followed by an increase in TRBEC. Alternatively, we can rearrange the data to examine the proportion of increases or decreases preceded by a substorm or quiet interval, as shown in Table 2. This shows, from the same data, that ~75% of increases in TRBEC are preceded by a substorm interval 24 h beforehand, whereas only 50% of decreases are preceded by a substorm. In order to determine whether the ratios of increases to decreases of TRBEC following substorms or quiet intervals are statistically significantly different, we need to compare them to a null hypothesis that substorms have no effect on increases or decreases in the radiation belts. The expected values from this hypothesis for each cell in the table are the product of the row total and column total divided by the total number of observations. We can thus use the χ² statistic to assess whether the observed occurrences are statistically significantly different from the expected values assuming the null hypothesis. Figure 2a shows the proportion of substorm intervals (black) and quiet intervals (red) that were followed by an increase (solid line) or decrease (dashed line) in TRBEC, for different time lags (T), following the presentation of data in Table 1.
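A minimal sketch of this contingency analysis follows. The counts are hypothetical, chosen only to mimic the quoted proportions (roughly 50% increases after substorm intervals and 30% after quiet intervals); Table 1's actual counts are not reproduced here. The expected values are formed exactly as described above and checked against SciPy's χ² routine.

```python
# Minimal sketch: 2x2 contingency table, null-hypothesis expected counts
# (row total x column total / grand total), and the chi-square statistic.
# The observed counts are hypothetical placeholders.
import numpy as np
from scipy import stats

#                    increase  decrease
observed = np.array([[900, 900],    # substorm intervals
                     [240, 560]])   # quiet intervals

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()

chi2, p, dof, exp_scipy = stats.chi2_contingency(observed, correction=False)
assert np.allclose(expected, exp_scipy)  # same null-hypothesis expectation
print(f"chi^2 = {chi2:.1f}, p = {p:.2e}")
```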
It shows that for each 3 h interval up to 33 h following substorm intervals there was a ~50% chance that TRBEC was increased above the level during the substorm interval, after which time the likelihood of a decrease in TRBEC increased, tending toward 55% after 144 h. In contrast, the likelihood of a decrease in TRBEC following a quiet interval was 55% in the 3 h following the quiet interval but steadily increased to ~75% at 45 h following the quiet interval. Using the Z statistic (Z = (a − b)/(a + b)^0.5, not shown), we find that there were statistically significantly more decreases than increases (at the 99.9% level) following quiet intervals up to a lag of 219 h. Following substorms, the difference between increases and decreases in TRBEC was statistically insignificant up to a lag of 69 h, in keeping with the 50/50 split shown, after which time there were significantly more decreases. In summary, our results show that there are statistically no more increases than decreases up to 69 h after substorm intervals, but statistically significantly more decreases following quiet intervals up to 219 h (9 days) after the reference interval.

Using the contingency table analysis, we calculate the χ² statistic at each time lag (Figure 2b). The χ² statistic was much greater than the 99.9% significance level for lags between 6 h and 144 h and had a broad peak (χ² > 100) between 0.5 and 2.5 days. Over a lag of ~0-1 day, the increase in χ² is driven by an increasing proportion of decreases following quiet intervals while the proportion of increases following substorms remains constant. Over a lag of 1-2 days, the proportion of decreases following quiet intervals is approximately constant, while the proportion of increases following substorms decreases, resulting in a decrease in χ². Finally, over a lag of 2-6 days, the change in χ² is driven by a decrease in the proportion of decreases following quiet intervals.

We are also able to test whether substorms are a good predictor of increases (and conversely whether quiet intervals are good predictors of decreases) by calculating the Heidke Skill Score (HSS) [Heidke, 1926] and the accuracy of the prediction. For the interested reader, these are described in the supporting information. Under the premise that substorms lead to increases in TRBEC and that quiet intervals lead to decreases in TRBEC, we can take the occurrence of substorm and quiet intervals to be a forecast of increases or decreases in TRBEC and assess the skill of this forecast in terms of the HSS and accuracy as a function of time lag. In this analysis, we are only assessing the occurrence of increases or decreases, not the size of the change in TRBEC. The HSS (black) and accuracy (red) of using substorms or quiet times to predict increases or decreases in TRBEC are shown in Figure 2c. For HSS > 0, substorm intervals have some skill in predicting increases in TRBEC (the maximum possible HSS is 1). Figure 2c shows that the calculated HSS is greater than 0 for all time lags up to 216 h and peaks at 0.21 after 27 h, in keeping with the maximum in the χ² statistic. Thus, using substorm and quiet intervals to predict radiation belt increases and decreases has some skill over that interval. The accuracy of the prediction peaks at 58% for a time lag of 30 h but remains above 50% up to a lag of 183 h.
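For a 2 × 2 forecast table with a hits, b false alarms, c misses and d correct negatives, the HSS and accuracy reduce to closed-form expressions. The sketch below reuses the hypothetical counts from the previous snippet; treating a substorm interval as a forecast of a TRBEC increase gives a = hits, b = false alarms, c = misses and d = correct negatives.

```python
# Minimal sketch of the forecast-verification scores used above, for a 2x2
# table with a = hits, b = false alarms, c = misses, d = correct negatives.
def heidke_skill_score(a, b, c, d):
    """Heidke Skill Score: skill relative to random chance (max is 1)."""
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

def accuracy(a, b, c, d):
    """Fraction of all forecasts that were correct."""
    return (a + d) / (a + b + c + d)

a, b, c, d = 900, 900, 240, 560  # hypothetical counts, as in the chi^2 sketch
print(f"HSS = {heidke_skill_score(a, b, c, d):.2f}, "
      f"accuracy = {accuracy(a, b, c, d):.2f}")
```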
Overall, the analysis shown in Figure 2 indicates that there is a statistically significantly higher likelihood of an increase in the radiation belts up to 6 days following a substorm than in the same period following a quiet interval. The difference in the occurrence of increases or decreases in TRBEC following substorms or quiet intervals is most significant 0.5-2.5 days following the substorm or quiet interval, in keeping with the 48 h cross correlation lag between SML and TRBEC. Up to half of the substorm intervals were followed by an increase in TRBEC, and up to three quarters of quiet intervals were followed by a decrease in TRBEC. Similarly, up to three quarters of increases in TRBEC were preceded by a substorm, as were half the decreases in TRBEC. The results of Figure 2 show that there is a statistically significant difference in the response of the radiation belts to substorms and quiet intervals but do not show which has the greater influence. If we assume that the magnetosphere is normally quiet and that substorms are a perturbation to this quiet system, then the results of Figure 2 can be interpreted as showing that substorm activity doubles the likelihood of the radiation belts increasing. However, the "memory" in the radiation belts of substorms and quiet times is far longer than the initial 3 h interval. For up to 6 days after substorms and quiet times, the proportions of increases and decreases in TRBEC are statistically significantly different from those expected if substorms and quiet intervals had no effect on the radiation belts. Figure 3b shows the probability that the mean (black) or median (blue) change in TRBEC following substorms or quiet intervals is statistically significantly similar. These are the P values resulting from Student's t test of the difference in the means and the Wilcoxon-Mann-Whitney rank sum test of the difference in the medians. Figure 4 shows the probability density of a change in TRBEC following substorm intervals (black) and quiet intervals (red) at lags of (a) 3 h, (b) 24 h, (c) 48 h, and (d) 144 h. Figures 3 and 4 show that the radiation belts are inherently lossy: the median changes in TRBEC are negative at all times following both substorm and quiet intervals (Figure 3), which is replicated in the highest probability densities occurring for negative changes in TRBEC (Figure 4). However, Figure 3a shows that the mean change in TRBEC following substorm periods is positive up to a lag of 144 h. As such, the distribution of increases in TRBEC following substorms has a much longer tail than the distribution of decreases. The effect of this can be seen as the sawtooth-like profile of TRBEC in Figure 1a; the increases occurred over short periods, whereas the decreases extend over several days. Following substorm intervals, the mean change in TRBEC is positive, peaking 48 h after the substorm interval.
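The two significance tests behind Figure 3b can be reproduced at a single lag with standard SciPy routines, given arrays of the changes in TRBEC following substorm and quiet intervals. The sketch below is an assumed implementation, not the authors' code; the array names are illustrative.

```python
import numpy as np
from scipy import stats

def compare_changes(dtrbec_substorm, dtrbec_quiet):
    """P values for the difference in mean (Student's t test) and in median
    (Wilcoxon-Mann-Whitney rank-sum test) of the changes in TRBEC following
    substorm and quiet intervals at one differencing lag."""
    dtrbec_substorm = np.asarray(dtrbec_substorm, dtype=float)
    dtrbec_quiet = np.asarray(dtrbec_quiet, dtype=float)

    # Difference of the means (two-sample t test, not assuming equal variances)
    _, p_mean = stats.ttest_ind(dtrbec_substorm, dtrbec_quiet, equal_var=False)

    # Difference of the medians (Mann-Whitney U / rank-sum test)
    _, p_median = stats.mannwhitneyu(dtrbec_substorm, dtrbec_quiet,
                                     alternative='two-sided')
    return p_mean, p_median
```

Evaluating these P values at each lag and plotting them against T gives curves of the kind shown in Figure 3b.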
Similarly, the mean change in TRBEC following quiet intervals is negative, minimizing 48 h after the quiet interval, although we note that the peak and trough of the mean changes are broad, as per the results of Figure 2. This is consistent (to within one data point) with the cross correlation between SML and TRBEC shown in Figure 1. In contrast, while the median change in TRBEC following quiet intervals also shows a decrease, the median change in TRBEC following substorms shows little or no increase. The significance tests shown in Figure 3b show that the difference in the mean changes in TRBEC following substorms or quiet intervals is statistically significant beyond the 99.9% level up to a lag of 90 h, whereas the median changes are statistically significantly different up to a lag of 129 h. Comparing the distributions of changes in TRBEC, we see that the distributions of positive changes in TRBEC are elevated following substorms, with respect to those following quiet intervals, at lags of 3, 24, and 48 h. After 144 h, the distributions of changes in TRBEC following substorms and quiet intervals are more similar, in keeping with the lack of statistically significant differences in the averages and occurrences shown above. The probability distributions shown indicate that the losses from the radiation belt are slightly elevated following quiet intervals as compared to following substorm intervals, but not to the extent that the increases differ following substorms compared to quiet times. The vertical dotted lines in Figure 4 show the median increases and decreases of TRBEC following substorms (black) and quiet intervals (red). Dividing each change in TRBEC by the time lag gives a mean rate of change, from which we convert the dotted lines to the median mean rates of change (shown in Table 3). There is no statistically significant difference in the median mean rate of decrease in TRBEC following substorms and quiet intervals at lags of 3, 24, and 48 h. In contrast, the median mean rate of increase in TRBEC is statistically significantly greater following substorms than following quiet times. It should be noted that the analysis we have presented effectively low-pass filters the variations in TRBEC with increasing window length. As such, directly comparing the average rates of the changes is somewhat problematic. For a window length of 3 h, the rates of change can be dominated by the uncertainty in the data, whereas larger windows may smooth out significant but short-term variations. Given that substorm occurrence enhances the mean change of TRBEC, we examine whether a measure of substorm activity is linked with the change of TRBEC following substorms. Figure 5 shows the increases in TRBEC following substorms as a function of SML and SYM-H; Figures 5b and 5c show the number of data points in each mean value. Figure 6 presents the decreases in TRBEC against SML and SYM-H following a substorm in a similar fashion. Figure 5 shows that the increase in TRBEC is greater for higher SML and SYM-H. Figures 5b and 5c show that, on average, the increase in the mean TRBEC averaged over all SYM-H shows an 88% linear correlation with SML, whereas averaging over all SML gives a 76% correlation with SYM-H, although both are significant beyond the 99.9% level from the Student's t test. Correlating the raw data gives correlations of ~30% using Spearman's rank order correlation for both SML and SYM-H, as one would expect from Figure 1.
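For reference, the quantities behind Table 3 and Figures 5 and 6 amount to (i) converting each net change into a mean rate by dividing by the window length and (ii) correlating binned-mean increases in TRBEC with the activity indices, alongside a rank correlation of the raw values. A hedged Python sketch is given below; the bin edges, units, and function names are illustrative assumptions rather than the authors' procedure.

```python
import numpy as np
from scipy import stats

def median_rates(dtrbec, lag_hours):
    """Median of the mean rates of change (per day), computed separately for
    increases and decreases, for a given differencing lag in hours."""
    rates = np.asarray(dtrbec, dtype=float) / (lag_hours / 24.0)
    return np.median(rates[rates > 0]), np.median(rates[rates < 0])

def binned_mean_correlation(activity, dtrbec, bins):
    """Pearson correlation between binned-mean increases in TRBEC and an
    activity index (e.g. SML or SYM-H), plus the Spearman rank correlation
    of the raw, unbinned values."""
    activity = np.asarray(activity, dtype=float)
    dtrbec = np.asarray(dtrbec, dtype=float)
    bins = np.asarray(bins, dtype=float)

    centres = 0.5 * (bins[:-1] + bins[1:])
    # Mean change in each activity bin (NaN where a bin is empty)
    means = np.array([dtrbec[(activity >= lo) & (activity < hi)].mean()
                      if np.any((activity >= lo) & (activity < hi)) else np.nan
                      for lo, hi in zip(bins[:-1], bins[1:])])
    ok = ~np.isnan(means)

    r_binned, _ = stats.pearsonr(centres[ok], means[ok])
    rho_raw, _ = stats.spearmanr(activity, dtrbec)
    return r_binned, rho_raw
```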
This implies that, on average, the size of substorms is a better indicator of the subsequent change in the radiation belt content than the storm size, although fully deconvolving storm and substorm effects from these data may not be possible. Figure 6 shows the same analysis for decreases in TRBEC following substorms and increases and decreases in TRBEC following quiet intervals. For these data, there is only a 42% correlation between the mean decreases in TRBEC following substorms and SML, which is much lower than the correlation shown above for increases in TRBEC following substorms. Similarly, there is only a correlation of 7% between decreases in TRBEC and SYM-H following substorms. The statistical significance of the SML correlation was slightly below the 99.9% level, but there was no significant correlation between decreases in TRBEC and SYM-H. Changes in TRBEC showed no significant correlations with SML or SYM-H following quiet intervals. For brevity, these data are presented in the supporting information for the interested reader. In summary, our results show that the radiation belts are inherently lossy such that the median change in the radiation belt content of 1000-2000 MeV/G electrons is negative following both quiet and substorm intervals. Following a quiet interval, there is up to a 75% chance that the radiation belt content will decrease; however, following substorms, there is up to a 50% chance that it will increase. The proportions of increases and decreases following substorm or quiet intervals are significantly different up to 6 days after the fact, with this difference peaking after 0.5-2.5 days. The mean change in TRBEC following substorms shows a broad peak centered on 48 h after the event window, in keeping with the cross correlation between SML and TRBEC. This indicates that there can be a lag between the occurrence of magnetospheric activity and its effect on the radiation belts or that the radiation belts have a memory of substorm and quiet activity. Furthermore, the distribution of increases in the radiation belt content is found to be enhanced following substorm intervals with respect to quiet intervals, and the increases in the radiation belt content show a stronger correlation with substorm activity levels than with storm activity levels. Substorms are thought to be a key component of radiation belt energization, providing the injection of a seed population of keV electrons and enhancing wave power in the inner magnetosphere that then accelerates this seed population to MeV energies [Baker et al., 1998; Horne and Thorne, 1998; Meredith et al., 2001, 2002, 2003a; Horne et al., 2005a, 2005b; Jaynes et al., 2015]. Our results show that the occurrence of increases in the radiation belts is enhanced following substorm times compared to quiet times and that the increase in the radiation belt content is related to the level of substorm activity, although the extent to which losses depend on substorm activity levels is much weaker. As such, there are nuances in substorm processes that must be taken into account if we wish to understand how substorms affect the radiation belts.
Discussion
By examining the radiation belt content following both quiet and substorm intervals, we have shown that the radiation belts are inherently lossy. Following quiet intervals, there is up to a 75% chance of there being a loss of particles from the radiation belts, with average loss rates from 20 to 200 × 10²⁵ el d⁻¹.
The effect of substorms is to reduce the likelihood of the radiation belt content decreasing, or alternatively the effect of substorms is to increase the likelihood of the radiation belt content increasing. The result of this can be seen in Figure 1, particularly on long time scales, with relatively short duration increases followed by an extended period of decreasing TRBEC. The average rate of decrease was higher following substorms than following quiet intervals for the first 24 h after the event, implying that substorm activity enhances losses as well as increases in the radiation belt content. This is consistent with increases in plasmaspheric hiss, whistler mode, and electromagnetic ion cyclotron (EMIC) wave activity, which can enhance pitch angle scattering rates [e.g., Tsurutani and Smith, 1974; Meredith et al., 2004; Usanova et al., 2012]; however, the average losses showed no correlation with substorm activity levels. Due to the nature of the observations, we do not measure the enhancement and loss rates directly; instead, we examine the net change in the radiation belt content. Hence, we cannot separate out the increase and loss rates, and our results should be taken as such. Our results presented above clearly show that some substorm intervals result in increases in the radiation belt while others do not. One reason that a substorm may not lead to an increase in the radiation belts is that particles injected toward the inner magnetosphere during substorms do not get sufficiently close to the Earth to provide a seed population for the wave generation and acceleration processes. Boakes et al. [2009] showed that for a subset of 135 events in the Frey et al. [2004] list, only ~50% of substorms showed the signature of a clear injection of electrons into geosynchronous orbit and ~25% showed no injection signature at all. Sergeev et al. [2012] discussed that the injection of particles into the inner magnetosphere depends on the injected particle population having a similar entropy to the plasma in the inner magnetosphere in order to penetrate into the radiation belt region via the interchange instability. Thus, this does not necessarily mean that 50% of events have no injections but may indicate that these injections do not penetrate into the inner magnetosphere, and hence the injected particles are not accelerated to high energies. Jaynes et al. [2015] discussed that if any of the necessary populations (particle or wave) were not present in the radiation belts, the radiation belts would not be enhanced. Thus, following 50% of the substorm intervals that we studied, either the substorms did not produce the necessary wave population to accelerate the seed population or the seed population was not injected into the radiation belts. Taking the Boakes et al. [2011] results to be a statistically representative subset of events, it is more likely that only 50% of substorms inject particles into the radiation belts. Our results show a number of interesting features with regard to the time scales of acceleration and loss following substorms or quiet intervals. First, the radiation belts appear to be inherently lossy. Beyond 33 h following a substorm interval, and for up to >200 h following a quiet interval, there are statistically significantly more decreases in the radiation belt than increases.
This can, in fact, be seen in Figure 1a, with large-scale increases in the radiation belts being sharp and short lived, while the decreases have a much longer period, giving a sawtooth-like profile to the radiation belt content. Second, our analysis shows that, on average, changes in the radiation belt content lag magnetospheric activity, or a lack thereof, by ~1-3 days. Examining the occurrence of increases or decreases in TRBEC shows that the effect of substorms or quiet times is most significant ~1 day following the event, whereas the mean changes in TRBEC are greatest after 48 h, as is the cross correlation between SML and TRBEC. Third, our analysis shows that the radiation belts have a memory of the substorm or quiet interval. The effect of substorms is to increase the likelihood of the radiation belts increasing for up to 6 days after their occurrence. This implies that the acceleration mechanism is enhanced by the substorm activity but not limited to it. Conversely, the effect of quiet intervals is to increase the likelihood of the radiation belts decreasing for more than 6 days after the fact. This implies that a relatively short period with no substorm activity can suppress acceleration within the radiation belts for a far longer period afterward. In the context of wave-particle interactions causing enhancements and losses in the radiation belts, it is interesting to ask what the lifetimes of the seed population, whistler mode chorus, and other waves are following substorm and quiet intervals. To date, we are not aware of any study that addresses this question. Previous studies have reported that changes in electron flux at geosynchronous orbit at storm commencement are correlated with the size of the storm. Moon et al. [2004] examined the ratio of the electron fluxes before a storm and following the storm commencement for 50-400 keV electrons during 22 storm events and found that these correlated with the minimum storm time Dst index, with correlation coefficients (r) of 0.64-0.84, corresponding to 40-70% correlation (r²). It is therefore unsurprising that the change in the higher-energy population is also somewhat correlated with SYM-H (a comparable measure to Dst), as shown in this study. However, the complex interplay between particle injection, wave generation, and wave-particle interactions that results in different losses and gains in the high-energy electron population means that the correlation between the changes in higher-energy particle fluxes and SYM-H does not necessarily reflect the correlations seen with lower energy particles. Furthermore, we consider the correlation between the increases in the radiation belt content and SML or SYM-H for a far greater number of events. The loss and acceleration mechanisms in the radiation belts are a complex interplay of different wave-particle interactions as well as plasma transport. Our results show that losses and acceleration occur after both quiet and substorm intervals, although with a greater proportion of loss periods following quiet intervals and a greater proportion of acceleration intervals following substorms.
From modeling, the time scale for the radiation belt electron flux to increase by an order of magnitude has been put at ~24 h [e.g., Horne et al., 2005a], whereas our results show that the mean change in TRBEC following substorms is positive and statistically significantly different from that following quiet intervals up to 96 h after the fact. Within that time there are likely to be both substorm and quiet intervals. As such, increases or decreases in the radiation belt content may also depend on activity prior to or following the reference interval. Take, for example, a quiet interval following a period of substorm activity: if the loss rate increases, it may not overcome the preexisting acceleration rate; thus, the overall result is an increase, albeit at a lower acceleration rate. However, if we similarly consider a substorm interval following a quiet interval, we would expect that the acceleration of particles would reduce any rate of decrease. This is not seen in the data. Rather than the distribution of changes in TRBEC being shifted toward increases in TRBEC following substorm intervals, the distribution of radiation belt losses is approximately the same following substorm and quiet intervals at the lags examined, with substorms providing an additional population of increases. As such, in order to predict changes in the radiation belts following substorms, one must be able to determine what controls whether or not the substorm provides enhancements in the radiation belt. Meredith et al. [2003a] suggested that a prolonged period of substorm activity may be needed to energize the radiation belts. Using data from the extended solar activity minimum at the end of solar cycle 23, Rodger et al. [2015] showed that recurrent substorms from the Newell and Gjerloev [2011] list (separated by less than 82 min) are more efficient than isolated substorms (separated by more than 3 h) in enhancing the radiation belts, although both recurrent and isolated substorms showed increases in the radiation belt electron flux. Within this study, we have considered substorm expansion and recovery phase occurrences within 3 h windows; thus, we do not separate isolated and recurrent substorms; however, we do show that the largest changes in radiation belt content occurred during periods of large SYM-H, suggestive of storm times in which we would expect to see periods of recurrent substorms. It is generally thought that whistler mode chorus waves are the dominant process that accelerates the seed population up to MeV energies [e.g., Horne et al., 2005a, 2005b; Li et al., 2007; Thorne et al., 2013; Reeves et al., 2013; Jaynes et al., 2015]. The amplitude of these waves relates to their ability to accelerate particles, and this has been shown to increase with AE and thus substorm activity [Meredith et al., 2001, 2003b]. However, whistler mode chorus waves are also implicated in pitch angle scattering that can move particles into the loss cone [Horne and Thorne, 2003]. Similarly, EMIC waves can efficiently scatter particles into the loss cone [Meredith et al., 2003c; Bortnik et al., 2006; Hendry et al., 2014; Usanova et al., 2014; Rodger et al., 2015]. A modeling study by Li et al. [2007] suggested that during storms the whistler mode chorus waves should give a net acceleration of particles, while EMIC waves were the dominant loss mechanism. However, both Horne et al. [2005a] and Li et al.
[2007] used average wave power distributions observed during substorms determined by a single spacecraft (based on Meredith et al. [2001, 2003b]). These spatiotemporal averages can hide important information and are not necessarily representative of any individual event. Given that our results show that increases and decreases in the radiation belt content occur following substorms, the important question is how the wave populations vary for increases and decreases in radiation belt content and how this relates to substorm activity. Our results show radiation belt increases and decreases for all levels of substorm activity. However, while radiation belt increases are reasonably well correlated with SML, radiation belt decreases are less dependent on substorm size. This is somewhat counterintuitive if one expects the loss mechanisms to depend on wave amplitudes, which, on average, increase with substorm size. A number of studies have examined how the acceleration of radiation belt particles may be related to upstream solar wind conditions, thus giving a way in which to predict future variations in the radiation belt [e.g., Baker et al., 1979; Reeves et al., 2011; McPherron et al., 2009; Li et al., 2015; Kim et al., 2015]. The solar wind can directly influence the radiation belts through the generation of ULF waves and by modifying the particle population to generate VLF waves [e.g., Elkington, 2006; Shprits et al., 2008; Ebihara and Miyoshi, 2011; Miyoshi et al., 2013] or can indirectly influence the radiation belts by enhancing the energy of the plasma sheet population prior to substorms that is then subsequently injected [Forsyth et al., 2014; Sergeev et al., 2015]. In this study, we have not considered the impact of the solar wind on the radiation belts but rather statistically examined whether substorm activity alone shows any correspondence to changes in the radiation belt. It is worth noting that many of the studies that have examined the solar wind impact on the radiation belts show that the solar wind conditions that lead to radiation belt acceleration include a prolonged southward interplanetary magnetic field, elevated solar wind speed, but low solar wind density (thus giving a low dynamic pressure). These solar wind conditions are similar to those that are conducive to causing storms and substorms [e.g., Gonzalez and Tsurutani, 1987; Morley and Freeman, 2007], and as such the link between solar wind conditions and increases in the radiation belts may be both direct and indirect. It should not be a surprise that the radiation belts show a multifaceted response to substorms, given that the radiation belts can be energized, depleted, or unchanged during storms [Reeves et al., 2003]. In fact, our results show some similarity with those of Reeves et al. [2003], in that only 50% of substorms resulted in an increase in the radiation belt content, in keeping with their result that only 50% of storms show an increase in the radiation belt electron fluxes. However, our results cover a large range of SYM-H values, showing that both storm time and non-storm time substorms show increases and decreases in the radiation belts.
Conclusions
We have statistically compared changes in the total radiation belt electron content from the Van Allen Probes over time with substorm activity determined by the SOPHIE algorithm.
Substorm activity was broken down into 3 h windows in which substorm expansion or recovery phases began (substorm intervals) or in which there were no expansion or recovery phase onsets (quiet intervals). Changes in the radiation belt content were calculated as a net change over increasing time intervals. Our results show the following:
1. There is a 50% chance of an increase or decrease in the radiation belt content up to 33 h following a substorm interval.
2. There is up to a 75% chance of a decrease in radiation belt content following quiet intervals.
3. The radiation belts have an apparent memory of substorms and quiet intervals, extending out to 6 days after the event.
4. Substorms and quiet intervals are good predictors of increases and decreases in radiation belt content, with the skill and accuracy of this prediction maximizing at a lag of between 0.5 and 2.5 days.
5. The increases in radiation belt content 24 h following substorm intervals are correlated with both SML and SYM-H, although the correlation with SML is stronger.
Furthermore, we have provided the median increases in TRBEC 24 h following a substorm for given ranges of SML and SYM-H. These results raise important questions for the existing framework in which substorms increase the radiation belt content, namely, what prevents half of substorm intervals from increasing the radiation belt content and what controls the radiation belt loss rate? These are fundamental questions that must be answered in order to develop accurate modeling of the radiation belts with respect to substorm activity.